url stringlengths 14–1.76k | text stringlengths 100–1.02M | metadata stringlengths 1.06k–1.1k
---|---|---|
https://repository.uantwerpen.be/link/irua/93687
|
Title: New anion-conducting solid solutions $Bi_{1-x}Te_{x}(O,F)_{2+\delta}$ (x > 0.5) and glass-ceramic material on their base
Author: Prituzhalov, V.A.; Ardashnikova, E.I.; Vinogradov, A.A.; Dolgikh, V.A.; Videau, J.-J.; Fargin, E.; Abakumov, A.M.; Tarakina, N.V.; Van Tendeloo, G.
Faculty/Department: Faculty of Sciences. Physics
Research group: Electron microscopy for materials research (EMAT)
Publication type: article
Publication: Lausanne, 2011
Subject: Physics; Chemistry
Source (journal): Journal of fluorine chemistry. - Lausanne
Volume/pages: 132(2011):12, p. 1110-1116
ISSN: 0022-1139
ISI: 000296936300011
Carrier: E
Target language: English (eng)
Full text: (Publisher's DOI)
Affiliation: University of Antwerp
Abstract: The anion-excess fluorite-like solid solutions with general composition Bi1−xTex(O,F)2+δ (x > 0.5) have been synthesized by a solid-state reaction of TeO2, BiF3 and Bi2O3 at 873 K followed by quenching. The homogeneity areas and the I ↔ IV polymorphism of the Bi1−xTex(O,F)2+δ phases were investigated. The crystal structure of the low-temperature IV-Bi1−xTex(O,F)2+δ phase has been solved using electron diffraction and X-ray powder diffraction (a = 11.53051(9) Å, S.G. Ia-3, RI = 0.046, RP = 0.041). The glass-formation area in the Bi2O3–BiF3–TeO2 (10% TiO2) system was investigated. The IV-Bi1−xTex(O,F)2+δ phase starts to crystallize on short (0.5–3 h) annealing of the oxyfluoride glasses at temperatures above Tg (600–615 K). The ionic conductivity of the crystalline Bi1−xTex(O,F)2+δ phase and the corresponding glass-ceramics was investigated. Activation energies of conductivity Ea = 0.41(2) eV for the IV-Bi1−xTex(O,F)2+δ crystalline samples and Ea = 0.73 eV for the glass-ceramic samples were obtained. Investigation of the oxyfluoride samples with a constant cation ratio demonstrates the essential influence of excess fluorine anions on the ionic conductivity.
E-info: https://repository.uantwerpen.be/docman/iruaauth/7ce5f6/3d198f0fca5.pdf http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000296936300011&DestLinkType=RelatedRecords&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000296936300011&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000296936300011&DestLinkType=CitingArticles&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848
Handle
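The reported activation energies imply strongly different temperature dependences if one assumes the common Arrhenius form for ionic conductivity, σ(T) = σ0·exp(−Ea/kBT) (the paper may use the equivalent σT variant; the temperatures below are illustrative choices, not values from the abstract):

```python
import math

kB = 8.617e-5                      # Boltzmann constant in eV/K

def arrhenius_ratio(Ea, T1, T2):
    """sigma(T2)/sigma(T1) for the simple Arrhenius form sigma = sigma0 * exp(-Ea/(kB*T))."""
    return math.exp(Ea / (kB * T1) - Ea / (kB * T2))

for Ea in (0.41, 0.73):            # eV: crystalline IV-phase vs. glass-ceramic samples
    print(f"Ea = {Ea:.2f} eV:  sigma(600 K)/sigma(500 K) = {arrhenius_ratio(Ea, 500.0, 600.0):.1f}")
```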
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6576108932495117, "perplexity": 24602.208460843387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540798.71/warc/CC-MAIN-20161202170900-00285-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/2800/2/a/a/
|
Properties
Label 2800.2.a.a Level $2800$ Weight $2$ Character orbit 2800.a Self dual yes Analytic conductor $22.358$ Analytic rank $1$ Dimension $1$ CM no Inner twists $1$
Related objects
Newspace parameters
Level: $$N$$ $$=$$ $$2800 = 2^{4} \cdot 5^{2} \cdot 7$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 2800.a (trivial)
Newform invariants
Self dual: yes Analytic conductor: $$22.3581125660$$ Analytic rank: $$1$$ Dimension: $$1$$ Coefficient field: $$\mathbb{Q}$$ Coefficient ring: $$\mathbb{Z}$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 140) Fricke sign: $$1$$ Sato-Tate group: $\mathrm{SU}(2)$
$q$-expansion
$$f(q)$$ $$=$$ $$q - 3 q^{3} - q^{7} + 6 q^{9} + O(q^{10})$$ $$q - 3 q^{3} - q^{7} + 6 q^{9} - 3 q^{11} + q^{13} - 5 q^{17} + 8 q^{19} + 3 q^{21} - 2 q^{23} - 9 q^{27} - q^{29} + 2 q^{31} + 9 q^{33} + 10 q^{37} - 3 q^{39} - 6 q^{41} + 4 q^{43} - 11 q^{47} + q^{49} + 15 q^{51} + 6 q^{53} - 24 q^{57} + 10 q^{59} - 6 q^{63} + 10 q^{67} + 6 q^{69} - 10 q^{73} + 3 q^{77} + 7 q^{79} + 9 q^{81} - 12 q^{83} + 3 q^{87} + 8 q^{89} - q^{91} - 6 q^{93} + 3 q^{97} - 18 q^{99} + O(q^{100})$$
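The listed coefficients are consistent with the standard Hecke relations for a weight-2 newform with trivial character: $a_{mn} = a_m a_n$ for coprime $m, n$; $a_{p^{j+1}} = a_p a_{p^j} - p\,a_{p^{j-1}}$ for primes $p \nmid 2800$; and $a_{p^j} = a_p^j$ for $p \mid 2800$. A quick Python sketch (the $a_p$ below are read off the $q$-expansion and the table of Hecke characteristic polynomials further down) that regenerates the composite coefficients:

```python
# Prime Hecke eigenvalues a_p, taken from the data on this page:
ap = {2: 0, 3: -3, 5: 0, 7: -1, 11: -3, 13: 1, 17: -5, 19: 8, 23: -2, 29: -1,
      31: 2, 37: 10, 41: -6, 43: 4, 47: -11, 53: 6, 59: 10, 61: 0, 67: 10,
      71: 0, 73: -10, 79: 7, 83: -12, 89: 8, 97: 3}
N = 2800

def factor(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def a(n):
    """a_n from the a_p via multiplicativity and the weight-2 Hecke recursion."""
    out = 1
    for p, e in factor(n).items():
        if N % p == 0:
            out *= ap[p] ** e                    # p divides the level: a_{p^e} = a_p^e
        else:
            prev, cur = 1, ap[p]                 # a_{p^0}, a_{p^1}
            for _ in range(e - 1):
                prev, cur = cur, ap[p] * cur - p * prev
            out *= cur
    return out

print([a(n) for n in (9, 21, 27, 33, 49, 57, 63, 81, 91, 99)])
```

Running it reproduces, for example, $a_9 = (-3)^2 - 3 = 6$, $a_{21} = (-3)(-1) = 3$ and $a_{99} = a_9 a_{11} = -18$, matching the expansion above.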
Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1 0 0 −3.00000 0 0 0 −1.00000 0 6.00000 0
Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$-1$$
$$5$$ $$-1$$
$$7$$ $$1$$
Inner twists
This newform does not admit any (nontrivial) inner twists.
Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 2800.2.a.a 1
4.b odd 2 1 700.2.a.j 1
5.b even 2 1 2800.2.a.bf 1
5.c odd 4 2 560.2.g.a 2
12.b even 2 1 6300.2.a.t 1
15.e even 4 2 5040.2.t.s 2
20.d odd 2 1 700.2.a.a 1
20.e even 4 2 140.2.e.a 2
28.d even 2 1 4900.2.a.b 1
40.i odd 4 2 2240.2.g.f 2
40.k even 4 2 2240.2.g.e 2
60.h even 2 1 6300.2.a.c 1
60.l odd 4 2 1260.2.k.c 2
140.c even 2 1 4900.2.a.w 1
140.j odd 4 2 980.2.e.b 2
140.w even 12 4 980.2.q.f 4
140.x odd 12 4 980.2.q.c 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
140.2.e.a 2 20.e even 4 2
560.2.g.a 2 5.c odd 4 2
700.2.a.a 1 20.d odd 2 1
700.2.a.j 1 4.b odd 2 1
980.2.e.b 2 140.j odd 4 2
980.2.q.c 4 140.x odd 12 4
980.2.q.f 4 140.w even 12 4
1260.2.k.c 2 60.l odd 4 2
2240.2.g.e 2 40.k even 4 2
2240.2.g.f 2 40.i odd 4 2
2800.2.a.a 1 1.a even 1 1 trivial
2800.2.a.bf 1 5.b even 2 1
4900.2.a.b 1 28.d even 2 1
4900.2.a.w 1 140.c even 2 1
5040.2.t.s 2 15.e even 4 2
6300.2.a.c 1 60.h even 2 1
6300.2.a.t 1 12.b even 2 1
Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(2800))$$:
$$T_{3} + 3$$ $$T_{11} + 3$$ $$T_{13} - 1$$
Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T$$
$3$ $$3 + T$$
$5$ $$T$$
$7$ $$1 + T$$
$11$ $$3 + T$$
$13$ $$-1 + T$$
$17$ $$5 + T$$
$19$ $$-8 + T$$
$23$ $$2 + T$$
$29$ $$1 + T$$
$31$ $$-2 + T$$
$37$ $$-10 + T$$
$41$ $$6 + T$$
$43$ $$-4 + T$$
$47$ $$11 + T$$
$53$ $$-6 + T$$
$59$ $$-10 + T$$
$61$ $$T$$
$67$ $$-10 + T$$
$71$ $$T$$
$73$ $$10 + T$$
$79$ $$-7 + T$$
$83$ $$12 + T$$
$89$ $$-8 + T$$
$97$ $$-3 + T$$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9136690497398376, "perplexity": 12681.326166779505}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00669.warc.gz"}
|
https://pos.sissa.it/340/580/
|
Volume 340 - The 39th International Conference on High Energy Physics (ICHEP2018) - Parallel: Quark and Lepton Flavor Physics
Testing discrete symmetries with neutral kaons at KLOE-2
A. Di Domenico* on behalf of the KLOE-2 Collaboration
*corresponding author
Full text: pdf
Published on: August 02, 2019
Abstract
The KLOE-2 experiment at the INFN Laboratori Nazionali di Frascati has successfully concluded its data-taking at the DA$\Phi$NE collider collecting an integrated luminosity of 5.5 fb$^{-1}$ at the $\phi$ resonance peak.
Together with the data sample collected by its predecessor KLOE, the total integrated luminosity of 8 fb$^{-1}$ represents the largest existing data sample in the world collected at an $e^+e^-$ collider at the $\phi$ meson peak, corresponding to $\sim 2.4 \times 10^{10}$ $\phi$-mesons produced.
The entanglement in the neutral kaon pairs produced at the DA$\Phi$NE $\phi$-factory is a unique tool to test discrete symmetries and quantum coherence at the utmost sensitivity, in particular strongly motivating the experimental searches of possible CPT violating effects, which would unambiguously signal New Physics.
The lepton charge asymmetry measured in $K_S$ semileptonic decays with $1.63$ fb$^{-1}$ of KLOE data is reported, improving the statistical uncertainty of the present result by about a factor of two, together with tests of time-reversal and CPT symmetries in neutral-kaon transition processes and the search for the CP-violating $K_S \rightarrow 3\pi^0$ decay with the newly acquired KLOE-2 data.
DOI: https://doi.org/10.22323/1.340.0580
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE), which helps create very compact bibliographies that can be beneficial to authors and readers, and in "proceeding" format, which is more detailed and complete.
Open Access
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7718294858932495, "perplexity": 3700.644368665791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738746.41/warc/CC-MAIN-20200811090050-20200811120050-00393.warc.gz"}
|
http://physics.aps.org/story/v3/st15
|
# Focus: In Search of Hidden Dimensions
Phys. Rev. Focus 3, 15
Most of us feel comfortable with life in four dimensions–the three dimensions of space and one of time. However, some physicists think that at least six more dimensions exist, compacted into tiny sizes. This weird 10-dimensional universe is part of string theory, a sweeping but unproven framework that ties together nature’s basic forces. In the 15 March PRL, theorists offer a detailed blueprint of how to search for these extra dimensions in particle accelerators. Surprisingly, this shadow world may be far more accessible than physicists thought. Indeed, some of the new dimensions may inhabit spaces as large as a millimeter across.
Physicists believe that in the cauldron of the big bang the four fundamental forces–strong and weak forces in atomic nuclei, electromagnetism, and gravity–acted as one. In the simplest extrapolation from current knowledge, this unification occurs only at extremely high energies and at the smallest possible spatial scale, just ${10}^{-35}$ m. String theory, the most popular unification theory, dictates that particles such as quarks and electrons are tiny loops that vibrate in 10-dimensional space. Six of those dimensions might have curled up at the smallest scale after the big bang, while the others grew into the cosmos we see today. “We are in some sense trapped on a wall in a higher-dimensional universe. We can’t turn our heads in the direction that those extra dimensions lie in,” explains Joseph Lykken of the Fermi National Accelerator Laboratory (Fermilab) in Illinois.
In the past decade string theorists have suggested that at least some of the extra spatial dimensions can be probed indirectly at energies far lower than the big bang, and perhaps even at length scales as shockingly large as 1 mm. Now, several teams have calculated exactly how physicists could measure those dimensions in high-energy experiments. Eugene Mirabelli and his colleagues at the Stanford Linear Accelerator Center (SLAC) analyzed collisions in which gravitons–the postulated carriers of the gravitational force–radiate into the other dimensions. Their PRL paper describes the precise signature of such events: a single photon or particle jet zooming off in one direction, but no observed particles balancing the reaction on the other side. Current experiments at CERN’s LEP 2 accelerator in Switzerland are sensitive enough to probe for new dimensions to a size of 0.48 mm, the team concludes. Published results from the Tevatron collider at Fermilab rule out hidden dimensions about twice as big, but new data will allow physicists to probe on a finer scale.
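The millimeter figure can be motivated by the standard large-extra-dimensions counting: if gravity becomes strong at a scale $M_*$ near a TeV and spreads into $n$ compact extra dimensions of size $R$, then roughly $M_{\mathrm{Pl}}^2 \sim M_*^{n+2} R^n$. A back-of-the-envelope sketch (ignoring O(1) and $2\pi$ convention factors, which matter more for larger $n$; $M_* = 1$ TeV is an assumed benchmark):

```python
hbar_c = 1.973e-16      # GeV * m, converts 1/GeV to metres
M_pl = 1.22e19          # Planck mass in GeV
M_star = 1.0e3          # assumed fundamental gravity scale, 1 TeV

for n in (1, 2, 3):     # number of large extra dimensions
    R = (M_pl ** 2 / M_star ** (n + 2)) ** (1.0 / n) * hbar_c   # size in metres
    print(f"n = {n}:  R ~ {R:.1e} m")
```

With these crude inputs, two extra dimensions land in the millimeter range, which is the scale the searches described here are probing.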
Other teams have derived similar theoretical constraints, including Gian Giudice and coworkers at CERN. “The idea that the Universe has extra dimensions is very compelling,” says SLAC coauthor Michael Peskin. “The probability of a millimeter scale is very low, but it’s fantastically interesting, so it’s important to tell the experimenters exactly what to look for.”
Theorists praise the new papers as essential steps. “This is the detailed follow-up needed to put some meat on these ideas and see how we can test them experimentally,” says Thomas Banks of Rutgers University. Adds Lykken: “In the next year or two, either people will lose interest in this idea or it will enter a data-driven phase. This actually has the smell of something promising.”
–Robert Irion
Robert Irion is a freelance science writer based in Santa Cruz, CA.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5337874293327332, "perplexity": 1884.5872075872592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823963.50/warc/CC-MAIN-20160723071023-00000-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/questions/8954/what-are-the-main-differences-between-p-p-and-p-bar-p-colliders?answertab=oldest
|
# What are the main differences between $p p$ and $p \bar p$ colliders?
I know that it is somehow related to the parton distribution functions, allowing specific reactions with gluons instead of quarks and anti-quarks, but I would really appreciate more detailed answers!
Thanks
The difference in scattering cross sections is more evident the lower the energy of the collisions (Fig. 41.11). At TeV energies the probability of new-physics observations is the same for both choices of collision.
The reason is that at low energies the fact that the proton has three quarks and the antiproton three antiquarks predominates. Quark–antiquark scattering at low energies has a much higher cross section than quark–quark scattering because of the extra possibility of annihilation of the quarks. At low energy the gluon "sea" plays a small part. The higher the energy of the interactions, the larger the number of energetic gluons that scatter, and finally at TeV energies that is what predominates and the two cross sections converge. Thus for physics it makes no difference whether one uses protons or antiprotons as targets, as far as discovery potential goes.
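A rough numerical illustration of this point, using made-up schematic parton shapes rather than a real PDF fit (the normalizations and the 80 GeV, W-like mass benchmark are arbitrary choices):

```python
import numpy as np

# Toy parton shapes (illustration only, not a fitted PDF set):
def v(x):  return (1 - x) ** 3 / np.sqrt(x)      # valence quark density
def s(x):  return 0.2 * (1 - x) ** 7 / x         # sea quark (= antiquark) density
def q(x):  return v(x) + s(x)                    # all quarks in a proton

def lum(f, g, tau, n=4000):
    """Parton luminosity (f ⊗ g)(tau) = ∫_tau^1 (dx/x) f(x) g(tau/x), via a log-grid sum."""
    t = np.linspace(np.log(tau), 0.0, n)
    x = np.exp(t)
    return np.sum(f(x) * g(tau / x)) * (t[1] - t[0])

for rts in (630.0, 1960.0, 13000.0):             # sqrt(s) in GeV (SppS-, Tevatron-, LHC-like)
    tau = (80.0 / rts) ** 2                      # producing an 80 GeV mass
    L_ppbar = lum(q, q, tau) + lum(s, s, tau)    # quark from p meets antiquark from pbar
    L_pp = 2 * lum(q, s, tau)                    # quark from one p needs a sea antiquark
    print(f"sqrt(s) = {rts:7.0f} GeV   qqbar luminosity ratio ppbar/pp ≈ {L_ppbar / L_pp:5.2f}")
```

In this toy model the $q\bar q$ luminosity advantage of $p\bar p$ over $pp$ is sizable when the produced mass is a large fraction of the beam energy and fades at LHC-like energies, which is the convergence described above.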
There may be some technical advantage in the construction, in that in principle the antiproton and proton beams can circulate in the same magnetic configuration as mirror images, which makes the magnet construction and circuits simpler. I guess that the need for high luminosity made the LHC a proton–proton collider, since it is more difficult to store antiprotons. I would have to research this guess.
But isn't using proton–proton a way to reduce the $q \bar q$ reactions and favour $gg$? – gdz Apr 22 '11 at 14:35
My remark is related to Higgs production by gluon fusion – gdz Apr 22 '11 at 14:47
Within each, nucleon and antinucleon, the distributions of the gluon "sea" are the same; that is why at high energies it makes no difference which one uses, since then the percentage of energy carried by the original quarks of the incoming protons/antiprotons is small in the interactions of interest (high transverse momentum, i.e. deep inelastic) and the sea gluons predominate. – anna v Apr 23 '11 at 4:30
I would add to @anna's answer that a $p\bar{p}$ collider such as the Tevatron is CP symmetric. This was one of the arguments for continuing the Tevatron experiments. To quote from the proposal:
Measurements that get a special advantage from the p-pbar environment. The primary example in this category is CP-violation, which strongly limits the range of allowed models of new physics up to scales of several TeV. There are good a priori reasons to expect the existence of some non-SM CP-violating processes, and finding them is of comparable importance to addressing electroweak symmetry breaking. Precision measurements at the 1% level or better are accessible at the Tevatron due to the CP-symmetric initial state (p-pbar), and symmetry of the detectors that allow cancellation of systematics. Some of these measurements already show tantalizing effects, like the recently published di-muon asymmetry result from the DZero experiment, showing the first indication of a deviation from the Standard Model picture of CP-violation. Other measurements are exploring a completely new field, as the recent CPV measurement with the $D^0$ mesons at CDF, yielding a substantial improvement in precision with respect to previous B-factories data. This has provided a proof of feasibility of an exciting program of precision measurement with a unique possibility to find anomalous interactions in up-type quarks. A non CP-related example in this category is the forward-backward asymmetry in top quark production. Current measurements by both CDF and DZero indicate an asymmetry above the Standard Model prediction. If this persists with more data, it can be interpreted as new dynamics. This is not an easy measurement to replicate in a proton-proton environment.
http://www.fnal.gov/directorate/Tevatron/Tevatron_whitepaper.pdf
This is an important addition, since this difference between proton-proton and antiproton-proton collisions exists at all energies. – anna v Apr 21 '11 at 18:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9455534815788269, "perplexity": 1024.0470256833032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507445299.20/warc/CC-MAIN-20141017005725-00081-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://en.m.wikipedia.org/wiki/Hack%27s_law
|
# Hack's law
Hack's law is an empirical relationship between the length of streams and the area of their basins. If L is the length of the longest stream in a basin, and A is the area of the basin, then Hack's law may be written as
${\displaystyle L=CA^{h}\ }$
for some constant C where the exponent h is slightly less than 0.6 in most basins. h varies slightly from region to region and slightly decreases for larger basins (>8,000 mi², or 20,720 km²). A theoretical value h = 4/7 ≈ 0.571 for the exponent has been derived (Birnir, 2008).
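A minimal illustration of applying the relation (the prefactor C below is an arbitrary placeholder, consistent units are assumed, and h = 0.57 is a typical value in the range quoted above):

```python
C, h = 1.4, 0.57          # C: assumed placeholder prefactor; h: typical empirical exponent

def hack_length(area):
    """Longest-stream length predicted by Hack's law, L = C * A**h (consistent units assumed)."""
    return C * area ** h

for A in (10.0, 1_000.0, 100_000.0):      # basin areas
    print(f"A = {A:9.0f}  ->  L ≈ {hack_length(A):8.1f}")
```

Doubling the basin area multiplies the predicted stream length by about 2^0.57 ≈ 1.5, which is the scaling the exponent encodes.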
The law is named after American geomorphologist John Tilton Hack.
## References
• Birnir, B., 2008, "Turbulent rivers", Quart. Appl. Math., 66, 3, pp. 565–594.
• Hack, J., 1957, "Studies of longitudinal stream profiles in Virginia and Maryland", U.S. Geological Survey Professional Paper, 294-B.
• Rigon, R., et al., 1996, "On Hack's law" Water Resources Research, 32, 11, pp. 3367–3374.
• Willemin, J.H., 2000, "Hack’s law: Sinuosity, convexity, elongation". Water Resources Research, 36, 11, pp. 3365–3374.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6766631007194519, "perplexity": 4275.27140089952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891976.74/warc/CC-MAIN-20180123131643-20180123151643-00308.warc.gz"}
|
https://rd.springer.com/chapter/10.1007/978-1-4613-8156-3_5
|
Fourier Series pp 234-276
# Lacunary Fourier Series
• R. E. Edwards
Part of the Graduate Texts in Mathematics book series (GTM, volume 85)
## Abstract
As the name suggests, a lacunary trigonometric series is, roughly speaking, a trigonometric series $$\sum\nolimits_{n \in \mathbb{Z}} c_n e^{inx}$$ in which $c_n = 0$ for all integers n save perhaps those belonging to a relatively sparse subset E of Z. Examples of such series have appeared momentarily in Exercises 5.6 and 6.13. Indeed for the Cantor group ℒ, the good behaviour of a lacunary Walsh-Fourier series
$$\sum\limits_{\zeta \in \mathcal{R}} c_{\zeta} \zeta$$
(whose coefficients vanish outside the subset ℛ of ℒ^) has already been noted: by Exercise 14.9, if the lacunary series belongs to C(ℒ) then it belongs to A(ℒ); and, by 14.2.1, if it belongs to $L^p(\mathcal{L})$ for some p > 0, then it also belongs to $L^q(\mathcal{L})$ for q ∈ [p, ∞]. In this chapter we shall be mainly concerned with lacunary Fourier series on the circle group and will deal more systematically with some (though by no means all) aspects of their curious behaviour.
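For concreteness, a small sketch of a series of this type (an illustrative choice, not an example from the chapter): coefficients supported on the Hadamard-gap set $E = \{2^k\}$ with absolutely summable $c_n$.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 2000)
E = 2 ** np.arange(12)          # sparse frequency set {1, 2, 4, ..., 2048}
c = 0.5 ** np.arange(12)        # coefficients, zero off E and absolutely summable
f = sum(cn * np.exp(1j * n * x) for cn, n in zip(c, E))   # partial sum of sum_{n in E} c_n e^{inx}
print(f"max |f| on the grid ≈ {np.abs(f).max():.3f}")
```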
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9546939134597778, "perplexity": 1049.3572039894166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812579.21/warc/CC-MAIN-20180219091902-20180219111902-00574.warc.gz"}
|
http://aleph.se/andart2/author/admin/
|
# Popper vs. Macrohistory: what can we really say about the long-term future?
Talk I gave at the Oxford Karl Popper Society:
The quick summary: Physical eschatology, futures studies and macrohistory try to talk about the long-term future in different ways. Karl Popper launched a broadside against historicism, the approach to the social sciences which assumes that historical prediction is their principal aim. While the main target was the historicism supporting socialism and fascism, the critique has also scared away many from looking at the future – a serious problem for making the social sciences useful. In the talk I look at various aspects of Popper’s critique and how damaging they are. Some parts are fairly unproblematic because they demand too high precision or determinism, and can be circumvented by using a more Bayesian approach. His main point about knowledge growth making the future impossible to determine still stands and is a major restriction on what we can say – yet there are some ways to reason about the future even with this restriction. The lack of ergodicity of history may be a new problem to recognize: we should not think it would repeat if we re-run it. That does not rule out local patterns, but the overall endpoint appears random… or perhaps selectable. Except that doing it may turn out to be very, very hard.
My main conclusions are that longtermist views like most Effective Altruism are not affected much by the indeterminacy of Popper’s critique (or the non-ergodicity issue); here the big important issue is how much we can affect the future. That seems to be an open question, well worth pursuing. Macrohistory may be set for a comeback, especially if new methodologies in experimental history, big data history, or even Popper’s own “technological social science” were developed. That one cannot reach certitude does not prevent relevant and reliable (enough) input to decisions in some domains. Knowing which domains those are is another key research issue. In futures studies the critique is largely internalized by now, but it might be worth telling other disciplines about it. To me the most intriguing conclusion is that physical eschatology needs to take the action of intelligent life into account – and that means accepting some pretty far-reaching indeterminacy and non-ergodicity on vast scales.
# What is going on in the world?
Inspired by Katja Grace’s list, what do I think are the big narratives, the “plots” that describe a lot of what is going on? Here is a list I hacked together after breakfast.
These are not trends. They are not predictions. They are stories one can tell about what has been happening and what may happen, directing attention towards different domains. Some make for better stories than others. Some urge us to action, others just reflection.
# Earth
• The dance between gravity and entropy: initial high-homogeneity state of universe turns lumpy, producing energy release that drives non-equilibrium processes. Entropy tries to smear out differences, driving further non-equilibria until in the very long run it “wins” through the most convoluted evolution imaginable.
• The stelliferous era going from the wild galaxy forming youth to the current staid adulthood with peak star formation behind it, peak star number just ahead, and a long middle and old age dominated by red dwarf ellipticals separated by accelerating expansion.
• A phase transition of molecular matter from non-living forms into living and technological forms, expanding outwards from a small nucleation event.
• A species with modest intelligence, language and tech ability getting enough of each to make itself the dominant biological and geological force on the planet in a biologically short time. Transitioning from a part of nature to defining nature, making its future evolution contingent on cultural decisions.
• Humans causing a complex biotic crisis, unique in that the key species involved has become aware of what it is doing.
• Transition from a biosphere to a cyborg biosphere, where the technosphere is inseparable from the biological and geophysical systems.
• The transition from scarcity economics to post-scarcity economics.
# Water
• Humanity having been thrust straight into a globalized world, in some cases going from a tribal society to membership of the global village in a single generation. Coming to terms with the close presence of a vast diversity of nearly any human property causes turbulent transients.
• The crisis of academia: challenged by new competitors, hemmed in by old structures and sources of money, trying to be the key source or gatekeeper of intellectual capital in society.
• Rise of the global middle class: much of the world is far more “middle class” than many think. As long as the middle class experiences a rise in living standards or feels that things are getting better, it tends to accept whatever government it has.
• Rise of the tech billionaires: unlike traditional trade-billionaires not beholden to the standard structure of society, but interested in disruptive new possibilities they can drive. This also goes for non-monetary counterparts (e.g. influence “billionaires”). Individual accumulation of wealth has become much more able to generate individual, idiosyncratic projects.
• Humanity dealing with attentional heavy tails: traditionally only celebrities had to deal with massive attention, now it can happen to anybody who goes viral (for positive or negative reasons). Humans generally do not handle super-attention well.
• Split between “somewhere” and “anywhere” people – globalized cosmopolitanism(s) versus nationalist rootedness.
• Urbanisation shifting: from a steady urbanisation driven by the economics of scale of cities to new styles. Remote working driving at least temporary exurbanisation, forcing accommodations in tax-base losing cities, culturally changing small locations, and people inventing new social structure. Clusters may start forming in new places, or placeless clusters may finally start emerging.
• Multigenerational society: longer healthy lives mean more generations alive at same time, their different experiences clashing and interacting. Demographic winter drives “beanpole” families with few members per generation but many generations. Time horizons widening due to generations, life extension, better storage, environmental concerns.
• Controlling the means of production used to be controlling farmers, then factories, now infrastructure. Liberation struggles about infrastructure – platforms, right to repair, open source, micromanufacture, biotechnology, nanotechnology.
• Broadcasting media created shared cultural and event touchstones, first nationally and then internationally. Network media reduces this impact, making it more subtle for any nation to exert cultural power. Covid-19 may have been the last truly global event.
• First lifetime servitude to a lord, then a lifetime career, then fluid consulting and gig work. Since the average human lifetime is considerably longer than the average corporate lifetime, secure positions require either a lord or long-lived institutions (universities, states, cities).
• The sexual rights liberations continue (first women, sex in general, homosexuality, trans rights, then furries and technosexuals) while new taboos and puritanisms emerge to channel them into approved forms (debates on prostitution, deepfakes, sexbots…) and make forbidden activities more delightful.
• Virginia Postrel’s struggle between dynamists (the future is positive, let’s try a diversity of approaches) and stasists (the future is dark and risky, either need reactionary policies or technocracy) playing out across different domains, producing odd bedfellows.
• The end of the War on Drugs and the start of the Trade Wars on Drugs.
• Maturation of the social media ecosystem: it takes about 20 years to integrate new technologies into society. Changes to how traditional social media function are now resisted by their integration into everyday life and institutions, making them less likely to occur. New social media arrive from time to time and will overlay old media.
• Emergence and death of the lifelong subculture in the broadcast age, followed by the lightweight fandoms in the network age.
• The battle between traditional small scale degrowth environmentalism and pro-tech “bright greens”.
• The gradual shift from survival-linked values towards self-expression values, and possibly from traditional values to secular-rational values (the latter scale can move far more erratically in response to many factors, and could evolve into new traditionalities if new ideologies became available).
• The rise of Effective Anything: data-driven, consequentialist analysis pushes for improvement in many domains – and runs straight into vested interests, nonconsequentialist mores, and that “activity X is not about X”. Evidence based medicine case in point. Can lead to shallow A/B testing of minor options and policymaking set by focus group, or actually optimizing important domains.
• Shifting from a unipolar to multipolar world. Competition between three (?) partial world systems, actually just jockeying for being the central part of the global world system that has emerged. Nobody wants to be periphery, just watch Russia.
• A shift from society-building ideologies towards a post-ideological risk society. Unhappiness with lack of progress and risk-taking building leads to counter-movements, some that may become real ideologies.
• Corrosive cynicism undermines any mainstream project attempting to build something, including fixing the cynicism.
• Systemic risk growth leads to deliberate modularisation of systems (economy, tech, food); only maintainable as long as people remember the last time everything crashed.
• Rise of religious fundamentalism as reaction to lagging behind, complicated by emergence of integrated and secularized second generation emigrants. Managing intra-family ideological range in a connected world becomes ever more important.
• The rise of high-tech totalitarian states that may not just lock in their citizens but their leadership. The Algorithm may control the Party.
• Capitalism getting destroyed by its internal contradictions and reinventing itself, as always.
# Air
• Shift from a text-dominant world to a multimedia world. Influence and status no longer necessarily accrue to masters of text. Yet search, translation, and curation methods lag and may become AI dependent.
• The new AI winter when deep learning hype does not immediately deliver everything, followed by a world where at least perception-based jobs are heavily impacted, and possibly many high-prestige and importance domains like design, engineering, planning…
• The shift from directly experienced reality to mediated reality.
• Molecular manufacturing going from science fiction to serious futuristic research program, hi-jacking by material science and chemistry, resurgence using protein engineering.
• A world of rising biotechnological opportunity, risk and knowledge. The biohacker becomes the public hero/villain, while the actual biotechnological institutions grow in power.
• 70s/80s home computer era seeds a generation comfortable with computers, enabling 90s/00s internet revolution. Wearables, quantified self, neurohacking in 20s seeds enhancement revolution in 40s?
• Information limits on society radically changed by information technology, with institutions lagging behind. This drives a challenge of explosive change in epistemic systems (how to achieve filtering, authority, trust, etc.), challenges to political institutions (many new forms made possible but not yet invented, tried and tested), as well as personal epistemics (information virtues and habits, new forms of awareness and social links, …)
• Diagnosis overshoots treatment: better instruments, statistics and AI make detecting many states and situations easier, but do not necessarily help fix them. This leads to a situation where everybody and everything is diagnosed.
• Identity technology makes everybody and everything identifiable – automatically, remotely, at any time, any purpose.
• Moore’s law shifting from serial processor performance to parallel performance driving shift in how code works (advantaging things like deep learning, data science, graphics over serial tasks). Next shifts may be towards energy efficiency (may trigger/be triggered by another new device class arriving a la Bell’s law), and/or 3D structures (favouring concentrated computation).
• The replication crisis in parts of science leading to new methodological orthodoxies, possibly creating better evidence but impairing crisis decision-making and innovation.
• The shift of energy use from fossil to renewable, linked to shifts in energy infrastructure (transmission, storage, centralization), world politics (loss of importance of middle east), and new forms of problems (fluctuations and instabilities).
• Technology potentially makes the world as mercurial as the software world, making cultural and institutional constraints now become the main issue.
• Scientific and technological stagnation as low-hanging fruits are picked and exponentially growing resources needed to make linear progress.
• Scientific and technological singularity as tools for making progress improve, feeding back on the process and leading to an intelligence explosion (or a capability swell across society).
# Fire
• Existential risk as a way of framing global problems, competing with the old rights based approach. Human security versus national security versus global security.
• Enlightenment modernists and conservative religious people jointly battling their mutual enemy postmodernism.
• The moral circles of concern continue to broaden – other tribes, other nations, other races, genders, species, complex systems…
• Unbundling of key human concepts. Love, sex, reproduction and social roles split apart due to contraception, IVF, online dating etc. Illness, death, pain, ageing and bodies unbundles due to medicine, analgesics, cryonics, ageing interventions, brain emulation… As they unbundle they become legible and subject to individual and social decisions.
• The search for extraterrestrial intelligence going from pointless (since it is obvious that planets inhabited) to pointless (since they appear dead) to possible (radio telescopes, big galaxy) to exotic and disquieting (exoplanets, Dysonian SETI, great filter considerations) – the choice used to be between loneliness and little green men, but now might include threatening emptiness, postbiology and weirder things.
• Transition from a mythical worldview (causes follow narrative) to a scientific worldview (causes follow regular, universal rules) to a systems worldview (causes can be complex and entangled, some domains more scientific, some more mythical).
• A transformation of the world from highly constrained to underconstrained (possibly returning to externally or internally imposed constraints in the future, or expanding into something radically underconstrained).
• Business as usual: every era has been crucial to its inhabitants, growth and change are perennial, problems are solved or superseded in ways that gives rise to new problems. History does not end.
• The “hinge of history” where choices, path-dependencies and accidents may set long-term trajectories.
# A small step for machinekind?
(Originally published at https://qz.com/1666726/we-should-stop-sending-humans-into-space/ with a title that doesn’t quite fit my claim)
50 years ago humans left their footprints on the moon. We have left footprints on Earth for millions of years, but the moon is the only other body with human footprints.
Yet there are track marks on Mars. There is a deliberately made crater on the comet 9P/Tempel. There are landers on the Moon, Venus, Mars, Titan, the asteroid Eros, and the comet Churyumov–Gerasimenko. Not to mention a number of probes of varying levels of function across and outside the solar system.
As people say, Mars is the only planet in the solar system solely inhabited by robots. In 50 years, will there be a human on Mars… or just even more robots?
There are of course entirely normal reasons to go to space – communication satellites, GPS, espionage, ICBMs – and massive scientific reasons. But were they the only reasons to explore space it would be about as glorious as marine exploration. Worth spending taxpayer and private money on, but hardly to the extent we have done it.
Space is inconceivably harsher than any terrestrial environment, but also fundamentally different. It is vast beyond imagination. It contains things that have no counterpart on Earth. In many ways it has replaced our supernatural realms and gods with a futuristic realm of exotic planets and – maybe – extra-terrestrial life and intelligence. It is fundamentally The Future.
Again, there are good objective reasons for this focus. In the long run we are not going to survive as a species if we are not distributed across different biospheres or can leave this one when the sun turns a red giant.
Is space a suitable place for a squishy species?
Humans are adapted to a narrow range of conditions. A bit too much or too little pressure, oxygen, water, temperature, radiation or acceleration and we die. In fact, most of the Earth’s surface is largely uninhabitable unless we surround ourselves with protective clothing and technology. In going to space we not only need to bring a controlled environment with us, hermit-crab style, but we need to function in conditions we have not evolved for at all. All our ancestors lived with gravity. All our ancestors had reflexes and intuitions that were adequate for Earth’s environment. But this means that our reflexes and intuitions are likely to be wrong in deadly ways without extensive retraining.
Meanwhile, robots can be designed not to require life support, to have reactions suited to the space environment, and to avoid the whole mortality thing. Current robotic explorers are rare and hence extremely expensive, motivating endless pre-mission modelling and careful actions. But robotics is becoming cheaper and more adaptable, and if space access becomes cheaper we should expect a more ruthless use of robots. Machine learning allows robots to learn from their experiences, and if a body breaks down or is lost another copy of the latest robot software can be downloaded.
Our relations to robots and artificial intelligence are complicated. For time immemorial we have imagined making artificial servants or artificial minds, yet such ideas invariably become mirrors for ourselves. When we consider the possibility we begin to think about humanity’s place in the world (if man was made in God’s image, whose image is the robot?), our own failings (endless stories about unwise creators and rebellious machines), mysteries about what we are (what is intelligence, consciousness, emotions, dignity…?) When trying to build them we have learned that tasks that are simple for a 5-year old are hard to do while tasks that stump PhDs can be done easily, that our concepts of ethics may be in for a very practical stress test in the near future…
In space robots have so far not been seen as very threatening. Few astronauts have worried about their job security. Instead people seem to adopt their favourite space probes and rovers, becoming sentimental about their fate.
(Full disclosure: I did not weep for the end of Opportunity, but I did shed a tear for Cassini)
What kind of exploration do we wish for?
So, should we leave space to tele-operated or autonomous robots reporting back their findings for our thrills and education while patiently building useful installations for our benefit?
My thesis is: we want to explore space. Space is unsuitable for humans. Robots and telepresence may be better for exploration. Yet what we want is not just exploration in the thin sense of knowing stuff. We want exploration in the thick sense of being there.
There is a reason MarsOne got volunteers despite planning a one-way trip to Mars. There is a reason we keep astronauts at fabulous expense on the ISS doing experiments (besides the fact that their medical state is, in a sense, the most valuable experiment): getting glimpses of our planet from above and touching the fringe of the Overview Effect is actually healthy for our culture.
Were we only interested in the utilitarian and scientific use of space we would be happy to automate it. The value from having people be present is deeper: it is aspirational, not just in the sense that maybe one day we or our grandchildren could go there but in the sense that at least some humans are present in the higher spheres. It literally represents the “giant leap for humanity” Neil Armstrong referred to.
A sceptic may wonder if it is worth it. But humanity seldom performs grand projects based on a practical utility calculation. Maybe it should. But the benefits of building giant telescopes, particle accelerators, the early Internet, or cathedrals were never objective and clear. A saner species might not perform these projects and would also abstain from countless vanity projects, white elephants and overinvestments, saving a great deal of resources for other useful things… yet this species would likely never have discovered much astronomy or physics, the peculiarities of masonry, or the management of internetworks. It might well have far slower technological advancement, becoming poorer in the long run despite the reasonableness of its actions.
This is why so many are unenthusiastic about robotic exploration. We merely send tools when we want to send heroes.
Maybe future telepresence will be so excellent that we can feel and smell the Martian environment through our robots, but as evidenced by the queues in front of the Mona Lisa or towards the top of Mount Everest we put a premium on authenticity. Not just because it is rare and expensive but because we often think it is worthwhile.
As artificial intelligence advances those tools may become more like us, but it will always be a hard sell to argue that they represent us in the same way a human would. I can imagine future AI having just as vivid or even better awareness of its environment than we could, and in a sense being a better explorer. But to many people this would not be a human exploring space, just another (human-made) species exploring space: it is not us. I think this might be a mistake if the AI actually is a proper continuation of our species in terms of culture, perception, and values but I have no doubt this will be a hard sell.
What kind of settlement do we wish for?
We may also want to go to space to settle it. If we could get it prepared by automation, that is great.
While exploration is about establishing a human presence, relating to an environment from the peculiar human perspective of the world and maybe having the perspective changed, settlement is about making a home. By its nature it involves changing the environment into a human environment.
A common fear in science fiction and environmental literature is that humans would transform everything into more of the same: a suburbia among the stars. Against this another vision is contrasted: to adapt and allow the alien to change us to a suitable extent. Utopian visions of living in space not only deal with the instrumental freedom of a post-scarcity environment but the hope that new forms of culture can thrive in the radically different environment.
Some fear/hope we may have to become cyborgs to do it. Again, there is the issue of who “we” are. Are we talking about us personally, humanity-as-we-know-it, transhumanity, or the extension of humanity in the form of our robotic mind children? We might have some profound disagreements about this. But to adapt to space we will likely have to adapt more than ever before as a species, and that will include technological changes to our lifestyle, bodies and minds that will call into question who we are on an even more intimate level than the mirror of robotics.
A small step
If a time traveller told me that in 50 years’ time only robots had visited the moon, I would be disappointed. It might be the rational thing to do, but it shows a lack of drive on behalf of our species that would be frankly worrying – we need to get out of our planetary cradle.
If the time traveller told me that in 50 years’ time humans but no robots had visited the moon, I would also be disappointed. Because that implies that we either fail to develop automation to be useful – a vast loss of technological potential – or that we make space to be all about showing off our emotions rather than a place we are serious about learning, visiting and inhabiting.
# Obligatory Covid-19 blogging
Over at Practical Ethics I have blogged a bit:
The Unilateralist Curse and Covid-19, or Why You Should Stay Home: why we are in a unilateralist curse situation in regards to staying home, making it rational to stay home even when it seems irrational.
Taleb and Norman had a short letter Ethics of Precaution: Individual and Systemic Risk making a similar point, noting that recognizing the situation type and taking contagion dynamics into account is a reason to be more cautious. It is different from our version in the effect of individual action: we had a single actor causing the full consequences, while the letter has exponential scale-up. Also, way more actors: everyone rather than just epistemic peers, and incentives that are not aligned since actors do not bear the full costs of their actions. The problem is finding strategies robust to stupid, selfish actors. Institutional channeling of collective rationality and coordination is likely the only way for robustness here.
Never again – will we make Covid-19 a warning shot or a dud? deals with the fact that we are surprisingly good at forgetting harsh lessons (1918, 1962, Y2K, 2006…), giving us a moral duty to try to ensure appropriate collective memory of what needs to be recalled.
This is why Our World In Data, the Oxford COVID-19 Government Response Tracker and IMF’s policy responses to Covid-19 are so important. The disjointed international responses act as natural experiments that will tell us important things about best practices and the strengths/weaknesses of various strategies.
# When the inverse square stops working
In physics inverse square forces are among the most reliable things. You can trust that electric and gravitational fields from monopole charges decay like $1/r^2$. Sure, dipoles and multipoles may add higher order terms, and extended conductors like wires and planes produce other behaviour. But most of us think we can trust the $1/r^2$ behaviour for spherical objects.
I was surprised to learn that this is not at all true recently when a question at the Astronomy Stack Exchange asked about whether gravity changes near the surface of dense objects.
Electromagnetism does not quite obey the inverse square law
The cause was this paper by John Lekner, which showed that there can be attraction between conductive spheres even when they have the same charge! (Popular summary in Nature by Philip Ball.) The “trick” here is that when the charged spheres approach each other the charges on the surface redistribute themselves, which leads to a polarization. Near the other sphere charges are pushed away, and if one sphere has a different radius from the other the “image charge” can be opposite, leading to a net attraction.
The formulas in the paper are fairly non-intuitive, so I decided to make an approximate numeric model. I put 500 charges on two spheres (radius 1.0 and 2.0) and calculated the mutual electrostatic repulsion/attraction moving them along the surface. Iterate until it stabilizes, then calculate the overall force on one of the spheres.
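A minimal sketch of this kind of relaxation (the charge count, step size and iteration count are rough illustrative choices, not the exact settings behind the figures):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_points(n, center, radius):
    """Random starting positions on a sphere's surface."""
    v = rng.normal(size=(n, 3))
    return center + radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def net_force(dist, n=200, rA=1.0, rB=2.0, QA=1.0, QB=1.0, steps=1500, eta=10.0):
    """Relax n point charges on each sphere under mutual Coulomb forces (Coulomb constant = 1),
    then return the x-component of the total force on sphere A; negative means pushed away from B."""
    cA, cB = np.zeros(3), np.array([dist, 0.0, 0.0])
    pos = np.vstack((sphere_points(n, cA, rA), sphere_points(n, cB, rB)))
    q = np.concatenate((np.full(n, QA / n), np.full(n, QB / n)))
    for _ in range(steps):
        d = pos[:, None, :] - pos[None, :, :]
        r2 = np.sum(d ** 2, axis=2) + np.eye(2 * n)               # eye() keeps the diagonal finite
        f = ((q[:, None] * q[None, :])[..., None] * d / r2[..., None] ** 1.5).sum(axis=1)
        pos = pos + np.clip(eta * f, -0.05, 0.05)                 # crude, hand-tuned step
        # project the charges back onto their spheres
        pos[:n] = cA + rA * (pos[:n] - cA) / np.linalg.norm(pos[:n] - cA, axis=1, keepdims=True)
        pos[n:] = cB + rB * (pos[n:] - cB) / np.linalg.norm(pos[n:] - cB, axis=1, keepdims=True)
    d = pos[:n, None, :] - pos[None, n:, :]                       # force on A's charges from B's
    r2 = np.sum(d ** 2, axis=2)
    F = ((q[:n, None] * q[None, n:])[..., None] * d / r2[..., None] ** 1.5).sum(axis=(0, 1))
    return F[0]

for dist in (20.0, 10.0, 5.0, 3.5):
    F = net_force(dist)
    print(f"d = {dist:5.1f}   F_x = {F:+.3e}   F_x * d^2 = {F * dist ** 2:+.4f}")
```

The quantity to watch is F·d²: it should be essentially constant at large separations and start to drift once the gap between the surfaces becomes comparable to the radii.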
The result is indeed that the $1/r^2$ law fails as the spheres approach each other. The force times squared distance is constant until they get within a few radii, and then the equally charged sphere begins to experience less repulsion and the oppositely charged spheres more attraction than expected. My numerical method is too sloppy to do a really good job of modelling the near-touching phenomena, but it is enough to show that the inverse square law is not true for conductors that are close enough.
Gravity doesn’t obey the inverse square law either
My answer to the question was along the same lines: if two spherical bodies get close to each other they are going to deform, and this will also affect the force between them. In this case it is not charges moving on the surface, but rather gravitational and tidal distortion turning them into ellipsoids. Strictly speaking, they will turn into more general teardrop shapes, but we can use the ellipsoid as an approximation. If they have fixed centres of mass they will be prolate ellipsoids, while if they are orbiting each other they would be general three-axis ellipsoids.
Calculating the gravitational field of an ellipsoid has been done and has a somewhat elegant answer that unfortunately needs to be expressed in terms of special functions. The gravitational potential in the system is just the sum of the potentials from both ellipsoids. The equilibrium shapes would correspond to ellipsoids with the same potential along their entire surface; maybe there is an explicit solution, but it does look likely to be an algebraic mess of special functions.
I did a numeric exploration instead. To find the shape I started with spheres and adjusted the semi-major axis (while preserving volume) so the potential along the surface became more equal at the poles. After a few iterations this gives a self-consistent shape. Then I calculated the force (the derivative of the potential) due to this shape on the other mass.
The result is indeed that the force increases faster than $1/r^2$ as the bodies approach each other, since they elongate and eventually merge (a bit before this they will deviate from my ellipsoidal assumption).
This was the Newtonian case. General relativity is even messier. In intense gravitational fields space-time is curved and expanded, making even the meaning of the distance in the inverse square law problematic. For black holes the Paczyński–Wiita potential is an approximation of the potential, and it deviates from the $U(r)=-GM/r$ potential as $U_{PW}(r)=-GM/(r-R_S)$ (where $R_S$ is the Schwarzschild radius). It makes the force increase faster than in the classical case as we approach $r=R_S$.
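Differentiating the two potentials gives the force ratio $F_{PW}/F_{N} = r^2/(r-R_S)^2$; a two-line check of how quickly that blows up near the horizon:

```python
# Ratio of the Paczynski-Wiita force to the Newtonian force, (r / (r - R_S))^2:
for r in (10.0, 3.0, 2.0, 1.5, 1.1):      # r in units of the Schwarzschild radius R_S
    print(f"r = {r:4.1f} R_S   F_PW / F_Newton = {(r / (r - 1.0)) ** 2:7.2f}")
```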
Normally we assume that charges and masses stay where they are supposed to be, just as we prefer to reason as if objects are perfectly rigid or well described by point masses. In many situations this stops being true and then the effective forces can shift as the objects and their charges shift around.
# What is the smallest positive integer that will never be used?
This is a bit like the “what is the smallest uninteresting number?” paradox, but not paradoxical: we do not have to name it (and hence make it thought about/interesting) to reason about it.
I will first give a somewhat rough probabilistic bound, and then a much easier argument for the scale of this number. TL;DR: the number is likely smaller than $10^{173}$.
# Probabilistic bound
If we look at $k$ number uses with frequencies $N(x)$, the normalized frequencies approach some probability distribution $p(x)$. To simplify things we assume $p(x)$ is a decreasing function of $x$; this is not strictly true (see below) but likely good enough.
If we denote the cumulative distribution function by $P(x)=\Pr[X<x]$ we can use the $k$:th order statistic to calculate the distribution of the maximum of the numbers: $F_{(k)}(x) = [P(x)]^{k}$ (the maximum is below $x$ precisely when all $k$ draws are). We are interested in the point where it becomes likely that the number $x$ has not come up despite the trials, which is somewhere above the median of the maximum: $F_{(k)}(x^*)=1/2$.
What shape does $p(x)$ have? (Dorogovtsev, Mendes, Oliveira 2005) investigated numbers online and found a complex, non-monotonic shape. Obviously dates close to the present are overrepresented, as are prices (ending in .99 or .95), postal codes and other patterns. Numbers in exponential notation stretch very far up. But mentions of explicit numbers generally tend to follow $N(x)\sim 1/\sqrt{x}$, a very flat power-law.
So if we have $k$ uses we should expect roughly $x < k^2$, since much larger $x$ are unlikely to occur even once in the sample. We can hence normalize to get $p(x)=\frac{1}{2(k-1)}\frac{1}{\sqrt{x}}$. This gives us $P(x)=(\sqrt{x}-1)/(k-1)$, and hence $F_{(k)}(x)=[(\sqrt{x}-1)/(k-1)]^k$. The median of the maximum becomes $x^* = ((k-1)2^{-1/k}+1)^2 \approx k^2 - 2k \ln(2)$. We are hence not entirely bumping into the $k^2$ ceiling, but we are close – a more careful argument is needed to take care of this.
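A quick numerical evaluation of the median-of-the-maximum formula, just plugging in a few values of $k$:

k = [1e6 1e9 1e12 1e19];                % number of number-uses
xstar = ((k-1).*2.^(-1./k) + 1).^2;     % median of the maximum
approx = k.^2 - 2*k*log(2);             % asymptotic form quoted above
fprintf('%8.0e  %10.3e  %10.3e\n', [k; xstar; approx]);

For $k=10^{12}$ this lands at about $10^{24}$, matching the estimate below.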
So, how large is $k$ today? Dorogovtsev et al. had on the order of $k=10^{12}$, but that was just searchable WWW pages back in 2005. Even that count includes numbers that no human ever considered, since many are auto-generated. Still, guessing $x^* \approx 10^{24}$ is likely not too crazy: by this argument, there are likely 24-digit numbers that nobody ever considered.
# Consider a number…
Another approach is to assume each human considers a number about once a minute throughout their lifetime (clearly an overestimate given childhood, sleep, innumeracy etc., but we are mostly interested in orders of magnitude and an upper bound anyway), which we happily assume to be about a century, giving a personal $k$ of about $10^{8}$ over a life. There have been about 100 billion people, so humanity has at most considered $10^{19}$ numbers. Using my formula above, this would give an estimate of $x^* \approx 10^{38}$.
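Spelled out: a century is about $100 \times 365 \times 24 \times 60 \approx 5 \times 10^7$ minutes, rounded up to $10^8$; multiplying by roughly $10^{11}$ people gives $k \approx 10^{19}$, and squaring it per the formula above gives the $10^{38}$.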
But that assumes “random” numbers, and is a very loose upper bound, merely describing a “typical” small unconsidered number. Were we to systematically think through the numbers from 1 and onward we would have the much lower $x^* \approx 10^{19}$. Just 19 digits!
One can refine this a bit: if we have time $T$ and generate new numbers at a rate $r$ per second, then $k=rT$ and we will at most get $k$ numbers. Hence the smallest number never considered has to be at most $k+1$.
Seth Lloyd estimated that the observable universe cannot have performed more than $10^{120}$ operations on $10^{90}$ bits. If each of those operations was a consideration of a number we get a bound on the first unconsidered number as $<10^{120}$.
This can be used to consider the future too. Computation of our kind can continue until proton decay in $\sim 10^{36}$ years or so, giving a bound of $10^{173}$ if we use Lloyd’s formula. That one uses the entire observable universe; if we instead consider our own future light cone the number is going to be much smaller.
But the conclusion is clear: if you write a 173 digit number with no particular pattern of digits (a bit more than two standard lines of typing), it is very likely that this number would never have shown up across the history of the entire universe except for your action. And there is a smaller number that nobody – no human, no alien, no posthuman galaxy brain in the far future – will ever consider.
# Newtonmas fractals: rose of gravity
Continuing my intermittent Newtonmas fractal tradition (2014, 2016, 2018), today I play around with a very suitable fractal based on gravity.
The problem
On Physics StackExchange NiveaNutella asked a simple yet tricky-to-answer question:
If we have two unmoving equal point masses in the plane (let’s say at $(\pm 1,0)$) and release particles from different locations they will swing around the masses in some trajectory. If we colour each point by the mass it approaches closest (or even collides with) we get a basin of attraction for each mass. Can one prove the boundary is a straight line?
User Kasper showed that one can reframe the problem in terms of elliptic coordinates and show that this implies a straight boundary, while User Lineage showed it more simply using the second constant of motion. I have the feeling that there ought to be an even simpler argument. Still, Kasper’s solution shows that the generic trajectory will quasiperiodically fill a region and tend to come arbitrarily close to one of the masses.
The fractal
In any case, here is a plot of the basins of attraction shaded by the time until getting within a small radius $r_{trap}$ around the masses. Dark regions take long to approach any of the masses, white regions don’t converge within a given cut-off time.
The boundary is a straight line, and surrounding the simple regions where orbits fall nearly straight into the nearest mass are the wilder regions where orbits first rock back and forth across the x-axis before settling into ellipses around the masses.
The case for 5 evenly spaced masses for $r_{trap}=0.1$ and 0.01 (assuming unit masses at unit distance from origin and $G=1$) is somewhat prettier.
As $r_{trap}\rightarrow 0$ the basins approach ellipses around their central mass, corresponding to orbits that loop around them in elliptic orbits that eventually get close enough to count as a hit. The onion-like shading is due to the different numbers of orbits before this happens. Each basin also has a tail or stem, corresponding to plunging orbits coming in from afar and hitting the mass straight on. As the trap condition is made stricter they become thinner and thinner, yet form an ever more intricate chaotic web outside the central region. Due to computational limitations (read: only a laptop available) these pictures are of relatively modest integration times.
I cannot claim credit for this fractal, as NiveaNutella already plotted it. But it still fascinates me.
Wada basins and mille-feuille collision manifolds
These patterns are somewhat reminiscent of the classic fractals from Newton’s root-finding iteration: several basins of attraction with a fractal border where pairs of basins encounter interleaved tiny parts of basins that are not members of the pair.
However, this dynamics is continuous rather than discrete. The plane is a 2D section through a 4D phase space, where starting points at zero velocity accelerate so that they bob up and down/ana and kata along the velocity axes. This also leads to a neat property of the basins of attraction: each is an arc-connected set, since any two member points start trajectories that end up in a small ball around the attractor mass, and one can hence construct a continuous map from $[0,1]$ to $(x,y,\dot{x},\dot{y})$ connecting them. There are hence just N basins of attraction, plus a set of unstable separatrix points that never approach the masses. Some of these border points are just invariant (like the origin in the case of the evenly distributed masses), others correspond to unstable orbits.
Each mass is surrounded by a set of trajectories hitting it exactly, which we can parametrize by the angle they make and the speed they have inwards when they pass some circle around the mass point. They hence form a 3D manifold $\theta \times v \times t$ where $t\in (0,\infty)$ counts the time until collision (i.e. backwards). These collision manifolds must extend through the basin of attraction, approaching the border in ever more convoluted ways as $t$ approaches $\infty$. Each border point has a neighbourhood where there are infinitely many trajectories directly hitting one of the masses. These 3D sheets get stacked like an infinitely dense mille-feuille cake in the 4D phase space. And typically these sheets are interleaved with the sheets of the other attractors. The end result is very much like the Lakes of Wada. Proving the boundary actually has the Wada property is tricky, although new methods look promising.
The magnetic pendulum
This fractal is similar to one I made back in 1990 inspired by the dynamics of the magnetic decision-making desk toy, a pendulum oscillating above a number of magnets. Eventually it settles over one. The basic dynamics is fairly similar (see Zhampres’ beautiful images or this great treatment). The difference is that the gravity fractal has no dissipation: in principle orbits can continue forever (but I end them when they get close to the masses or after a timeout), and in the magnetic fractal the force dependency was bounded, a $K/(r^2 + c)$ force rather than $G/r^2$.
That simulation was part of my epic third year project in the gymnasium. The topic was “Chaos and self-organisation”, and I spent a lot of time reading the dynamical systems literature, running computer simulations, struggling with WordPerfect’s equation editor and producing a manuscript of about 150 pages that required careful photocopying by hand to get the pasted diagrams on separate pieces of paper to show up right. My teacher eventually sat down with me and went through my introduction and had me explain Poincaré sections. Then he promptly passed me. That was likely for the best for both of us.
Appendix: Matlab code
showPlot=0; % plot individual trajectories
randMass = 0; % place masses randomly rather than in circle
RTRAP=0.0001; % size of trap region
tmax=60; % max timesteps to run
S=1000; % resolution
x=linspace(-2,2,S);
y=linspace(-2,2,S);
[X,Y]=meshgrid(x,y);
N=5;
theta=(0:(N-1))*pi*2/N;
PX=cos(theta); PY=sin(theta);
if (randMass==1)
s = rng(3);
PX=randn(N,1); PY=randn(N,1);
end
clf
hit=X*0;
hitN = X*0; % attractor basin
hitT = X*0; % time until hit
closest = X*0+100;
closestN=closest; % closest mass to trajectory
tic; % measure time
for a=1:size(X,1)
disp(a)
for b=1:size(X,2)
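% integrate a trajectory starting at rest at (X(a,b), Y(a,b)); the event function stops it early once it enters the trap radius around any mass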
[t,u,te,ye,ie]=ode45(@(t,y) forceLaw(t,y,N,PX,PY), [0 tmax], [X(a,b) 0 Y(a,b) 0],odeset('Events',@(t,y) finishFun(t,y,N,PX,PY,RTRAP^2)));
if (showPlot==1)
plot(u(:,1),u(:,3),'-b')
hold on
end
if (~isempty(te))
hit(a,b)=1;
hitT(a,b)=te;
mind2=100^2;
for k=1:N
dx=ye(1)-PX(k);
dy=ye(3)-PY(k);
d2=(dx.^2+dy.^2);
if (d2<mind2) mind2=d2; hitN(a,b)=k; end
end
end
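% track the smallest distance to each mass along the whole trajectory; the nearest mass (closestN) is what the plot is coloured by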
for k=1:N
dx=u(:,1)-PX(k);
dy=u(:,3)-PY(k);
d2=min(dx.^2+dy.^2);
closest(a,b)=min(closest(a,b),sqrt(d2));
if (closest(a,b)==sqrt(d2)) closestN(a,b)=k; end
end
end
if (showPlot==1)
drawnow
pause
end
end
elapsedTime = toc
if (showPlot==0)
% Make colorful plot
co=hsv(N);
mag=sqrt(hitT);
mag=1-(mag-min(mag(:)))/(max(mag(:))-min(mag(:)));
im=zeros(S,S,3);
im(:,:,1)=interp1(1:N,co(:,1),closestN).*mag;
im(:,:,2)=interp1(1:N,co(:,2),closestN).*mag;
im(:,:,3)=interp1(1:N,co(:,3),closestN).*mag;
image(im)
end
% Gravity
function dudt = forceLaw(t,u,N,PX,PY)
%dudt = zeros(4,1);
dudt=u;
dudt(1) = u(2);
dudt(2) = 0;
dudt(3) = u(4);
dudt(4) = 0;
dx=u(1)-PX;
dy=u(3)-PY;
d=(dx.^2+dy.^2).^-1.5;
dudt(2)=dudt(2)-sum(dx.*d);
dudt(4)=dudt(4)-sum(dy.*d);
% for k=1:N
% dx=u(1)-PX(k);
% dy=u(3)-PY(k);
% d=(dx.^2+dy.^2).^-1.5;
% dudt(2)=dudt(2)-dx.*d;
% dudt(4)=dudt(4)-dy.*d;
% end
end
% Are we close enough to one of the masses?
function [value,isterminal,direction] =finishFun(t,u,N,PX,PY,r2)
value=1000;
for k=1:N
dx=u(1)-PX(k);
dy=u(3)-PY(k);
d2=(dx.^2+dy.^2);
value=min(value, d2-r2);
end
isterminal=1;
direction=0;
end
# Telescoping
Wednesday August 10 1960
Robert lit his pipe while William meticulously set the coordinates from the computer printout. “Want to bet?”
William did not look up from fine-tuning the dials and re-checking the flickering oscilloscope screen. “Five dollars that we get something.”
“’Something’ is not going to be enough to make Edward or the General happy. They want the future on film.”
“If we get more delays we can always just step out with a camera. We will be in the future already.”
“Yeah, and fired.”
“I doubt it. This is just a blue-sky project Ed had to try because John and Richard’s hare-brained one-electron idea caught the eye of the General. It will be like the nuclear mothball again. There, done. You can start it.”
Robert put the pipe in the ashtray and walked over to the Contraption controls. He noted down the time and settings in the log, then pressed the button. “Here we go.” The Contraption hummed for a second, the cameras clicked. “OK, you bet we got something. You develop the film.”
“We got something!” William was exuberant enough to have forgotten the five dollars. He put down the still moist prints on Robert’s desk. Four black squares. He thrust a magnifying glass into Robert’s hands and pointed at a corner. “Recognize it?”
It took Robert a few seconds to figure out what he was looking at. First he thought there was nothing there but noise, then eight barely visible dots became a familiar shape: Orion. He was seeing a night sky. In a photo taken inside a basement lab. During the day.
“Well… that is really something.”
Tuesday August 16 1960
The next attempt was far more meticulous. William had copied the settings from the previous attempt, changed them slightly in the hope of a different angle, and had Raymond re-check it all on the computer despite the cost. This time they developed the film together. As the seal of the United States of America began to develop on the film they both simultaneously turned to each other.
“Am I losing my mind?”
“That would make two of us. Look, there is text there. Some kind of plaque…”
The letters gradually filled in. “THIS PLAQUE COMMEMORATES THE FIRST SUCCESSFUL TRANSCHRONOLOGICAL OBSERVATION August 16 1960 to July 12 2054.” Below was more blurry text.
“Darn, the date is way off…”
“What do you mean? That is today’s date.”
“The other one. Theory said it should be a month in the future.”
“Idiot! We just got a message from the goddamn future! They put a plaque. In space. For us.”
Wednesday 14 December 1960
The General was beaming. “Gentlemen, you have done your country a great service. The geographic coordinates on Plaque #2 contained excellent intel. I am not at liberty to say what we found over there in Russia, but this project has already paid off far beyond expectation. You are going to get medals for this.” He paused and added in a lower voice: “I am relieved. I can now go to my grave knowing that the United States is still kicking communist butt 90 years in the future.”
One of the general’s aides later asked Robert: “There is something I do not understand, sir. How did the people in the future know where to put the plaques?”
Robert smiled. “That bothered us too for a while. Then we realized that it was the paperwork that told them. You guys have forced us to document everything. Just saying, it is a huge bother. But that also meant that every setting is written down and archived. Future people with the right clearances can just look up where we looked.”
“And then go into space and place a plaque?”
“Yes. Future America is clearly spacefaring. The most recent plaques also contain coordinate settings for the next one, in addition to the intel.”
He did not mention the mishap. When they entered the coordinates for Plaque #4 given on Plaque #3, William had made a mistake – understandable, since the photo was blurry – and they photographed the wrong spacetime spot. Except that Plaque #4 was there. It took them a while to realize that what mattered was what settings they entered into the archive, not what the plaque said.
“They knew where we would look.” Robert had said with wonder.
“Why did they put in different coordinates on #3 then? We could just set random coordinates and they will put a plaque there.”
“Have a heart. I assume that would force them to run around the entire solar system putting plaques in place. Theory says the given coordinates are roughly in Earth’s vicinity – more convenient for our hard-working future astronauts.”
“You know, we should try putting the wrong settings into the archive.”
“You do that if the next plaque is a dud.”
Friday 20 January 1961
Still, something about the pattern bothered Robert. The plaques contained useful information, including how to make a better camera and electronics. The General was delighted, obviously thinking of spy satellites not dependent on film canisters. But there was not much information about the world: if he had been sending information back to 1866, wouldn’t he have included some modern news clippings, maybe a warning about stopping that Marx guy?
Suppose things did not go well in the future. The successors of that shoe-banging Khrushchev somehow won and instituted their global dictatorship. They would pore over the remaining research of the formerly free world, having their minions squeeze every creative idea out of the archives. Including the coordinates for the project. Then they could easily fake messages from a future America to fool his present, maybe even laying the groundwork for their success…
William was surprisingly tricky to convince. Robert had assumed he would be willing to help with the scheme just because it was against the rules, but he had been at least partially taken in by the breath-taking glory of the project and the prospect of his own future career. Still, William was William and could not resist a technical challenge. Setting up an illicit calculation on the computer disguised as an abortive run with a faulty set of punch cards was just his kind of thing. He had always loved cloak-and-dagger stuff. Robert made use of the planned switch to the new cameras to make the disappearance of one roll of film easy to overlook. The security guards knew both of them worked at ungodly hours.
“Bet what? That we will see a hammer and sickle across the globe?”
“Something simpler: that there will be a plaque saying ‘I see you peeping!’.”
Robert shivered. “No thanks. I just want to be reassured.”
“It is a shame we can’t get better numerical resolution; if we are lucky we will just see Earth. Imagine if we could get enough decimal places to put the viewport above Washington DC.”
The photo was beautiful. Black space, and slightly off-centre there was a blue and white marble. Robert realized that they were the first people ever to see the entire planet from this distance. Maybe in a decade or so, a man on the moon would actually see it like that. But the planet looked fine. Were there maybe glints of something in orbit?
“Glad I did not make the bet with you. No plaque.”
“The operational security of the future leaves a bit to be desired.”
“…that is actually a good answer.”
“What?”
“Imagine you are running Future America, and have a way of telling the past about important things. Like whatever was in Russia, or whatever is in those encrypted sequences on Plaque #9. Great. But Past America can peek at you, and they don’t have all the counterintelligence gadgets and tricks you do. So if they peek at something sensitive – say the future plans for a super H-bomb – then the Past Commies might steal it from you.”
“So the plaques are only giving us what we need, or is safe if there are spies in the project.”
“Future America might even do a mole-hunt this way… But more importantly, you would not want Past America to watch you too freely since that might leak information to not just our adversaries or the adversaries of Future America, but maybe mid-future adversaries too.”
“You are reading too many spy novels.”
“Maybe. But I think we should not try peeking too much. Even if we know we are trustworthy, I have no doubt there are some sticklers in the project – now or in the future – who are paranoid.”
“More paranoid than us? Impossible. But yes.”
With regret Robert burned the photo later that night.
February 1962
As the project continued its momentum snowballed and it became ever harder to survey. Manpower was added. Other useful information was sent back – theory, technology, economics, forecasts. All benign. More and more was encrypted. Robert surmised that somebody simply put the encryption keys in the archive and let the future send things back securely to the right recipients.
His own job was increasingly to run the work on building a more efficient “Conduit”. The Contraption would retire in favour of an all-electronic version, all integrated circuits and rapid information flow. It would remove the need for future astronauts to precisely place plaques around the solar system: the future could send information as easily as using ComLogNet teletype terminals.
William was enthusiastically helping the engineers implement the new devices. He seemed almost giddy with energy as new tricks arrived weekly and wonders emerged from the workshop. A better camera? Hah, the new computers were lightyears ahead of anything anybody else had.
So why did Robert feel like he was being fooled?
Wednesday 28 February 1962
In a way this was a farewell to the Contraption around which his life had revolved for the past few years: tomorrow the Conduit would take over upstairs.
Robert quietly entered the coordinates into the controls. This time he had done most of the work himself: he could run jobs on the new mainframe and the improved algorithms Theory had worked out made a huge difference.
It was also perhaps his last chance to actually do things himself. He had found himself increasingly insulated as a manager – encapsulated by subordinates, regulations, and schedules. The last time he had held a soldering iron was months ago. He relished the muggy red warmth of the darkroom as he developed the photos.
The angles were tilted, but the photos were stranger than he had anticipated. One showed what he thought was the DC region, but the whole area was an empty swampland dotted with overgrown ruins. New York was shrouded in a thunderstorm, but he could make out glowing skyscrapers miles high shedding von Kármán vortices in the hurricane-strength winds. One photo showed a gigantic structure near the horizon that must have been a hundred kilometres tall, surmounted by an aurora. This was not a communist utopia. Nor was it the United States in any shape or form. It was not a radioactive wasteland – he was pretty sure at least one photo showed some kind of working rail line. This was an alien world.
When William put his hand on his shoulder Robert nearly screamed.
“Anything interesting?”
Wordlessly he showed the photos to William, who nodded. “Thought so.”
“What do you mean?”
“When do you think this project will end?”
Robert gave it some thought. “I assume it will run as long as it is useful.”
“And then what? It is not like we would pack up the Conduit and put it all in archival storage.”
“Of course not. It is tremendously useful.”
“Future America still has the project. They are no doubt getting intel from further down the line. From Future Future America.”
Robert saw it. A telescoping series of Conduits shipping choice information from further into the future to the present. Presents. Some of which would be sending it further back. And at the futuremost end of the series…
“I read a book where they discussed progress, and the author suggested that all of history is speeding up towards some super-future. The Contraption and Conduit allows the super-future to come here.”
“It does not look like there are any people in the super-future.”
“We have been evolving for millions of years, slowly. What if we could just cut to the chase?”
“Humanity ending up like that?” He gestured towards Thunder New York.
“I think that is all computers. Maybe electronic brains are the inhabitants of the future.”
“We must stop it! This is worse than commies. Russians are at least human. We must prevent the Conduit…”
William smiled broadly. “That won’t happen. If you blew up the Conduit, don’t you think there would be a report? A report archived for the future? And if you were Future America, you would probably send back an encrypted message addressed to the right person saying ‘Talk Robert out of doing something stupid tonight’? Even better, a world where someone gets your head screwed on straight, reports accurately about it, and the future sends back a warning to the person is totally consistent.”
Robert stepped away from William in horror. The red gloom of the darkroom made him look monstrous. “You are working for them!”
“Call it freelancing. I get useful tips, I do my part, things turn out as they should. I expect a nice life. But there is more to it than that, Robert. I believe in moral progress. I think those things in your photos probably know better than we do – yes, they are probably more alien than little green men from Mars, but they have literally eons of science, philosophy and whatever comes after that.”
“Mice.”
“Mice?”
“MICE: Money, Ideology, Coercion, Ego. The formula for recruiting intelligence assets. They got you with all but the coercion part.”
“They did not have to. History, or rather physical determinism, coerces us. Or, ‘you can’t fight fate’.”
“I’m doing this to protect free will! Against the Nazis. The commies! Your philosophers!”
“Funny way you are protecting it. You join this organisation, you allow yourself to become a cog in the machine, feel terribly guilty about your little experiments. No, Robert, you are protecting your way of life. You are protecting normality. You could just as well have been in Moscow right now working to protect socialism.”
“Enough! I am going to stop the Conduit!”
William held up a five dollar bill. “Want to bet?”
# And the pedestrians are off! Oh no, that lady is jaywalking!
In 1983 Swedish Television began an all-evening entertainment program named Razzel. It was centred around the state lottery draw, with music, sketch comedy, and television series interspersed between the blocks. Yes, this was back in the day when there were two TV channels to choose from and more or less everybody watched. The ice age had just about ended.
One returning feature consisted of camera footage of a pedestrian crossing in Stockholm. A sports commentator well known for his horse-racing coverage narrated the performance of the unknowing pedestrians as if they were competing in a race. In some cases I think he even showed up to deliver flowers to the “winner”. But you would get disqualified if you had a false start or went outside the stripes!
I suspect this feature noticeably improved traffic safety for a generation.
I was reminded of this childhood memory earlier today when discussing the use of face recognition in China to detect jaywalkers and display them on a billboard to shame them. The typical response in a western audience is fear of what looks like a totalitarian social engineering program. The glee with which many responded to the news that the system had been confused by a bus ad, putting a celebrity on the board of shame, is telling.
# Is there a difference?
But compare the Chinese system to the TV program. In the Chinese case the jaywalker may be publicly shamed from the billboard… but in the cheerful 80s TV program they were shamed in front of much of the nation.
The Chinese case is somewhat more personal, since it also displays the jaywalker’s name, but no doubt friends and neighbours would recognize you if they saw you on TV (remember, this was back when we only had two television channels and a fair fraction of people watched TV on Friday evening). There may also be SMS messages involved in some versions of the system. These act differently: now it is you who gets told off when you misbehave.
A fundamental difference may be the valence of the framing. The TV show did this as happy entertainment, more of a parody of sport television than an attempt at influencing people. The Chinese system explicitly aims at discouraging misbehaviour. The TV show encouraged positive behaviour (if only accidentally).
So the dimensions here may be the extent of the social effect (locally, or nationwide), the degree the feedback is directly personal or public, and whether it is a positive or negative feedback. There is also a dimension of enforcement: is this something that happens every time you transgress the rules, or just randomly?
In terms of actually changing behaviour making the social effect broad rather than close and personal might not have much effect: we mostly care about our standing relative to our peers, so having the entire nation laugh at you is certainly worse than your friends laughing, but still not orders of magnitude more mortifying. The personal message on the other hand sends a signal that you were observed; together with an expectation of effective enforcement this likely has a fairly clear deterrence effect (it is often not the size of the punishment that deters people from crime, but their expectation of getting caught). The negative stick of acting wrong and being punished is likely stronger than the positive carrot of a hypothetical bouquet of flowers.
# Where is the rub?
From an ethical standpoint, is there a problem here? We are subject to norm enforcement from friends and strangers all the time. What is new is the application of media and automation. They scale up the stakes and add the possibility of automated enforcement. Shaming people for jaywalking is fairly minor, but some people have lost jobs, friends, or been physically assaulted when their social transgressions have gone viral on social media. Automated enforcement makes the panopticon effect far stronger: instead of suspecting a possibility of being observed it is a near certainty. So the net effect is stronger, more pervasive norm enforcement…
…of norms that can be observed and accurately assessed. Jaywalking is transparent in a way being rude or selfish often isn’t. We may end up in a situation where we carefully obey some norms, not because they are the most important but because they can be monitored. I do not think there is anything in principle impossible about a rudeness detection neural network, but I suspect the error rates and lack of context sensitivity would make it worse than useless in preventing actual rudeness. Goodhart’s law may even make it backfire.
So, in the end, the problem is that automated systems encode a formalization of a social norm rather than the actual fluid social norm. Having a TV commenter narrate your actions is filtered through the actual norms of how to behave, while the face recognition algorithm looks for a pattern associated with transgression rather than actual transgression. The problem is that strong feedback may then lock in obedience to the hard to change formalization rather than actual good behaviour.
# Thinking long-term, vast and slow
This spring Richard Fisher at BBC Future commissioned a series of essays about long-termism: Deep Civilisation. I really like this effort (and not just because I get the last word):
“Deep history” is fascinating because it gives us a feeling of the vastness of our roots – not just the last few millennia, but a connection to our forgotten stone-age ancestors, their hominin ancestors, the biosphere evolving over hundreds of millions and billions of years, the planet, and the universe. We are standing on top of a massive sedimentary cliff of past, stretching down to an origin unimaginably deep below.
Yet the sky above, the future, is even more vast and deep. Looking down the 1,857 m into the Grand Canyon is vertiginous. Yet above us the troposphere stretches more than five times further up, followed by an even vaster stratosphere and mesosphere, in turn dwarfed by the thermosphere… and beyond the exosphere fades into the endlessness of deep space. The deep future is in many ways far more disturbing since it is moving and indefinite.
That also means there is a fair bit of freedom in shaping it. It is not very easy to shape. But if we want to be more than just some fossils buried inside the rocks we better do it.
# Quick memory usage benchmarking in IPython
Everybody loves %timeit, there’s no doubt about it. So why not have something like that, but for measuring how much memory your line takes? Well, now you can; grab a hold of the script in the following gist and run it like in the example.
[gist id=3022718]
Instead of taking care of the dirty process inspection stuff myself, I decided to delegate this to Fabian’s simple but very good memory_profiler. There is also Guppy available, but its design seems a bit of an overkill for this task.
Please contact me if you find problems with this implementation, this is a preliminary, quick hack-y version. :)
# Is it possible that Mass is the /same thing/ as curved spacetime?
by tgm1024
P: 20 It's fairly easy to visualize space bending as a result of the mass of an object, and that the bending of space is effectively gravity. But if mass always results in bending space (how else could it hold it in this universe?), is it possible that mass and the bending of space are precisely the same thing? IOW, if I were to wave a magic wand and place a bend distortion in space, did I just create mass? It would behave as a gravitational field and pull things toward its center, no? THANKS!!!!!!!!!!!!!
P: 5,632
IOW, if I were to wave a magic wand and place a bend distortion in space, did I just create mass?
not a crazy idea...but no. Waving your arm and a wand causes gravitational waves...distortions in SPACETIME but not mass.
Turns out that momentum, energy, pressure all contribute to gravity...the distortion in SPACETIME .
Note 'SPACETIME': if you bend time, you create gravity, too. In general relativity all these are part of the stress energy tensor which is the source of the gravitational field.
P: 4 I'm fairly sure not. As far as I'm aware, the bending of space is purely a gravity thing which acts on a mass. Mass itself, however, affects how all forces (EM, Gravity, Strong and Weak) are applied to a body. You can mimic a massive object with your magic wand and things would be gravitationally drawn towards it, but if that area of curved space-time didn't actually contain mass, it wouldn't react to external forces in the same way that a massive object would (note that I'm assuming your magic wand doesn't just use energy to make its distortion, which would just be mass in a different form).
P: 20
Is it possible that Mass is the /same thing/ as curved spacetime?
Quote by pdyxs I'm fairly sure not. As far as I'm aware, the bending of space is purely a gravity thing which acts on a mass. Mass itself, however, effects how all forces (EM, Gravity, Strong and Weak) are applied to a body. You can mimic a massive object with your magic wand and things would be gravitationally drawn towards it, but if that area of curved space-time didn't actually contain mass, it wouldn't react to external forces in the same way that a massive object would (note that I'm assuming your magic wand doesn't just use energy to make its distortion, which would just be mass in a different form).
So given my magic wand: wouldn't the external forces act on that bent region as if there were a mass there? <----broken way of saying it, I'll rephrase.
It sounds as if you are both in agreement that there are things that can act upon a mass independent of gravity. I suppose I have to wonder about this: Doesn't the fact that "momentum/energy/pressure" are all properties related to matter make it such that there would be momentum/energy/pressure the moment that space was bent?
I believe you both, but I'm not sure that I understand how what you're saying actually flies in the face of what I'm saying.
P: 20
Quote by Naty1 Note 'SPACETIME': if you bend time, you create gravity, too. In general relativity all these are part of the stress energy tensor which is the source of the gravitational field.
Quick note on this. In all things, it's easiest for me to view time as just another dimension---makes more sense to me. So I'm assuming that bending space along any one of the (for now, say 4) axes will beget (or be) gravity.
P: 260 The problem with what you are asking is that it isn't well defined. How do you propose to generate spacetime curvature without any non-zero components of the stress-energy tensor? A "magic wand" isn't a valid answer; you're asking a physics question based on the false premise that magic exists. We don't know the physics of magic, so we can't give you a well defined answer.
P: 20
Quote by elfmotat The problem with what you are asking is that it isn't well defined. How do you propose to generate spacetime curvature without any non-zero components of the stress-energy tensor? A "magic wand" isn't a valid answer; you're asking a physics question based on the false premise that magic exists. We don't know the physics of magic, so we can't give you a well defined answer.
Entirely incorrect. It's entirely valid to use an absurd postulation as an alternative way of explaining a prior question about something real. The notion of a "magic wand" has *nothing* to do with a magic wand per se. The question is not about magic, nor is it in particular "what does a magic wand do."
Example: Suppose someone questions whether or not an angry cat and its ears pointing back is a causal relationship or two facets of precisely the same phenomenon. They would be perfectly valid in saying:
"Is it possible for an angry cat to not have its ears point backward, or for a cat with its ears pushed forward to be angry? In other words, if I were to wave a magic wand and have an obviously angry cat's ears moved forward would it cease to be angry?"
It would not be appropriate then to question the nature of magic wands.
Sci Advisor PF Gold P: 5,027 There is a sense in which you can do this. Posit a metric tensor for a manifold (spacetime). Derive the Einstein tensor (no assumptions about matter needed). Now, by equality (and a few constants), you have stress energy tensor that can be interpreted as mass-energy, pressure, etc. If you choose an 'arbitrary' metric, you will get physically implausible stress energy tensors (and there is an active and unresolved research effort into how to characterize physically plausible stress energy tensors). However, you can still view this approach as instantiating the idea of geometry producing mass.
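In symbols, the recipe is the Einstein field equation read in reverse (leaving out the cosmological constant): pick a metric, compute its Einstein tensor $G_{\mu\nu}$, and set $T_{\mu\nu} = \frac{c^4}{8\pi G} G_{\mu\nu}$; whatever distribution of mass-energy, momentum and stress that tensor describes is the source the chosen geometry requires.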
P: 96
Quote by tgm1024 Entirely incorrect. It's entirely valid to use an absurd postulation as an alternative way of explaining a prior question about something real. The notion of a "magic wand" has *nothing* to do with a magic wand per se. The question is not about magic, nor is it in particular "what does a magic wand do." Example: Suppose someone questions whether or not an angry cat and it's ears pointing back is a causal relationship or two facets of precisely the same phenomenon. They would be perfectly valid in saying: "Is it possible for an angry cat to not have its ears point backward, or for a cat with its ears pushed forward to be angry? In other words, if I were to wave a magic wand and have an obviously angry cat's ears moved forward would it cease to be angry?" It would not be appropriate then to question the nature of magic wands.
I interpret the question along the lines of these descriptions floating about regarding spacetime and gravity:
1) The effect of gravity is to create a "depression" in spacetime that then causes objects to roll into it, like marbles to a low spot.
2) If the low spot could be created in another way (Thus far unknown) would objects still roll into that depression?
I believe the magic wand analogy was merely used to represent the unknown way of creating that low spot without an object/mass.
So, the answers thus far have seemed to indicate that we really don't know of another way to create that depression in the spacetime fabric, so its an experiment we can't perform yet.
As a thought experiment, so far, we are not really offering any clue as to what the outcome would be, but, logically, if the DEPRESSION in the spacetime fabric was being used as yet another analogy, and not as an actual physical description, then, the question loses meaning.
If the depression in the spacetime fabric is meant to be a literal physical description, or at least describe an effect that works in that fashion, then it is implied that the effect, and not the object creating it, was necessary to roll our marbles.
IE: If the mass "creating the depression" is actually the attractive force at play, and the depression description is an analogy, then creating the depression otherwise would not attract objects.
If the mass "creating the depression" was actually creating the moral equivalent of a depression, such the at the objects were drawn in by the depression itself, then creating the depression alone would actually be sufficient to draw in the objects.
Physics PF Gold P: 6,058
Quote by tgm1024 But if mass always results in bending space (how else could it hold it in this universe?), is it possible that mass and the bending of space is precisely the same thing?
It depends on what you mean by "the same thing". If you mean "the same" as in "appears the same to our experience", then certainly not: mass is a very different thing from spacetime curvature. If you mean "the same" as in "equivalent according to the laws of physics", then it's not just "possible" that they're the same, it's actual; according to the Einstein Field Equation, spacetime curvature *is* mass (more precisely, stress-energy; as other posters have pointed out, mass is not the only component of the stress-energy tensor); the one is equal to the other.
Quote by tgm1024 Example: Suppose someone questions whether or not an angry cat and it's ears pointing back is a causal relationship or two facets of precisely the same phenomenon. They would be perfectly valid in saying: "Is it possible for an angry cat to not have its ears point backward, or for a cat with its ears pushed forward to be angry? In other words, if I were to wave a magic wand and have an obviously angry cat's ears moved forward would it cease to be angry?"
No, it wouldn't be appropriate, it would be obfuscating a very simple issue. The question in quotes above is a simple question about the correlation, averaged over all cats, between being angry and having ears pointed backward. The way you answer such a question is to look at the observed correlation. You will find, of course, that the correlation is high but not perfect, indicating that these two phenomena are contingently closely related but are not "the same thing". If you want to go deeper and ask "why", you investigate the biology of anger and ear behavior in cats. Talk about "magic wands" does nothing but obscure the sorts of things you actually need to look at to answer the question.
Similarly, your question about mass and spacetime curvature is a simple question about the correlation between observing mass (stress-energy) and observing spacetime curvature. In this case, the correlation is perfect, indicating a stronger relationship than just being contingently linked. General relativity explains this through the Einstein Field Equation, which requires the correlation to be perfect.
Emeritus Sci Advisor P: 7,599 Given that the general mathematical representation of space-time curvature is a 4-dimensional rank-4 tensor - the so-called Riemann curvature tensor - and that mass is a scalar, I would say that they're not "the same thing".
PF Gold P: 5,027
Quote by pervect Given that the general mathematical representation of space-time curvature is a 4 dimensonal rank 4 tensor - the so called Riemann curvature tensor - and that mass is a scalar, I would say that they're not "the same thing".
Agreed, but a looser interpretation of the OP is:
If I change the curvature of spacetime in an appropriate way, have I necessarily changed/produced mass?
In classical GR, I would answer this: yes.
P: 20
Quote by Tea Jay I interpret the question along the lines of these descriptions floating about regarding spacetime and gravity: 1) The effect of gravity is to create a "depression" in spacetime that then causes objects to roll into it, like marbles to a low spot. 2) If the low spot could be created in another way (Thus far unknown) would objects still roll into that depression?
Not quite. For #1 above, I'm already assuming (correctly or incorrectly) that a depression in spacetime *is* gravity. For #2 above, I'm already assuming that if the 4D low spot were created another way that it would draw objects toward its center.
The question is: is bent space actually matter itself? Or perhaps: is it a misinterpretation to view matter and bent space (or matter and gravity if you like) as separate phenomena when they are actually the same thing?
Quote by =Tea Jay I believe the magic wand analogy was merely used to represent the unknown way of creating that low spot without an object/mass.
Yes, that's dead on.
P: 20
Quote by PAllen Agreed, but a looser interpretation of the OP is: If I change the curvature of spacetime in an appropriate way, have I necessarily changed/produced mass? In classical GR, I would answer this: yes.
If that's the case, and the "yes" is reassurring, then broadening this to all of physics:
Is it the case that whenever "thing" A (in our case, mass/momentum/etc) cannot exist without "effect" B, and "effect" B cannot exist without "thing" A, is it unreasonable to assume (generally) that "thing" A and "effect" B are "the same"? I don't view this as a semantic question.
I understand that you can have two views of an item. The marble is not the bend in the rubber sheet it is sitting on. But that's only in the context of All Things. If you constrain the context to the marble and the rubber sheet only (that's all there is), then yes, the bend *is* the marble. Or adding an item *outside* the context (magic wand) then bending the rubber sheet does [create or beget or form or require] the marble. I'm not sure that's entirely nuts.
PF Gold P: 5,027
Quote by tgm1024 If that's the case, and the "yes" is reassurring, then broadening this to all of physics: Is it the case that whenever "thing" A (in our case, mass/momentum/etc) cannot exist without "effect" B, and "effect" B cannot exist without "thing" A, is it unreasonable to assume (generally) that "thing" A and "effect" B are "the same"? I don't view this as a semantic question. I understand that you can have two views of an item. The marble is not the bend in the rubber sheet it is sitting on. But that's only in the context of All Things. If you constrain the context to the marble and the rubber sheet only (that's all there is), then yes, the bend *is* the marble. Or adding an item *outside* the context (magic wand) then bending the rubber sheet does [create or beget or form or require] the marble. I'm not sure that's entirely nuts.
Your first question is philosophic to me, so my answer is more opinion than science. I would say just because A implies the existence of B and B implies the existence of A within some physical theory, it is not necessarily useful to think of A and B as the same thing.
As to your second question, I have been trying to answer an underlying, more valid question implied by your 'poetic' description. However, I can't any longer. Please erase the 'rubber sheet' analogy from your mind. There is nothing analogous to a rubber sheet, with mass sitting on it and bending it. There is also nothing analogous to the space in which the rubber sheet sits (as is implied by this image). The best I can say in words is:
There is an intrinsic geometry of spacetime (similar to how a 2-d being living on a balloon can tell they have non-euclidean geometry by adding up the angles of triangles). An aspect of this geometry (the Einstein tensor) can simultaneously be described as a distribution of mass/energy density and pressure/stress.
P: 20
Quote by PAllen Your first question is philosophic to me, so my answer is more opinion than science. I would say just because A implies the existence of B and B implies the existence of A within some physical theory, it is not necessarily useful to think of A and B as the same thing. As to your second question, I have been trying to answer an underlying, more valid question implied by your 'poetic' description. However, I can't any longer. Please erase the 'rubber sheet' analogy from you mind. There is nothing analogous to a rubber sheet, with mass sitting on it and bending it. There is also nothing analogous to the space in which the rubber sheet sits (as is implied by this image). The best I can say in words is: There is an intrinsic geometry of spacetime (similar to a 2-d being living on a balloon can tell they have non-euclidean geometry by adding up angles of triangles). An aspect of this geometry (the Einstein tensor) can be simultaneous described as a distribution of mass/energy density and pressure/stress distribution.
What's poetic? You seem angry.
There's nothing mystical about, say, a 4D surface of a 5D balloon, nor the topological way of discovering its "shape". Heck, people used to scratch their heads at walking north and getting closer to each other. But that analogy I used was just another magic wand example: don't mistake me for someone thinking that the universe is a rubber sheet with stuff pushing on it. It's just an example of me replacing the causal nature of "the marble causes the bend" with an equivalence. I don't care if that's a common description for gravity or a total mistake in many videos on the subject. I just chose it because there's a "thing" and a bend.
The choice of a marble and a rubber sheet should have done nothing to abort the conversation.
In any case, there's enough in this thread for me to look this up further.
# The sausage of statistics being made
Nov 11 JDN 2458434
“Laws, like sausages, cease to inspire respect in proportion as we know how they are made.”
Statistics are a bit like laws and sausages. There are a lot of things in statistical practice that don’t align with statistical theory. The most obvious example is the fact that many results in statistics are asymptotic: they only strictly apply for infinitely large samples, and in any finite sample they will be some sort of approximation (we often don’t even know how good an approximation).
But the problem runs deeper than this: the whole idea of a p-value was originally that it would be used to assess one single hypothesis, the only one you test in your entire study.
That’s frankly a ludicrous expectation: Why would you write a whole paper just to test one parameter?
This is why I don’t actually think this so-called multiple comparisons problem is a problem with researchers doing too many hypothesis tests; I think it’s a problem with statisticians being fundamentally unreasonable about what statistics is useful for. We have to do multiple comparisons, so you should be telling us how to do it correctly.
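For what it's worth, one standard formal answer to "tell us how to do it correctly" is to control the false discovery rate rather than the per-test error. A minimal sketch of the Benjamini-Hochberg procedure (my own illustration, not something from the original post):

function keep = bh_fdr(p, alpha)
% Benjamini-Hochberg: keep the expected false discovery rate below alpha
% p is the vector of p-values from all m tests in the study
m = numel(p);
[ps, order] = sort(p(:));                        % sort p-values ascending
cutoff = find(ps <= (1:m)'*alpha/m, 1, 'last');  % largest i with p_(i) <= i*alpha/m
keep = false(m,1);
if ~isempty(cutoff)
    keep(order(1:cutoff)) = true;                % reject the null for these tests
end
end

Whether FDR control is the right notion for applied work is itself debatable, but it is at least a formal response to the multiple comparisons question rather than an informal norm.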
Statisticians have this beautiful pure mathematics that generates all these lovely asymptotic results… and then they stop, as if they were done. But we aren’t dealing with infinite or even “sufficiently large” samples; we need to know what happens when the sample is 100, not when it is 10^29. We can’t assume that our variables are independent and identically distributed; we don’t know their distribution, and we’re pretty sure they’re going to be somewhat dependent.
Even in an experimental context where we can randomly and independently assign some treatments, we can’t do that with lots of variables that are likely to matter, like age, gender, nationality, or field of study. And applied econometricians are in an even tighter bind; they often can’t randomize anything. They have to rely upon “instrumental variables” that they hope are “close enough to randomized” relative to whatever they want to study.
In practice what we tend to do is… fudge it. We use the formal statistical methods, and then we step back and apply a series of informal norms to see if the result actually makes sense to us. This is why almost no psychologists were actually convinced by Daryl Bem’s precognition experiments, despite his standard experimental methodology and perfect p < 0.05 results; he couldn’t pass any of the informal tests, particularly the most basic one of not violating any known fundamental laws of physics. We knew he had somehow cherry-picked the data, even before looking at it; nothing else was possible.
This is actually part of where the “hierarchy of sciences” notion is useful: One of the norms is that you’re not allowed to break the rules of the sciences above you, but you can break the rules of the sciences below you. So psychology has to obey physics, but physics doesn’t have to obey psychology. I think this is also part of why there’s so much enmity between economists and anthropologists; really we should be on the same level, cognizant of each other’s rules, but economists want to be above anthropologists so we can ignore culture, and anthropologists want to be above economists so they can ignore incentives.
Another informal norm is the “robustness check”, in which the researcher runs a dozen different regressions approaching the same basic question from different angles. “What if we control for this? What if we interact those two variables? What if we use a different instrument?” In terms of statistical theory, this doesn’t actually make a lot of sense; the probability distributions f(y|x) of y conditional on x and f(y|x, z) of y conditional on x and z are not the same thing, and wouldn’t in general be closely tied, depending on the distribution f(x|z) of x conditional on z. But in practice, most real-world phenomena are going to continue to show up even as you run a bunch of different regressions, and so we can be more confident that something is a real phenomenon insofar as that happens. If an effect drops out when you switch out a couple of control variables, it may have been a statistical artifact. But if it keeps appearing no matter what you do to try to make it go away, then it’s probably a real thing.
Because of the powerful career incentives toward publication and the strange obsession among journals with a p-value less than 0.05, another norm has emerged: Don’t actually trust p-values that are close to 0.05. The vast majority of the time, a p-value of 0.047 was the result of publication bias. Now if you see a p-value of 0.001, maybe then you can trust it—but you’re still relying on a lot of assumptions even then. I’ve seen some researchers argue that because of this, we should tighten our standards for publication to something like p < 0.01, but that’s missing the point; what we need to do is stop publishing based on p-values. If you tighten the threshold, you’re just going to get more rejected papers and then the few papers that do get published will now have even smaller p-values that are still utterly meaningless.
These informal norms protect us from the worst outcomes of bad research. But they are almost certainly not optimal. It’s all very vague and informal, and different researchers will often disagree vehemently over whether a given interpretation is valid. What we need are formal methods for solving these problems, so that we can have the objectivity and replicability that formal methods provide. Right now, our existing formal tools simply are not up to that task.
There are some things we may never be able to formalize: If we had a formal algorithm for coming up with good ideas, the AIs would already rule the world, and this would be either Terminator or The Culture depending on whether we designed the AIs correctly. But I think we should at least be able to formalize the basic question of “Is this statement likely to be true?” that is the fundamental motivation behind statistical hypothesis testing.
I think the answer is likely to be in a broad sense Bayesian, but Bayesians still have a lot of work left to do in order to give us really flexible, reliable statistical methods we can actually apply to the messy world of real data. In particular, tell us how to choose priors please! Prior selection is a fundamental make-or-break problem in Bayesian inference that has nonetheless been greatly neglected by most Bayesian statisticians. So, what do we do? We fall back on informal norms: Try maximum likelihood, which is like using a very flat prior. Try a normally-distributed prior. See if you can construct a prior from past data. If all those give the same thing, that’s a “robustness check” (see previous informal norm).
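To see what a prior robustness check looks like in practice, here is a minimal sketch (my own toy example with made-up data, not anything from a real study): estimate a simple proportion under several different Beta priors and compare the posterior means. If they agree, the conclusion is not being driven by the choice of prior.

```python
# A toy prior-robustness check: estimate a proportion under several Beta priors.
# The data are hypothetical: 37 "successes" out of 120 trials.
successes, trials = 37, 120

priors = {
    "flat Beta(1, 1), ~max likelihood": (1.0, 1.0),
    "Jeffreys Beta(0.5, 0.5)": (0.5, 0.5),
    "informative Beta(30, 70) from past data": (30.0, 70.0),
}

for name, (a, b) in priors.items():
    post_a = a + successes                  # Beta-Binomial conjugate update
    post_b = b + trials - successes
    post_mean = post_a / (post_a + post_b)
    print(f"{name:42s} posterior mean = {post_mean:.3f}")

# If the posterior means barely move across priors, the data dominate the prior;
# if they diverge, the conclusion is being driven by the prior, not the evidence.
```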
Informal norms are also inherently harder to teach and learn. I’ve seen a lot of other grad students flail wildly at statistics, not because they don’t know what a p-value means (though maybe that’s also sometimes true), but because they don’t really quite grok the informal underpinnings of good statistical inference. This can be very hard to explain to someone: They feel like they followed all the rules correctly, but you are saying their results are wrong, and now you can’t explain why.
In fact, some of the informal norms that are in wide use are clearly detrimental. In economics, norms have emerged that certain types of models are better simply because they are “more standard”, such as the dynamic stochastic general equilibrium models that can basically be fit to everything and have never actually usefully predicted anything. In fact, the best ones just predict what we already knew from Keynesian models. But without a formal norm for testing the validity of models, it’s been “DSGE or GTFO”. At present, it is considered “nonstandard” (read: “bad”) not to assume that your agents are either a single unitary “representative agent” or a continuum of infinitely-many agents—modeling the actual fact of finitely-many agents is just not done. Yet it’s hard for me to imagine any formal criterion that wouldn’t at least give you some points for correctly including the fact that there is more than one but less than infinity people in the world (obviously your model could still be bad in other ways).
I don’t know what these new statistical methods would look like. Maybe it’s as simple as formally justifying some of the norms we already use; maybe it’s as complicated as taking a fundamentally new approach to statistical inference. But we have to start somewhere.
# Demystifying dummy variables
Nov 5, JDN 2458062
Continuing my series of blog posts on basic statistical concepts, today I’m going to talk about dummy variables. Dummy variables are quite simple, but for some reason a lot of people—even people with extensive statistical training—often have trouble understanding them. Perhaps people are simply overthinking matters, or making subtle errors that end up having large consequences.
A dummy variable (more formally a binary variable) is a variable that has only two states: “No”, usually represented 0, and “Yes”, usually represented 1. A dummy variable answers a single “Yes or no” question. They are most commonly used for categorical variables, answering questions like “Is the person’s race White?” and “Is the state California?”; but in fact almost any kind of data can be represented this way: We could represent income using a series of dummy variables like “Is your income greater than $50,000?” “Is your income greater than $51,000?” and so on. As long as the number of possible outcomes is finite—which, in practice, it always is—the data can be represented by some (possibly large) set of dummy variables. In fact, if your data set is large enough, representing numerical data with dummy variables can be a very good thing to do, as it allows you to account for nonlinear effects without assuming some specific functional form.
Most of the misunderstanding regarding dummy variables involves applying them in regressions and interpreting the results.
Probably the most common confusion is about what dummy variables to include. When you have a set of categories represented in your data (e.g. one for each US state), you want to include dummy variables for all but one of them. The most common mistake here is to try to include all of them, and end up with a regression that doesn’t make sense, or if you have a catchall category like “Other” (e.g. race is coded as “White/Black/Other”), leaving out that one and getting results with a nonsensical baseline.
You don’t have to leave one out if you only have one set of categories and you don’t include a constant in your regression; then the baseline will emerge automatically from the regression. But this is dangerous, as the interpretation of the coefficients is no longer quite so simple.
The thing to keep in mind is that a coefficient on a dummy variable is an effect of a change—so the coefficient on “White” is the effect of being White. In order to be an effect of a change, that change must be measured against some baseline. The dummy variable you exclude from the regression is the baseline—because the effect of changing to the baseline from the baseline is by definition zero.
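In practice most statistical packages will do the drop-one-category encoding for you if you ask. Here is a small pandas sketch (my own illustration, with hypothetical data) showing the usual approach and how to pick the baseline yourself:

```python
import pandas as pd

# Hypothetical data: race coded as a categorical variable.
df = pd.DataFrame({"race": ["White", "Black", "Other", "White", "Black"]})

# drop_first=True leaves out one category (the first alphabetically, "Black"),
# which becomes the baseline absorbed by the regression constant.
print(pd.get_dummies(df["race"], prefix="race", drop_first=True))

# To choose the baseline yourself (say, the most common category, "White"),
# create all the dummies and drop that column explicitly.
all_dummies = pd.get_dummies(df["race"], prefix="race")
print(all_dummies.drop(columns=["race_White"]))
```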
Here’s a very simple example where all the regressions can be done by hand. Suppose you have a household with 1 human and 1 cat, and you want to know the effect of species on number of legs. (I mean, hopefully this is something you already know; but that makes it a good illustration.) In what follows, you can safely skip the matrix algebra; but I included it for any readers who want to see how these concepts play out mechanically in the math.
Your outcome variable Y is legs: The human has 2 and the cat has 4. We can write this as a matrix:
$Y = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$
What dummy variables should we choose? There are actually several options.
The simplest option is to include both a human variable and a cat variable, and no constant. Let’s put the human variable first. Then our human subject has a value of X1 = [1 0] (“Yes” to human and “No” to cat) and our cat subject has a value of X2 = [0 1].
This is very nice in this case, as it makes our matrix of independent variables simply an identity matrix:
$X = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
This makes the calculations extremely nice, because transposing, multiplying, and inverting an identity matrix all just give us back an identity matrix. The standard OLS regression coefficient is B = (X'X)^{-1} X'Y, which in this case just becomes Y itself.
$B = (X'X)^{-1} X'Y = Y = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$
Our coefficients are 2 and 4. How would we interpret this? Pretty much what you’d think: The effect of being human is having 2 legs, while the effect of being a cat is having 4 legs. This amounts to choosing a baseline of nothing—the effect is compared to a hypothetical entity with no legs at all. And indeed this is what will happen more generally if you do a regression with a dummy for each category and no constant: The baseline will be a hypothetical entity with an outcome of zero on whatever your outcome variable is.
So far, so good.
But what if we had additional variables to include? Say we have both cats and humans with black hair and brown hair (and no other colors). If we now include the variables human, cat, black hair, brown hair, we won’t get the results we expect—in fact, we’ll get no result at all. The regression is mathematically impossible, regardless of how large a sample we have.
This is why it’s much safer to choose one of the categories as a baseline, and include that as a constant. We could pick either one; we just need to be clear about which one we chose.
Say we take human as the baseline. Then our variables are constant and cat. The variable constant is just 1 for every single individual. The variable cat is 0 for humans and 1 for cats.
Now our independent variable matrix looks like this:
$X = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$
The matrix algebra isn’t quite so nice this time:
$X'X = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$
$(X'X)^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$
$X'Y = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \\ 4 \end{bmatrix}$
$B = (X'X)^{-1} X'Y = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 6 \\ 4 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$
Our coefficients are now 2 and 2. Now, how do we interpret that result? We took human as the baseline, so what we are saying here is that the default is to have 2 legs, and then the effect of being a cat is to get 2 extra legs.
That sounds a bit anthropocentric—most animals are quadrupeds, after all—so let’s try taking cat as the baseline instead. Now our variables are constant and human, and our independent variable matrix looks like this:
$X = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$
$X'X = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$
$(X'X)^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$
$X'Y = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \\ 2 \end{bmatrix}$
$B = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 6 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ -2 \end{bmatrix}$
Our coefficients are 4 and -2. This seems much more phylogenetically correct: The default number of legs is 4, and the effect of being human is to lose 2 legs.
All these regressions are really saying the same thing: Humans have 2 legs, cats have 4. And in this particular case, it’s simple and obvious. But once things start getting more complicated, people tend to make mistakes even on these very simple questions.
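If you want to check the matrix algebra numerically, here is a short numpy sketch using the same toy data; it reproduces all three parameterizations above.

```python
import numpy as np

Y = np.array([2.0, 4.0])                       # legs: human, cat

# 1) A dummy for each species and no constant: the baseline is "nothing" (zero legs).
X1 = np.array([[1.0, 0.0],                     # human
               [0.0, 1.0]])                    # cat
print(np.linalg.solve(X1.T @ X1, X1.T @ Y))    # [2. 4.]

# 2) Constant plus a cat dummy: the baseline is human.
X2 = np.array([[1.0, 0.0],
               [1.0, 1.0]])
print(np.linalg.solve(X2.T @ X2, X2.T @ Y))    # [2. 2.]

# 3) Constant plus a human dummy: the baseline is cat.
X3 = np.array([[1.0, 1.0],
               [1.0, 0.0]])
print(np.linalg.solve(X3.T @ X3, X3.T @ Y))    # [ 4. -2.]
```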
A common mistake would be to try to include a constant and both dummy variables: constant human cat. What happens if we try that? The matrix algebra gets particularly nasty, first of all:
$X = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$
$X'X = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$
Our covariance matrix X’X is now 3×3, first of all. That means we have more coefficients than we have data points. But we could throw in another human and another cat to fix that problem.
More importantly, the covariance matrix is not invertible. Rows 2 and 3 add up together to equal row 1, so we have a singular matrix.
If you tried to run this regression, you’d get an error message about “perfect multicollinearity”. What this really means is you haven’t chosen a valid baseline. Your baseline isn’t human and it isn’t cat; and since you included a constant, it isn’t a baseline of nothing either. It’s… unspecified.
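Here is roughly what that failure looks like numerically (a sketch of my own, with two humans and two cats): the X'X matrix is rank-deficient, so trying to invert it fails.

```python
import numpy as np

# Constant + human dummy + cat dummy, with two humans and two cats.
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0]])
XtX = X.T @ X

# The human and cat columns add up to the constant column, so the rank is 2, not 3.
print(np.linalg.matrix_rank(XtX))

try:
    np.linalg.inv(XtX)
except np.linalg.LinAlgError as err:
    print("Cannot invert X'X:", err)   # this is the "perfect multicollinearity" failure
```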
You actually can choose whatever baseline you want for this regression, by setting the constant term to whatever number you want. Set a constant of 0 and your baseline is nothing: you’ll get back the coefficients 0, 2 and 4. Set a constant of 2 and your baseline is human: you’ll get 2, 0 and 2. Set a constant of 4 and your baseline is cat: you’ll get 4, -2, 0. You can even choose something weird like 3 (you’ll get 3, -1, 1) or 7 (you’ll get 7, -5, -3) or -4 (you’ll get -4, 6, 8). You don’t even have to choose integers; you could pick -0.9 or 3.14159. As long as the constant plus the coefficient on human add to 2 and the constant plus the coefficient on cat add to 4, you’ll get a valid regression.
Again, this example seems pretty simple. But it’s an easy trap to fall into if you don’t think carefully about what variables you are including. If you are looking at effects on income and you have dummy variables on race, gender, schooling (e.g. no high school, high school diploma, some college, Bachelor’s, master’s, PhD), and what state a person lives in, it would be very tempting to just throw all those variables into a regression and see what comes out. But nothing is going to come out, because you haven’t specified a baseline. Your baseline isn’t even some hypothetical person with $0 income (which already doesn’t sound like a great choice); it’s just not a coherent baseline at all.
Generally the best thing to do (for the most precise estimates) is to choose the most common category in each set as the baseline. So for the US a good choice would be to set the baseline as White, female, high school diploma, California. Another common strategy when looking at discrimination specifically is to make the most privileged category the baseline, so we’d instead have White, male, PhD, and… Maryland, it turns out. Then we expect all our coefficients to be negative: Your income is generally lower if you are not White, not male, have less than a PhD, or live outside Maryland.
This is also important if you are interested in interactions: For example, the effect on your income of being Black in California is probably not the same as the effect of being Black in Mississippi. Then you’ll want to include interaction terms like Black × Mississippi, which for dummy variables is the same thing as taking the Black variable and multiplying by the Mississippi variable. But now you need to be especially clear about what your baseline is: If being White in California is your baseline, then the coefficient on Black is the effect of being Black in California, while the coefficient on Mississippi is the effect of being in Mississippi if you are White.
The coefficient on Black × Mississippi is the effect of being Black in Mississippi, over and above the sum of the effect of being Black and the effect of being in Mississippi. If we saw a positive coefficient there, it wouldn’t mean that it’s good to be Black in Mississippi; it would simply mean that it’s not as bad as we might expect if we just summed the downsides of being Black with the downsides of being in Mississippi. And if we saw a negative coefficient there, it would mean that being Black in Mississippi is even worse than you would expect just from summing up the effects of being Black with the effects of being in Mississippi.
As long as you choose your baseline carefully and stick to it, interpreting regressions with dummy variables isn’t very hard. But so many people forget this step that they get very confused by the end, looking at a term like Black × female × Mississippi and seeing a positive coefficient, and thinking that must mean that life is good for Black women in Mississippi, when really all it means is the small mercy that being a Black woman in Mississippi isn’t quite as bad as you might think if you just added up the effect of being Black, plus the effect of being a woman, plus the effect of being Black and a woman, plus the effect of living in Mississippi, plus the effect of being Black in Mississippi, plus the effect of being a woman in Mississippi.
# A tale of two storms
Sep 24, JDN 2458021
There were two severe storm events this past week; one you probably heard a great deal about, the other, probably not.
The first was Hurricane Irma, which hit the United States and did most of its damage in Florida; the second was Typhoon Doksuri, which hit Southeast Asia and did most of its damage in Vietnam.
You might expect that this post is going to give you more bad news. Well, I have a surprise for you: The news is actually mostly good. The death tolls from both storms were astonishingly small. The hurricane is estimated to have killed at least 84 people, while the typhoon has killed at least 26.
This result is nothing less than heroism. The valiant efforts of thousands of meteorologists and emergency responders around the world have saved thousands of lives, and did so both in the wealthy United States and in impoverished Vietnam. When I started this post, I had expected to see that the emergency response in Vietnam would be much worse, and fatalities would be far higher; I am delighted to report that nothing of the sort was the case, and Vietnam, despite their per-capita GDP PPP of under $6,000, has made emergency response a sufficiently high priority that they saved their people just about as well as Florida did.
To get a sense of what might have happened without them, consider that 1.5 million homes in Florida were leveled by the hurricane, and over 100,000 homes were damaged by the typhoon. Vietnam is a country of 94 million people. Florida has a population of 20 million. (The reason Florida determines so many elections is that it is by far the most populous swing state.) Without weather forecasting and emergency response, these death figures would have been in the tens of thousands, not the dozens.
Indeed, if you know statistics and demographics well, these figures become even more astonishing: These death rates were almost indistinguishable from statistical noise.
Vietnam’s baseline death rate is about 5.9 per 1,000, meaning that they experience about 560,000 deaths in any given year. This means that over 1500 people die in Vietnam on a normal day.
Florida’s baseline death rate is about 6.6 per 1,000, actually a bit higher than Vietnam’s, because Florida’s population skews so much toward the elderly. Therefore Florida experiences about 130,000 deaths per year, or 360 deaths on a normal day.
In both Vietnam and Florida, this makes the daily death probability for any given person about 0.0017%. A random process with a fixed probability of 0.0017% over a population of n people will result in an average of 0.0017n events, but with some variation around that number. The standard deviation is actually sqrt(p(1-p)n) = 0.004 sqrt(n). When n = 20,000,000 (Florida), this results in a standard deviation of 18. When n = 94,000,000 (Vietnam), this results in a standard deviation of 40.
This means that the 26 additional deaths in Vietnam were within one standard deviation of average! They basically are indistinguishable from statistical noise. There have been over a hundred days in Vietnam where an extra 26 people happened to die, just in the past year. Weather forecasting took what could have been a historic disaster and turned it into just another bad day.
The 84 additional deaths in Florida are over four standard deviations away from average, so they are definitely distinguishable from statistical noise—but this still means that Florida’s total death rate for the year will only tick up by 0.6%.
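For anyone who wants to reproduce the arithmetic, here is a short sketch using the approximate figures quoted above; the inputs are rounded, so treat the outputs as ballpark numbers.

```python
import math

def excess_death_sd(population, baseline_daily_deaths, excess_deaths):
    """Binomial SD of daily deaths, and how many SDs an excess amounts to."""
    p = baseline_daily_deaths / population         # daily death probability per person
    sd = math.sqrt(p * (1 - p) * population)       # standard deviation of daily deaths
    return sd, excess_deaths / sd

for name, pop, daily, excess in [("Vietnam", 94_000_000, 1500, 26),
                                 ("Florida", 20_000_000, 360, 84)]:
    sd, z = excess_death_sd(pop, daily, excess)
    print(f"{name}: daily SD ~ {sd:.0f} deaths, excess of {excess} ~ {z:.1f} SD")
```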
It is common in such tragedies to point out in grave tones that “one death is too many”, but I maintain that this is not actually moral wisdom but empty platitude. No conceivable policy is ever going to reduce death rates to zero, and the people who died of heart attacks or brain aneurysms are every bit as dead as the people who died from hurricanes or terrorist attacks. Instead of focusing on the handful of people who died because they didn’t heed warnings or simply got extraordinarily unlucky, I think we should be focusing on the thousands of people who survived because our weather forecasters and emergency responders did their jobs so exceptionally well. Of course if we can reduce the numbers even further, we should; but from where I’m sitting, our emergency response system has a lot to be proud of.
Of course, the economic damage of the storms was substantially greater. The losses in destroyed housing and infrastructure in Florida are projected at over $80 billion. Vietnam is much poorer, so there simply isn’t as much infrastructure to destroy; total damage is unlikely to exceed $10 billion. Florida’s GDP is $926 billion, so they are losing 8.6%; while Vietnam’s GDP is $220 billion, so they are still losing 4.5%. And of course the damage isn’t evenly spread across everyone; those hardest hit will lose perhaps their entire net wealth, while others will feel absolutely nothing.
But economic damage is fleeting. Indeed, if we spend the government money we should be, and take the opportunity to rebuild this infrastructure better than it was before, the long-run economic impact could be positive. Even 8.6% of GDP is less than five years of normal economic growth—and there were years in the 1950s where we did it in a single year. The 4.5% that Vietnam lost, they should make back within a year of their current economic growth.
Thank goodness.
# Why do so many Americans think that crime is increasing?
Jan 29, JDN 2457783
Since the 1990s, crime in the United States has been decreasing, and yet in every poll since then most Americans report that they believe that crime is increasing.
It’s not a small decrease either. The US murder rate is down to the lowest it has been in a century. The absolute number of violent crimes per year in the US is now smaller (by 34 log points) than it was 20 years ago, despite a significant increase in total population (19 log points—and the magic of log points is that, yes, the rate has decreased by precisely 53 log points).
It isn’t geographically uniform, of course; some states have improved much more than others, and a few states (such as New Mexico) have actually gotten worse.
The 1990s were a peak of violent crime, so one might say that we are just regressing to the mean. (Even that would be enough to make it baffling that people think crime is increasing.) But in fact overall crime in the US is now the lowest it has been since the 1970s, and still decreasing.
Indeed, this decrease has been underestimated, because we are now much better about reporting and investigating crimes than we used to be (which may also be part of why they are decreasing, come to think of it). If you compare against surveys of people who say they have been personally victimized, we’re looking at a decline in violent crime rates of two thirds—109 log points.
Just since 2008 violent crime has decreased by 26% (30 log points)—but of course we all know that Obama is “soft on crime” because he thinks cops shouldn’t be allowed to just shoot Black kids for no reason.
The proportion of people who think crime is increasing does seem to decrease as crime rates decrease—but it still remains alarmingly high. If people were half as rational as most economists seem to believe, the proportion of people who think crime is increasing should drop to basically zero whenever crime rates decrease, since that’s a really basic fact about the world that you can just go look up on the Web in a couple of minutes. There’s no deep ambiguity, not even much “rational ignorance” given the low cost of getting correct answers. People just don’t bother to check, or don’t feel they need to.
What’s going on? How can crime fall to half what it was 20 years ago and yet almost two-thirds of people think it’s actually increasing?
Well, one hint is that news coverage of crime doesn’t follow the same pattern as actual crime.
News coverage in general is a terrible source of information, not simply because news organizations can be biased, make glaring mistakes, and sometimes outright lie—but actually for a much more fundamental reason: Even a perfect news channel, qua news channel, would report what is surprising—and what is surprising is, by definition, improbable. (Indeed, there is a formal mathematical concept in probability theory called surprisal that is simply the logarithm of 1 over the probability.) Even assuming that news coverage reports only the truth, the probability of seeing something on the news isn’t proportional to the probability of the event occurring—it’s more likely proportional to the entropy, which is probability times surprisal.
Now, if humans were optimal information processing engines, that would be just fine, actually; reporting events proportional to their entropy is actually a very efficient mechanism for delivering information (optimal, under certain types of constraints), provided that you can then process the information back into probabilities afterward.
But of course, humans aren’t optimal information processing engines. We don’t recompute the probabilities from the given entropy; instead we use the availability heuristic, by which we simply use the number of times we can think of something happening as our estimate of the probability of that event occurring. If you see more murders on TV news than you used to, you assume that murders must be more common than they used to be. (And when I put it like that, it really doesn’t sound so unreasonable, does it? Intuitively the availability heuristic seems to make sense—which is part of why it’s so insidious.)
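To put some toy numbers on the surprisal idea (these probabilities are illustrative guesses on my part, not real mortality statistics):

```python
import math

# Illustrative guesses at daily probabilities for a random person; not real statistics.
events = {
    "dies of heart disease":   1e-5,
    "dies in a car crash":     1e-6,
    "is murdered":             1e-7,
    "dies in a terror attack": 1e-9,
}

for name, p in events.items():
    surprisal = -math.log(p)            # how "newsworthy" a single occurrence is
    entropy = p * surprisal             # probability times surprisal
    print(f"{name:25s} p={p:.0e}  surprisal={surprisal:5.1f}  p*surprisal={entropy:.1e}")

# Coverage proportional to entropy means coverage per actual occurrence is
# proportional to surprisal, so the rarest events get proportionally more
# airtime per occurrence than the common ones do.
```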
Another likely reason for the discrepancy between perception and reality is nostalgia. People almost always have a more positive view of the past than it deserves, particularly when referring to their own childhoods. Indeed, I’m quite certain that a major reason why people think the world was much better when they were kids was that their parents didn’t tell them what was going on. And of course I’m fine with that; you don’t need to burden 4-year-olds with stories of war and poverty and terrorism. I just wish people would realize that they were being protected from the harsh reality of the world, instead of thinking that their little bubble of childhood innocence was a genuinely much safer world than the one we live in today.
Then take that nostalgia and combine it with the availability heuristic and the wall-to-wall TV news coverage of anything bad that happens—and almost nothing good that happens, certainly not if it’s actually important. I’ve seen bizarre fluff pieces about puppies, but never anything about how world hunger is plummeting or air quality is dramatically improved or cars are much safer. That’s the one thing I will say about financial news; at least they report it when unemployment is down and the stock market is up. (Though most Americans, especially most Republicans, still seem really confused on those points as well….) They will attribute it to anything from sunspots to the will of Neptune, but at least they do report good news when it happens. It’s no wonder that people are always convinced that the world is getting more dangerous even as it gets safer and safer.
The real question is what we do about it—how do we get people to understand even these basic facts about the world? I still believe in democracy, but when I see just how painfully ignorant so many people are of such basic facts, I understand why some people don’t. The point of democracy is to represent everyone’s interests—but we also end up representing everyone’s beliefs, and sometimes people’s beliefs just don’t line up with reality. The only way forward I can see is to find a way to make people’s beliefs better align with reality… but even that isn’t so much a strategy as an objective. What do I say to someone who thinks that crime is increasing, beyond showing them the FBI data that clearly indicates otherwise? When someone is willing to override all evidence with what they feel in their heart to be true, what are the rest of us supposed to do?
# Experimentally testing categorical prospect theory
Dec 4, JDN 2457727
In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.
The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)
If that’s the case, then switching from 0% to 0.0001% should have no more effect in reality than a switch from 0% to 0.00001% would have on a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.
Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.
Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or 0.000000001% (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about 100,000.
I apologize for the extremely large number of decimals, but I had to do that in order to show any difference at all. I have bolded where the decimals first deviate from the baseline.
Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.
Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.
You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.
And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.
You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.
Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.
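The lottery arithmetic above is easy to verify. Here is a sketch using the same stylized assumptions (log utility, a $100,000 baseline, $2 million lifetime income, a one-in-a-billion chance of a $100 million jackpot):

```python
import math

BASELINE = 100_000            # utility = ln(lifetime income / BASELINE), in hQALY
LIFETIME_INCOME = 2_000_000
JACKPOT = 100_000_000
P_WIN = 1e-9                  # chance of winning with a single ticket

def utility(income):
    return math.log(income / BASELINE)

u_none = utility(LIFETIME_INCOME)
u_one = (1 - P_WIN) * utility(LIFETIME_INCOME) + P_WIN * utility(LIFETIME_INCOME + JACKPOT)
print(u_none, u_one, u_one - u_none)      # ~2.9957322736, ~2.9957322775, ~4e-9 hQALY

# A $2 ticket every week for 80 years: ~4,160 tickets, ~$8,000 spent.
tickets = 80 * 52
p_win_ever = tickets * P_WIN              # ~4e-6, ignoring the tiny chance of multiple wins
u_habit = ((1 - p_win_ever) * utility(LIFETIME_INCOME - 8_000)
           + p_win_ever * utility(LIFETIME_INCOME + JACKPOT))
print(u_none - u_habit)                   # ~0.004 hQALY, i.e. roughly 0.4 QALY sacrificed
```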
Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.
But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a P probability of $Y*X, and we vary these numbers. The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.
Then we take these dilemmas and start varying their probabilities slightly. In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts. But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.
Based solely on my own intuition, I guessed that the categories roughly follow this pattern:
Impossible: 0%
Almost impossible: 0.1%
Very unlikely: 1%
Unlikely: 10%
Fairly unlikely: 20%
Roughly even odds: 50%
Fairly likely: 80%
Likely: 90%
Very likely: 99%
Almost certain: 99.9%
Certain: 100%
So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.
Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining if their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.
One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning.
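To make the categorical hypothesis concrete, here is a minimal sketch of what "throw away everything but the category" might look like, using the guessed cutpoints above; the cutpoints and the snapping rule are my own assumptions, not an established model.

```python
# Snap a subjective probability to a coarse category, discarding everything else.
# The cutpoints are the guessed categories above; they are a prior, not a result.
CATEGORIES = [
    (0.0,   "impossible"),
    (0.001, "almost impossible"),
    (0.01,  "very unlikely"),
    (0.10,  "unlikely"),
    (0.20,  "fairly unlikely"),
    (0.50,  "roughly even odds"),
    (0.80,  "fairly likely"),
    (0.90,  "likely"),
    (0.99,  "very likely"),
    (0.999, "almost certain"),
    (1.0,   "certain"),
]

def categorize(p):
    # "Impossible" and "certain" are sharp: only exact 0 and exact 1 qualify.
    if p <= 0.0:
        return "impossible"
    if p >= 1.0:
        return "certain"
    # Everything else snaps to the nearest interior anchor.
    _, label = min(CATEGORIES[1:-1], key=lambda c: abs(c[0] - p))
    return label

for p in [0.0, 1e-9, 1e-4, 0.01, 0.02, 0.5]:
    print(p, "->", categorize(p))
# The discontinuity: 0 is "impossible" but even 1e-9 jumps to "almost impossible",
# while moving from 0.01 to 0.02 (a change ten million times larger) stays put.
```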
Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.
Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.
# How I wish we measured percentage change
JDN 2457415
For today’s post I’m taking a break from issues of global policy to discuss a bit of a mathematical pet peeve. It is an opinion I share with many economists—for instance Miles Kimball has a very nice post about it, complete with some clever analogies to music.
I hate when we talk about percentages in asymmetric terms. What do I mean by this? Well, here are a few examples.
If my stock portfolio loses 10% one year and then gains 11% the following year, have I gained or lost money? I’ve lost money. Only a little bit—I’m down 0.1%—but still, a loss.
In 2003, Venezuela suffered a depression of -26.7% growth one year, and then an economic boom of 36.1% growth the following year. What was their new GDP, relative to what it was before the depression? Very slightly less than before. (99.8% of its pre-recession value, to be precise.) You would think that falling 27% and rising 36% would leave you about 9% ahead; in fact it leaves you behind.
Would you rather live in a country with 11% inflation and have constant nominal pay, or live in a country with no inflation and take a 10% pay cut? You should prefer the inflation; in that case your real income only falls by 9.9%, instead of 10%.
We often say that the real interest rate is simply the nominal interest rate minus the rate of inflation, but that’s actually only an approximation. If you have 7% inflation and a nominal interest rate of 11%, your real interest rate is not actually 4%; it is 3.74%. If you have 2% inflation and a nominal interest rate of 0%, your real interest rate is not actually -2%; it is -1.96%.
This is what I mean by asymmetric: Rising 10% and falling 10% do not cancel each other out. To cancel out a fall of 10%, you must actually rise 11.1%. Gaining 20% and losing 20% do not cancel each other out. To cancel out a loss of 20%, you need a gain of 25%.
Is it starting to bother you yet? It sure bothers me. Worst of all is the fact that the way we usually measure percentages, losses are bounded at 100% while gains are unbounded. To cancel a loss of 100%, you’d need a gain of infinity.
There are two basic ways of solving this problem: The simple way, and the good way.
The simple way is to just start measuring percentages symmetrically, by including both the starting and ending values in the calculation and averaging them. That is, instead of using this formula:
% change = 100% * (new – old)/(old)
You use this one:
% change = 100% * (new – old)/((new + old)/2)
In this new system, percentage changes are symmetric.
Suppose a country’s GDP rises from $5 trillion to $6 trillion. In the old system we’d say it has risen 20%:
100% * ($6 T – $5 T)/($5 T) = 20%
In the symmetric system, we’d say it has risen 18.2%:
100% * ($6 T – $5 T)/($5.5 T) = 18.2%
Suppose it falls back to $5 trillion the next year.
In the old system we’d say it has only fallen 16.7%:
100% * ($5 T – $6 T)/($6 T) = -16.7%
But in the symmetric system, we’d say it has fallen 18.2%.
100% * ($5 T – $6 T)/($5.5 T) = -18.2%
In the old system, the gain of 20% was somehow canceled by a loss of 16.7%. In the symmetric system, the gain of 18.2% was canceled by a loss of 18.2%, just as you’d expect.
This also removes the problem of losses being bounded but gains being unbounded. Now both losses and gains are bounded, at the rather surprising value of 200%.
Formally, that’s because of these limits:
$\lim_{x \rightarrow \infty} \frac{x-1}{(x+1)/2} = 2$
$\lim_{x \rightarrow \infty} \frac{0-x}{(x+0)/2} = -2$
It might be easier to intuit these limits with an example. Suppose something explodes from a value of 1 to a value of 10,000,000. In the old system, this means it rose 1,000,000,000%. In the symmetric system, it rose 199.9999%. Like the speed of light, you can approach 200%, but never quite get there.
100% * (10^7 – 1)/(5*10^6 + 0.5) = 199.9999%
Gaining 200% in the symmetric system is gaining an infinite amount. That’s… weird, to say the least. Also, losing everything is now losing… 200%?
This is simple to explain and compute, but it’s ultimately not the best way.
The best way is to use logarithms.
As you may vaguely recall from math classes past, logarithms are the inverse of exponents.
Since 2^4 = 16, log_2 (16) = 4.
The natural logarithm ln() is the most fundamental for deep mathematical reasons I don’t have room to explain right now. It uses the base e, a transcendental number that starts 2.718281828459045…
To the uninitiated, this probably seems like an odd choice—no rational number has a natural logarithm that is itself a rational number (well, other than 1, since ln(1) = 0).
But perhaps it will seem a bit more comfortable once I show you that natural logarithms are remarkably close to percentages, particularly for the small changes in which percentages make sense.
We define something called log points such that the change in log points is 100 times the natural logarithm of the ratio of the two:
log points = 100 * ln(new / old)
This is symmetric because of the following property of logarithms:
ln(a/b) = – ln(b/a)
Let’s return to the country that saw its GDP rise from $5 trillion to $6 trillion.
The logarithmic change is 18.2 log points:
100 * ln($6 T / $5 T) = 100 * ln(1.2) = 18.2
If it falls back to $5 T, the change is -18.2 log points: 100 * ln($5 T / $6 T) = 100 * ln(0.833) = -18.2 Notice how in the symmetric percentage system, it rose and fell 18.2%; and in the logarithmic system, it rose and fell 18.2 log points. They are almost interchangeable, for small percentages. In this graph, the old value is assumed to be 1. The horizontal axis is the new value, and the vertical axis is the percentage change we would report by each method. The green line is the usual way we measure percentages. The red curve is the symmetric percentage method. The blue curve is the logarithmic method. For percentages within +/- 10%, all three methods are about the same. Then both new methods give about the same answer all the way up to changes of +/- 40%. Since most real changes in economics are within that range, the symmetric method and the logarithmic method are basically interchangeable. However, for very large changes, even these two methods diverge, and in my opinion the logarithm is to be preferred. The symmetric percentage never gets above 200% or below -200%, while the logarithm is unbounded in both directions. If you lose everything, the old system would say you have lost 100%. The symmetric system would say you have lost 200%. The logarithmic system would say you have lost infinity log points. If infinity seems a bit too extreme, think of it this way: You have in fact lost everything. No finite proportional gain can ever bring it back. A loss that requires a gain of infinity percent seems like it should be called a loss of infinity percent, doesn’t it? Under the logarithmic system it is. If you gain an infinite amount, the old system would say you have gained infinity percent. The logarithmic system would also say that you have gained infinity log points. But the symmetric percentage system would say that you have gained 200%. 200%? Counter-intuitive, to say the least. Log points also have another very nice property that neither the usual system nor the symmetric percentage system have: You can add them. If you gain 25 log points, lose 15 log points, then gain 10 log points, you have gained 20 log points. 25 – 15 + 10 = 20 Just as you’d expect! But if you gain 25%, then lose 15%, and then gain 10%, you have gained… 16.9%. (1 + 0.25)*(1 – 0.15)*(1 + 0.10) = 1.169 If you gain 25% symmetric, lose 15% symmetric, then gain 10% symmetric, that calculation is really a pain. To find the value y that is p symmetric percentage points from the starting value x, you end up needing to solve this equation: p = 100 * (y – x)/((x+y)/2) This can be done; it comes out like this: y = (200 + p)/(200 – p) * x (This also gives a bit of insight into why it is that the bounds are +/- 200%.) So by chaining those, we can in fact find out what happens after gaining 25%, losing 15%, then gaining 10% in the symmetric system: (200 + 25)/(200 – 25)*(200 – 15)/(200 + 15)*(200 + 10)/(200 – 10) = 1.223 Then we can put that back into the symmetric system: 100% * (1.223 – 1)/((1+1.223)/2) = 20.1% So after all that work, we find out that you have gained 20.1% symmetric. We could almost just add them—because they are so similar to log points—but we can’t quite. Log points actually turn out to be really convenient, once you get the hang of them. The problem is that there’s a conceptual leap for most people to grasp what a logarithm is in the first place. In particular, the hardest part to grasp is probably that a doubling is not 100 log points. It is in fact 69 log points, because ln(2) = 0.69. 
(Doubling in the symmetric percentage system is gaining 67%—much closer to the log points than to the usual percentage system.)
Calculation of the new value is a bit more difficult than in the usual system, but not as difficult as in the symmetric percentage system. If you have a change of p log points from a starting point of x, the ending point y is:
y = e^{p/100} * x
The fact that you can add log points ultimately comes from the way exponents add:
e^{p1/100} * e^{p2/100} = e^{(p1+p2)/100}
Suppose US GDP grew 2% in 2007, then 0% in 2008, then fell 8% in 2009 and rose 4% in 2010 (this is approximately true). Where was it in 2010 relative to 2006? Who knows, right? It turns out to be a net loss of 2.4%; so if it was $15 T before it’s now $14.63 T. If you had just added, you’d think it was only down 2%; you’d have underestimated the loss by $70 billion.
But if it had grown 2 log points, then 0 log points, then fell 8 log points, then rose 4 log points, the answer is easy: It’s down 2 log points. If it was $15 T before, it’s now $14.70 T. Adding gives the correct answer this time.
Thus, instead of saying that the stock market fell 4.3%, we should say it fell 4.4 log points. Instead of saying that GDP is up 1.9%, we should say it is up 1.8 log points. For small changes it won’t even matter; if inflation is 1.4%, it is in fact also 1.4 log points. Log points are a bit harder to conceptualize; but they are symmetric and additive, which other methods are not.
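Here is a short sketch of the GDP example above, showing how log points add while ordinary percentages have to be compounded:

```python
import math

def log_points(old, new):
    """Logarithmic change, in log points."""
    return 100 * math.log(new / old)

# US GDP example: +2%, 0%, -8%, +4% ordinary percentage growth, starting from $15 T.
growth = [0.02, 0.00, -0.08, 0.04]
gdp = 15.0
for g in growth:
    gdp *= 1 + g
print(f"Compounded level: {gdp:.2f} T")                       # ~14.64 T, a net loss of ~2.4%

# Converted to log points, the yearly changes simply add:
lp = sum(log_points(1, 1 + g) for g in growth)
print(f"Total change: {lp:.2f} log points")                   # ~-2.44 log points
print(f"Recovered level: {15.0 * math.exp(lp / 100):.2f} T")  # ~14.64 T again

# If growth had been reported as 2, 0, -8 and 4 log points in the first place,
# the total would be exactly -2 log points and the level 15 * e^(-0.02), about 14.70 T.
```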
Is this a matter of life and death on a global scale? No.
But I can’t write about those every day, now can I?
# What does correlation have to do with causation?
JDN 2457345
I’ve been thinking of expanding the topics of this blog into some basic statistics and econometrics. It has been said that there are “Lies, damn lies, and statistics”; but in fact it’s almost the opposite—there are truths, whole truths, and statistics. Almost everything in the world that we know—not merely guess, or suppose, or intuit, or believe, but actually know, with a quantifiable level of certainty—is done by means of statistics. All sciences are based on them, from physics (when they say the Higgs discovery is a “5-sigma event”, that’s a statistic) to psychology, ecology to economics. Far from being something we cannot trust, they are in a sense the only thing we can trust.
The reason it sometimes feels like we cannot trust statistics is that most people do not understand statistics very well; this creates opportunities for both accidental confusion and willful distortion. My hope is therefore to provide you with some of the basic statistical knowledge you need to combat the worst distortions and correct the worst confusions.
I wasn’t quite sure where to start on this quest, but I suppose I have to start somewhere. I figured I may as well start with an adage about statistics that I hear commonly abused: “Correlation does not imply causation.”
Taken at its original meaning, this is definitely true. Unfortunately, it can be easily abused or misunderstood.
In its original meaning, where “imply” is used in the formal sense of logical implication, to “imply” something is an extremely strong statement. It means that you logically entail that result, that if the antecedent is true, the consequent must be true, on pain of logical contradiction. Logical implication is for most practical purposes synonymous with mathematical proof. (Unfortunately, it’s not quite synonymous, because of things like Gödel’s incompleteness theorems and Löb’s theorem.)
And indeed, correlation does not logically entail causation; it’s quite possible to have correlations without any causal connection whatsoever, simply by chance. One of my former professors liked to brag that from 1990 to 2010 whether or not she ate breakfast had a statistically significant positive correlation with that day’s closing price for the Dow Jones Industrial Average.
How is this possible? Did my professor actually somehow influence the stock market by eating breakfast? Of course not; if she could do that, she’d be a billionaire by now. And obviously the Dow’s price at 17:00 couldn’t influence whether she ate breakfast at 09:00. Could there be some common cause driving both of them, like the weather? I guess it’s possible; maybe in good weather she gets up earlier and people are in better moods so they buy more stocks. But the most likely reason for this correlation is much simpler than that: She tried a whole bunch of different combinations until she found two things that correlated. At the usual significance level of 0.05, on average you need to try about 20 combinations of totally unrelated things before two of them will show up as correlated. (My guess is she used a number of different stock indexes and varied the starting and ending year. That’s a way to generate a surprisingly large number of degrees of freedom without it seeming like you’re doing anything particularly nefarious.)
But how do we know they aren’t actually causally related? Well, I suppose we don’t. Especially if the universe is ultimately deterministic and nonlocal (as I’ve become increasingly convinced by the results of recent quantum experiments), any two data sets could be causally related somehow. But the point is they don’t have to be; you can pick any randomly-generated datasets, pair them up in 20 different ways, and odds are, one of those ways will show a statistically significant correlation.
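The breakfast-and-Dow trick is easy to reproduce in simulation. Here is a sketch (my own toy setup, using scipy's pearsonr) that correlates twenty purely random variable pairs and counts how many come out "significant":

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_days, n_pairs = 250, 20        # about one trading year, twenty unrelated pairs

significant = 0
for _ in range(n_pairs):
    breakfast = rng.integers(0, 2, n_days)      # did she eat breakfast? pure coin flips
    index = rng.normal(size=n_days).cumsum()    # a toy random-walk "stock index"
    r, p = pearsonr(breakfast, index)
    significant += (p < 0.05)

print(significant, "of", n_pairs, "unrelated pairs came out 'significant' at p < 0.05")
# On average roughly 1 in 20 will, purely by chance; rerun with different seeds
# and you will usually find at least one spurious "discovery".
```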
All of that is true, and important to understand. Finding a correlation between eating grapefruit and getting breast cancer, or between liking bitter foods and being a psychopath, does not necessarily mean that there is any real causal link between the two. If we can replicate these results in a bunch of other studies, that would suggest that the link is real; but typically, such findings cannot be replicated. There is something deeply wrong with the way science journalists operate; they like to publish the new and exciting findings, which 9 times out of 10 turn out to be completely wrong. They never want to talk about the really important and fascinating things that we know are true because we’ve been confirming them over hundreds of different experiments, because that’s “old news”. The journalistic desire to be new and first fundamentally contradicts the scientific requirement of being replicated and confirmed.
So, yes, it’s quite possible to have a correlation that tells you absolutely nothing about causation.
But this is exceptional. In most cases, correlation actually tells you quite a bit about causation.
And this is why I don’t like the adage; “imply” has a very different meaning in common speech, meaning merely to suggest or evoke. Almost everything you say implies all sorts of things in this broader sense, some more strongly than others, even though it may logically entail none of them.
Correlation does in fact suggest causation. Like any suggestion, it can be overridden. If we know that 20 different combinations were tried until one finally yielded a correlation, we have reason to distrust that correlation. If we find a correlation between A and B but there is no logical way they can be connected, we infer that it is simply an odd coincidence.
But when we encounter any given correlation, there are three other scenarios which are far more likely than mere coincidence: A causes B, B causes A, or some other factor C causes A and B. These are also not mutually exclusive; they can all be true to some extent, and in many cases are.
A great deal of work in science, and particularly in economics, is based upon using correlation to infer causation, and has to be—because there is simply no alternative means of approaching the problem.
Yes, sometimes you can do randomized controlled experiments, and some really important new findings in behavioral economics and development economics have been made this way. Indeed, much of the work that I hope to do over the course of my career is based on randomized controlled experiments, because they truly are the foundation of scientific knowledge. But sometimes, that’s just not an option.
Let’s consider an example: In my master’s thesis I found a strong correlation between the level of corruption in a country (as estimated by the World Bank) and the proportion of that country’s income which goes to the top 0.01% of the population. Countries that have higher levels of corruption also tend to have a larger proportion of income that accrues to the top 0.01%. That correlation is a fact; it’s there. There’s no denying it. But where does it come from? That’s the real question.
Could it be pure coincidence? Well, maybe; but when it keeps showing up in several different models with different variables included, that becomes unlikely. A single p < 0.05 will happen about 1 in 20 times by chance; but five in a row should happen less than 1 in 1 million times (assuming they’re independent, which, to be fair, they usually aren’t).
Could it be some artifact of the measurement methods? It’s possible. In particular, I was concerned about the possibility of Halo Effect, in which people tend to assume that something which is better (or worse) in one way is automatically better (or worse) in other ways as well. People might think of their country as more corrupt simply because it has higher inequality, even if there is no real connection. But it would have taken a very large halo bias to explain this effect.
So, does corruption cause income inequality? It’s not hard to see how that might happen: More corrupt individuals could bribe leaders or exploit loopholes to make themselves extremely rich, and thereby increase inequality.
Does inequality cause corruption? This also makes some sense, since it’s a lot easier to bribe leaders and manipulate regulations when you have a lot of money to work with in the first place.
Does something else cause both corruption and inequality? Also quite plausible. Maybe some general cultural factors are involved, or certain economic policies lead to both corruption and inequality. I did try to control for such things, but I obviously couldn’t include all possible variables.
So, which way does the causation run? Unfortunately, I don’t know. I tried some clever statistical techniques to try to figure this out; in particular, I looked at which tends to come first—the corruption or the inequality—and whether they could be used to predict each other, a method called Granger causality. Those results were inconclusive, however. I could neither verify nor exclude a causal connection in either direction. But is there a causal connection? I think so. It’s too robust to just be coincidence. I simply don’t know whether A causes B, B causes A, or C causes A and B.
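For the curious, here is roughly what such a check looks like with off-the-shelf tools. This is only an illustrative sketch: the two series below are random placeholders, the column names are made up, and it is not the code or data from my thesis.

```python
# Illustrative Granger-causality sketch on placeholder data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
years = pd.date_range("1980", periods=35, freq="YS")
df = pd.DataFrame({
    "corruption": rng.normal(size=35).cumsum(),        # placeholder series
    "top_income_share": rng.normal(size=35).cumsum(),  # placeholder series
}, index=years)

# Does corruption help predict the top income share?
# (statsmodels tests whether the SECOND column Granger-causes the FIRST.)
res_1 = grangercausalitytests(df[["top_income_share", "corruption"]].diff().dropna(), maxlag=2)

# And the reverse direction.
res_2 = grangercausalitytests(df[["corruption", "top_income_share"]].diff().dropna(), maxlag=2)
```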
Imagine trying to do this same study as a randomized controlled experiment. Are we supposed to create two societies and flip a coin to decide which one we make more corrupt? Or which one we give more income inequality? Perhaps you could do some sort of experiment with a proxy for corruption (cheating on a test or something like that), and then have unequal payoffs in the experiment—but that is very far removed from how corruption actually works in the real world, and worse, it’s prohibitively expensive to make really life-altering income inequality within an experimental context. Sure, we can give one participant $1 and the other $1,000; but we can’t give one participant $10,000 and the other $10 million, and it’s the latter that we’re really talking about when we deal with real-world income inequality. I’m not opposed to doing such an experiment, but it can only tell us so much. At some point you need to actually test the validity of your theory in the real world, and for that we need to use statistical correlations.
Or think about macroeconomics; how exactly are you supposed to test a theory of the business cycle experimentally? I guess theoretically you could subject an entire country to a new monetary policy selected at random, but the consequences of being put into the wrong experimental group would be disastrous. Moreover, nobody is going to accept a random monetary policy democratically, so you’d have to introduce it against the will of the population, by some sort of tyranny or at least technocracy. Even if this is theoretically possible, it’s mind-bogglingly unethical.
Now, you might be thinking: But we do change real-world policies, right? Couldn’t we use those changes as a sort of “experiment”? Yes, absolutely; that’s called a quasi-experiment or a natural experiment. They are tremendously useful. But since they are not truly randomized, they aren’t quite experiments. Ultimately, everything you get out of a quasi-experiment is based on statistical correlations.
Thus, abuse of the adage “Correlation does not imply causation” can lead to ignoring whole subfields of science, because there is no realistic way of running experiments in those subfields. Sometimes, statistics are all we have to work with.
This is why I like to say it a little differently:
Correlation does not prove causation. But correlation definitely can suggest causation.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5055192708969116, "perplexity": 819.3511701510643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823710.44/warc/CC-MAIN-20181212000955-20181212022455-00570.warc.gz"}
|
http://www.ck12.org/chemistry/Derived-Units/web/user:13IntK/Essentials-of-the-SI/
|
# Derived Units
Here is a guide to the base and derived units of the SI, which establishes what the difference between a base and derived unit is.
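As a quick illustration (added here; it is not part of the linked guide): a derived unit is just an algebraic combination of the SI base units, for example 1 N = 1 kg·m/s², 1 Pa = 1 N/m² = 1 kg/(m·s²), and 1 J = 1 N·m = 1 kg·m²/s².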
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657603859901428, "perplexity": 4397.078930118778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300799.9/warc/CC-MAIN-20150323172140-00003-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2026849/how-do-i-show-that-the-galois-group-of-an-algebraic-number-over-the-field-contai
|
# How do I show that the Galois group of an algebraic number over the field containing roots of unity is cyclic?
Suppose that I have a complex number $\alpha\not\in\mathbb Q$ such that $\alpha^n\in\mathbb Q$. I take the extension $\mathbb Q(\alpha)$. Suppose that $\mathbb Q(\alpha)$ is Galois. Clearly, this extension need not be cyclic. However, I believe that if I consider the extension $F$ of $\mathbb Q$ where $F$ contains all the roots of unity in $\mathbb Q(\alpha)$, then the extension $[\mathbb Q(\alpha):F]$ becomes cyclic. I've seen this used in a few places, but can't seem to rigorously find why this is the case. I can see that $\alpha$ satisfies $x^n-\alpha^n$, but how do I know the minimal polynomial? I guessed that the roots of the minimal polynomial might be $\alpha\zeta_n$, where $\zeta_n$ is a primitive root of unity, but even then I'm not sure. I'd appreciate some help. Thanks.
• You probably want to say that $F(\alpha)/F$ is cyclic, not $\mathbb Q(\alpha)/F$ because in the latter, $\mathbb Q(\alpha)$ does not necessarily contain $F$. – Ravi Nov 23 '16 at 6:10
• Yes, but I think I may as well suppose $\mathbb Q(\alpha)$ is Galois over $\mathbb Q$. I've edited it. – adrija Nov 23 '16 at 6:13
• I don't think you should assume that because that will lead to you quickly coming to the conclusion that $n=2$ from your opening sentence. – Ravi Nov 23 '16 at 6:16
• How so? I didn't say $\alpha$ is real, it may as well be a primitive $n^{th}$ root. – adrija Nov 23 '16 at 6:20
First some definitions and notations: $a\in \mathbf Q^*, K=\mathbf Q(\alpha)$, where $\alpha$ is a root of $g_n (X)=X^n - a$. You suppose that $K/ \mathbf Q$ is normal, and you ask whether a certain subfield $F$ of $K$, containing enough roots of unity, is such that $K/F$ is a cyclic extension. Let us consider two cases according to the (ir)reducibility of $g_n (X)=X^n - a$. In the following, just to avoid petty trouble with $2$, all rational primes which enter the game will be odd (but this is not a mathematical restriction).
1) If $g_n (X)$ is irreducible, the normality hypothesis implies that $K$ contains the group $\mu_n$ of $n$-th roots of unity. Take $F=\mathbf Q (\mu_n)$; then $K=F(\alpha)$ is a simple Kummer extension, hence cyclic (see e.g. S. Lang's "Algebra", chap.8, §8), of degree dividing $n$.
2) If $g_n (X)$ is reducible, a general criterion in op. cit., chap.8, §9, asserts that there exists a prime divisor $p$ of $n$ such that $a\in \mathbf Q ^{*p}$. Let $p^r$ be the maximal power such that $a\in \mathbf Q^{*p^r}$. Then $\beta :=\alpha^{p^r}$ is a root of $g_m(X)=X^m - a$, where $n=mp^r$. If $\beta$ were a $p$-th power in $\mathbf Q(\beta)$, say $\beta=\gamma^{p^r}$, taking norms in $\mathbf Q(\beta) / \mathbf Q$ would give that $a= N(\pm\gamma) ^{p^{r+1}}$ (here we use that $p$ is odd), which contradicts the maximality of $r$. Hence, according to the same criterion op. cit., $X^{p^r} - a$ is irreducible over $\mathbf Q(\beta)$. The same argument as in 1) then shows that $K$ is cyclic over $F_p := \mathbf Q(\beta, \mu_{p^r})$, of degree dividing $p^r$. Repeat this process for all prime divisors $q$ of $n$ which share the same property as $p$ above, and take $F$ to be the compositum of all the $F_q$'s. Then $Gal(K/F)$, being the intersection of all the $Gal(K/F_q)$'s, is cyclic.
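A small sanity check of 1), added here as an illustration (it is not part of the original answer): take $a=-1$ and $n=4$, so that $g_4(X)=X^4+1$ is irreducible and we may take $\alpha=\zeta_8$. Then $K=\mathbf Q(\zeta_8)$ is normal of degree $4$ over $\mathbf Q$ with $Gal(K/\mathbf Q)\cong(\mathbf Z/8\mathbf Z)^{\times}\cong\mathbf Z/2\times\mathbf Z/2$, which is not cyclic. But $K\supset\mu_4$ (because $\zeta_8^2=i$), and over $F=\mathbf Q(\mu_4)=\mathbf Q(i)$ the extension $K=F(\alpha)$ has degree $2$, so $Gal(K/F)$ is indeed cyclic, as the answer asserts.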
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9770938754081726, "perplexity": 91.18768028945235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00329.warc.gz"}
|
http://ptsymmetry.net/?p=670
|
## Conservation relations and anisotropic transmission resonances in one-dimensional PT-symmetric photonic heterostructures
Li Ge, Y. D. Chong, A. D. Stone
We analyze the optical properties of one-dimensional (1D) PT-symmetric structures of arbitrary complexity. These structures violate normal unitarity (photon flux conservation) but are shown to satisfy generalized unitarity relations, which relate the elements of the scattering matrix and lead to a conservation relation in terms of the transmittance and (left and right) reflectances. One implication of this relation is that there exist anisotropic transmission resonances in PT-symmetric systems, frequencies at which there is unit transmission and zero reflection, but only for waves incident from a single side. The spatial profile of these transmission resonances is symmetric, and they can occur even at PT-symmetry breaking points. The general conservation relations can be utilized as an experimental signature of the presence of PT-symmetry and of PT-symmetry breaking transitions. The uniqueness of PT-symmetry breaking transitions of the scattering matrix is briefly discussed by comparing to the corresponding non-Hermitian Hamiltonians.
http://arxiv.org/abs/1112.5167
Optics (physics.optics)
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8591693043708801, "perplexity": 1650.2875735390624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887692.13/warc/CC-MAIN-20180119010338-20180119030338-00626.warc.gz"}
|
https://threesquirrelsdotblog.com/2017/05/15/77/?like_comment=739&_wpnonce=e9b2aa709b
|
# Solution to (general) Truncated Moment Problem
We are going to solve the truncated moment problem in this post. The theorem we are going to establish is more general than the original problem itself. The following theorem is a bit abstract; you can skip to Corollary 2 to see what the truncated moment problem is and why it has a generalization in the form of Theorem 1.
Theorem 1 Suppose ${X}$ is a random transformation from a probability space ${(A,\mathcal{A},\mathop{\mathbb P})}$ to a measurable space ${(B,\mathcal{B})}$ where each singleton set of $B$ is in $\mathcal{B}$. Let each ${f_i}$ be a real valued (Borel measurable) function with its domain to be ${B}$, ${i=1,\dots,m}$. Given
$\displaystyle (\mathbb{E}f_i(X))_{i=1,\dots,m}$
and they are all finite, there exists a random variable ${Y\in B}$ such that ${Y}$ takes no more than ${m+1}$ values in ${B}$, and
$\displaystyle (\mathbb{E}f_i(Y))_{i=1,\dots,m} = (\mathbb{E}f_i(X))_{i=1,\dots,m}.$
(If you are not familiar with terms Borel measurable, measurable space and sigma-algebras $\mathcal{A}, \mathcal{B}$, then just ignore these. I put these term here just to make sure the that the theorem is rigorous enough.)
Let me parse the theorem for you. Essentially, the theorem is trying to say that given ${m}$ many expectations, no matter what kind of source the randomness comes from, i.e., what ${X}$ is, we can always find a finite valued random variable (which is ${Y}$ in the theorem) that achieves the same expectation.
To have a concrete sense of what is going on, consider the following Corollary of Theorem 1. It is the original truncated moment problem.
Corollary 2 (Truncated Moment Problem) For any real valued random variable ${X\in {\mathbb R}}$ with its first ${m}$ moments all finite, i.e., for all ${1\leq i\leq m}$
$\displaystyle \mathop{\mathbb E}|X|^i < \infty,$
there exists a real valued discrete random variable ${Y}$ which takes no more than ${m+1}$ values in ${{\mathbb R}}$ and its first ${m}$ moments are the same as ${X}$, i.e.,
$\displaystyle (\mathbb{E}Y,\mathbb{E}(Y^2),\dots, \mathbb{E}(Y^m) )=(\mathbb{E}X,\mathbb{E}(X^2),\dots, \mathbb{E}(X^m)).$
This original truncated moment problem asks whether, given the (uncentered) moments, we can always find a finite discrete random variable that matches all the moments. It should be clear that this is a simple consequence of Theorem 1 by letting ${B={\mathbb R}}$ and ${f_i(x) = x^{i},\, i=1,\dots,m}$.
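As a quick numerical illustration of Corollary 2 (my own sketch, not part of the original argument): for ${m=2}$, the two-point variable supported on ${\mu\pm\sigma}$ with equal weights already matches the first two moments of any ${X}$, well within the ${m+1=3}$ point bound.

```python
# Sketch: match the first two moments of an exponential X with a 2-point Y.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)   # stand-in for X
m1, m2 = x.mean(), (x ** 2).mean()               # first two (sample) moments

mu, sigma = m1, np.sqrt(m2 - m1 ** 2)
support = np.array([mu - sigma, mu + sigma])
probs = np.array([0.5, 0.5])

print(m1, probs @ support)        # E[Y]   == m1
print(m2, probs @ support ** 2)   # E[Y^2] == mu^2 + sigma^2 == m2
```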
There is also a multivariate version of truncated moment problem which can also be regarded as a special case of Theorem 1.
Corollary 3 (Truncated Moment Problem, Multivariate Version) For any real random vector ${X=(X_1,\dots,X_n)\in \mathbb{R}^n}$ and its all ${k}$th order moments are finite, i.e.,
$\displaystyle \mathop{\mathbb E}(\Pi_{i=1}^n|X_{i}|^{\alpha_i}) <\infty$
for any ${{1\leq \sum \alpha_i\leq k}}$. Each ${\alpha_i}$ here is a nonnegative integer. The total number of moments in this case is ${n+k \choose k}$. Then there is a real random vector ${Y \in \mathbb{R}^n}$ such that it takes no more than ${{n+k \choose k}+1}$ values, and
$\displaystyle (\mathop{\mathbb E}(\Pi_{i=1}^nX_{i}^{\alpha_i}))_{1\leq \sum \alpha_i\leq k} = (\mathop{\mathbb E}(\Pi_{i=1}^nY_{i}^{\alpha_i})) _{1\leq \sum \alpha_i\leq k}.$
Though the form of Theorem 1 is quite general and looks scary, it is actually a simple consequence of the following lemma and the use of convex hull.
Lemma 4 For any convex set ${C \subseteq \mathbb{R}^k}$ and any random variable ${Z}$ which has finite mean and takes values only in ${C}$, i.e.,
$\displaystyle \mathop{\mathbb E}(Z) \in \mathbb{R}^k, \mathop{\mathbb P}(Z\in C) =1,$
we have
$\displaystyle \mathop{\mathbb E} (Z) \in C.$
The above proposition is trivially true if ${C}$ is closed or $Z$ takes only finitely many values. But it remains true when ${C}$ is only assumed to be convex. We will show it in this post.
We are now ready to show Theorem 1.
Proof of Theorem 1: Consider the set
$\displaystyle L = \{ (f_i(x))_{i=1,\dots,m}\mid x\in B \},$
The convex hull of this set ${L}$ is
$\displaystyle \text{conv}(L) = \{ \sum_{j=1}^l \alpha _j a_j\mid \alpha_j \geq 0 ,\sum_{j=1}^l \alpha_j =1, a_j\in L, l \in {\mathbb N}\}.$
Now take the random variable ${Z=(f_i(X))_{i=1,\dots,m}}$, which takes values only in ${L\subset \text{conv}(L)}$; by Lemma 4, we know that
$\displaystyle \mathop{\mathbb E} Z \in \text{conv}(L).$
Note that every element in ${\text{conv}(L)}$ has a FINITE representation in terms of ${a_j}$s!
This means we can find ${l\in {\mathbb N}}$, ${\alpha_j\geq 0, \sum_{j = 1}^l \alpha_j =1}$ and ${a_j \in L, j=1,\dots,l}$ such that
$\displaystyle \sum_{j=1}^l \alpha_ja_j = \mathop{\mathbb E} Z = (\mathop{\mathbb E} f_i(X))_{i=1,\dots,m}.$
Since each ${a_j = (f_i(x_j))_{i=1,\dots,m}}$ for some ${x_j \in B}$, we can simply take the distribution of ${Y}$ to be
$\displaystyle \mathop{\mathbb P}(Y = x_j) = \alpha_j, \quad \forall j =1,\dots,l.$
Finally, apply the theorem of Caratheodory to conclude that ${l\leq m+1}$. $\Box$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 65, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8979355692863464, "perplexity": 190.73455393909228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710870.69/warc/CC-MAIN-20221201221914-20221202011914-00012.warc.gz"}
|
http://mathforum.org/kb/thread.jspa?threadID=2163058&messageID=7252093
|
Topic: Least-square optimization with a complex residual function
Replies: 4 Last Post: Oct 30, 2010 11:16 AM
elgen Posts: 7 Registered: 8/27/10
Re: Least-square optimization with a complex residual function
Posted: Oct 28, 2010 10:20 PM
On 10-10-28 09:38 PM, [email protected] wrote:
> elgen<[email protected]> wrote:
>> I have a question on the least-square optimization with a complex
>> residual function. The residual function is r(z_1, z_2), in which z_1
>> and z_2 are complex variables.
> [...]
>> In my case r(z_1, z_2) is a complex function. If I use the Euclidean
>> norm (conjugated inner product), the cost function becomes
>>
>> \sum_i conj(r)r
>
> So your resid is real now? OK. Change your mind, that's alright. :)
>
>> I am stuck on how to calculate the gradient of this cost function as
>> conj(r) is not an analytic function and the gradient needs to take the
>> derivative with respect to z_1 and z_2.
>
> Ahhh. At worst you can treat the resid as 2 SoS -- the real parts of r and
> the imag parts of r.
>
> For more general resid functions maybe think in terms of Euclid form.
>
>
I understand that you refer residual to
conj(r) r
in my case.
How would I proceed to calculate its gradient? Would you mind being more
specific? What is "SoS"?
elgen
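[A note added for reference, not part of the original thread: "SoS" here presumably means "sum of squares". Writing the cost as f = \sum_i [ Re(r_i)^2 + Im(r_i)^2 ] lets you take ordinary gradients with respect to the real and imaginary parts of z_1 and z_2. Equivalently, if each r_i is analytic in (z_1, z_2), the Wirtinger derivative of f = \sum_i conj(r_i) r_i is \partial f / \partial conj(z_k) = \sum_i conj(\partial r_i / \partial z_k) r_i, and the steepest-descent direction in z_k is proportional to minus this quantity.]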
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857801616191864, "perplexity": 7666.136465150183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738095178.99/warc/CC-MAIN-20151001222135-00040-ip-10-137-6-227.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/5087910fe4b058e80cf64ff9
|
Question (asked by sjerman1): find the derivative of x(x-1)^2

1. lgbasallote: my advice for you is..don't use product rule. expand (x-1)^2 first
2. sjerman1: alright so $x^2-2x+1$
3. SheldonEinstein: Why not to use product rule?
4. lgbasallote: right. don't forget the x: x(x^2 - 2x + 1). now distribute x into the trinomial
5. lgbasallote: @SheldonEinstein because expanding it will be a lot simpler
6. lgbasallote: if you expand it, you just get a polynomial (then you can derive the terms individually)
7. SheldonEinstein: Oh! Right, I thought you were referring that "Product rule will not work here". It's my bad, sorry :)
8. sjerman1: then $x^3-2x^2+x$ so f'(x) = $3x^2-4x+1$
9. lgbasallote: yes
10. sjerman1: can you show product rule with that quickly please?
11. SheldonEinstein: Though, I would like to show how product rule will work in this way : $\large{\frac{d}{dx}{u.v} = u \frac{dv}{dx} + v \frac{du}{dx} \implies \textbf{Product rule}}$
12. SheldonEinstein: put u = x and v = (x-1)^2
13. SheldonEinstein: Now? @sjerman1 can you do it?
14. lgbasallote: are you required to solve it by product rule or are you just interested @sjerman1 ?
15. sjerman1: i am just interested, that was a much easier way to solve it but I am actually in the process of finding points guaranteed to exist by rolle's theorem
16. SheldonEinstein: Wait I am typing :(
17. lgbasallote: i have no idea what that theorem is.....but let's see what @SheldonEinstein is typing
18. SheldonEinstein:
    $\large{x \frac{d (x-1)^2}{dx} + (x-1)^2 \frac{dx}{dx} }$
    $\large{x \frac{d(x^2+1-2x)}{dx} + (x-1)^2}$
    $\large{x [ \frac{d(x^2)}{dx} + \frac{d(1)}{d(x)} - \frac{d(2x)}{dx} ] +(x-1)^2}$
    $\large{x [ 2x + 0 - 2 ] + x^2 +1 - 2x}$
    $\large{ 2x^2 - 2x + x^2 + 1 - 2x}$
    $\large{3x^2-4x+1}$
19. SheldonEinstein: Sorry for late answer :( I was checking it :)
20. sjerman1: Amazing job! Thank you very much!
21. SheldonEinstein: You're welcome @sjerman1 ... Thanks for the patience friends :) Also, good job @lgbasallote
22. lgbasallote: you can also do it without expanding (x-1)^2 in the derivative:
    x(x-1)^2
    x[2(x-1)] + (x-1)^2
    x[2x - 2] + x^2 - 2x + 1
    2x^2 - 2x + x^2 - 2x + 1
    3x^2 - 4x + 1
    just showing an alternative
23. SheldonEinstein: Right ..... but usually I prefer to just use d(x^n) / dx and let the students know that x can be anything. like x can be (x-1) and similarly it can be (x+1) ...
24. sjerman1: i wonder, is there anyway to view these questions after they are closed?
25. SheldonEinstein: Yes @sjerman1
26. SheldonEinstein: (no text)
27. SheldonEinstein: (no text)
28. sjerman1: Thank you both!
29. lgbasallote: or...you can just click your username on the upper right corner and then press questions asked in the profile page...
30. SheldonEinstein: Here the message says all. (1 attachment)
31. SheldonEinstein: There in the message has "reopen" word but that means like you can not get in "OPEN QUESTIONS SECTION" again but still the other users can help you if they regularly see closed quest. section. (attachments)
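[Aside, added for reference and not part of the original thread: since the asker mentioned Rolle's theorem in message 15, note that for $f(x)=x(x-1)^2$ on $[0,1]$ we have $f(0)=f(1)=0$, so Rolle's theorem guarantees some $c\in(0,1)$ with $f'(c)=0$. Solving $3c^2-4c+1=(3c-1)(c-1)=0$ gives $c=\tfrac{1}{3}$; the other root $c=1$ lies outside the open interval.]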
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989930987358093, "perplexity": 9909.377967397602}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132883.65/warc/CC-MAIN-20140914011212-00248-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
https://karagila.org/2019/in-praise-of-replacement/
|
Asaf Karagila
I don't have much choice...
I have often seen people complain about Replacement axioms. For example, this MathOverflow question, or this one, or that one, and also this one. This technical-looking schema of axioms states that if $$\varphi$$ defines a function on a set $$x$$, then the image of $$x$$ under that function is a set. And this axiom schema is a powerhouse! It is one of the three components that give $$\ZF$$ its power (the others being power set and infinity, of course).
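For readers who have not seen the schema written out, one standard rendering (mine, included here for convenience) is: for every formula $$\varphi(a,b,p)$$, $$\forall p\,\forall x\,\bigl[\forall a\in x\,\exists! b\,\varphi(a,b,p)\;\rightarrow\;\exists y\,\forall b\,(b\in y\leftrightarrow\exists a\in x\,\varphi(a,b,p))\bigr].$$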
You’d think that people in category theory would like it: from a foundational point of view, it literally tells you that functions exist if you can define them! And category theory is all about the functions (yes, I know it’s not, but I’m trying to make a point).
One of the equivalences of Replacement is the following statement: Suppose that $$\varphi(x,y,z,p)$$ is a formula such that for some parameter $$p$$, $$\varphi(x,y,z,p)\leftrightarrow\varphi(x,y,z',p)$$ if and only if $$z=z'$$. Namely, $$\varphi$$ (modulo the parameter $$p$$) defines the ordered pair $$(x,y)$$. Then the Cartesian product $$A\times_{\varphi,p} B=\{z\mid\exists a\in A\exists b\in B\varphi(a,b,z,p)\}$$ is a set.
In other words, Replacement is equivalent to saying "I don't care how you're coding ordered pairs, as long as it's going to satisfy the axioms of ordered pairs!". So you'd think people from non-$$\ZFC$$ foundations would be happy to have something like that, especially if they are focused on category theory (which is all about functions (yes, again, I know, I'm just trying to make a point)).
Well. It is exactly the opposite. Since the Kuratowski coding of ordered pairs is so simple, it's an easy solution to the problem. So from a foundational point of view, there's no problem anymore, and nothing else matter. Once you have one coding that lets you have ordered pairs from a set theoretic point of view, the rest is redundant. This is very much reflected in how $$\ETCS$$ is equivalent to a relatively weak set theory: bounded Zermelo.
So you might want to say, well, if category theory is not really using it, then maybe it's not that necessary. And indeed, the uses of Replacement outside of set theory are rare. Borel determinacy is one of them, and arguably it is a set theoretic statement.
But, like many other posts, this too was inspired by some discussion on Math.SE. And here is a very nice example of why Replacement is important.
Theorem. If $$A$$ is a set, then $$\{A^n\mid n\in\Bbb N\}$$ exists.
One would like to prove that by saying that this is just a bunch of subsets of $$A^{<\omega}$$, which itself is a subset of $$\mathcal P(\omega\times A)$$, so by power set and very bounded separation axioms, we can get that very set. But this depends on how $$A^n$$ is defined. If $$A^n$$ is the set of functions from $$n$$ to $$A$$, that's fine, the above suggestion works just fine. However, it is not uncommon to see the following inductive definition: $$A^0=\{\varnothing\}$$ and $$A^{n+1}=A^n\times A$$, or $$A^1=A$$ and $$A^{n+1}=A^n\times A$$.
Under that latter definition and using the Kuratowski definition of ordered pairs, $$A^{n+1}$$ has a strictly larger von Neumann rank compared to $$A^n$$. At least when starting with $$A=V_{\omega+1}$$, or something like that. So the von Neumann ranks of $$V_{\omega+1}^n$$ are strictly increasing under the Kuratowski definition, and so there is no set in $$V_{\omega+\omega}$$ which contains exactly the $$V_{\omega+1}^n$$'s. But since $$V_{\omega+\omega}$$ is a model of Zermelo, that means that we cannot prove without Replacement that $$\{A^n\mid n\in\Bbb N\}$$ is a set for any set $$A$$.
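(To spell out the rank computation this relies on: with the Kuratowski coding $$(a,b)=\{\{a\},\{a,b\}\}$$ we get $$\operatorname{rank}((a,b))=\max(\operatorname{rank}(a),\operatorname{rank}(b))+2,$$ which is why iterating the product construction keeps pushing the von Neumann rank upwards.)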
Hey hey, wait a minute, you might claim. You might argue that you don't care that $$A\times A$$ and $$\{f\colon\{0,1\}\to A\}$$ are two different sets. They both satisfy the same properties from an abstract point of view. BUT THIS IS THE POINT!
This is exactly the point! When you say that you don't want to distinguish between $$A\times A$$ and $$A^2$$, you are ostensibly replacing one object with another. You are literally appealing to Replacement!
Yes, this argument does not use a lot of Replacement, but it does use some of it. And it might just be enough to clarify of its necessity.
(And I haven't even started on talking about how it is equivalent to Reflection, which is so awesome on its own (foundationally speaking)...)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8693873286247253, "perplexity": 173.2400634481812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00398.warc.gz"}
|
https://socratic.org/questions/553d8d81581e2a30db7f2b83
|
# Question f2b83
Apr 27, 2015
The new concentration will be 1.00%.
When doing percent concentration problems, you should always start by taking a closer look at your initial solution.
A mass by volume percent concentration is expressed as grams of solute, in your case sodium chloride, per 100 mL of solution. A 1% m/v solution will have 1 g of solute in 100 mL of solution
$\%\text{ m/v} = \dfrac{\text{grams of solute}}{100\ \text{mL of solution}} \times 100$
Your initial solution is 10.% m/v, which means that it has 10 g of sodium chloride in every 100 mL of solution. Since you have less than 100 mL of solution, you'll have less than 10 g of sodium chloride in your sample
"%m/v" = "x g NaCl"/"50. mL" * 100 => x = ("%m/v" * 50)/100
x = (10 * 50cancel("ml"))/(100cancel("mL")) = "5.0 g"#
This is how much sodium chloride your initial solution contains.
Now, you want to increase the volume of the solution to 500 mL. The key here is to realize that the amount of sodium chloride present will not change.
Your final solution will still have 5.0 g of $NaCl$, but its volume will be bigger. The same amount of solute in a bigger volume will automatically mean a smaller concentration.
$\%\text{ m/v} = \dfrac{5.0\ \text{g}}{500\ \text{mL}} \times 100 = 1.00\%$
The volume is 10 times bigger, so the concentration must be ten times smaller.
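To double-check this kind of dilution arithmetic, here is a tiny sketch (added for illustration, using the numbers from this problem):

```python
# Dilution: the mass of solute is conserved, so C1*V1 = C2*V2 for % m/v.
c1, v1 = 10.0, 50.0   # 10.% m/v NaCl in 50. mL
v2 = 500.0            # final volume in mL

grams_nacl = c1 * v1 / 100   # 5.0 g of NaCl in the original sample
c2 = 100 * grams_nacl / v2   # new % m/v concentration
print(grams_nacl, c2)        # 5.0 1.0
```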
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8570941090583801, "perplexity": 1108.0193737540437}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738653.47/warc/CC-MAIN-20200810072511-20200810102511-00180.warc.gz"}
|
http://mathonline.wikidot.com/tangent-planes-to-level-surfaces
|
# Tangent Planes to Level Surfaces
We have already looked at computing tangent planes to surfaces described by a two variable function $z = f(x, y)$ on the Finding a Tangent Plane on a Surface page. This method is convenient when the variable $z$ is isolated from the variables $x$ and $y$ as we can apply the following formula to obtain the tangent plane at a point $P(x_0, y_0, z_0)$:
(1)
\begin{align} \quad z - z_0 = f_x (x_0, y_0) (x - x_0) + f_y (x_0, y_0) (y - y_0) \quad \mathrm{or} \quad z - z_0 = \frac{\partial}{\partial x} f(x_0, y_0) (x - x_0) + \frac{\partial}{\partial y} f(x_0, y_0) (y - y_0) \end{align}
If $z$ is not isolated from the variables $x$ and $y$, then finding the tangent plane at a point can be messy with this method, so instead, we will view these surfaces as "level surfaces" to a three variable function.
Let $w = f(x, y, z)$ be a three variable real-valued function, and consider the equation $f(x, y, z) = k$ where $k \in \mathbb{R}$. The equation $f(x, y, z) = k$ represents a level surface corresponding to the real number $k$ (this is analogous to obtaining level curves for functions of two variables).
Let $P(x_0, y_0, z_0)$ be a point on this level surface, and let $C$ be any curve that passes through $P$ and that is on the surface $S$. This curve $C$ can be parameterized as a vector-valued function $\vec{r}(t) = (x(t), y(t), z(t))$. Let $P$ correspond to $t_0$, that is $\vec{r}(t_0) = (x_0, y_0, z_0)$.
Now since the curve $C$ is on the surface $S$, we must have that $f(x(t), y(t), z(t)) = k$ for any defined $t$ for the curve $C$. Suppose that $x = x(t)$, $y = y(t)$, and $z = z(t)$ are differentiable functions. If we apply the chain rule to both sides of the equation $f(x(t), y(t), z(t)) = k$ we get that:
(2)
\begin{align} \quad \frac{\partial w}{\partial x} \frac{dx}{dt} + \frac{\partial w}{\partial y} \frac{dy}{dt} + \frac{\partial w}{\partial z} \frac{dz}{dt} = 0 \\ \quad \left ( \frac{\partial w}{\partial x}, \frac{\partial w}{\partial y}, \frac{\partial w}{\partial z} \right ) \cdot \left (\frac{dx}{dt}, \frac{dy}{dt}, \frac{dz}{dt} \right ) = 0 \\ \quad \nabla f(x, y, z) \cdot \vec{r'}(t) = 0 \end{align}
Therefore, the gradient vector $\nabla f(x, y, z)$ and the derivative (tangent) vector $\vec{r'}(t)$ are perpendicular since their dot product is equal to $0$. In particular, at any $t_0$, we have that:
(3)
\begin{align} \quad \nabla f(x_0, y_0, z_0) \cdot \vec{r'}(t_0) = 0 \end{align}
Since $\nabla f(x_0, y_0, z_0)$ is perpendicular to the tangent vector at point $P$ (corresponding to $t_0$) and passes through the point $P(x_0, y_0, z_0)$, then we can obtain the equation of the tangent plane at $P$ as:
(4)
\begin{align} \quad \quad \nabla f(x_0, y_0, z_0) \cdot (x - x_0, y - y_0, z - z_0) = 0 \\ \quad \quad \left ( \frac{\partial}{\partial x} f(x_0, y_0, z_0), \frac{\partial}{\partial y} f(x_0, y_0, z_0), \frac{\partial}{\partial z} f(x_0, y_0, z_0) \right ) \cdot (x - x_0, y - y_0, z - z_0) = 0 \\ \quad \quad \frac{\partial}{\partial x} f(x_0, y_0, z_0) (x - x_0) + \frac{\partial}{\partial y} f(x_0, y_0, z_0) (y - y_0) + \frac{\partial}{\partial z} f(x_0, y_0, z_0) (z - z_0) = 0 \end{align}
## Example 1
Find the equation of the tangent plane to the sphere $x^2 + y^2 + z^2 = 16$ at $(1, 2, \sqrt{11})$.
We note that the sphere $x^2 + y^2 + z^2 = 16$ is the level surface of the function $w = f(x,y,z) = x^2 + y^2 + z^2$ when $k = 16$.
The partial derivatives of $f$ are $\frac{\partial w}{\partial x} = 2x$, $\frac{\partial w}{\partial y} = 2y$, and $\frac{\partial w}{\partial z} = 2z$. Therefore we have that $\frac{\partial}{\partial x} f(1, 2, \sqrt{11}) = 2$, $\frac{\partial}{\partial y} f(1, 2, \sqrt{11}) = 4$, and $\frac{\partial}{\partial z} f(1, 2, \sqrt{11}) = 2 \sqrt{11}$. Applying these values to the formula above and we get that the equation of the tangent plane is:
(5)
\begin{align} \quad 2(x - 1) + 4(y - 2) + 2 \sqrt{11}(z - \sqrt{11}) = 0 \end{align}
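A quick symbolic check of Example 1 (a sketch added for illustration, not part of the original page):

```python
# Verify the tangent plane to x^2 + y^2 + z^2 = 16 at (1, 2, sqrt(11)) with sympy.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2
p = (1, 2, sp.sqrt(11))

grad = [sp.diff(f, v) for v in (x, y, z)]                    # (2x, 2y, 2z)
grad_at_p = [g.subs(dict(zip((x, y, z), p))) for g in grad]  # (2, 4, 2*sqrt(11))

# Plane: grad_f(p) . (x - x0, y - y0, z - z0) = 0
plane = sum(g * (v - v0) for g, v, v0 in zip(grad_at_p, (x, y, z), p))
print(sp.expand(plane))  # 2*x + 4*y + 2*sqrt(11)*z - 32
```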
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996461868286133, "perplexity": 186.87983010834589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00152-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://debasishg.blogspot.com/2011/02/applicatives-for-composable-json.html
|
## Monday, February 14, 2011
### Applicatives for composable JSON serialization in Scala
It has been quite some time, and I have decided to play around with sjson once again. For the convenience of those who are not familiar with sjson, it's a tiny JSON serialization library that can serialize and de-serialize Scala objects. sjson offers two ways in which you can serialize your Scala objects :-
1. typeclass based serialization, where you define your own protocol (typeclass instances) for your own objects. The standard ones, of course come out of the box.
2. reflection based serialization, where you provide a bunch of annotations and sjson looks up reflectively and tries to get your objects serialized and de-serialized.
One of the things which bothered me in both the implementations is the way errors are handled. Currently I use exceptions to report errors in serializing / de-serializing. Exceptions, as you know, are side-effects and don't compose. Hence even though your input JSON value has many keys that don't match with the names in your Scala class, errors are reported one by one.
scalaz is a Haskell like library for Scala that offers myriads of options towards pure functional programming. I have been playing around with Scalaz recently, particularly the typeclasses for Applicatives. I have also blogged on some of the compositional features that scalaz offers that help make your code much more declarative, concise and composable.
The meat of scalaz is based on the two most potent forces that Scala offers towards data type generic programming :-
1. typeclass encoding using implicits and
2. ability to abstract over higher kinded types (type constructor polymorphism)
Using these features scalaz has made lots of operations available to a large family of data structures, which were otherwise available only for a smaller subset in the Scala standard library. Another contribution of scalaz has been to make many of the useful abstractions first class in Scala e.g. Applicative, Monad, Traversable etc. All of these are available in Haskell as typeclass hierarchies - so now you can use the goodness of these abstractions in Scala as well.
One of the areas which I focused on in sjson using scalaz is to make error reporting composable. Have a look at the following snippet ..
// an immutable value object in Scala
case class Address(no: Int, street: String, city: String, zip: String)

// typeclass instance for sjson serialization protocol for Address
object AddressProtocol extends DefaultProtocol {
  implicit object AddressFormat extends Format[Address] {
    def reads(json: JsValue): ValidationNEL[String, Address] = json match {
      case m@JsObject(_) =>
        (field[Int]("no", m)        |@|
         field[String]("street", m) |@|
         field[String]("city", m)   |@|
         field[String]("zip", m)) { Address }
      case _ => "JsObject expected".fail.liftFailNel
    }
    //..
  }
In the current version of sjson, reads returns an Address. Now it returns an applicative, ValidationNEL[String, Address], which is a synonym for Validation[NonEmptyList[String], Address]. Validation is isomorphic to scala.Either in the sense that it has two separate types for error and success. But it has a much cleaner API and does not leave the choice to convention. In our case since we will be accumulating errors, we choose to use a List type for the error part. As a general implementation strategy, when Validation is used as an Applicative, the error type is modeled as a SemiGroup that offers an append operation. Have a look at scalaz for details of how you can use Validation as an applicative for cumulative error reporting.
Let's see what happens in the above snippet ..
1. field extracts the value of the relevant field (passed as the first argument) from the JsObject. Incidentally JsObject is from Nathan Hamblen's dispatch-json, which sjson uses under the covers. More on dispatch-json's awesomeness later :). Here's how I define field .. Note that if the name is not available, it gives us a Failure type on the Validation.
def field[T](name: String, js: JsValue)(implicit fjs: Reads[T]): ValidationNEL[String, T] = {
  val JsObject(m) = js
  m.get(JsString(name))
   .map(fromjson[T](_)(fjs))
   .getOrElse(("field " + name + " not found").fail.liftFailNel)
}
2. field invocations are composed using |@| combinator of scalaz, which gives us an ApplicativeBuilder that allows me to play around with the elements that it composes. In the above snippet we simply pass these components to build up an instance of the Address class.
Since Validation is an Applicative, all errors that come up during composition of field invocations get accumulated in the final list that occurs as the error type of it.
Let's first look at the normal usecase where things are happy and we get an instance of Address constructed from the parsed json. No surprises here ..
// test case
it ("should serialize an Address") {
  import Protocols._
  import AddressProtocol._ // typeclass instances

  val a = Address(12, "Tamarac Square", "Denver", "80231")
  fromjson[Address](tojson(a)) should equal(a.success)
}
But what happens if there are some errors in the typeclass instance that you created ? Things start to get interesting from here ..
implicit object AddressFormat extends Format[Address] {
  def reads(json: JsValue): ValidationNEL[String, Address] = json match {
    case m@JsObject(_) =>
      (field[Int]("number", m)   |@|
       field[String]("stret", m) |@|
       field[String]("City", m)  |@|
       field[String]("zip", m)) { Address }
    case _ => "JsObject expected".fail.liftFailNel
  }
  //..
}
Note that the keys in json as passed to field API do not match the field names in the Address class. Deserialization fails and we get a nice list of all errors reported as part of the Failure type ..
it ("address serialization should fail") {
  import Protocols._
  import IncorrectPersonProtocol._

  val a = Address(12, "Tamarac Square", "Denver", "80231")
  (fromjson[Person](tojson(p))).fail.toOption.get.list should equal (
    List("field number not found", "field stret not found", "field City not found"))
}
Composability .. Again!
A layer of monads on top of your API makes your API composable with any other monad in the world. With sjson de-serialization returning a Validation, we can get better composability when writing complex serialization code like the following. Consider this JSON string from where we need to pick up fields selectively and make a Scala object ..
val jsonString = """{
  "lastName" : "ghosh",
  "firstName" : "debasish",
  "age" : 40,
  "address" : { "no" : 12, "street" : "Tamarac Square", "city" : "Denver", "zip" : "80231" },
  "phone" : { "no" : "3032144567", "ext" : 212 },
  "office" : {
    "name" : "anshinsoft",
    "address" : { "no" : 23, "street" : "Hampden Avenue", "city" : "Denver", "zip" : "80245" }
  }
}"""
We would like to cherry pick a few of the fields from here and create an instance of Contact class ..
case class Contact(lastName: String, firstName: String, address: Address, officeCity: String, officeAddress: Address)
Try this with the usual approach as shown above and you will find some of the boilerplate repetitions within your implementation ..
import dispatch.json._
import Js._

val js = Js(jsonString) // js is a JsValue

(field[String]("lastName", js)  |@|
 field[String]("firstName", js) |@|
 field[Address]("address", js)  |@|
 field[String]("city", (('office ! obj) andThen ('address ? obj))(js)) |@|
 field[Address]((('office ! obj) andThen ('address ! obj)), js)) { Contact } should equal(c.success)
Have a look at how we need to repeatedly pass around js, though we never modify it at any time. Since our field API is monadic, we can compose all invocations of field together with a Reader monad. This is a very useful technique of API composition which I discussed in an earlier blog post. (Here is a bit of trivia: How can we compose similar stuff when there's modification involved in the passed-around state? Hint: The answer is within the question itself :D)
But for that we need to make a small change in our field API. We need to make it curried .. Here are 2 variants of the curried field API ..
// curried version: for lookup of a String name
def field_c[T](name: String)(implicit fjs: Reads[T]) = { js: JsValue =>
  val JsObject(m) = js
  m.get(JsString(name)).map(fromjson[T](_)(fjs)).getOrElse(("field " + name + " not found").fail.liftFailNel)
}

// curried version: we need to get a complete JSON object out
def field_c[T](f: (JsValue => JsValue))(implicit fjs: Reads[T]) = { js: JsValue =>
  try {
    fromjson[T](f(js))(fjs)
  } catch {
    case e: Exception => e.getMessage.fail.liftFailNel
  }
}
Note how in the second variant of field_c, we use the extractors of dispatch-json to take out nested objects from a JsValue structure. We use it below to get the office address from within the parsed JSON.
And here's how we compose all lookups monadically and finally come up with the Contact instance ..
// reader monad
val contact = for {
  last    <- field_c[String]("lastName")
  first   <- field_c[String]("firstName")
  address <- field_c[Address]("address")
  office  <- field_c[Address]((('office ! obj) andThen ('address ! obj)))
} yield(last |@| first |@| address |@| office)

// city needs to be parsed separately since we are working on part of js
val city = field_c[String]("city")

// compose everything and build a Contact
(contact(js) |@| city((('office ! obj) andThen ('address ? obj))(js))) {
  (last, first, address, office, city) => Contact(last, first, address, city, office)
} should equal(c.success)
I am still toying around with some of the monadic implementations of sjson APIs. It's offered as a separate package and will make a nice addition to the API families that sjson offers. You can have a look at my github repo for more details. I plan to finalize soon before I get to 1.0.
Heiko Seeberger said...
Very descriptive example showing the power and usefulness of FP and scalaz.
You're saying "Since Validation is an Applicative, all errors that come up during composition of field invocations get accumulated ...". I don't think that's correct, because not every applicative will accumulate errors. In fact it is the special applicative that scalaz offers for Validations (which requires the failure type to have a semigroup) that does this special treatment.
Debasish said...
Hi Heiko -
Indeed applicatives are the most common abstraction that accumulates effects. This is because <*> keeps the structure of the computation fixed and just sequences the effects irrespective of the value returned by any of the computations. This is unlike monads, where the computation sequence is broken as soon as one of them fails. In case of a monad m, (>>=) :: m a -> (a -> m b) -> m b allows the value returned by one computation to influence the choice of another, quite unlike <*> of applicative. Have a look at section 5 of Conor McBride and Ross Paterson paper which introduces Applicatives.
In Haskell also we have the same use of applicatives for accumulating effects. The applicative version of the Parsec library uses applicatives to accumulate parse results. Just like we can do with scalaz.
Hence I think it's a common pattern in general to accumulate errors using applicatives.
Heiko Seeberger said...
I fully agree that collecting errors is a common use case for applicatives, but it is not a necessary consequence. The reason why I am that pedantic is that I was once mislead by that assumption, but that's probably just me.
Ittay Dror said...
Isn't Applicative about mapping regular functions inside a context? Here the function 'field' adds a context. It is not a String => T function, but String => Validation[String, T]. This sounds to me like a Monad, not an applicative
Debasish said...
Hi Ittay -
What happens here is that (M[A] |@| M[B]) or a chaining thereof of |@| returns an ApplicativeBuilder on which we apply a pure function, the constructor for Contact. Have a look at https://github.com/scalaz/scalaz/blob/master/core/src/main/scala/scalaz/MA.scala#L40 ..
andry said...
very good tutorial and can hopefully help me in building json in the application that I created for this lecture. thank you
anriz said...
I can't find the sjson-scalaz project in github. Do you have a link to it ? Thanks.
Debasish Ghosh said...
@anriz - I have removed it for the time being. Will bring it back when I finish some of the changes on it. Also need to upgrade to the latest version of scalaz.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2697940170764923, "perplexity": 623.191711682146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424931.1/warc/CC-MAIN-20170724222306-20170725002306-00445.warc.gz"}
|
http://swmath.org/software/8388
|
# CRYSTAL09
The Gruneisen parameter for silver azide. A first-principle procedure is proposed to determine the Gruneisen parameter for a crystal by calculating the external pressure and the vibration spectrum as functions of the volume of a unit cell. In the gradient approximation of the electron density functional theory, on the basis of a linear combination of atomic orbitals, the elastic and the thermodynamic Gruneisen parameters of silver azide, which decrease with volume (with increasing pressure), are calculated with the use of the CRYSTAL09 code. The equilibrium values of the parameter $\gamma_0$ for various cold equations of state of crystals and for the thermodynamic models used are, respectively, $\sim 2.3$ and 1.6.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7855829000473022, "perplexity": 1215.8352206880127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719564.4/warc/CC-MAIN-20161020183839-00421-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.maplesoft.com/support/help/maple/view.aspx?path=Maplets/Elements/BoxCell&L=E
|
Box Cell - Maple Help
Maplets[Elements]
BoxCell
specify a cell in a box column, box layout, or box row
Calling Sequence BoxCell(opts)
Parameters
opts - equation(s) of the form option=value where option is one of halign, hscroll, valign, value, or vscroll; specify options for the BoxCell element
Description
• The BoxCell layout element specifies an entry in a BoxColumn, BoxLayout, or BoxRow element. The contents of the cell are specified by using the value option.
• For horizontal control in a box layout, use the BoxRow element. For vertical control in a box layout, use the BoxColumn element.
• The BoxCell element features can be modified by using options. To simplify specifying options in the Maplets package, certain options and contents can be set without using an equation. The following table lists elements, symbols, and types (in the left column) and the corresponding option or content (in the right column) to which inputs of this type are, by default, assigned.
Elements, Symbols, or Types     Assumed Option or Content
always, as_needed, or never     hscroll and vscroll options
left or right                   halign option
string or symbol                value option
top or bottom                   valign option
• A BoxCell element can contain BoxLayout, GridLayout, BorderLayout, or window body elements to specify the value option.
• A BoxCell element can be contained in a BoxColumn or BoxRow element, or Maplet element in a nested list representing a box layout.
• The following table describes the control and use of the BoxCell element options.
An x in the I column indicates that the option can be initialized, that is, specified in the calling sequence (element definition).
An x in the R column indicates that the option is required in the calling sequence.
An x in the G column indicates that the option can be read, that is, retrieved by using the Get tool.
An x in the S column indicates that the option can be written, that is, set by using the SetOption element or the Set tool.
Option    I    R    G    S
halign    x
hscroll   x
valign    x
value     x
vscroll   x
• The opts argument can contain one or more of the following equations that set Maplet options.
halign = left, center, right, or none
Horizontally aligns the cell when in a BoxRow. By default, the value is center. The none option can be used in combination with HorizontalGlue elements for finer control of the layout of cells in a row. For more detail, see BoxRow, and the example of this usage below.
hscroll = never, as_needed, or always
This option determines when a horizontal scroll bar appears in the box cell. By default, the value is never.
valign = top, center, bottom, or none
Vertically aligns the cell when in a BoxColumn. By default, the value is center. The none option can be used in combination with VerticalGlue elements for finer control of the layout of cells in a column. For more detail, see BoxColumn.
value = window body, BoxLayout, GridLayout, or BorderLayout element, or reference to such an element (name or string)
The Maplet element that appears in this cell.
vscroll = never, as_needed, or always
This option determines when a vertical scroll bar appears in the box cell. By default, the value is never.
Examples
A Maplet application in which element layout is controlled by using BoxCell elements.
> with(Maplets[Elements]):
> maplet := Maplet([[BoxCell("Hello", 'right')], [BoxCell(Button("Quit", Shutdown()), 'left')]]):
> Maplets[Display](maplet)
A Maplet application in which the halign=none option is used in BoxCell elements in combination with HorizontalGlue to achieve better control over the location of objects.
> maplet := Maplet(BoxLayout(BoxColumn(
      BoxRow(BoxCell("Long text label to force alignment usage")),
      BoxRow(BoxCell(Button("Left1", Shutdown()), 'halign' = 'none'),
             BoxCell(Button("Left2", Shutdown()), 'halign' = 'none'),
             HorizontalGlue()),
      BoxRow(BoxCell(Button("Left", Shutdown()), 'halign' = 'none'),
             HorizontalGlue(),
             BoxCell(Button("Right", Shutdown()), 'halign' = 'none')),
      BoxRow(HorizontalGlue(),
             BoxCell(Button("Right2", Shutdown()), 'halign' = 'none'),
             BoxCell(Button("Right1", Shutdown()), 'halign' = 'none'))))):
> Maplets[Display](maplet)
Note: This is different from the effect of using halign=left/right for the cells, as those options add space between consecutive elements with the same alignment.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8382619023323059, "perplexity": 2569.71953736907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00838.warc.gz"}
|
http://metamathological.blogspot.com/2013/05/maths-on-steroids-8.html
|
## Tuesday, 21 May 2013
### Maths on Steroids: 8
W0000! The Puzzle Hunt is over! Now I can finally get some work done. =P
Well, maybe not straight-away...maybe after I watch the entire first season of Pendleton Ward's (*) Bravest Warriors.
Blogging (**) is strange: there's something disarmingly private (even diary-esque?) about writing up a post, especially given how open blogging is as a medium. Okay...okay...okay...fair enough: no-one's actually going to ever read it, but there's still something sort of...umm...faintly and intangibly Huxleyan about all this.
Or not.
Hmm...whatevs, - it's not like I know what I'm on about.
Err...let's just do some maths.
I mentioned at the end of tute 8 that I would write up a solution to a non-standard Gram-Schmidt orthonormalisation question in the yellow book. Something like question 29:
Let $$P_2$$ be the vector space of polynomials of degree at most two with the inner product\begin{align}\langle p,q\rangle :=\int_{-1}^{1}p(x)q(x)\,\mathrm{d}x.\end{align}Obtain an orthonormal basis for $$P_2$$ from the basis $$\{1,x,x^2\}$$ using the Gram-Schmidt process.
To begin with, we just pick one of the three vectors and normalise it. Let's use the vector $$1\in P_2$$. We know that the magnitude of this vector is:\begin{align}||1||^2=\langle1,1\rangle=\int_{-1}^{1}1\cdot1\,\mathrm{d}x=2.\end{align}Therefore, the first vector in our orthogonal basis is: $$\frac{1}{\sqrt{2}}\cdot1\in P_2$$.
The second step is to take another basis vector, like $$x\in P_2$$, and to first remove the $$1$$-component from this vector, before normalising the result to form a new vector. So, consider the vector\begin{align}x-\langle x,\frac{1}{\sqrt{2}}1\rangle\cdot\frac{1}{\sqrt{2}}\cdot1=x-(\frac{1}{2}\int_{-1}^1 x\,\mathrm{d}x)\cdot1=x-0\cdot1=x.\end{align}
At this point, we've got two orthogonal vectors: $$\frac{1}{\sqrt{2}}1$$ and $$x$$, and we just need to normalise this latter vector to obtain: \begin{align}\{\frac{1}{\sqrt{2}}1,\sqrt{\frac{3}{2}}x\}.\end{align}In this last step, we take the final basis vector $$x^2\in P_2$$ and remove the $$1$$ and $$x$$-components from this vector, before normalising the result to form a new vector. So, consider the vector\begin{align}&x^2-\langle x^2,\frac{1}{\sqrt{2}}1\rangle\cdot\frac{1}{\sqrt{2}}1-\langle x^2,\sqrt{\frac{3}{2}}x\rangle\cdot\sqrt{\frac{3}{2}}x\\=&x^2-(\frac{1}{2}\int_{-1}^1x^2\,\mathrm{d}x)\cdot1-(\frac{3}{2}\int_{-1}^1x^3\,\mathrm{d}x)\cdot x=x^2-\frac{1}{3}.\end{align}After normalisation, we obtain the Gram-Schmidt orthonormalised basis: \begin{align}\{\frac{1}{\sqrt{2}},\sqrt{\frac{3}{2}}x,\sqrt{\frac{45}{8}}(x^2-\frac{1}{3}).\}\end{align}
W000000.
Note that the answer to this problem is non-unique, because the basis that results from the Gram-Schmidt process depends a lot upon the order in which we feed the original basis into the orthonormalisation procedure. For example, shoving in the vectors in the order \(x^2,x,1\) results in the orthonormal basis: $$\{\sqrt{\frac{5}{2}}x^2,\sqrt{\frac{3}{2}}x,\sqrt{\frac{1}{8}}(3-5x^2)\},$$ since this time it's the constant polynomial that has to have its \(x^2\)-component removed before normalisation.
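If you'd like to sanity-check these by machine (this isn't part of the original post, and it assumes sympy is installed), a few lines will confirm that both bases are orthonormal with respect to the integral inner product above:

import sympy as sp

x = sp.symbols('x')

def inner(p, q):
    # the inner product from the question: integrate p*q over [-1, 1]
    return sp.integrate(p * q, (x, -1, 1))

basis_a = [1/sp.sqrt(2), sp.sqrt(sp.Rational(3, 2))*x,
           sp.sqrt(sp.Rational(45, 8))*(x**2 - sp.Rational(1, 3))]
basis_b = [sp.sqrt(sp.Rational(5, 2))*x**2, sp.sqrt(sp.Rational(3, 2))*x,
           (3 - 5*x**2)/(2*sp.sqrt(2))]

for basis in (basis_a, basis_b):
    gram = sp.Matrix(3, 3, lambda i, j: sp.simplify(inner(basis[i], basis[j])))
    print(gram)  # both should print the 3x3 identity matrix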
Well, that was that.
The other thing that I promised to do was to give answers to the tute 9 lab sheets. Psst: Turns out that that's going to have to wait because I left the lab sheets at uni again. >____<
*: the creator of Adventure Time!
**: actually, when you think about it, this is pretty applicable to Twitter, Facebook, YouTube, heck - it's applicable to pretty much anything without a password on the internet.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8485283255577087, "perplexity": 1093.3115612989936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510415.29/warc/CC-MAIN-20181016072114-20181016093614-00088.warc.gz"}
|
https://makersportal.com/blog/2019/1/21/loudspeaker-analysis-and-experiments-part-i
|
Loudspeaker Analysis and Experiments: Part I
Part I: Thiele-Small Parameters, Impedance, and Resonance
A modern loudspeaker is a crossover technology comprised of electrical and mechanical components. And while this design helps humans enjoy the analog world using digitization, it also creates a complex problem that encompasses the fields of fluid dynamics and electrical engineering. In order to demystify the loudspeaker, two engineers: Neville Thiele and Richard H. Small derived relationships between the physical parameters of a loudspeaker and its acoustic performance. These parameters, called the Thiele-Small Parameters, are still used today to design audio systems and remain a cornerstone for quantifying the performance of such a system [read more about Thiele-Small Parameters here and here, and design aspects here and here].
In this tutorial, a loudspeaker will be analyzed by calculating the Thiele-Small parameters from impedance measurements using an inexpensive USB data acquisition system (minimum sampling rate of 44.1 kHz). The methods used in this project will educate the user on multiple engineering topics ranging from: data acquisition, electronics, acoustics, signal processing, and computer programming.
1. Magnet, 2. Voice Coil (Inductive Coil), 3. Suspension, 4. Diaphragm
Thiele-Small Parameters and Background
A few crucial Thiele-Small parameters are cited below, based on the Thiele and Small papers:
The parameters above give information about the speaker’s performance and limitations, which define some physical characteristics of the loudspeaker. These characteristics are difficult to quantify once the loudspeaker is assembled, so we are forced to approximate the following physical characteristics using experiments:
A few notes on the definitions above: a subscript r represents a parameter calculated at the loudspeaker's resonance frequency. Additionally, a subscript e represents a parameter in the electrical domain, m represents a parameter in the mechanical domain, and a represents the acoustic domain. From this point onward, I will use each parameter to define another parameter using specific equations derived by Thiele and Small.
Parts List for Experiments
This is a fairly involved experiment in that it requires the following core parts: a computer, a USB acquisition device (at least 44.1 kHz sampling rate), a loudspeaker driver, and small calibration weights. These components will allow us to fully characterize the loudspeaker using the Thiele-Small parameters. Below is the full parts list used in my method of calculating the parameters:
1. Boss Audio 80 Watt Loudspeaker - $10.79 [Amazon]
2. Behringer UCA202 USB Audio Interface - $29.99 [Amazon]
3. Calibration Weights - $6.99 [Amazon]
4. Audio Amplifier 15 Watt - $8.99 [Amazon]
5. Speaker Wire - $8.49 [Amazon]
6. Multimeter with AC Voltmeter - $37.97 [Amazon]
7. Alligator Clips - $6.39 [Amazon]
8. Jumper Wires - $6.49 [Amazon]
9. Resistor Kit - $7.99 [Amazon]
10. Breadboard - $7.99 [Amazon]
11. 3.5 mm cable - $5.10 [Amazon]

Boss Audio Full Range 80W Driver

Measuring the DC Coil Resistance

The first step in the process is to measure the speaker's electrical resistance, which should always be less than the nominal resistance. Start by measuring the resistance of the leads on your multimeter, then subtract that from the resistance measured across the terminals of the speaker. I measured the resistance to be 3.3 Ohms. This value will be necessary for calculating the electrical damping factor, Qer, and efficiency η0.

Wiring for Impedance Measurement

The wiring for this experiment is somewhat involved, however, the root of the wiring method is based on a voltage divider with the speaker acting as the second resistor. The full diagram is shown below:

The process flow for the wiring above is as follows:

1. A smartphone app called 'Audio Function Generator' is used to generate a sine wave sweep or constant frequency into the amplifier's 3.5 mm input
2. The amplified signal is wired across the voltage divider
3. The voltage across the amplified signal is inputted into the Behringer USB acquisition device
4. The voltage across the loudspeaker terminals is inputted into the Behringer USB acquisition device
5. The USB stereo input is read by Python on a computer (Raspberry Pi in my case)

Using the wiring method above, we will be able to approximate the resonance frequency of the driver where the impedance is maximum. We will also be able to find parameters relating to the electrical and mechanical properties of the driver.

Identifying the Resonance Region Using a Full-Spectrum Frequency Sweep

We'll be using the Behringer UCA202 USB audio interface to sample voltage readings at 44.1 kHz on a Raspberry Pi. The general setup for finding the resonance frequency starts by wiring the loudspeaker in series with a resistor to measure voltage across the speaker. And since the voltage varies when we excite the speaker using a sinusoidal function, we need to approximate the impedance across the loudspeaker. The voltage divider equation can be rewritten specifically for our scenario:

Vspeaker = Vsupply × Rspeaker / (R1 + Rspeaker)

where Vspeaker is the voltage measured across the loudspeaker, Vsupply is the input voltage applied to the system, R1 is the resistance of the chosen resistor (I used 10 Ohms), and the approximate impedance here is Rspeaker, which can be solved for:

Rspeaker = R1 × Vspeaker / (Vsupply − Vspeaker)

This result is important for many reasons. It allows us to approximate the maximum impedance, Zmax (Rspeaker,max), which allows us to cite the resonance frequency and also solve for other crucial variables like the damping and efficiency of the system. Therefore, once the resonance is discovered we can begin to calculate other parameters necessary for characterizing the performance of a loudspeaker.

An impedance plot of the loudspeaker that I used is shown below for a frequency sweep from 20 Hz - 20,000 Hz. I sampled the impedance over 180 seconds. It was done in three pieces and then stitched together. I used the ranges of 20 Hz - 120 Hz, 120 Hz - 2,000 Hz, and 2,000 Hz - 20,000 Hz. This was done to avoid diminishing the peak and the transition zone between high and low frequencies. I used a sampling period of 1 second, which resulted in a frequency resolution of 1 Hz.

Impedance Plot Identifying Resonance Region

Several observations can be made regarding the behavior of the impedance plot above. The first is - there is a clear peak which we can identify as the resonance frequency.
For the loudspeaker I'm using, the resonance is located around 86 Hz, but the plot above shows about 93 Hz. In the narrower plot below I show a better approximation of the resonance. The full spectrum sweep identifies the resonance region. There are also a few extraneous peaks that likely represent some sort of noise in the input signal. The noise is likely due to the sampling interval and how the smartphone app handles the sweep. Also - the user may notice an increase in the impedance as the frequency increases toward infinity. This is often cited as an artifact of the speaker's inductance, which can act as a frequency filter (hence the increase in impedance with frequency). The important thing to remember is that we have identified the resonance region of the loudspeaker, which we will further explore in the next section when we discuss phase behavior and approximating the amplitude of the impedance at resonance.

Impedance Magnitude and Phase Relation to Resonance

There are multiple ways of identifying the actual resonance frequency of a loudspeaker. Many manufacturers will put the impedance response curve on their datasheet, and fewer will include the phase measurement, which is often correlated to the derivative of the impedance. This means that if we plot both the impedance response and the phase, we should see a phase zero-crossing around the resonance frequency. The response curves for phase and impedance can be seen below, calculated for the loudspeaker used in this project (44.1 kHz, 10 sec recording, 20 Hz - 220 Hz sweep).

The full code to replicate the figures above is also given below:

import pyaudio
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('ggplot')

def fft_calc(data_vec):
    fft_data_raw = np.fft.fft(data_vec)
    fft_data = (fft_data_raw[0:int(np.floor(len(data_vec)/2))])/len(data_vec)
    fft_data[1:] = 2*fft_data[1:]
    return fft_data

form_1 = pyaudio.paInt16 # 16-bit resolution
chans = 2 # 2 channels (stereo: supply and speaker voltages)
samp_rate = 44100 # 44.1kHz sampling rate
record_secs = 10 # seconds to record
dev_index = 2 # device index found by audio.get_device_info_by_index(ii)
chunk = 44100*record_secs # samples per buffer (the whole recording in one chunk)
R_1 = 9.8 # measured resistance of the voltage divider resistor
R_dc = 3.3 # measured dc resistance of loudspeaker
freq_sweep_range = (40.0,200.0) # range of frequencies in sweep (plus or minus a few to avoid noise at the ends)

audio = pyaudio.PyAudio() # create pyaudio instantiation

# create pyaudio stream
stream = audio.open(format = form_1, rate = samp_rate, channels = chans,
                    input_device_index = dev_index, input = True,
                    frames_per_buffer = chunk)
stream.stop_stream() # pause stream so that user can control the recording

input("Click to Record")

data, chan0_raw, chan1_raw = [], [], []

# loop through stream and append audio chunks to frame array
stream.start_stream() # start recording
for ii in range(0, int((samp_rate/chunk)*record_secs)):
    data.append(np.fromstring(stream.read(chunk), dtype=np.int16))

# stop, close stream, and terminate pyaudio instantiation
stream.stop_stream()
stream.close()
audio.terminate()
print("finished recording\n------------------")

# loop through recorded data to extract each channel
for qq in range(0, (np.shape(data))[0]):
    curr_dat = data[qq]
    chan0_raw.append(curr_dat[::2])
    chan1_raw.append((curr_dat[1:])[::2])

# conversion from bits
chan0_raw = np.divide(chan0_raw, (2.0**15)-1.0)
chan1_raw = np.divide(chan1_raw, (2.0**15)-1.0)

# Calculating FFTs and phases
spec_array_0_noise, spec_array_1_noise, phase_array, Z_array = [], [], [], []
for mm in range(0, (np.shape(chan0_raw))[0]):
    Z_0 = fft_calc(chan0_raw[mm])
    Z_1 = fft_calc(chan1_raw[mm])
    phase_array.append(np.subtract(np.angle(Z_0, deg=True), np.angle(Z_1, deg=True)))
    spec_array_0_noise.append(np.abs(Z_0[1:]))
    spec_array_1_noise.append(np.abs(Z_1[1:]))

# frequency values for FFT
f_vec = samp_rate*np.arange(chunk/2)/chunk # frequency vector
plot_freq = f_vec[1:] # avoiding f = 0 for logarithmic plotting

# calculating Z
Z_mean = np.divide(R_1*np.nanmean(spec_array_0_noise, 0),
                   np.subtract(np.nanmean(spec_array_1_noise, 0), np.nanmean(spec_array_0_noise, 0)))

# setting minimum frequency locations based on frequency sweep
f_min_loc = np.argmin(np.abs(plot_freq-freq_sweep_range[0]))
f_max_loc = np.argmin(np.abs(plot_freq-freq_sweep_range[1]))
max_f_loc = np.argmax(Z_mean[f_min_loc:f_max_loc])+f_min_loc
f_max = plot_freq[max_f_loc]

# print out impedance found from the Z-based maximum
print('Resonance at Z-based Maximum:')
print('f = {0:2.1f}, Z = {1:2.1f}'.format(f_max, np.max(Z_mean[f_min_loc:f_max_loc])))
print('------------------')

# smoothing out the phase data by averaging large spikes
smooth_width = 10 # width of smoothing window for phase
phase_trimmed = (phase_array[0])[f_min_loc:f_max_loc]
phase_diff = np.append(0, np.diff(phase_trimmed))
for yy in range(smooth_width-1, len(phase_diff)-smooth_width):
    for mm in range(0, smooth_width):
        if np.abs(phase_diff[yy]) > 100.0:
            phase_trimmed[yy] = (phase_trimmed[yy-mm]+phase_trimmed[yy+mm])/2.0
            phase_diff[yy] = (phase_diff[yy-mm]+phase_diff[yy+mm])/2.0
            if np.abs(phase_diff[yy]) > 100.0:
                continue
            else:
                break
    if np.abs(phase_diff[yy]) > 100.0:
        phase_trimmed[yy] = np.nan

##### plotting algorithms for impedance and phase ####
fig, ax = plt.subplots()
fig.set_size_inches(12, 8)

# Logarithm plots in x-axis
p1, = ax.semilogx(plot_freq[f_min_loc:f_max_loc], Z_mean[f_min_loc:f_max_loc], label='$Z$', color='#7CAE00')
ax2 = ax.twinx() # mirror axis for phase
p2, = ax2.semilogx(plot_freq[f_min_loc:f_max_loc], phase_trimmed, label='$\phi$', color='#F8766D')

# plot formatting
subplot_vec = [p1, p2]
ax2.legend(subplot_vec, [l.get_label() for l in subplot_vec], fontsize=20)
ax.yaxis.label.set_color(p1.get_color())
ax2.yaxis.label.set_color(p2.get_color())
ax.set_ylabel('Impedance [$\Omega$]', fontsize=16)
ax2.set_ylabel('Phase [Degrees]', fontsize=16)
ax2.grid(False)
ax.spines["right"].set_edgecolor(p1.get_color())
ax2.spines["right"].set_edgecolor(p2.get_color())
ax.tick_params(axis='y', colors=p1.get_color())
ax2.tick_params(axis='y', colors=p2.get_color())
ax.set_xlabel('Frequency [Hz]', fontsize=16)
peak_width = 70.0 # approx width of peak in Hz
ax.set_xlim([f_max-(peak_width/2.0), f_max+(peak_width/2.0)])
ax.set_ylim([np.min(Z_mean[f_min_loc:f_max_loc]), np.max(Z_mean[f_min_loc:f_max_loc])+0.5])
ax2.set_xlim([f_max-(peak_width/2.0), f_max+(peak_width/2.0)])
ax2.set_ylim([-45, 45])
ax.set_xticks([], minor=True)
ax2.set_xticks(np.arange(f_max-(peak_width/2.0), f_max+(peak_width/2.0), 10))
ax2.set_xticklabels(['{0:2.0f}'.format(ii) for ii in np.arange(f_max-(peak_width/2.0), f_max+(peak_width/2.0), 10)])

# locating phase and Z maximums to annotate the figure
Z_max_text = ' = {0:2.1f}$\Omega$'.format(np.max(Z_mean[f_min_loc:f_max_loc]))
f_max_text = ' = {0:2.1f} Hz'.format(plot_freq[np.argmax(Z_mean[f_min_loc:f_max_loc])+f_min_loc])
ax.annotate('$f_{max}$'+f_max_text+',$Z_{max}$'+Z_max_text,
            xy=(plot_freq[np.argmax(Z_mean[f_min_loc:f_max_loc])+f_min_loc], np.max(Z_mean[f_min_loc:f_max_loc])),
            xycoords='data', xytext=(-300, -50), size=14, textcoords='offset points',
            arrowprops=dict(arrowstyle='simple', fc='0.6', ec='none'))

# from phase
phase_f_min = np.argmin(np.abs(np.subtract(f_max-(peak_width/2.0), plot_freq)))
phase_f_max = np.argmin(np.abs(np.subtract(f_max+(peak_width/2.0), plot_freq)))
phase_min_loc = np.argmin(np.abs(phase_array[0][phase_f_min:phase_f_max]))+phase_f_min
Z_max_text_phase = ' = {0:2.1f}$\Omega$'.format(Z_mean[phase_min_loc])
f_max_text_phase = ' = {0:2.1f} Hz'.format(plot_freq[phase_min_loc])
ax2.annotate('$\phi_{min}$'+' ={0:2.1f}$^\circ$'.format(np.abs(phase_array[0][phase_min_loc]))+',$f_{max}$= '+f_max_text_phase+'\n$Z_{max}$'+Z_max_text_phase,
             xy=(plot_freq[phase_min_loc], phase_array[0][phase_min_loc]),
             xycoords='data', xytext=(-120, -150), size=14, textcoords='offset points',
             arrowprops=dict(arrowstyle='simple', fc='0.6', ec='none'))

# print out impedance found from phase zero-crossing
print('Resonance at Phase Zero-Crossing:')
print('f = {0:2.1f}, Z = {1:2.1f}'.format(plot_freq[phase_min_loc], Z_mean[phase_min_loc]))

# uncomment to save plot
##plt.savefig('Z_sweep_with_phase.png',dpi=300,facecolor=[252/255,252/255,252/255])

plt.show()
The code above and all the codes for this project can be found on the project’s GitHub page:
https://github.com/engineersportal/thiele_small_parameters.git
Therefore, for the case of our loudspeaker - we can say that its resonance frequency is about 86 Hz. And by using the actual RMS values of steady frequency inputs, I have another plot that shows just how close this is to the likely resonance:
Plot of impedance and phase around resonance
This method is done by hand, not using an FFT. We can still see similar behavior using this method, indicating that the FFT method is accurate (and time-saving).
The manual method shown directly above can be used to approximate the resonance; however, I will use the quicker and nearly as accurate FFT method. And in the next entry, we will be doing multiple measurements of resonance at different mass loadings - so the manual method would really take quite a bit of time.
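For reference, the 'manual' single-tone approach can be summarised in a few lines. This sketch is not from the original write-up; it assumes the two recorded channels (supply and speaker voltages) are already available as numpy arrays and that the 10 Ohm series resistor from the setup above is used:

import numpy as np

def impedance_at_tone(chan_supply, chan_speaker, R_1=10.0):
    # RMS of each recorded channel, then the rearranged voltage-divider equation
    v_supply = np.sqrt(np.mean(np.square(chan_supply)))
    v_speaker = np.sqrt(np.mean(np.square(chan_speaker)))
    return R_1 * v_speaker / (v_supply - v_speaker)

# Repeat for a handful of steady tones around the resonance region (say 70-110 Hz);
# the tone with the largest impedance estimate approximates the resonance frequency.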
Conclusion and Continuation
In this first entry into the loudspeaker analysis series, I discussed the Thiele-Small parameters and the notion of impedance and resonance. The complex nature of loudspeakers makes this series an educational and diverse topic in engineering. I explored how to find the resonance frequency of a speaker driver and how to use both phase and magnitude to approximate the frequency of the resonance and the magnitude of the impedance at resonance. Both of these values will become instrumental in characterizing loudspeakers and audio systems for use in real-world applications.
In the next entry, I discuss how to find the remaining mechanical and electrical properties of a loudspeaker and the applications that they open up in terms of design in acoustic environments.
See More in Acoustics and Engineering:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49450185894966125, "perplexity": 1971.5635179722615}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00264.warc.gz"}
|
https://astarmathsandphysics.com/university-maths-notes/complex-analysis/1850-proof-of-the-open-mapping-theorem.html?tmpl=component&print=1&page=
|
## Proof of the Open Mapping Theorem
The Open Mapping Theorem states:
Let $f$ be a function analytic and non-constant on a region $D$, and let $U$ be an open subset of $D$. Then $f(U)$ is open.

Proof

To prove that $f(U)$ is open we need to show that if $w_0 \in f(U)$ then there exists $\epsilon > 0$ such that $\{ w : |w - w_0| < \epsilon \} \subseteq f(U)$.

Since $w_0 \in f(U)$ there exists $z_0 \in U$ such that $f(z_0) = w_0$. Further, the solutions of the equation $f(z) = w_0$ are isolated since $f$ is non-constant and analytic, and so we can find an open disc in $U$ with centre $z_0$ and radius $r$ sufficiently small such that $f(z) \neq w_0$ for $0 < |z - z_0| \leq r$. Thus, if $\gamma$ is the circle in $U$ with centre $z_0$ and radius $r$, then the image $f(\gamma)$ is a closed contour which does not pass through $w_0$.

$f(\gamma)$ is compact, being the continuous image of a compact set, so the complement of $f(\gamma)$ is open and we can choose $\epsilon > 0$ so that the disc $\{ w : |w - w_0| < \epsilon \}$ lies in the complement of $f(\gamma)$.

The winding number of $f(\gamma)$ about each point of the disc is equal to its winding number $n(f(\gamma), w_0)$ about $w_0$.

Now $n(f(\gamma), w_0) \geq 1$ by the Argument Principle, since $f(z) - w_0$ has at least one zero inside $\gamma$, so $n(f(\gamma), w) \geq 1$ for each $w$ with $|w - w_0| < \epsilon$.

Thus, by the Argument Principle again, the equation $f(z) = w$ has at least one solution inside $\gamma$ for each $w$ such that $|w - w_0| < \epsilon$, hence $\{ w : |w - w_0| < \epsilon \} \subseteq f(U)$ as required.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090134263038635, "perplexity": 12891.017780981412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814787.54/warc/CC-MAIN-20180223134825-20180223154825-00513.warc.gz"}
|
http://wiki.diyfaq.org.uk/index.php?title=Power_factor
|
# Power factor
## What is Power Factor
${\displaystyle Power=Voltage\times Current\times Powerfactor}$
Some loads, eg electric heaters, have a power factor of 1. This means the current drawn is proportional to the voltage at any instantaneous moment, ie maximum current flows at voltage peaks of the mains supply.
So a 1kW heater on 240v consumes ${\displaystyle {\frac {1000}{240\times 1}}=4.2Amps}$
But some loads, such as motors, behave a bit differently, having a power factor of less than 1. With a 0.8 PF motor, the current drawn lags behind the voltage by a tiny fraction of a second, so the peaks in current draw occur after the voltage peaks.
A 1kW motor with 0.8 pf on 240v draws ${\displaystyle {\frac {1000}{240\times 0.8}}=5.2Amps}$
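As a quick illustration of these two calculations (this snippet is not part of the original page), a couple of lines of Python reproduce the numbers:

def current_amps(power_w, voltage_v, power_factor):
    # current drawn for a given real power, supply voltage and power factor
    return power_w / (voltage_v * power_factor)

print(current_amps(1000, 240, 1.0))   # 1 kW heater, PF 1   -> about 4.2 A
print(current_amps(1000, 240, 0.8))   # 1 kW motor, PF 0.8  -> about 5.2 A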
Notes:
• Power factor (PF) is a concept that only applies to electrical loads being powered from an AC supply. To try to apply it to a dc supply is meaningless.
• Domestic electricity users are only charged for that part of the current that produces real power, so in the motor example above the user only pays for 4.2A or 1kW, not 5.2A.
• It quantifies what proportion of the apparent power flowing into a load is actually dissipated as real power.
## Relevance to DIY
There are occasions where it may crop up in a DIY setting. The most common cases are when designing circuits, and when taking electrical measurements. In circuit design, a low power factor increases circuit current flow, and hence larger cable conductors sizes are needed. When attempting to measure current flow or voltage drop, a low power factor may result in incorrect readings, as it fools most test meters.
Its relevant to:
• Loads such as motors and LPF fluorescent lighting on circuits where their uncompensated current draw can exceed the circuit's ampacity
• Tool use on generators
• Plug-in motorised appliances that can exceed 13A without Power Factor Correction (aka PFC, a number of measures designed to get a low power factor to look more like a higher one)
• Use of droppers
• Large lighting circuits using CFLs
• Sales blurb of dubious energy saving gadgets
• Industrial electricity use where users pay for uncorrected PF
• Knowing which type of fluorescent lighting ballast to buy
• Use of motors and other below 1 PF loads on invertors
## Terminology
PF
Power factor
PFC
Power factor correction
LPF
Low power factor
HPF
High power factor
W and kW
Real power in watts and kilowatts
VA and kVA
Voltamps and kilovoltamps, voltage times current. Often greater than the real power consumed
## Reactive power
With a DC supply, the power dissipated by a load is proportional to the voltage applied to it and the current through it (which is proportional to 1/ its resistance). The relationship between these is simple with dc.
In the simplest cases, the same calculations also apply to loads being powered from an AC source (like the mains). A resistive load being driven from the mains, draws current in sympathy with the mains voltage; as V rises, the current rises, and as V falls current falls. At the zero voltage crossing point, current is also zero.
Here the image shows a short section of an AC supply plotted against time. The voltage goes through one complete cycle (of which there are 50 per second on UK mains). With a resistive load, the current flows back and forth through the load in exact synchronisation with the voltage.
Many domestic appliances present this sort of load, including many heaters, filament lamps, and the type of electric motor commonly used in vacuum cleaners or similar appliances.
Ohms law tells us that:
Voltage = Current x Resistance
or more commonly: V = IR
we also know
Power = Current x Voltage
or W = IV
So combining these we can also get:
W = I x IR or
W = I² R
So if we know something about our load - say that it is rated at 100W at 230V - we can deduce from it that it must draw 100/230 or about 0.43A, and its internal resistance must equal 230/0.43 (equivalently 230²/100) or about 530 ohms.
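The same deduction in code (a trivial sketch, not part of the original article):

power_w, voltage_v = 100.0, 230.0
current_a = power_w / voltage_v            # about 0.43 A
resistance_ohm = voltage_v / current_a     # about 529 ohms (equivalently V**2 / W)
print(current_a, resistance_ohm)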
The red sections of the bottom graph show when power is being dissipated in the load, or in other words when the power is actually doing "work". If you imagine averaging the power transfer graph over a full second, you end up with a positive number that is the actual power consumption of the appliance.
However things can get more complicated due to the effect of "reactive" elements in the load. These are typically components that have an inductance or capacitance. Inductors and capacitors actually store electrical energy (although in different ways). As you might imagine, having something that stores energy being fed from an AC supply, causes it to charge and discharge in response to the ever changing applied voltage. So during one part of the mains cycle its absorbing energy, and in another it gives the stored energy back again. Reactive elements in a load make the types of calculations that are easy to apply to simple resistive loads more complex.
By way of example, consider an extreme example: a load consisting of nothing but a capacitor being driven from an AC supply. Capacitors present an infinite resistance to a steady applied voltage, and lower reactance to quickly changing voltage. So with a sinusoidal input voltage, peak current is drawn when the voltage is at the zero point in the cycle, and the zero current is at the peak of the voltage cycle (i.e where the rate of change of voltage is zero). The effect is that the voltage and current waveforms still look the same, however they are no longer aligned (there is said to be a "phase shift" between them). Real measurable current is thus flowing into and out of the capacitor, but because it is being stored and returned rather than dissipated as heat, there is no real power being dissipated in the load.
Say we had a 5.5uF capacitor for a load; its reactance (i.e. AC resistance) is calculated using the formula:
${\displaystyle X_{c}={1 \over 2\times \pi \times f\times c}}$
With a frequency of 50Hz we would get:
${\displaystyle X_{c}={1 \over 2\times \pi \times 50\times 5.5\times 10^{-6}}=578\Omega }$
So, from Ohm's law we can compute the current that will flow: 230/578 = 0.4A. But we also know that when the mains voltage is at its peak, the current is zero due to the phase shift, and when the voltage is zero, the current is 0.4A.

If we were to work out the overall sum of Voltage x Current for a complete cycle of the mains, we get a total of zero. Hence we have the odd situation where the VA of the load (i.e. the magnitude of the product of the current and voltage, ignoring the phase relationship) looks like roughly 90 VA (230V × 0.4A), but the real power dissipated is actually 0W.
As you can see from the power graph, power appears to both flow into the load (red section) and then half a mains cycle later, it flows out of the load. So on average, there is no power dissipated in the load at all.
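A short numerical check of this (not from the original wiki page) makes the point: the cycle-averaged product of voltage and current is essentially zero even though the VA is around 90:

import numpy as np

f, C, V_rms = 50.0, 5.5e-6, 230.0
X_c = 1.0 / (2 * np.pi * f * C)      # about 578 ohms
I_rms = V_rms / X_c                  # about 0.4 A

t = np.linspace(0, 1 / f, 10000, endpoint=False)
v = np.sqrt(2) * V_rms * np.sin(2 * np.pi * f * t)
i = np.sqrt(2) * I_rms * np.sin(2 * np.pi * f * t + np.pi / 2)   # current leads voltage by 90 degrees

print(np.mean(v * i))       # real power: effectively 0 W
print(V_rms * I_rms)        # apparent power: about 92 VA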
A capacitor is an energy store that consists of two metal plates separated by a gap. When you initially connect a capacitor to a voltage, electrons (i.e. a current) flow into it, charging it up. This energy is actually stored in the electric field created between the two plates. As the capacitor charges, the rate of flow of electrons decreases, until the capacitor is fully charged and no further current flows (after all, the electrons can not actually flow over the insulating gap between the capacitor's plates to complete a circuit). If one were to reverse the polarity of the applied voltage however, the capacitor could discharge from its present state, and recharge with the opposing polarity.
A similar situation exists with an inductive load. Although inductors store energy, they do so in a magnetic field around a coil of wire, and thus behave differently to capacitors. (Since the wire is coiled, the magnetic field produced by the current flowing in the wire can also interact with adjacent coils.) As current flow through the coil changes, the associated magnetic field changes. This changing field induces a current flow in the wire. However the induced current flow caused by the changing field is in the opposite direction to the flow in the circuit. So the inductor tries to oppose changes in current flowing through it, by counteracting them using its stored energy. So inductors offer a low resistance path to stable voltages, but increase their resistance to changing ones - the opposite of a capacitor.
The main difference resulting from this is that the current waveform lags the voltage waveform rather than leads it. (see diagram to the right)
Real world inductors also have significant resistance, unlike capacitors, so they always present an RL load rather than a pure L load.
### Switched mode supplies
Some electronic devices put the mains feed through a rectifier and to a reservoir capacitor. These loads draw a pulse of current at each mains voltage peak, and no current the rest of the cycle.
Generally anything that uses a switched mode supply (SMPSU) works this way. Prime examples are TVs, CFLs, computers, switched mode wallwarts and fluorescent lights that use an electronic ballast.
Recent EU legislation has stipulated that SMPS over 25W must now include PFC. However there's lots of legacy equipment in use without PFC.
The exact PF figures for such equipment without PFC varies a fair bit, and is often very low (this spice sim shows peak current draw about 8x rms current). The power drawn by such items is also usually low, so its not often an issue.
A Switched Mode Power supply
Switched mode supplies are increasingly found in electronic appliances such as computer and electronic equipment, and also in energy saving light bulbs.
Switched Mode Power Supplies (SMPS) typically rectify the mains and use this rectified AC to charge a capacitor. The capacitor is then in turn used to power the later parts of the power supply circuit. At each peak of the mains cycle the capacitor is "topped up", when the mains voltage is above the capacitor voltage, so current is only drawn near the peaks of the mains voltage. No current flows during the rest of the cycle.
Unlike in the examples shown above, there is no phase shift visible between the voltage and current waveforms, the current waveform is distorted into a peaky square shape and is no longer the same shape as the voltage waveform. This change in shape also means that the current waveform is no longer made up from just a single frequency, but is in now made from a single fundamental frequency with a whole bunch of other frequencies mixed in with it.
The following diagram shows how a single pure frequency will be distorted as you add in harmonic components:
The blue line represents the main fundamental frequency. All the other smaller waveforms are harmonics (i.e. multiples of the fundamental frequency). If these are all added together, you get the resulting square looking purple wave. The more harmonic frequencies we add in, the cleaner looking the square wave becomes.
The actual power dissipated by the load is no longer a simple product of Voltage and Current (since when dealing with AC, this calculation has the implicit assumption that the waveforms are both the same shape).
As you can see from the power section of the graph, power is only dissipated in the PSU in short regular bursts.
The PF of a computer SMPS is often in the region of 0.7. Energy saving CFL ballasts can be as low as 0.1.
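The low PF here comes purely from the distorted current waveform rather than from any phase shift. A rough sketch (not from the original page, using a made-up pulse-shaped current) shows the effect:

import numpy as np

f = 50.0
t = np.linspace(0, 1 / f, 10000, endpoint=False)
v = 325 * np.sin(2 * np.pi * f * t)      # roughly 230 V RMS mains

# toy rectifier-style current: short pulses around the voltage peaks only
i = np.where(np.abs(v) > 0.95 * 325, np.sign(v) * 2.0, 0.0)

real_power = np.mean(v * i)
apparent_power = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))
print(real_power / apparent_power)       # power factor well below 1 despite zero phase shift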
### Dimmers
There are other causes of low power factor. One example is where current only flows in the load during some but not all of the mains cycle, giving rise to a non-sinusoidal current waveform: with a triac dimmer, no current flows until the triac fires.
In the real world, electrical loads are rarely just reactive, they are usually combined with a resistive element as well. This can give rise to the situation where there is a misalignment between the alternating voltage and the current drawn as before, but not as pronounced. If you look at the resulting power graph, you can see that in this example, most of the power flowing into the load is power transfer that is doing some useful work, but a small proportion is being returned as reactive power.
An appliance like an electric fan heater presents a load like this. The bulk of the current drawn supplies the resistive heater element. However a small amount powers the fan, typically an induction motor.
### Other causes of low power factor
The above examples show the classical cause of non-unity power factors. A typical real world example of a load that has a less than unity power factor as a result of these phase shift effects is an induction motor. Here a significant proportion of the current flow into the motor is actually reactive and does not get dissipated as work. A power factor of 0.5 is not uncommon. Basic switchstart linear fluorescent lights are another common load with a low PF.
## How is a power factor expressed?
A power factor is usually[1] expressed as a number between 0 and 1. A power factor of 1 (aka a "unity power factor") basically says the power in a load is like a resistive load, and ohms law applies. A PF of 0 is a completely reactive load, one that dissipates no real power at all.
The power factor of our load can be expressed as:
${\displaystyle PowerFactor={RealPower \over VA}}$
So in our capacitive and inductive examples above, the power factor
would actually be zero.
A load like this that is totally "reactive" would be unusual. In real world situations, loads can have both resistive and reactive components (and of those reactive components, the capacitive ones will have leading, and the inductive ones lagging phase shifts).
To compute the actual current flow into a load like this at any given time therefore requires vector arithmetic, ie taking into account not only the reactance of the leading and lagging components, but also the direction of their phase shifts.
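As a hedged sketch of that vector arithmetic (the component values below are invented for illustration), complex impedances do the bookkeeping automatically:

import numpy as np

f = 50.0
w = 2 * np.pi * f
R, L, C = 50.0, 0.3, 5.5e-6              # illustrative resistive, inductive and capacitive elements

Z_R = R
Z_L = 1j * w * L
Z_C = 1.0 / (1j * w * C)

# e.g. resistor in series with the inductor, with the capacitor across the pair
Z_total = 1.0 / (1.0 / (Z_R + Z_L) + 1.0 / Z_C)

print(abs(Z_total))                       # magnitude of the combined impedance
print(np.cos(np.angle(Z_total)))          # displacement power factor of this combination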
[1] Note that this makes the assumption that power is always flowing in the traditional direction from source to load. In real world circumstances there are systems where on occasion power may flow in the reverse direction (e.g. a power distribution grid connected a property with a photo voltaic solar array). In some cases the nominal "load" becomes the supply. Practitioners working with these systems will therefore make reference to negative power factors in the range 0 to -1
## Power factor correction
Often low power factor is correctable with power factor correction (PFC). While this is worthwhile in an industrial setting where customers are usually charged based on their VA loading rather than their real power loading in watts, it's seldom useful in a domestic one where the meter reads actual power consumption regardless of the PF.
With poor PFs caused by current phase shifts, you can add a reactive component to the load to offset the phase shift and return the PF to near 1. Adding a capacitor across a fan motor with a lagging power factor can cancel out the poor pf.
In many cases one can only partly cancel reduced power factor this way. Eg with electromagnetic ballasted fluorescent strip lights (which have a lagging power factor due to their inductive ballasts), adding a capacitor to create a leading reactive element can cancel out much of the effect of the inductor, but the below 1 pf of the tube itself can't be cancelled so easily. So such fittings, with pf correction capacitor, might end up with a pf of around 0.8.
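A rough sizing sketch for such a parallel capacitor (not from the original page; the load figures are assumed): the capacitor has to supply the reactive power Q = P·(tan φ_old − tan φ_new), and then C = Q / (2πf·V²).

import numpy as np

V, f = 230.0, 50.0
P = 1000.0                 # real power of the lagging load in watts (assumed)
pf_old, pf_new = 0.65, 0.95

phi_old, phi_new = np.arccos(pf_old), np.arccos(pf_new)
Q_cap = P * (np.tan(phi_old) - np.tan(phi_new))     # reactive power the capacitor must supply [var]
C = Q_cap / (2 * np.pi * f * V ** 2)                # capacitance in farads

print('{0:.0f} var -> {1:.0f} uF'.format(Q_cap, C * 1e6))   # roughly 840 var -> about 50 uF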
## How to correct PF
• Inductive lagging load: add the right sized capacitor in parallel with the load. If the load can't be relied on to discharge the cap when unplugged, also add a bleeder resistor across the cap.
• Capacitive leading load: in principle one can add a suitable inductor in parallel with the load, but it's rare that capacitive loads need compensation, as total house/factory loads are normally lagging, and any capacitive load serves to correct this to some degree.
• Rectifier capacitor load: These can be corrected with either active electronic circuitry or use of relatively large passive components. Both require electronics expertise and are outside the scope of DIY. They're also just about never worth doing.
Note that components used for PFC need to be calculated correctly. Do not just hook up whatever's in the junk box.
## Does a low power factor mean I am using more electricity?
Yes and no. LPF does mean drawing more current than you would with a high PF, but no more power is consumed. In a domestic situation a poor power factor does not result in any increased electricity cost.
Low power factors reduce distribution efficiency in the grid slightly, and in some situations can even result in the mains supply waveform getting misshapen - so power supply companies tend to charge big industrial users if they don't control their power factors.
If you want an analogy, imagine riding a bike up hill. You stick a certain amount of push into the pedals to keep it moving overcoming resistance, and more to add the energy you are acquiring by climbing the hill. Imagine someone attaching a big spring to one pedal and the seat post, such that every time you push the right pedal down you also need to stretch the spring. As you can imagine this will take more "push" from you to keep riding. However that extra push is only required on the right pedal. When you push the left pedal you have the energy stored in the spring pulling up on the right pedal and hence working for you. So the result is the bike is harder to ride, but the total energy required to get up the hill is actually the same. This is similar to the effect of having a poor power factor as a result of large reactive elements in the load - the load still dissipates the same amount of energy, but it is harder to drive (i.e. needs more peak current flow).
## Fluorescent lights
Most of the time power factor doesn't much matter with fluorescent lighting. But a power factor of eg 0.65 means a light takes 50% more current than if PFCed, and this can run into problems if you've got a large enough quantity of lighting on a circuit to take it over its current rating. Such a situation isn't normally found in houses, but is common in commercial premises.
### Electromagnetic ballast
These basic ballasts nearly always flicker and flash when switched on. You can get them in HPF & LPF versions, with PFC capacitor or without.
LPF ballasts are converted to HPF by adding a capacitor across the mains feed. LPF (uncorrected) ballasts cause switches to arc; this may be reduced by adding a small snubber (a 0.1uF capacitor and a 100ohm resistor in series) in parallel with the fitting.
### Electronic
It's impractical to add DIY PFC to electronic ballast fluorescent lights. New electronic ballasts over 25W already have PFC.
## Generators
Generators are rated by VA rather than power in watts. So a 1kW 0.8pf drill consumes 1.25kVA under load, and the generator needs to supply this.
Connecting a suitable capacitor would allow the above tool to run on some generators that couldn't quite run it without PFC.
Use of tools on generators is more complex than this, as
• many tools also consume well above run current during startup.
• invertor and non-electronic generators behave quite differently with overcurrents, the latter handling them much better.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6403343677520752, "perplexity": 1134.571079275178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121869.65/warc/CC-MAIN-20170423031201-00414-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://methods.sagepub.com/Reference/encyc-of-research-design/n509.xml
|
# z Score
z score, also called z value, normal score, or standard score, is the standardized value of a normal random variable. The difference between the value of an observation and the mean of the distribution is usually called the deviation from the mean of the observation. The z score is then a dimensionless quantity obtained by dividing the deviation from the mean of the observation by the standard deviation of the distribution. In the following, random variables are represented by uppercase letters such as X and Z, and the specific values of the random variable are represented by the corresponding lowercase letters such as x and z.
The Normal Distribution and Standardization
The normal distribution is a family of normal random variables. A normal random variable X with ...
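A minimal sketch of the standardization just described, using sample estimates of the mean and standard deviation; the data values below are made up for illustration.

```python
# z score: deviation from the mean divided by the standard deviation.
import statistics

data = [12.0, 15.5, 9.8, 14.2, 11.1]      # made-up observations
mu = statistics.mean(data)
sigma = statistics.pstdev(data)           # standard deviation of the distribution

z_scores = [(x - mu) / sigma for x in data]
print([round(z, 2) for z in z_scores])    # dimensionless standard scores
```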
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8000367283821106, "perplexity": 2005.4525114651874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251671078.88/warc/CC-MAIN-20200125071430-20200125100430-00010.warc.gz"}
|
https://www.physicsforums.com/threads/collision-question-involving-velocity-kinetic-energy-and-conservation.753136/
|
# Collision question involving velocity, Kinetic energy and conservation
1. May 11, 2014
### selsunblue
1. The problem statement, all variables and given/known data
A 50 gram steel ball moving on a frictionless horizontal surface at 2.0m.s^-1 hits a stationary 20 gram ceramic ball. After the collision the ceramic ball moves off at a velocity of 2.5m.s^-1.
(i) Calculate the velocity of the steel ball after the collision.
(ii) Calculate the total kinetic energy of the balls before the collision and again after the collision.
(iii) From your results in part (ii) has the kinetic energy been conserved? If not, where has this energy gone?
2. Relevant equations
initial momentum = final momentum
3. The attempt at a solution
For the first sub-question I got initial momentum = 50*2 = 100, and for the final momentum I got 50v + 2.5(20).
equating these I got 100=50v+50 => v=1ms^-1
Would the 20 be correct in that calculation of final momentum?
initial KE = ½ * 50 * 2^2 = 100
Final KE = ½ * 50 * v^2 + ½ * m * 2.5^2
Last edited: May 11, 2014
2. May 11, 2014
### voko
Where does "20" come from? It was not specified in the statement of the problem.
3. May 11, 2014
### selsunblue
Oh It's the mass of the stationary ceramic ball, sorry about that, forgot to write it in I guess. Would it be 50 or 20 in this case?
4. May 11, 2014
### voko
Then your solution for the steel ball's velocity after the collision is correct.
5. May 11, 2014
### Nathanael
Yes, but it should be remembered that the results are not in standard units (joules), since you didn't make the conversion from grams to kilograms. The answers would have to be divided by 1000 to be in joules (it's a good habit to pay attention to units).
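Putting the whole thread together, here is a minimal sketch of the calculation with the masses converted to kilograms; the 20 g mass is the one the original poster says was accidentally left out of the problem statement.

```python
# Conservation of momentum and kinetic energy bookkeeping, in SI units.

m_steel, m_ceramic = 0.050, 0.020      # kg
u_steel = 2.0                          # steel ball's initial velocity, m/s
v_ceramic = 2.5                        # ceramic ball after the collision, m/s

# m1*u1 = m1*v1 + m2*v2  =>  solve for the steel ball's final velocity v1
v_steel = (m_steel * u_steel - m_ceramic * v_ceramic) / m_steel

ke_before = 0.5 * m_steel * u_steel**2
ke_after = 0.5 * m_steel * v_steel**2 + 0.5 * m_ceramic * v_ceramic**2

print(f"steel ball after collision: {v_steel:.2f} m/s")          # 1.00 m/s
print(f"KE before: {ke_before:.4f} J, KE after: {ke_after:.4f} J")
```

The kinetic energy drops from 0.100 J to 0.0875 J, so it is not conserved; the difference goes into sound, heat and deformation during the collision.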
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393332958221436, "perplexity": 1156.0868702095174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647322.47/warc/CC-MAIN-20180320072255-20180320092255-00165.warc.gz"}
|
https://indico.cern.ch/event/732911/contributions/3152187/
|
DISCRETE 2018
26-30 November 2018
Europe/Vienna timezone
CMS searches for pair production of charginos and top squarks in final states with two oppositely charged leptons
28 Nov 2018, 14:50
25m
Johannessaal
Austrian Academy of Sciences, Dr.-Ignaz-Seipel-Platz 2, 1010 Vienna, AUSTRIA
Non-Invited Talk [6] Supersymmetry
Speaker
Barbara Chazin Quero (Universidad de Cantabria (ES))
Description
A search for pair production of supersymmetric particles in events with two oppositely charged leptons and missing transverse momentum is reported. The data sample corresponds to an integrated luminosity of 35.9/fb of proton-proton collisions at 13 TeV collected with the CMS detector during the 2016 data taking period at the LHC. No significant deviation is observed from the predicted standard model background. The results are interpreted in terms of several simplified models for chargino and top squark pair production, assuming R-parity conservation and with the neutralino as the lightest supersymmetric particle.
Content of the contribution Experiment
Primary author
Barbara Chazin Quero (Universidad de Cantabria (ES))
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779683947563171, "perplexity": 4584.0228442140515}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668682.16/warc/CC-MAIN-20191115144109-20191115172109-00155.warc.gz"}
|
https://tapeop.com/interviews/110/deke-dickerson/
|
A collector of guitars. Owner of Major Label and Ecco-Fonic Records. Record producer and engineer at his Ecco-Fonic Studios. Band leader. Guitarist. Author of two volumes of The Strat in the Attic. This guy, Deke Dickerson, does a lot of things. We met up for breakfast on a rainy Portland morning and, at the end of our interview, what did we do but to head downtown to look for the plaque (at 411 SW 13th Avenue) where The Kingsmen recorded "Louie Louie!" How appropriate.
Thanks for meeting up to do this interview.
I have to say, I, in no way, consider myself to be an engineer, producer, or a full-time recording studio guy, like many of the people you talk to, so I sort of don't feel like I'm qualified to be in your magazine. I'm obsessed with records, old and new, and to me there's a fascinating process of listening to those records and thinking about how they made them. I feel really proud of being able to come up with some really cool sounds of the past and present in my recordings, as well as in some of the other things I've produced.
I think that makes you someone worth putting in Tape Op! How does work come to you? I'd imagine that people come and search you out?
Yeah, [that's what happens] whenever I get hired to out-and-out record or produce somebody. But it seems like a lot of these sessions are things that I come up with in my head, like, "The Trashmen must make another record!" Then I realize that nobody's going to support that, unless it's me. So I do everything out of my pocket, just to make it happen. I did a record with Nokie Edwards from The Ventures. That one turned out really cool.
To some, these people are bygones, but they're stylistic icons to us.
That's the thing. It always pisses me off when you talk to 20-year-old know-it-alls, and they talk about guys in their sixties, seventies, or eighties as being all washed-up has-beens. Nokie Edwards has magic in his hands. He will come up with licks that you never could have come up with.
How did recording music enter your life?
I never really set out to record. I was in a band, and we did the whole thing where we saved up a bunch of money and went to a local recording studio. But after, we were like, "Man, this sounds like crap! We just spent a bunch of money, this guy seemed like he knew what he was doing, but it sounds like a Steely Dan record or something."
Was that with Untamed Youth back in Missouri?
Exactly. It all boils down to the fact that you have to have somebody on the same wavelength. The guys in Missouri were all competent engineers; but they were hippie guys, and they recorded blues and hard rock. So when I moved out to California, I discovered this whole new world of people, like Wally Hersom, Tim Maag, and Mark Neill, who I wound up recording several records with. It was like this blanket had been lifted, and I could actually make records that would sound good — or, should I say, what my personal taste thinks is good. I thought that was something that had literally vanished forever. You start realizing that it's really an approach to recording, and by using certain pieces of gear, that you can achieve those sounds. Right around the time that eBay started was when I started picking up pieces of gear, because I found them for cheap.
You'd always been involved in finding guitars, instruments, and amps, like you talk about in your book.
I was always a junk hound, so I figured that I should really focus my energy on picking up recording gear when it was cheap. I just had this feeling that it was all going to go sky high, which it did. I was lucky enough to pick things up, like an RCA 77[-DX ribbon mic] and a Sony C-37A [mic] at a flea market for a couple hundred bucks each. I started watching the local Recycler [classified] paper, which was one of the things that predated Craigslist. I picked up a couple of Ampex tape recorders, a couple Altec mixers, and some other basic items to start with. That's when you have the harsh realization that this is not plug-and-play. It starts you down this whole path, where you realize you have to learn about +4 and -10, balanced and unbalanced. And if you're using tube gear, you find out what a world of difference things like quality capacitors, resistors, and low-noise tubes make. Tube gear, when it's well maintained, can sound shockingly good. On the other hand, I've been to a lot of studios that have some noisy old tube gear that sounds like crap because it all needs to be rebuilt. I do think I'm lucky, in that I was sort of the last generation that learned how to record audio with reel to reel tape recorders. I used to do this radio show back in Missouri. They had a production room with all these ReVox tape recorders and a Studer mixer. I spent hours in there with these guys teaching me how to chop tape, as well as clean and demagnetize the heads.
Even how to feed the tape through the rollers?
Exactly. I feel like I was lucky to have a hands-on experience with all of the analog stuff. When the whole digital thing came around, I wasn't helpless. At some point I decided I wanted to start recording bands so that I could learn how to use some of the gear I'd gotten together. My first few efforts were atrocious. I like to think of myself as a fairly intelligent guy, so I sat down and tried to figure it out. I thought, "If Sam Phillips, or any of these guys I idolize, were able to make such great-sounding records with so little equipment, what is it that I'm doing wrong?" Eventually you start figuring out that it really has to do with the musicians and the sound that's coming out of the instruments, along with their performances and musical interaction. The gear has a little bit to do with it, especially when you're dealing with chasing a tone. Mark Neill was the guy to teach me this. He comes across as a real hard-ass a lot of times, because he'll tell musicians, "You're not good enough to play on this recording." A lot of times, he's right. I took notes on that.
But, on the other hand, you and I are also fans of garage bands from the '60s. How did those records get done?
I really love extremely well produced records that are played by virtuoso musicians; high-fidelity and magically great records. Think about those incredible records produced by Bill Porter in Nashville in the 1950s and '60s — Roy Orbison, The Everly Brothers, that Nashville period of Elvis Presley's. Those recordings are shockingly good, from every aspect. They're perfect records! On the other hand, I really love messed-up sounding records, and I think there's a whole art and science to messed up sounding records. For me, when I've recorded bands in the house, I always try to ask myself, "Is this guy really not a good musician, and should he not be on this recording?" Versus, "I don't want to be the guy who stops the next Velvet Underground from happening." What if the Velvet Underground had walked into a studio, and they told them, "You guys suck! We're going to get some studio musicians in here and let Lou Reed sing." I always try to ask myself if things are actually working and genius in their own way, or if they actually need to be improved.
What's your home recording setup like?
I was really lucky when I bought my house, because I was planning on converting the two-car garage into a floating-floor type studio room. But first I started doing some band rehearsals in my living room. It's wood all around, with hardwood floors, wood on the walls, and a vaulted wood ceiling. All of us were like, "Damn, it sounds good in here!" The last four or five albums I've done there, with the band in the living room and all of my gear set up in one of the bedrooms. I wouldn't go so far as to call my setup a full-on commercial studio, but I've got top gear and a good sounding room. I'm open for business!
You've got a label, your writing, the various combos you put together, plus recording and producing people — you've got all these different ventures going, but are you making a living?
Well, yeah. The reason I do all those different things is to attempt to make a living. Most of the guys I know who quit their day jobs have to find out some way to juggle all the bills and make it happen. For me, the whole record industry thing sort of imploded around 2000.
Were you in a genre or niche that used to see more money, like from direct CD sales and such?
Yes and no. I was on HighTone Records, which was a mid-sized record label. I signed with them in 1998, and when the first record came out, it was all the good, old classic record label deal. I got a good-sized advance, they did a radio promo, took out ads in local entertainment magazines when I went on tour, and they gave me some tour support money. Within two years, by 2000, it had turned into one-third the size of the record advance, and nothing else. It was like the entire industry had back-ended itself in two years. You can sit there and drive yourself crazy thinking about, "Man, if I had only gotten signed six years earlier." I realized I was a good hustler of merchandise, so I started putting out my own records. The first record I did after HighTone was a record called [Deke Dickerson] In 3-Dimensions! I sold 9,000 CDs all on my own, without taking out ads or anything like that.
Are you still on a quest for perfect guitar tones?
The funny thing for me is that, especially in this digital age, people think that some gadget, some plug-in, or some effect is going to make things sound better. You avoid the basic building blocks of whether the guitar is good or the singer is good. What's the actual sound that's being produced from the beginning? As the old saying goes, you can rub it and you can buff it, but you can't shine shit. The other thing I found is that the simplest path you can take gives you the best guitar tone you can have. There are so many guys who go overboard, not only with effects on their amp setup, but also with effects while they're recording, just loading it down with everything under the sun. For me, if the guy can get a good sound in the room, then a really simple path of a microphone into a good preamp straight to tape can't be beat. It took me forever to figure that out.
With you, being a guitar player, and on a quest for guitars and amps, what have you found, as far as sounds go, while searching for some of the more obscure amps and guitars?
I'll just say this, I'm definitely a nerd about that sort of thing. If I hear a record that I really like, I'll sometimes try to chase a tone and figure out how they got a sound. If you don't have the right gear, you spend a lot of time and effort trying to chase that sound. If you figure out a sound that you're going for, and then you get the exact same gear that was on the recording, it's always mind-blowing to listen to the gear and hear the sound. It lives! On a new record that I did, there were a few places where I was trying to chase these elusive rockabilly guitar tones. There was a guy named Grady Martin who played a lot on Owen Bradley records in Nashville. He had a Bigsby double-neck guitar, which he played through a Magnatone amp, with an RCA mic on the amp. It was an old tube studio and a pretty primitive setup. They'd use a second microphone direct into the mic preamp of an Ampex 350 [tape deck] for the slapback, and they'd mix that in as a separate channel on their console, as opposed to [using] an aux send or something. On this record, I was lucky enough to have a Bigsby electric guitar (which is really rare), a Magnatone amp, an RCA microphone, and an Ampex 350. I set that whole signal chain up, and then all of a sudden there was that exact sound. I'd spent all these years trying to chase that exact tone with other guitars and amplifiers.
That is a cool idea, a second mic feeding a delay. That's a whole different ball game.
Exactly. I've seen so many guys who try to get old recording sounds and tones using modern techniques. If they had a slapback echo, they'd literally have to put a second mic on the amp, send that through a second slapback tape recorder, and put that through the mixing board — this actually sounds different.
It could be a phase situation too, depending on where the mics are.
True. I study old photographs to see how everything was set up in old studios, and there's a really interesting photo of Buddy Holly recording at Owen Bradley's studio in about 1956. There's a [Neumann] U 47 hanging from the ceiling, and an Altec 639 butted up underneath it. I remember seeing some guy on the Ampex forum a long time ago saying, "Well, they did that because they'd combine the signal of the bright Neumann and the warm signal of the Altec microphone." I'm like, "No, dude; one was the echo mic!" The 639 was going into a tape recorder for the echo. Then you get into this whole thing of, "Wow, the tape echo actually sounds different because the tonality of a 639 feeding it is different from the tonality of a U 47 feeding it."
It's so simple, but it's probably something people wouldn't necessarily set up.
If you come from the modern standpoint, it's so out of the realm of thinking that they'd ever do this. Then, when you realize old boards didn't have aux sends, you get really archaic about it. You realize why people did it, and why things sound like that. You also realize that some of those tones you think were so amazing weren't even deliberate. Again, to use the Owen Bradley rockabilly example, I spent a long time trying to figure out how they got the echo sound on the drums that they did — and eventually I realized that it was just leakage into the vocalist's echo mic on the other side of the room. It's always the simplest thing. Those guys back then had about 15 knobs on their entire recording console. Amazing sounds came from the most primitive of setups, mostly by accident.
You can even listen to some old recordings and kind of guess where they were done because of the techniques or the sounds of the room.
Absolutely. The main thing I learned from Mark Neill, just by watching him work, was controlled leakage. Controlled leakage is the secret to all those great recordings. When I first started recording in the '80s, it was when people were starting to isolate every instrument, close mic everything, and put the drummer in a separate room with a door. It took me a long time to realize that there's magic with everybody playing in the room together, as well as having controlled leakage going on to help things. It's amazing how much leakage from the drums will help your drum sound, as long as you have a good drummer who's not bashing the shit out of the drums.
True.
When you start studying older records, you can pick apart what you're hearing on them. One of my favorite conversations I ever had was with Cosimo Matassa, the guy in New Orleans who recorded all the great music. He's such a low-key guy, and I was trying to pick his brain. He basically said that it was all the musicians, and he didn't have anything to do with it. That's very true, but then he told me that he only had one condenser mic, and that was an Altec M11. The Altec M11 has a big omnidirectional field. He said he'd actually have someone in the studio while the band was playing who would swing the microphone over on a boom to the saxophone for the saxophone solo, and then swing it over to the drums after the sax solo was done. I went and listened to a bunch of Little Richard records after he told me that, and you can hear it! That's literally the only microphone that you could do that with. You have this giant field, and it almost acts like a compressor, because the drums go down a little bit when he swings it by the saxophone. It's genius.
There's something to be said about making do with a limited amount of tracks, or a limited amount of inputs. He had one track to start with!
Plus, he was making records that were huge hits then and are still played today. That's what always blows my mind. I've wound up recording a bunch of modern rockabilly bands. It's always funny, because they come in, and I tell them, "We can do it live to mono tape, just like the old days, or we can do it in Pro Tools." Without fail, they go, "Oh, let's do live to mono tape!" Then, after about 16 takes of one song, they realize, "Oh, we can't go back and fix it. We can't overdub that." A lot of them aren't really good enough to pull out that magic performance. There have been half a dozen sessions that started out live to mono tape and wound up going to Pro Tools for fixing later on.
Have you forced yourself to work in that fashion too, as a performer? To go live-to-tape?
I've done a lot of things live-to-tape in the past. I like the way it sounds, but it's always a compromise when it comes to your own performance. You always have to settle. The vocals on one sound good, but the guitar's not as good as the one before. Actually on the first couple of records that I did, Mark Neill and I did a lot of splicing, which was another technique they did a lot in the old days. Those early Beatles records, like "She Loves You" and "Please Please Me," weren't made with overdubs or multitrack recorders, but they did have about a dozen tape splices in every single one of those early hits, cobbled together from various takes.
Your book, The Strat in the Attic, is about you and people you've talked to, as well as their quests and adventures in finding interesting new guitar equipment.
I tried to write this book so that even if you don't like guitars, it's still interesting to read. It's really more a collection of human stories about people being obsessed with something and taking it as far as they can possibly go. A Les Paul Standard from 1958 to 1960 is the most valuable electric guitar in the world. People lust after these guitars. But if you had a story that went, "Yeah, this is a nice Les Paul, and it's worth a bunch of money," it would be kind of boring. The story I put in the book is about a 1958 Les Paul sunburst that turned up where a guy had turned it into a left-handed guitar by sawing it up on a bandsaw and putting a whole bunch of extra holes in it and routing out this and that for a vibrato. He basically butchered a guitar that could possibly be worth as much as $250,000 into a guitar that's worth about $3,500. To me, that's a really interesting story. It almost makes you throw up. I tried to write a book that's full of funny, interesting stories. Not just people who are like, "I'm a lawyer, and I paid a crapload of money for this guitar."
There's a lot of that.
It's the same way with vintage recording equipment. There are a few guys who tend to buy up almost everything. The people who are more interesting to me are the guys who are still using something they bought 55 years ago. Kearney Barton was a friend of mine up in Seattle. It was so awesome to watch him using all this gear. Or people who have maybe one cool piece of vintage gear and learned how to use it really, really well. That's way more interesting to me than people who have every piece of vintage gear under the sun, and it's basically a giant dog and pony show.
You should do a book on finding vintage recording equipment next.
You know, the book I've always wanted to do [would be about] all the classic recording studios and how they were actually set up. The room size, ceiling height, what the echo chamber was like, what they actually had in the echo chamber, and what boards they were using at different times. Being the obsessive nerd that I am, I've been to most of these places. I went to Norman Petty's studio in Clovis, New Mexico, and got a tour of the place. They had just gotten all the crap out of the echo chamber, because Vi Petty had been using it for 30 years as a storage room. Just seeing all these multi-colored tiles that Buddy Holly's family put in there — they were in the tile business and donated all these leftover tiles — made me think, "Man, somebody really ought to do a book and include all these obsessive details." That's one of my favorite things to do when I'm on tour around the country, or even over in Europe. I was just in New Orleans, and I went by the laundromat where [Cosimo Matassa's] J&M Recording Studio used to be. Or by the place in San Antonio where Robert Johnson recorded. It's amazing to see the building where it happened. One of the reasons Sun Recording Studios in Memphis is still in pristine condition is because nobody wanted it. No one ever bothered to tear out the acoustic tile on the ceilings. It's amazing. Whereas in Los Angeles, places like Gold Star Records are gone because the real estate got so valuable, and they just had to tear it out.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18197587132453918, "perplexity": 1718.6421737014264}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509973.34/warc/CC-MAIN-20210117051021-20210117081021-00406.warc.gz"}
|
https://www.khanacademy.org/math/algebra2/x2ec2f6f830c9fb89:transformations/x2ec2f6f830c9fb89:symmetry/v/even-and-odd-functions-tables
|
# Even and odd functions: Tables
## Video transcript
- [Instructor] We're told this table defines function f. All right. For every x, they give us the corresponding f of x. According to the table, is f even, odd, or neither? So pause this video and see if you can figure that out on your own. All right, now let's work on this together. So let's just remind ourselves the definition of even and odd. One definition that we can think of is that f of x, if f of x is equal to f of negative x, then we're dealing with an even function. And if f of x is equal to the negative of f of negative x, or another way of saying that, if f of negative x. If f of negative x, instead of it being equal to f of x, it's equal to negative f of x. These last two are equivalent. Then in these situations, we are dealing with an odd function. And if neither of these are true, then we're dealing with neither. So what about what's going on over here? So let's see. F of negative seven is equal to negative one. What about f of the negative of negative seven? Well, that would be f of seven. And we see f of seven here is also equal to negative one. So at least in that case and that case, if we think of x as seven, f of x is equal to f of negative x. So it works for that. It also works for negative three and three. F of three is equal to f of negative three. They're both equal to two. And you can see and you can kind of visualize in your head that we have symmetry around the y-axis. And so this looks like an even function. So I will circle that in. Let's do another example. So here, once again, the table defines function f. It's a different function f. Is this function even, odd, or neither? So pause this video and try to think about it. All right, so let's just try a few examples. So here we have f of five is equal to two. F of five is equal to two. What is f of negative five? F of negative five. Not only is it not equal to two, it would have to be equal to two if this was an even function. And it would be equal to negative two if this was an odd function, but it's neither. So we very clearly see just looking at that data point that this can neither be even, nor odd. So I would say neither or neither right over here. Let's do one more example. Once again, the table defines function f. According to the table, is it even, odd, or neither? Pause the video again. Try to answer it. All right, so actually let's just start over here. So we have f of four is equal to negative eight. What is f of negative four? And the whole idea here is I wanna say, okay, if f of x is equal to something, what is f of negative x? Well, they luckily give us f of negative four. It is equal to eight. So it looks like it's not equal to f of x. It's equal to the negative of f of x. This is equal to the negative of f of four. So on that data point alone, at least that data point satisfies it being odd. It's equal to the negative of f of x. But now let's try the other points just to make sure. So f of one is equal to five. What is f of negative one? Well, it is equal to negative five. Once again, f of negative x is equal to the negative of f of x. So that checks out. And then f of zero, well, f of zero is of course equal to zero. But of course if you say what is the negative of f of, if you say what f of negative of zero, well, that's still f of zero. And then if you were to take the negative of zero, that's still zero. So you could view this. This is consistent still with being odd. This you could view as the negative of f of negative zero, which of course is still going to be zero. 
So this one is looking pretty good that it is odd.
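A minimal sketch of the table check described in the transcript. The tables below reuse the data points read out in the video where they are given and fill in made-up values elsewhere, so they are stand-ins, not the exact tables shown on screen.

```python
# Classify a function given as a table of (x, f(x)) pairs as even, odd,
# or neither, using the checks described in the transcript.

def classify(table):
    # Pair each f(x) with f(-x) wherever both x and -x appear in the table.
    pairs = [(y, table[-x]) for x, y in table.items() if -x in table]
    if pairs and all(fx == fmx for fx, fmx in pairs):
        return "even"       # f(-x) = f(x) for every mirrored pair
    if pairs and all(fmx == -fx for fx, fmx in pairs):
        return "odd"        # f(-x) = -f(x) for every mirrored pair
    return "neither"

print(classify({-7: -1, -3: 2, 3: 2, 7: -1}))           # even
print(classify({5: 2, -5: 7}))                          # neither
print(classify({4: -8, -4: 8, 1: 5, -1: -5, 0: 0}))     # odd
```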
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8334291577339172, "perplexity": 314.2040625588542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401614309.85/warc/CC-MAIN-20200928202758-20200928232758-00527.warc.gz"}
|
https://blog.plover.com/math/pi-approx-2.html
|
# The Universe of Discourse
Tue, 14 Feb 2006
More approximations to pi
In an earlier post I discussed the purported Biblical approximation to π, and the verses that supposedly equate it to 3.
Eli Bar-Yahalom wrote in to tell me of a really fascinating related matter. He says that the word for "perimeter" is normally written "QW", but in the original, canonical text of the book of Kings, it is written "QWH", which is a peculiar (mis-)spelling. (M. Bar-Yahalom sent me the Hebrew text itself, in addition to the Romanizations I have shown, but I don't have either a Hebrew terminal or web browser handy, and in any event I don't know how to type these characters. Q here is qoph, W is vav, and H is hay.) M. Bar-Yahalom says that the canonical text also contains a footnote, which explains the peculiar "QWH" by saying that it represents "QW".
The reason this is worth mentioning is that the Hebrews, like the Greeks, made their alphabet do double duty for both words and numerals. The two systems were quite similar. The Greek one went something like this:
Α 1    Κ 10    Τ 100
Β 2    Λ 20    Υ 200
Γ 3    Μ 30    Φ 300
Δ 4    Ν 40    Χ 400
Ε 5    Ξ 50    Ψ 500
Ζ 6    Ο 60    Ω 600
Η 7    Π 70
Θ 8    Ρ 80
Ι 9    Σ 90
This isn't quite right, because the Greek alphabet had more letters then, enough to take them up to 900. I think there was a "digamma" between Ε and Ζ, for example. (This is why we have F after E. The F is a descendant of the digamma. The G was put in in place of Ζ, which was later added back at the end, and the H is a descendant of Η.) But it should give the idea. If you wanted to write the number 172, you would use ΒΠΤ. Or perhaps ΤΒΠ. It didn't matter.
Anyway, the Hebrew system was similar, only using the Hebrew alphabet. So here's the point: "QW" means "circumference", but it also represents the number 106. (Qoph is 100; vav is 6.) And the odd spelling, "QWH", also represents the number 111. (Hay is 5.) So the footnote could be interpreted as saying that the 106 is represented by 111, or something of the sort.
Now it so happens that 111/106 is a highly accurate approximation of π/3. π/3 is 1.04719755 and 111/106 is 1.04716981. And the value cited for the perimeter, 30, is in fact accurate, if you put 111 in place of 106, by multiplying it by 111/106.
It's really hard to know for sure. But if true, I wonder where the Hebrews got hold of such an accurate approximation? Archimedes pushed it as far as he could, by calculating the perimeters of 96-sided polygons that were respectively inscribed within and circumscribed around a unit circle, and so calculated that 223/71 < π < 22/7. Neither of these fractions is as good an approximation as 333/106.
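For what it's worth, a quick numerical check of how these fractions compare (a minimal sketch, not part of the original post):

```python
# Compare the approximations discussed above against pi.
from math import pi

checks = [("111/106 vs pi/3", 111 / 106, pi / 3),
          ("333/106 vs pi",   333 / 106, pi),
          ("22/7   vs pi",    22 / 7,    pi),
          ("223/71 vs pi",    223 / 71,  pi)]

for name, approx, target in checks:
    print(f"{name}: {approx:.7f}, error {abs(approx - target):.2e}")
```

Running this confirms that 333/106 is closer to π than either of Archimedes' bounds, 223/71 and 22/7.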
Thanks very much, M. Bar-Yahalom.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9344792366027832, "perplexity": 1698.837928213414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888878.44/warc/CC-MAIN-20180120023744-20180120043744-00477.warc.gz"}
|
http://math.stackexchange.com/questions/20809/trouble-deriving-the-harris-corner-detection
|
Trouble deriving the Harris Corner Detection
I just started studying a small paper about the Harris Corner Detection. The problem is I don't understand how step 7 is derived from step 6. In step 7 the expression is expanded in a way that we get a structure tensor $C$ for $x$ and $y$. If one would multiply the three matrices again, I see that we would end up with 6 again (and that it's correct). However I do not see given step 6 how one can derive step 7.
$$\big(a^Tb\big)^2 = \big(a^Tb\big)\big(a^Tb\big) = \big(b^Ta\big)\big(a^Tb\big) = b^T\big(aa^T\big)b.$$
Didn't know this identity: $$\big(a^Tb\big)\big(a^Tb\big) = \big(b^Ta\big)\big(a^Tb\big)$$ thanks! – Nils Feb 7 '11 at 10:16
@Nils: It's merely the fact that the dot product is commutative: $a^Tb = \sum a_i b_i = b^Ta$. – Rahul Feb 7 '11 at 10:57
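A quick numerical check of the identity (a sketch using NumPy; the random vectors are arbitrary):

```python
# Verify (a^T b)^2 = b^T (a a^T) b for random vectors a, b.
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

lhs = float(a @ b) ** 2                      # (a^T b)^2
rhs = float(b @ np.outer(a, a) @ b)          # b^T (a a^T) b
print(np.isclose(lhs, rhs))                  # True
```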
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850397706031799, "perplexity": 228.06416662900793}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00070-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://fankhauserblog.wordpress.com/1995/08/
|
Sample Math Problems for Microbiology
Remember, all these calculations for calculating microbe per unit volume from the results of a plate count depend on this equation:
CFU/standard unit volume = no. of colonies x dil’n factor x standard unit volume/aliquot plated
The answers to the problems can be found at the bottom of the page.
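A minimal sketch of the equation above as a helper function; the function and parameter names are my own, not from the page, and the example reproduces problem 1 below.

```python
# CFU/standard unit volume =
#     colonies * dilution factor * (standard unit volume / aliquot plated)

def cfu(colonies, dilution_factor, aliquot_ml, standard_volume_ml=1.0):
    return colonies * dilution_factor * standard_volume_ml / aliquot_ml

# Problem 1: 0.1 mL of undiluted urine, 279 colonies
print(cfu(279, dilution_factor=1, aliquot_ml=0.1))                          # 2790 CFU/mL
print(cfu(279, dilution_factor=1, aliquot_ml=0.1, standard_volume_ml=100))  # 279000 CFU/100 mL
```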
PROBLEMS: (Note that they get more challenging as you work your way through them.)
1. 0.1 mL of urine plated out on nutrient agar. After incubation at 37C, 279 colonies appeared. Give the CFU/mL. How many CFU are there per 100 mL?
2. A sample was diluted by placing a 0.1 mL aliquot into 0.9 mL of diluent, and 1 mL of this dilution was pour-plated. After incubation, 217 colonies appeared. What is the CFU/mL in the original specimen?
3. A culture was diluted by adding a 0.1 mL aliquot to 0.9 mL water. Then, 0.1 mL of this dilution was plated out, yielding 73 colonies. Calculate the CFU/100 mL in the original culture.
4. A dilution of a bacterial suspension was prepared by adding 100 lambdas of the suspension to 9.9 mL of sterile physiological saline (PSS). 100 lambdas of this dilution was plated out, yielding 27 colonies. Give the CFU/mL in the original culture.
5. A serial dilution was prepared by adding 0.1 mL to 9.9 mL, and 0.1 of that to 9.9 mL of fresh diluent. Then, 0.1 mL of the last dilution was spread. Later, 561 colonies were counted. What was the original CFU/mL?
6. 200 µL of milk was mixed with 9.8 mL diluent, and 10 lambdas of the dilution were pour-plated. If there were 141 colonies on the plate, what was the original CFU/mL in the milk?
7. 250 mL of drinking water was passed through a millipore filter, and the membrane was layered on a pad supplied with on m-Endo MF medium. 21 red colonies formed. What was the coliform/100 mL? Does this drinking water meet the standards for drinking water? Using this protocol, what would the largest number of coliform permitted and still have the water “safe for consumption?”
8. 10 µL of a phage lysate suspension were added to 9.99 mL diluent, 10 µL of that added to 9.99 mL fresh diluent. Then 10 µL of this last dilution plated with indicator bacteria. 207 plaques appeared following incubation. What was the titer of the original lysate (phage/mL)?
9. 0.73 gm of hamburger was suspended in 7.3 mL of diluent, 0.1 mL of the suspension diluted into 9.9 mL diluent, 0.1 of that into 0.9 mL diluent, and 0.2 mL of this last dilution was pour plated. 422 colonies formed. What was the CFU/g in the meat? Does this hamburger meet the standards for wholesome meat?
10. A 1 cm square surface was swabbed with a moistened sterile cotton swab, and the swab was suspended in 2 mL of sterile water and vortexed to suspend swabbed bacteria. 10 lambdas of this suspension was added to 9.99 mL diluent, and 20 lambdas were pour plated, and 106 colones appeared. What was the CFU/sq.cm on the surface?
11. A package of yeast (7.39 g) was suspended in 100 mL dH2O. This was serially diluted by transferring 0.1 mL aliquots successively to 9.9 mL dilution blanks three times in succession. 0.1 and 0.2 mL aliquots were spread on 2% glucose nutrient agar. Colonies appeared as follows: 0.1 plate: 179, 0.2 plate: 321. How many yeast were there originally in the package? How many yeast are there in a gram? How many picograms does a single yeast cell weigh?
Here are the answers to these problems:
1. a: 2,790/mL, b: 279,000/100 mL.
2. 2,170 CFU/mL.
3. 7.3 x 10^5 CFU/100 mL.
4. 27,000 CFU/mL.
5. 5.61 x 10^7 CFU/mL.
6. 705,000 CFU/mL.
7. a: 8.4 coliform/100 mL, b: No, it is not safe to drink, c: 25 colonies.
8. 2.07 x 10^10 phage/mL.
9. 2.11 x 10^7 CFU/g. Yes, it is (barely) below the maximum permissible bacterial load.
10. 1.06 x 10^7 CFU/sq. cm.
11. a) 1.7 x 10^11/package, b) 2.3 x 10^10/gram, c) 44 picograms/yeast cell.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548355102539062, "perplexity": 11036.77493284409}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00143.warc.gz"}
|
https://www.rdocumentation.org/packages/diptest/versions/0.25-0/topics/dip
|
# dip
##### Compute Hartigan's Dip Test Statistic for Unimodality
Computes Hartigan's dip test statistic for testing unimodality, and additionally the modal interval.
Keywords
distribution, htest
##### Usage
dip(x, full.result = FALSE, debug = FALSE)
##### Arguments
x
numeric; the data.
full.result
logical; if TRUE returns the full result list, see below.
debug
logical; if true, some tracing information is printed (from the C routine).
##### Value
• depending on full.result either a number, the dip statistic, or a list with components
• x: the sorted unname()d data.
• n: length(x).
• dip: the dip statistic.
• lo.hi: indices into x for the lower and higher end of the modal interval.
• xl, xu: lower and upper end of the modal interval.
• gcm, lcm: (last used) indices for the greatest convex minorant and the least concave majorant.
• mn, mj: index vectors of length n for the GC minorant and the LC majorant respectively.
##### Note
For $n \le 3$ where n <- length(x), the dip statistic is always zero, i.e., there's no possible dip test.
Yong Lu [email protected] found in Oct 2003 that the code wasn't giving symmetric results for mirrored data (and was giving results of almost 1, and then found the reason, a misplaced ")" in the original Fortran code. This bug has been corrected for diptest version 0.25-0.
##### References
P. M. Hartigan (1985) Computation of the Dip Statistic to Test for Unimodality; Applied Statistics (JRSS C) 34, 320--325.
J. A. Hartigan and P. M. Hartigan (1985) The Dip Test of Unimodality; Annals of Statistics 13, 70--84.
See also isoreg for isotonic regression.
##### Examples
data(statfaculty)
plot(density(statfaculty))
dip(statfaculty)
str(dip(statfaculty, full = TRUE, debug = TRUE))
data(faithful)
fE <- faithful$eruptions
plot(density(fE))
str(dip(fE, full = TRUE, debug = TRUE))
data(precip)
plot(density(precip))
str(dip(precip, full = TRUE, debug = TRUE))
Documentation reproduced from package diptest, version 0.25-0, License: GPL version 2 or later
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30725523829460144, "perplexity": 19474.768663105802}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513321.91/warc/CC-MAIN-20200606124655-20200606154655-00185.warc.gz"}
|
http://mathhelpforum.com/number-theory/122112-how-many-elements-order-2-there-product-groups.html
|
# Math Help - How many elements of order 2 are there in this product of groups?
1. ## How many elements of order 2 are there in this product of groups?
How many elements of order 2 are there in the following abelian group of order 16? :
Z2 X Z2 X Z4
where Z2 is the integers mod 2 and Z4 is the integers mod 4.
How are these found?
Thanks for any help
2. Originally Posted by Siknature
How many elements of order 2 are there in the following abelian group of order 16? :
Z2 X Z2 X Z4
where Z2 is the integers mod 2 and Z4 is the integers mod 4.
How are these found?
Thanks for any help
In this case perhaps is easier to ask how many elements are NOT of order 2: first, note that every non-unit element is of order 2 or 4.
Now, if we agree to write the elements of the group in the form $(x,y,z)$, with $x,y = 0,1 \pmod 2$ and $z = 0,1,2,3 \pmod 4$, then an element has order 4 iff its last entry is $1$ or $3 \pmod 4$...can you now count them all?
Tonio
3. Originally Posted by tonio
can you now count them all?
Tonio
Thanks, yes. Now I think that this means the answer must be 7 (8 if we include the identity).
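A brute-force check of this count (a sketch; elements are represented as residue triples, following Tonio's description):

```python
# Count elements of order exactly 2 in Z2 x Z2 x Z4 by enumeration.
from itertools import product

def order(elem, mods=(2, 2, 4)):
    """Smallest k >= 1 with k*elem = (0, 0, 0) componentwise."""
    k = 1
    while any((k * x) % m for x, m in zip(elem, mods)):
        k += 1
    return k

elements = list(product(range(2), range(2), range(4)))
print(sum(1 for e in elements if order(e) == 2))   # 7
```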
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410894274711609, "perplexity": 299.03078636532297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646242843.97/warc/CC-MAIN-20150827033042-00057-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://export.arxiv.org/abs/hep-th/9810016
|
# Title: Two different gauge-invariant models in the Lagrangian approach
Abstract: We show how to systematically derive the complete set of the gauge transformations of different types of the gauge invariant models, which are the chiral Schwinger and CP$^1$ with Chern-Simons term, in the Lagrangian Formalism.
Comments: 15 pages, no figures, LaTeX. To appear in Journal of Physics A: Mathematical and General
Subjects: High Energy Physics - Theory (hep-th)
Journal reference: J.Phys.A31:8677-8687,1998
DOI: 10.1088/0305-4470/31/43/011
Report number: SOGANG HEP 238/98
Cite as: arXiv:hep-th/9810016 (or arXiv:hep-th/9810016v1 for this version)
## Submission history
From: Yong-Wan Kim [view email]
[v1] Fri, 2 Oct 1998 05:28:04 GMT (11kb)
Link back to: arXiv, form interface, contact.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3854033946990967, "perplexity": 6712.637173352092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00353.warc.gz"}
|
https://gmatclub.com/forum/if-y-is-greater-than-110-percent-of-x-is-y-greater-than-100853.html?fl=similar
|
# If y is greater than 110 percent of x, is y greater than 75?
If y is greater than 110 percent of x, is y greater than 75?
(1) x > 75
(2) y - x = 10
My arguments are in the next post.
metallicafan wrote:
If y is greater than 110 percent of x, is y greater than 75?
(1)$$x<75$$
(2) $$y - x = 10$$
My arguments are in the next post.
If y > 1.1x, is y > 75?
a. x < 75
Let's try x = 1, which means y > 1.1, which doesn't mean y > 75.
Now x = 74: 1.1 * 74 = 81-ish, so y > 75. INSUFFICIENT.
b. y = 10 + x; we don't know x, so no go. INSUFFICIENT.
C.
So y = 10 + 1 = 11, which means y < 75.
y = 10 + 74 = 84, which means y > 75. INSUFFICIENT.
E?
A) Not suff x=74, x=2
B) Not suff x=50, x=90
A+B) Not suff x=74,y=84 & x=50,y=60
OA is wrong
For example, in the case of clue Nº 1 we can have two possible scenarios in which y is greater than 75 or less than 75.
Scenario 1: y> 75
For example, x = 70 and y = 78
The 110% of x is 77. So, we are Ok with the original condition (y > 110% of x).
Scenario 2: y<75
For example, x = 10 and y = 12
The 110% of x is 11. Also, we are OK with the original condition.
As you can see, only with clue nº 1, we cannot verify if y > 75.
That's why I think OA is wrong.
Yes, I also think the answer is E.
Thanks for the confirmation guys. Kudos for you!
I woke up happy today
metallicafan wrote:
If y is greater than 110 percent of x, is y greater than 75?
(1) $$x<75$$
(2) $$y - x = 10$$
My arguments are in the next post.
For (A) to be the answer, statement (1) should read $$x>75$$ (and I think that is the case, as I've seen this question before).
Then we would have $$y>1.1x$$. The question asks: is $$y>75$$? Since $$y>1.1x$$, it is enough to know whether $$1.1x\geq{75}$$, i.e. whether $$x\geq{\frac{75}{1.1}}\approx{68.2}$$.
(1) $$x>75$$. Sufficient.
(2) $$y - x = 10$$. Not sufficient as shown above.
Intern (14 Sep 2010, 21:05):
Bunuel wrote: (quoted above)
Hi, the statement as given is x < 75, not x > 75. If x < 75, then either x < 75/1.1 or 75/1.1 ≤ x < 75, so in my opinion statement (1) is insufficient.
Math Expert (14 Sep 2010, 21:17):
BalakumaranP wrote: (quoted above)
The OA was given as A, and I said: "for (A) to be the answer, statement (1) should be $$x>75$$ (and I think this is the case as I've seen this question before)".
Manager (16 Sep 2010, 11:00):
E it is. A is an obviously wrong answer. Source?
VP (16 Sep 2010, 20:05):
It's E. I did the math out. Follow me on this one. Remember that the question stem only says that y is GREATER than 110% of x (it doesn't say how much greater).
1. x < 75 - Insufficient. No matter what x is, we don't know how far above 110% of x the value of y is.
2. y - x = 10, so y = x + 10.
If y is 120% of x: y = x + 10 = 1.2x, so 10 = 0.2x, x = 50, y = 60, and 60 < 75 (No).
If y is 111% of x: y = x + 10 = 1.11x, so 10 = 0.11x, x = 90.9090..., y = 100.9090... > 75 (Yes).
Statement 2 is insufficient.
Now let's take S1 and S2 together:
If x = 74, y = 84, which is about 113% of x and greater than 75.
If x = 50, y = 60, which is 120% of x and less than 75.
Statements 1 and 2 together are insufficient.
E
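For readers who like to sanity-check data sufficiency answers numerically, here is a small brute-force sketch (added for illustration, not part of the original thread) that enumerates integer values consistent with both statements, assuming the x < 75 version of statement (1):

```python
# Illustrative check only: enumerate integer x values allowed by both statements.
results = set()
for x in range(1, 75):          # statement (1): x < 75
    y = x + 10                  # statement (2): y - x = 10
    if y > 1.1 * x:             # question stem: y is greater than 110% of x
        results.add(y > 75)

print(results)                  # {False, True}: both outcomes possible, so (E)
```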
Intern (28 Feb 2011, 02:37):
If y is greater than 110 percent of x, is y greater than 75?
1) x>75
2) y - x = 10
Why is (2) not sufficient?
Manager (28 Feb 2011, 02:45):
We are given that y > 1.1x; we need to find whether y > 75.
From (1), x > 75, so y > 1.1(75) = 82.5 and hence y > 75. Sufficient.
From (2), y = x + 10, and since y > 1.1x we get x + 10 > 1.1x, i.e. 10 > 0.1x, so x < 100.
Now for x = 10, y = 20, which is less than 75, whereas for x = 80, y = 90, which is greater than 75, so insufficient. Answer A
Math Expert (28 Feb 2011, 02:59):
Merging similar topics.
Current Student (17 Sep 2013, 08:43):
If y is greater than 110% of x, is y greater than 75?
1.) x > 75
2.) y-x = 10
So I think my issue is with substitution, which may be a more general problem. Here is how I tackled this problem.
1.) y > 1.1x, so if x > 75 then y > 1.1(75) = 82.5 > 75, so (1) is sufficient.
2.) Rearrange to -x = 10 - y, multiply by -1 to get x = -10 + y, and substitute into y > 1.1x:
y > 1.1(-10 + y)
y > -11 + 1.1y
-0.1y > -11
y < 110
Does that mean y could be less than or greater than 75 as long as it is less than 110, so (2) is insufficient? Is that substitution even legal?
A?
Thanks!
Math Expert (17 Sep 2013, 08:52):
TheLostOne wrote: (question and working quoted above)
Merging similar topics. Please refer to the solutions above.
https://www.oreilly.com/library/view/python-for-data/9781449323592/ch04.html
# Chapter 4. NumPy Basics: Arrays and Vectorized Computation
NumPy, short for Numerical Python, is the fundamental package required for high performance scientific computing and data analysis. It is the foundation on which nearly all of the higher-level tools in this book are built. Here are some of the things it provides:
• ndarray, a fast and space-efficient multidimensional array providing vectorized arithmetic operations and sophisticated broadcasting capabilities
• Standard mathematical functions for fast operations on entire arrays of data without having to write loops
• Tools for reading / writing array data to disk and working with memory-mapped files
• Linear algebra, random number generation, and Fourier transform capabilities
• Tools for integrating code written in C, C++, and Fortran
The last bullet point is also one of the most important ones from an ecosystem point of view. Because NumPy provides an easy-to-use C API, it is very easy to pass data to external libraries written in a low-level language and also for external libraries to return data to Python as NumPy arrays. This feature has made Python a language of choice for wrapping legacy C/C++/Fortran codebases and giving them a dynamic and easy-to-use interface.
While NumPy by itself does not provide very much high-level data analytical functionality, having an understanding of NumPy arrays and array-oriented computing will help you use tools like pandas much more effectively. If you’re new to Python and just looking to get your hands dirty working with data using pandas, feel free to give this chapter a skim. For more on advanced NumPy features like broadcasting, see Chapter 12.
For most data analysis applications, the main areas of functionality I’ll focus on are:
• Fast vectorized array operations for data munging and cleaning, subsetting and filtering, transformation, and any other kinds of computations
• Common array algorithms like sorting, unique, and set operations
• Efficient descriptive statistics and aggregating/summarizing data
• Data alignment and relational data manipulations for merging and joining together heterogeneous data sets
• Expressing conditional logic as array expressions instead of loops with if-elif-else branches
• Group-wise data manipulations (aggregation, transformation, function application). Much more on this in Chapter 5
While NumPy provides the computational foundation for these operations, you will likely want to use pandas as your basis for most kinds of data analysis (especially for structured or tabular data) as it provides a rich, high-level interface making most common data tasks very concise and simple. pandas also provides some more domain-specific functionality like time series manipulation, which is not present in NumPy.
### Note
In this chapter and throughout the book, I use the standard NumPy convention of always using import numpy as np. You are, of course, welcome to put from numpy import * in your code to avoid having to write np., but I would caution you against making a habit of this.
# The NumPy ndarray: A Multidimensional Array Object
One of the key features of NumPy is its N-dimensional array object, or ndarray, which is a fast, flexible container for large data sets in Python. Arrays enable you to perform mathematical operations on whole blocks of data using similar syntax to the equivalent operations between scalar elements:
In [8]: data
Out[8]:
array([[ 0.9526, -0.246 , -0.8856],
[ 0.5639, 0.2379, 0.9104]])
In [9]: data * 10
Out[9]:
array([[ 9.5256, -2.4601, -8.8565],
       [ 5.6385,  2.3794,  9.104 ]])

In [10]: data + data
Out[10]:
array([[ 1.9051, -0.492 , -1.7713],
       [ 1.1277,  0.4759,  1.8208]])
An ndarray is a generic multidimensional container for homogeneous data; that is, all of the elements must be the same type. Every array has a shape, a tuple indicating the size of each dimension, and a dtype, an object describing the data type of the array:
In [11]: data.shape
Out[11]: (2, 3)
In [12]: data.dtype
Out[12]: dtype('float64')
This chapter will introduce you to the basics of using NumPy arrays, and should be sufficient for following along with the rest of the book. While it’s not necessary to have a deep understanding of NumPy for many data analytical applications, becoming proficient in array-oriented programming and thinking is a key step along the way to becoming a scientific Python guru.
### Note
Whenever you see “array”, “NumPy array”, or “ndarray” in the text, with few exceptions they all refer to the same thing: the ndarray object.
## Creating ndarrays
The easiest way to create an array is to use the array function. This accepts any sequence-like object (including other arrays) and produces a new NumPy array containing the passed data. For example, a list is a good candidate for conversion:
In [13]: data1 = [6, 7.5, 8, 0, 1]
In [14]: arr1 = np.array(data1)
In [15]: arr1
Out[15]: array([ 6. , 7.5, 8. , 0. , 1. ])
Nested sequences, like a list of equal-length lists, will be converted into a multidimensional array:
In [16]: data2 = [[1, 2, 3, 4], [5, 6, 7, 8]]
In [17]: arr2 = np.array(data2)
In [18]: arr2
Out[18]:
array([[1, 2, 3, 4],
[5, 6, 7, 8]])
In [19]: arr2.ndim
Out[19]: 2
In [20]: arr2.shape
Out[20]: (2, 4)
Unless explicitly specified (more on this later), np.array tries to infer a good data type for the array that it creates. The data type is stored in a special dtype object; for example, in the above two examples we have:
In [21]: arr1.dtype
Out[21]: dtype('float64')
In [22]: arr2.dtype
Out[22]: dtype('int64')
In addition to np.array, there are a number of other functions for creating new arrays. As examples, zeros and ones create arrays of 0’s or 1’s, respectively, with a given length or shape. empty creates an array without initializing its values to any particular value. To create a higher dimensional array with these methods, pass a tuple for the shape:
In [23]: np.zeros(10)
Out[23]: array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
In [24]: np.zeros((3, 6))
Out[24]:
array([[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.]])
In [25]: np.empty((2, 3, 2))
Out[25]:
array([[[ 4.94065646e-324, 4.94065646e-324],
[ 3.87491056e-297, 2.46845796e-130],
[ 4.94065646e-324, 4.94065646e-324]],
[[ 1.90723115e+083, 5.73293533e-053],
[ -2.33568637e+124, -6.70608105e-012],
[ 4.42786966e+160, 1.27100354e+025]]])
### Caution
It’s not safe to assume that np.empty will return an array of all zeros. In many cases, as previously shown, it will return uninitialized garbage values.
arange is an array-valued version of the built-in Python range function:
In [26]: np.arange(15)
Out[26]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
See Table 4-1 for a short list of standard array creation functions. Since NumPy is focused on numerical computing, the data type, if not specified, will in many cases be float64 (floating point).
Table 4-1. Array creation functions
| Function | Description |
| --- | --- |
| array | Convert input data (list, tuple, array, or other sequence type) to an ndarray either by inferring a dtype or explicitly specifying a dtype. Copies the input data by default. |
| asarray | Convert input to ndarray, but do not copy if the input is already an ndarray. |
| arange | Like the built-in range but returns an ndarray instead of a list. |
| ones, ones_like | Produce an array of all 1's with the given shape and dtype. ones_like takes another array and produces a ones array of the same shape and dtype. |
| zeros, zeros_like | Like ones and ones_like but producing arrays of 0's instead. |
| empty, empty_like | Create new arrays by allocating new memory, but do not populate with any values like ones and zeros. |
| eye, identity | Create a square N x N identity matrix (1's on the diagonal and 0's elsewhere). |
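As a quick illustration (not from the book's transcript), here is a hedged sketch exercising a few of the functions in Table 4-1:

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6]])
print(np.ones_like(arr))       # 2 x 3 array of 1's, matching arr's shape and dtype
print(np.eye(3))               # 3 x 3 identity matrix
print(np.asarray(arr) is arr)  # True: asarray does not copy an existing ndarray
print(np.arange(2, 10, 2))     # array([2, 4, 6, 8])
```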
## Data Types for ndarrays
The data type or dtype is a special object containing the information the ndarray needs to interpret a chunk of memory as a particular type of data:
In [27]: arr1 = np.array([1, 2, 3], dtype=np.float64)
In [28]: arr2 = np.array([1, 2, 3], dtype=np.int32)
In [29]: arr1.dtype
Out[29]: dtype('float64')

In [30]: arr2.dtype
Out[30]: dtype('int32')
Dtypes are part of what make NumPy so powerful and flexible. In most cases they map directly onto an underlying machine representation, which makes it easy to read and write binary streams of data to disk and also to connect to code written in a low-level language like C or Fortran. The numerical dtypes are named the same way: a type name, like float or int, followed by a number indicating the number of bits per element. A standard double-precision floating point value (what’s used under the hood in Python’s float object) takes up 8 bytes or 64 bits. Thus, this type is known in NumPy as float64. See Table 4-2 for a full listing of NumPy’s supported data types.
### Note
Don’t worry about memorizing the NumPy dtypes, especially if you’re a new user. It’s often only necessary to care about the general kind of data you’re dealing with, whether floating point, complex, integer, boolean, string, or general Python object. When you need more control over how data are stored in memory and on disk, especially large data sets, it is good to know that you have control over the storage type.
Table 4-2. NumPy data types
| Type | Type code | Description |
| --- | --- | --- |
| int8, uint8 | i1, u1 | Signed and unsigned 8-bit (1 byte) integer types |
| int16, uint16 | i2, u2 | Signed and unsigned 16-bit integer types |
| int32, uint32 | i4, u4 | Signed and unsigned 32-bit integer types |
| int64, uint64 | i8, u8 | Signed and unsigned 64-bit integer types |
| float16 | f2 | Half-precision floating point |
| float32 | f4 or f | Standard single-precision floating point. Compatible with C float |
| float64 | f8 or d | Standard double-precision floating point. Compatible with C double and Python float object |
| float128 | f16 or g | Extended-precision floating point |
| complex64, complex128, complex256 | c8, c16, c32 | Complex numbers represented by two 32, 64, or 128 floats, respectively |
| bool | ? | Boolean type storing True and False values |
| object | O | Python object type |
| string_ | S | Fixed-length string type (1 byte per character). For example, to create a string dtype with length 10, use 'S10'. |
| unicode_ | U | Fixed-length unicode type (number of bytes platform specific). Same specification semantics as string_ (e.g. 'U10'). |
You can explicitly convert or cast an array from one dtype to another using ndarray’s astype method:
In [31]: arr = np.array([1, 2, 3, 4, 5])
In [32]: arr.dtype
Out[32]: dtype('int64')
In [33]: float_arr = arr.astype(np.float64)
In [34]: float_arr.dtype
Out[34]: dtype('float64')
In this example, integers were cast to floating point. If I cast some floating point numbers to be of integer dtype, the decimal part will be truncated:
In [35]: arr = np.array([3.7, -1.2, -2.6, 0.5, 12.9, 10.1])
In [36]: arr
Out[36]: array([ 3.7, -1.2, -2.6, 0.5, 12.9, 10.1])
In [37]: arr.astype(np.int32)
Out[37]: array([ 3, -1, -2, 0, 12, 10], dtype=int32)
Should you have an array of strings representing numbers, you can use astype to convert them to numeric form:
In [38]: numeric_strings = np.array(['1.25', '-9.6', '42'], dtype=np.string_)
In [39]: numeric_strings.astype(float)
Out[39]: array([ 1.25, -9.6 , 42. ])
If casting were to fail for some reason (like a string that cannot be converted to float64), a ValueError will be raised. See that I was a bit lazy and wrote float instead of np.float64; NumPy is smart enough to alias the Python types to the equivalent dtypes.
You can also use another array’s dtype attribute:
In [40]: int_array = np.arange(10)
In [41]: calibers = np.array([.22, .270, .357, .380, .44, .50], dtype=np.float64)
In [42]: int_array.astype(calibers.dtype)
Out[42]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
There are shorthand type code strings you can also use to refer to a dtype:
In [43]: empty_uint32 = np.empty(8, dtype='u4')
In [44]: empty_uint32
Out[44]:
array([ 0, 0, 65904672, 0, 64856792, 0,
39438163, 0], dtype=uint32)
### Note
Calling astype always creates a new array (a copy of the data), even if the new dtype is the same as the old dtype.
### Caution
It’s worth keeping in mind that floating point numbers, such as those in float64 and float32 arrays, are only capable of approximating fractional quantities. In complex computations, you may accrue some floating point error, making comparisons only valid up to a certain number of decimal places.
## Operations between Arrays and Scalars
Arrays are important because they enable you to express batch operations on data without writing any for loops. This is usually called vectorization. Any arithmetic operations between equal-size arrays applies the operation elementwise:
In [45]: arr = np.array([[1., 2., 3.], [4., 5., 6.]])
In [46]: arr
Out[46]:
array([[ 1., 2., 3.],
[ 4., 5., 6.]])
In [47]: arr * arr
Out[47]:
array([[  1.,   4.,   9.],
       [ 16.,  25.,  36.]])

In [48]: arr - arr
Out[48]:
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
Arithmetic operations with scalars are as you would expect, propagating the value to each element:
In [49]: 1 / arr
Out[49]:
array([[ 1.    ,  0.5   ,  0.3333],
       [ 0.25  ,  0.2   ,  0.1667]])

In [50]: arr ** 0.5
Out[50]:
array([[ 1.    ,  1.4142,  1.7321],
       [ 2.    ,  2.2361,  2.4495]])
Operations between differently sized arrays is called broadcasting and will be discussed in more detail in Chapter 12. Having a deep understanding of broadcasting is not necessary for most of this book.
## Basic Indexing and Slicing
NumPy array indexing is a rich topic, as there are many ways you may want to select a subset of your data or individual elements. One-dimensional arrays are simple; on the surface they act similarly to Python lists:
In [51]: arr = np.arange(10)
In [52]: arr
Out[52]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [53]: arr[5]
Out[53]: 5
In [54]: arr[5:8]
Out[54]: array([5, 6, 7])
In [55]: arr[5:8] = 12
In [56]: arr
Out[56]: array([ 0, 1, 2, 3, 4, 12, 12, 12, 8, 9])
As you can see, if you assign a scalar value to a slice, as in arr[5:8] = 12, the value is propagated (or broadcasted henceforth) to the entire selection. An important first distinction from lists is that array slices are views on the original array. This means that the data is not copied, and any modifications to the view will be reflected in the source array:
In [57]: arr_slice = arr[5:8]
In [58]: arr_slice[1] = 12345
In [59]: arr
Out[59]: array([ 0, 1, 2, 3, 4, 12, 12345, 12, 8, 9])
In [60]: arr_slice[:] = 64
In [61]: arr
Out[61]: array([ 0, 1, 2, 3, 4, 64, 64, 64, 8, 9])
If you are new to NumPy, you might be surprised by this, especially if you have used other array programming languages which copy data more zealously. As NumPy has been designed with large data use cases in mind, you could imagine performance and memory problems if NumPy insisted on copying data left and right.
### Caution
If you want a copy of a slice of an ndarray instead of a view, you will need to explicitly copy the array; for example arr[5:8].copy().
With higher dimensional arrays, you have many more options. In a two-dimensional array, the elements at each index are no longer scalars but rather one-dimensional arrays:
In [62]: arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
In [63]: arr2d[2]
Out[63]: array([7, 8, 9])
Thus, individual elements can be accessed recursively. But that is a bit too much work, so you can pass a comma-separated list of indices to select individual elements. So these are equivalent:
In [64]: arr2d[0][2]
Out[64]: 3
In [65]: arr2d[0, 2]
Out[65]: 3
See Figure 4-1 for an illustration of indexing on a 2D array.
In multidimensional arrays, if you omit later indices, the returned object will be a lower-dimensional ndarray consisting of all the data along the higher dimensions. So in the 2 × 2 × 3 array arr3d
In [66]: arr3d = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
In [67]: arr3d
Out[67]:
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]]])
arr3d[0] is a 2 × 3 array:
In [68]: arr3d[0]
Out[68]:
array([[1, 2, 3],
[4, 5, 6]])
Both scalar values and arrays can be assigned to arr3d[0]:
In [69]: old_values = arr3d[0].copy()
In [70]: arr3d[0] = 42
In [71]: arr3d
Out[71]:
array([[[42, 42, 42],
[42, 42, 42]],
[[ 7, 8, 9],
[10, 11, 12]]])
In [72]: arr3d[0] = old_values
In [73]: arr3d
Out[73]:
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]]])
Similarly, arr3d[1, 0] gives you all of the values whose indices start with (1, 0), forming a 1-dimensional array:
In [74]: arr3d[1, 0]
Out[74]: array([7, 8, 9])
Note that in all of these cases where subsections of the array have been selected, the returned arrays are views.
### Indexing with slices
Like one-dimensional objects such as Python lists, ndarrays can be sliced using the familiar syntax:
In [75]: arr[1:6]
Out[75]: array([ 1, 2, 3, 4, 64])
Higher dimensional objects give you more options as you can slice one or more axes and also mix integers. Consider the 2D array above, arr2d. Slicing this array is a bit different:
In [76]: arr2d
Out[76]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

In [77]: arr2d[:2]
Out[77]:
array([[1, 2, 3],
       [4, 5, 6]])
As you can see, it has sliced along axis 0, the first axis. A slice, therefore, selects a range of elements along an axis. You can pass multiple slices just like you can pass multiple indexes:
In [78]: arr2d[:2, 1:]
Out[78]:
array([[2, 3],
[5, 6]])
When slicing like this, you always obtain array views of the same number of dimensions. By mixing integer indexes and slices, you get lower dimensional slices:
In [79]: arr2d[1, :2]
Out[79]: array([4, 5])

In [80]: arr2d[2, :1]
Out[80]: array([7])
See Figure 4-2 for an illustration. Note that a colon by itself means to take the entire axis, so you can slice only higher dimensional axes by doing:
In [81]: arr2d[:, :1]
Out[81]:
array([[1],
[4],
[7]])
Of course, assigning to a slice expression assigns to the whole selection:
In [82]: arr2d[:2, 1:] = 0
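As a small standalone reproduction (added for illustration) of what the assignment above does to arr2d:

```python
import numpy as np

arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
arr2d[:2, 1:] = 0     # zero out columns 1 onward of the first two rows
print(arr2d)
# [[1 0 0]
#  [4 0 0]
#  [7 8 9]]
```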
## Boolean Indexing
Let’s consider an example where we have some data in an array and an array of names with duplicates. I’m going to use here the randn function in numpy.random to generate some random normally distributed data:
In [83]: names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'])
In [84]: data = np.random.randn(7, 4)
In [85]: names
Out[85]:
array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'],
dtype='|S4')
In [86]: data
Out[86]:
array([[-0.048 , 0.5433, -0.2349, 1.2792],
[-0.268 , 0.5465, 0.0939, -2.0445],
[-0.047 , -2.026 , 0.7719, 0.3103],
[ 2.1452, 0.8799, -0.0523, 0.0672],
[-1.0023, -0.1698, 1.1503, 1.7289],
[ 0.1913, 0.4544, 0.4519, 0.5535],
[ 0.5994, 0.8174, -0.9297, -1.2564]])
Suppose each name corresponds to a row in the data array and we wanted to select all the rows with corresponding name 'Bob'. Like arithmetic operations, comparisons (such as ==) with arrays are also vectorized. Thus, comparing names with the string 'Bob' yields a boolean array:
In [87]: names == 'Bob'
Out[87]: array([ True, False, False, True, False, False, False], dtype=bool)
This boolean array can be passed when indexing the array:
In [88]: data[names == 'Bob']
Out[88]:
array([[-0.048 , 0.5433, -0.2349, 1.2792],
[ 2.1452, 0.8799, -0.0523, 0.0672]])
The boolean array must be of the same length as the axis it’s indexing. You can even mix and match boolean arrays with slices or integers (or sequences of integers, more on this later):
In [89]: data[names == 'Bob', 2:]
Out[89]:
array([[-0.2349, 1.2792],
[-0.0523, 0.0672]])
In [90]: data[names == 'Bob', 3]
Out[90]: array([ 1.2792, 0.0672])
To select everything but 'Bob', you can either use != or negate the condition using -:
In [91]: names != 'Bob'
Out[91]: array([False, True, True, False, True, True, True], dtype=bool)
In [92]: data[-(names == 'Bob')]
Out[92]:
array([[-0.268 , 0.5465, 0.0939, -2.0445],
[-0.047 , -2.026 , 0.7719, 0.3103],
[-1.0023, -0.1698, 1.1503, 1.7289],
[ 0.1913, 0.4544, 0.4519, 0.5535],
[ 0.5994, 0.8174, -0.9297, -1.2564]])
To select two of the three names, combining multiple boolean conditions, use boolean arithmetic operators like & (and) and | (or):
In [93]: mask = (names == 'Bob') | (names == 'Will')

In [94]: mask
Out[94]: array([ True, False,  True,  True,  True, False, False], dtype=bool)

In [95]: data[mask]
Out[95]:
array([[-0.048 , 0.5433, -0.2349, 1.2792],
[-0.047 , -2.026 , 0.7719, 0.3103],
[ 2.1452, 0.8799, -0.0523, 0.0672],
[-1.0023, -0.1698, 1.1503, 1.7289]])
Selecting data from an array by boolean indexing always creates a copy of the data, even if the returned array is unchanged.
### Caution
The Python keywords and and or do not work with boolean arrays.
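A brief sketch (added for illustration) of why this matters in practice: combine boolean arrays with & and |, since and/or try to reduce the whole array to a single truth value and fail:

```python
import numpy as np

names = np.array(['Bob', 'Joe', 'Will'])
mask = (names == 'Bob') | (names == 'Will')   # element-wise "or"
print(mask)                                   # [ True False  True]

# (names == 'Bob') or (names == 'Will') would instead raise
# ValueError: the truth value of an array with more than one element is ambiguous.
```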
Setting values with boolean arrays works in a common-sense way. To set all of the negative values in data to 0 we need only do:
In [96]: data[data < 0] = 0
In [97]: data
Out[97]:
array([[ 0. , 0.5433, 0. , 1.2792],
[ 0. , 0.5465, 0.0939, 0. ],
[ 0. , 0. , 0.7719, 0.3103],
[ 2.1452, 0.8799, 0. , 0.0672],
[ 0. , 0. , 1.1503, 1.7289],
[ 0.1913, 0.4544, 0.4519, 0.5535],
[ 0.5994, 0.8174, 0. , 0. ]])
Setting whole rows or columns using a 1D boolean array is also easy:
In [98]: data[names != 'Joe'] = 7
In [99]: data
Out[99]:
array([[ 7. , 7. , 7. , 7. ],
[ 0. , 0.5465, 0.0939, 0. ],
[ 7. , 7. , 7. , 7. ],
[ 7. , 7. , 7. , 7. ],
[ 7. , 7. , 7. , 7. ],
[ 0.1913, 0.4544, 0.4519, 0.5535],
[ 0.5994, 0.8174, 0. , 0. ]])
## Fancy Indexing
Fancy indexing is a term adopted by NumPy to describe indexing using integer arrays. Suppose we had an 8 × 4 array:
In [100]: arr = np.empty((8, 4))
In [101]: for i in range(8):
   .....:     arr[i] = i
In [102]: arr
Out[102]:
array([[ 0., 0., 0., 0.],
[ 1., 1., 1., 1.],
[ 2., 2., 2., 2.],
[ 3., 3., 3., 3.],
[ 4., 4., 4., 4.],
[ 5., 5., 5., 5.],
[ 6., 6., 6., 6.],
[ 7., 7., 7., 7.]])
To select out a subset of the rows in a particular order, you can simply pass a list or ndarray of integers specifying the desired order:
In [103]: arr[[4, 3, 0, 6]]
Out[103]:
array([[ 4., 4., 4., 4.],
[ 3., 3., 3., 3.],
[ 0., 0., 0., 0.],
[ 6., 6., 6., 6.]])
Hopefully this code did what you expected! Using negative indices selects rows from the end:
In [104]: arr[[-3, -5, -7]]
Out[104]:
array([[ 5., 5., 5., 5.],
[ 3., 3., 3., 3.],
[ 1., 1., 1., 1.]])
Passing multiple index arrays does something slightly different; it selects a 1D array of elements corresponding to each tuple of indices:
# more on reshape in Chapter 12
In [105]: arr = np.arange(32).reshape((8, 4))
In [106]: arr
Out[106]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23],
[24, 25, 26, 27],
[28, 29, 30, 31]])
In [107]: arr[[1, 5, 7, 2], [0, 3, 1, 2]]
Out[107]: array([ 4, 23, 29, 10])
Take a moment to understand what just happened: the elements (1, 0), (5, 3), (7, 1), and (2, 2) were selected. The behavior of fancy indexing in this case is a bit different from what some users might have expected (myself included), which is the rectangular region formed by selecting a subset of the matrix’s rows and columns. Here is one way to get that:
In [108]: arr[[1, 5, 7, 2]][:, [0, 3, 1, 2]]
Out[108]:
array([[ 4, 7, 5, 6],
[20, 23, 21, 22],
[28, 31, 29, 30],
[ 8, 11, 9, 10]])
Another way is to use the np.ix_ function, which converts two 1D integer arrays to an indexer that selects the square region:
In [109]: arr[np.ix_([1, 5, 7, 2], [0, 3, 1, 2])]
Out[109]:
array([[ 4, 7, 5, 6],
[20, 23, 21, 22],
[28, 31, 29, 30],
[ 8, 11, 9, 10]])
Keep in mind that fancy indexing, unlike slicing, always copies the data into a new array.
## Transposing Arrays and Swapping Axes
Transposing is a special form of reshaping which similarly returns a view on the underlying data without copying anything. Arrays have the transpose method and also the special T attribute:
In [110]: arr = np.arange(15).reshape((3, 5))
In [111]: arr
Out[111]:
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])

In [112]: arr.T
Out[112]:
array([[ 0,  5, 10],
       [ 1,  6, 11],
       [ 2,  7, 12],
       [ 3,  8, 13],
       [ 4,  9, 14]])
When doing matrix computations, you will do this very often, for example when computing the inner matrix product X^T X using np.dot:
In [113]: arr = np.random.randn(6, 3)
In [114]: np.dot(arr.T, arr)
Out[114]:
array([[ 2.584 , 1.8753, 0.8888],
[ 1.8753, 6.6636, 0.3884],
[ 0.8888, 0.3884, 3.9781]])
For higher dimensional arrays, transpose will accept a tuple of axis numbers to permute the axes (for extra mind bending):
In [115]: arr = np.arange(16).reshape((2, 2, 4))
In [116]: arr
Out[116]:
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7]],
[[ 8, 9, 10, 11],
[12, 13, 14, 15]]])
In [117]: arr.transpose((1, 0, 2))
Out[117]:
array([[[ 0, 1, 2, 3],
[ 8, 9, 10, 11]],
[[ 4, 5, 6, 7],
[12, 13, 14, 15]]])
Simple transposing with .T is just a special case of swapping axes. ndarray has the method swapaxes which takes a pair of axis numbers:
In [118]: arr
Out[118]:
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7]],

       [[ 8,  9, 10, 11],
        [12, 13, 14, 15]]])

In [119]: arr.swapaxes(1, 2)
Out[119]:
array([[[ 0,  4],
        [ 1,  5],
        [ 2,  6],
        [ 3,  7]],

       [[ 8, 12],
        [ 9, 13],
        [10, 14],
        [11, 15]]])
swapaxes similarly returns a view on the data without making a copy.
# Universal Functions: Fast Element-wise Array Functions
A universal function, or ufunc, is a function that performs elementwise operations on data in ndarrays. You can think of them as fast vectorized wrappers for simple functions that take one or more scalar values and produce one or more scalar results.
Many ufuncs are simple elementwise transformations, like sqrt or exp:
In [120]: arr = np.arange(10)
In [121]: np.sqrt(arr)
Out[121]:
array([ 0. , 1. , 1.4142, 1.7321, 2. , 2.2361, 2.4495,
2.6458, 2.8284, 3. ])
In [122]: np.exp(arr)
Out[122]:
array([ 1. , 2.7183, 7.3891, 20.0855, 54.5982,
148.4132, 403.4288, 1096.6332, 2980.958 , 8103.0839])
These are referred to as unary ufuncs. Others, such as add or maximum, take 2 arrays (thus, binary ufuncs) and return a single array as the result:
In [123]: x = np.random.randn(8)
In [124]: y = np.random.randn(8)
In [125]: x
Out[125]:
array([ 0.0749, 0.0974, 0.2002, -0.2551, 0.4655, 0.9222, 0.446 ,
-0.9337])
In [126]: y
Out[126]:
array([ 0.267 , -1.1131, -0.3361, 0.6117, -1.2323, 0.4788, 0.4315,
-0.7147])
In [127]: np.maximum(x, y) # element-wise maximum
Out[127]:
array([ 0.267 , 0.0974, 0.2002, 0.6117, 0.4655, 0.9222, 0.446 ,
-0.7147])
While not common, a ufunc can return multiple arrays. modf is one example, a vectorized version of the built-in Python divmod: it returns the fractional and integral parts of a floating point array:
In [128]: arr = randn(7) * 5
In [129]: np.modf(arr)
Out[129]:
(array([-0.6808, 0.0636, -0.386 , 0.1393, -0.8806, 0.9363, -0.883 ]),
array([-2., 4., -3., 5., -3., 3., -6.]))
See Table 4-3 and Table 4-4 for a listing of available ufuncs.
Table 4-3. Unary ufuncs
| Function | Description |
| --- | --- |
| abs, fabs | Compute the absolute value element-wise for integer, floating point, or complex values. Use fabs as a faster alternative for non-complex-valued data |
| sqrt | Compute the square root of each element. Equivalent to arr ** 0.5 |
| square | Compute the square of each element. Equivalent to arr ** 2 |
| exp | Compute the exponent e^x of each element |
| log, log10, log2, log1p | Natural logarithm (base e), log base 10, log base 2, and log(1 + x), respectively |
| sign | Compute the sign of each element: 1 (positive), 0 (zero), or -1 (negative) |
| ceil | Compute the ceiling of each element, i.e. the smallest integer greater than or equal to each element |
| floor | Compute the floor of each element, i.e. the largest integer less than or equal to each element |
| rint | Round elements to the nearest integer, preserving the dtype |
| modf | Return fractional and integral parts of array as separate arrays |
| isnan | Return boolean array indicating whether each value is NaN (Not a Number) |
| isfinite, isinf | Return boolean array indicating whether each element is finite (non-inf, non-NaN) or infinite, respectively |
| cos, cosh, sin, sinh, tan, tanh | Regular and hyperbolic trigonometric functions |
| arccos, arccosh, arcsin, arcsinh, arctan, arctanh | Inverse trigonometric functions |
| logical_not | Compute truth value of not x element-wise. Equivalent to -arr. |
Table 4-4. Binary universal functions
| Function | Description |
| --- | --- |
| add | Add corresponding elements in arrays |
| subtract | Subtract elements in second array from first array |
| multiply | Multiply array elements |
| divide, floor_divide | Divide or floor divide (truncating the remainder) |
| power | Raise elements in first array to powers indicated in second array |
| maximum, fmax | Element-wise maximum. fmax ignores NaN |
| minimum, fmin | Element-wise minimum. fmin ignores NaN |
| mod | Element-wise modulus (remainder of division) |
| copysign | Copy sign of values in second argument to values in first argument |
| greater, greater_equal, less, less_equal, equal, not_equal | Perform element-wise comparison, yielding boolean array. Equivalent to infix operators >, >=, <, <=, ==, != |
| logical_and, logical_or, logical_xor | Compute element-wise truth value of logical operation. Equivalent to infix operators &, \|, ^ |
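To make Tables 4-3 and 4-4 concrete, here is a short hedged sketch (not from the book's transcript) of a few of the listed ufuncs:

```python
import numpy as np

arr = np.array([-2.7, -0.5, 0.0, 1.4, 3.9])
print(np.sign(arr))                           # signs: -1, -1, 0, 1, 1
print(np.rint(arr))                           # rounds to the nearest integer, keeping the float dtype
print(np.floor_divide([7, 8, 9], 2))          # [3 4 4]
print(np.fmax([1.0, np.nan], [np.nan, 2.0]))  # [1. 2.]: fmax ignores NaN
```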
# Data Processing Using Arrays
Using NumPy arrays enables you to express many kinds of data processing tasks as concise array expressions that might otherwise require writing loops. This practice of replacing explicit loops with array expressions is commonly referred to as vectorization. In general, vectorized array operations will often be one or two (or more) orders of magnitude faster than their pure Python equivalents, with the biggest impact in any kind of numerical computations. Later, in Chapter 12, I will explain broadcasting, a powerful method for vectorizing computations.
As a simple example, suppose we wished to evaluate the function sqrt(x^2 + y^2) across a regular grid of values. The np.meshgrid function takes two 1D arrays and produces two 2D matrices corresponding to all pairs of (x, y) in the two arrays:
In [130]: points = np.arange(-5, 5, 0.01) # 1000 equally spaced points
In [131]: xs, ys = np.meshgrid(points, points)
In [132]: ys
Out[132]:
array([[-5. , -5. , -5. , ..., -5. , -5. , -5. ],
[-4.99, -4.99, -4.99, ..., -4.99, -4.99, -4.99],
[-4.98, -4.98, -4.98, ..., -4.98, -4.98, -4.98],
...,
[ 4.97, 4.97, 4.97, ..., 4.97, 4.97, 4.97],
[ 4.98, 4.98, 4.98, ..., 4.98, 4.98, 4.98],
[ 4.99, 4.99, 4.99, ..., 4.99, 4.99, 4.99]])
Now, evaluating the function is a simple matter of writing the same expression you would write with two points:
In [134]: import matplotlib.pyplot as plt
In [135]: z = np.sqrt(xs ** 2 + ys ** 2)
In [136]: z
Out[136]:
array([[ 7.0711, 7.064 , 7.0569, ..., 7.0499, 7.0569, 7.064 ],
[ 7.064 , 7.0569, 7.0499, ..., 7.0428, 7.0499, 7.0569],
[ 7.0569, 7.0499, 7.0428, ..., 7.0357, 7.0428, 7.0499],
...,
[ 7.0499, 7.0428, 7.0357, ..., 7.0286, 7.0357, 7.0428],
[ 7.0569, 7.0499, 7.0428, ..., 7.0357, 7.0428, 7.0499],
[ 7.064 , 7.0569, 7.0499, ..., 7.0428, 7.0499, 7.0569]])
In [137]: plt.imshow(z, cmap=plt.cm.gray); plt.colorbar()
Out[137]: <matplotlib.colorbar.Colorbar instance at 0x4e46d40>
In [138]: plt.title("Image plot of $\sqrt{x^2 + y^2}$ for a grid of values")
Out[138]: <matplotlib.text.Text at 0x4565790>
See Figure 4-3. Here I used the matplotlib function imshow to create an image plot from a 2D array of function values.
## Expressing Conditional Logic as Array Operations
The numpy.where function is a vectorized version of the ternary expression x if condition else y. Suppose we had a boolean array and two arrays of values:
In [140]: xarr = np.array([1.1, 1.2, 1.3, 1.4, 1.5])
In [141]: yarr = np.array([2.1, 2.2, 2.3, 2.4, 2.5])
In [142]: cond = np.array([True, False, True, True, False])
Suppose we wanted to take a value from xarr whenever the corresponding value in cond is True otherwise take the value from yarr. A list comprehension doing this might look like:
In [143]: result = [(x if c else y)
.....: for x, y, c in zip(xarr, yarr, cond)]
In [144]: result
Out[144]: [1.1000000000000001, 2.2000000000000002, 1.3, 1.3999999999999999, 2.5]
This has multiple problems. First, it will not be very fast for large arrays (because all the work is being done in pure Python). Secondly, it will not work with multidimensional arrays. With np.where you can write this very concisely:
In [145]: result = np.where(cond, xarr, yarr)
In [146]: result
Out[146]: array([ 1.1, 2.2, 1.3, 1.4, 2.5])
The second and third arguments to np.where don’t need to be arrays; one or both of them can be scalars. A typical use of where in data analysis is to produce a new array of values based on another array. Suppose you had a matrix of randomly generated data and you wanted to replace all positive values with 2 and all negative values with -2. This is very easy to do with np.where:
In [147]: arr = randn(4, 4)
In [148]: arr
Out[148]:
array([[ 0.6372, 2.2043, 1.7904, 0.0752],
[-1.5926, -1.1536, 0.4413, 0.3483],
[-0.1798, 0.3299, 0.7827, -0.7585],
[ 0.5857, 0.1619, 1.3583, -1.3865]])
In [149]: np.where(arr > 0, 2, -2)
Out[149]:
array([[ 2, 2, 2, 2],
[-2, -2, 2, 2],
[-2, 2, 2, -2],
[ 2, 2, 2, -2]])
In [150]: np.where(arr > 0, 2, arr) # set only positive values to 2
Out[150]:
array([[ 2. , 2. , 2. , 2. ],
[-1.5926, -1.1536, 2. , 2. ],
[-0.1798, 2. , 2. , -0.7585],
[ 2. , 2. , 2. , -1.3865]])
The arrays passed to where can be more than just equal-sized arrays or scalars.
With some cleverness you can use where to express more complicated logic; consider this example where I have two boolean arrays, cond1 and cond2, and wish to assign a different value for each of the 4 possible pairs of boolean values:
result = []
for i in range(n):
    if cond1[i] and cond2[i]:
        result.append(0)
    elif cond1[i]:
        result.append(1)
    elif cond2[i]:
        result.append(2)
    else:
        result.append(3)
While perhaps not immediately obvious, this for loop can be converted into a nested where expression:
np.where(cond1 & cond2, 0,
         np.where(cond1, 1,
                  np.where(cond2, 2, 3)))
In this particular example, we can also take advantage of the fact that boolean values are treated as 0 or 1 in calculations, so this could alternatively be expressed (though a bit more cryptically) as an arithmetic operation:
result = 1 * (cond1 & -cond2) + 2 * (cond2 & -cond1) + 3 * -(cond1 | cond2)
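A self-contained check (added for illustration, not from the book) that the nested where expression reproduces the loop above; cond1 and cond2 here are just random boolean arrays:

```python
import numpy as np

n = 1000
cond1 = np.random.randn(n) > 0
cond2 = np.random.randn(n) > 0

# loop version, as in the text
loop_result = []
for i in range(n):
    if cond1[i] and cond2[i]:
        loop_result.append(0)
    elif cond1[i]:
        loop_result.append(1)
    elif cond2[i]:
        loop_result.append(2)
    else:
        loop_result.append(3)

vectorized = np.where(cond1 & cond2, 0,
                      np.where(cond1, 1,
                               np.where(cond2, 2, 3)))
print((vectorized == np.array(loop_result)).all())   # True
```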
## Mathematical and Statistical Methods
A set of mathematical functions which compute statistics about an entire array or about the data along an axis are accessible as array methods. Aggregations (often called reductions) like sum, mean, and standard deviation std can either be used by calling the array instance method or using the top level NumPy function:
In [151]: arr = np.random.randn(5, 4) # normally-distributed data
In [152]: arr.mean()
Out[152]: 0.062814911084854597
In [153]: np.mean(arr)
Out[153]: 0.062814911084854597
In [154]: arr.sum()
Out[154]: 1.2562982216970919
Functions like mean and sum take an optional axis argument which computes the statistic over the given axis, resulting in an array with one fewer dimension:
In [155]: arr.mean(axis=1)
Out[155]: array([-1.2833, 0.2844, 0.6574, 0.6743, -0.0187])
In [156]: arr.sum(0)
Out[156]: array([-3.1003, -1.6189, 1.4044, 4.5712])
Other methods like cumsum and cumprod do not aggregate, instead producing an array of the intermediate results:
In [157]: arr = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
In [158]: arr.cumsum(0)
Out[158]:
array([[ 0,  1,  2],
       [ 3,  5,  7],
       [ 9, 12, 15]])

In [159]: arr.cumprod(1)
Out[159]:
array([[  0,   0,   0],
       [  3,  12,  60],
       [  6,  42, 336]])
See Table 4-5 for a full listing. We’ll see many examples of these methods in action in later chapters.
Table 4-5. Basic array statistical methods
| Method | Description |
| --- | --- |
| sum | Sum of all the elements in the array or along an axis. Zero-length arrays have sum 0. |
| mean | Arithmetic mean. Zero-length arrays have NaN mean. |
| std, var | Standard deviation and variance, respectively, with optional degrees of freedom adjustment (default denominator n). |
| min, max | Minimum and maximum. |
| argmin, argmax | Indices of minimum and maximum elements, respectively. |
| cumsum | Cumulative sum of elements starting from 0 |
| cumprod | Cumulative product of elements starting from 1 |
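A brief hedged sketch (not from the book's transcript) of a few methods from Table 4-5:

```python
import numpy as np

arr = np.array([[3.0, 1.0, 4.0], [1.0, 5.0, 9.0]])
print(arr.std())         # standard deviation over all elements
print(arr.var(axis=0))   # variance down each column
print(arr.argmax())      # 5: index of the largest value in the flattened array
print(arr.min(axis=1))   # [1. 1.]: minimum of each row
```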
## Methods for Boolean Arrays
Boolean values are coerced to 1 (True) and 0 (False) in the above methods. Thus, sum is often used as a means of counting True values in a boolean array:
In [160]: arr = randn(100)
In [161]: (arr > 0).sum() # Number of positive values
Out[161]: 44
There are two additional methods, any and all, useful especially for boolean arrays. any tests whether one or more values in an array is True, while all checks if every value is True:
In [162]: bools = np.array([False, False, True, False])
In [163]: bools.any()
Out[163]: True
In [164]: bools.all()
Out[164]: False
These methods also work with non-boolean arrays, where non-zero elements evaluate to True.
## Sorting
Like Python’s built-in list type, NumPy arrays can be sorted in-place using the sort method:
In [165]: arr = randn(8)
In [166]: arr
Out[166]:
array([ 0.6903, 0.4678, 0.0968, -0.1349, 0.9879, 0.0185, -1.3147,
-0.5425])
In [167]: arr.sort()
In [168]: arr
Out[168]:
array([-1.3147, -0.5425, -0.1349, 0.0185, 0.0968, 0.4678, 0.6903,
0.9879])
Multidimensional arrays can have each 1D section of values sorted in-place along an axis by passing the axis number to sort:
In [169]: arr = randn(5, 3)
In [170]: arr
Out[170]:
array([[-0.7139, -1.6331, -0.4959],
[ 0.8236, -1.3132, -0.1935],
[-1.6748, 3.0336, -0.863 ],
[-0.3161, 0.5362, -2.468 ],
[ 0.9058, 1.1184, -1.0516]])
In [171]: arr.sort(1)
In [172]: arr
Out[172]:
array([[-1.6331, -0.7139, -0.4959],
[-1.3132, -0.1935, 0.8236],
[-1.6748, -0.863 , 3.0336],
[-2.468 , -0.3161, 0.5362],
[-1.0516, 0.9058, 1.1184]])
The top level method np.sort returns a sorted copy of an array instead of modifying the array in place. A quick-and-dirty way to compute the quantiles of an array is to sort it and select the value at a particular rank:
In [173]: large_arr = randn(1000)
In [174]: large_arr.sort()
In [175]: large_arr[int(0.05 * len(large_arr))] # 5% quantile
Out[175]: -1.5791023260896004
For more details on using NumPy’s sorting methods, and more advanced techniques like indirect sorts, see Chapter 12. Several other kinds of data manipulations related to sorting (for example, sorting a table of data by one or more columns) are also to be found in pandas.
## Unique and Other Set Logic
NumPy has some basic set operations for one-dimensional ndarrays. Probably the most commonly used one is np.unique, which returns the sorted unique values in an array:
In [176]: names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'])
In [177]: np.unique(names)
Out[177]:
array(['Bob', 'Joe', 'Will'],
dtype='|S4')
In [178]: ints = np.array([3, 3, 3, 2, 2, 1, 1, 4, 4])
In [179]: np.unique(ints)
Out[179]: array([1, 2, 3, 4])
Contrast np.unique with the pure Python alternative:
In [180]: sorted(set(names))
Out[180]: ['Bob', 'Joe', 'Will']
Another function, np.in1d, tests membership of the values in one array in another, returning a boolean array:
In [181]: values = np.array([6, 0, 0, 3, 2, 5, 6])
In [182]: np.in1d(values, [2, 3, 6])
Out[182]: array([ True, False, False, True, True, False, True], dtype=bool)
See Table 4-6 for a listing of set functions in NumPy.
Table 4-6. Array set operations
| Method | Description |
| --- | --- |
| unique(x) | Compute the sorted, unique elements in x |
| intersect1d(x, y) | Compute the sorted, common elements in x and y |
| union1d(x, y) | Compute the sorted union of elements |
| in1d(x, y) | Compute a boolean array indicating whether each element of x is contained in y |
| setdiff1d(x, y) | Set difference, elements in x that are not in y |
| setxor1d(x, y) | Set symmetric differences; elements that are in either of the arrays, but not both |
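A short sketch (added for illustration) of the remaining set functions in Table 4-6:

```python
import numpy as np

x = np.array([3, 1, 2, 3])
y = np.array([2, 4])
print(np.intersect1d(x, y))   # [2]
print(np.union1d(x, y))       # [1 2 3 4]
print(np.setdiff1d(x, y))     # [1 3]
print(np.setxor1d(x, y))      # [1 3 4]
```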
# File Input and Output with Arrays
NumPy is able to save and load data to and from disk either in text or binary format. In later chapters you will learn about tools in pandas for reading tabular data into memory.
## Storing Arrays on Disk in Binary Format
np.save and np.load are the two workhorse functions for efficiently saving and loading array data on disk. Arrays are saved by default in an uncompressed raw binary format with file extension .npy.
In [183]: arr = np.arange(10)
In [184]: np.save('some_array', arr)
If the file path does not already end in .npy, the extension will be appended. The array on disk can then be loaded using np.load:
In [185]: np.load('some_array.npy')
Out[185]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
You can save multiple arrays in a zip archive using np.savez and passing the arrays as keyword arguments:
In [186]: np.savez('array_archive.npz', a=arr, b=arr)
When loading an .npz file, you get back a dict-like object which loads the individual arrays lazily:
In [187]: arch = np.load('array_archive.npz')
In [188]: arch['b']
Out[188]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Loading text from files is a fairly standard task. The landscape of file reading and writing functions in Python can be a bit confusing for a newcomer, so I will focus mainly on the read_csv and read_table functions in pandas. It will at times be useful to load data into vanilla NumPy arrays using np.loadtxt or the more specialized np.genfromtxt.
These functions have many options allowing you to specify different delimiters, converter functions for certain columns, skipping rows, and other things. Take a simple case of a comma-separated file (CSV) like this:
In [191]: !cat array_ex.txt
0.580052,0.186730,1.040717,1.134411
0.194163,-0.636917,-0.938659,0.124094
-0.126410,0.268607,-0.695724,0.047428
-1.484413,0.004176,-0.744203,0.005487
2.302869,0.200131,1.670238,-1.881090
-0.193230,1.047233,0.482803,0.960334
This can be loaded into a 2D array like so:
In [192]: arr = np.loadtxt('array_ex.txt', delimiter=',')
In [193]: arr
Out[193]:
array([[ 0.5801, 0.1867, 1.0407, 1.1344],
[ 0.1942, -0.6369, -0.9387, 0.1241],
[-0.1264, 0.2686, -0.6957, 0.0474],
[-1.4844, 0.0042, -0.7442, 0.0055],
[ 2.3029, 0.2001, 1.6702, -1.8811],
[-0.1932, 1.0472, 0.4828, 0.9603]])
np.savetxt performs the inverse operation: writing an array to a delimited text file. genfromtxt is similar to loadtxt but is geared for structured arrays and missing data handling; see Chapter 12 for more on structured arrays.
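For example, a minimal round-trip sketch (the file name here is illustrative):

```python
import numpy as np

arr = np.arange(6).reshape((2, 3)) * 0.5
np.savetxt('array_out.txt', arr, delimiter=',', fmt='%.4f')   # write as CSV
print(np.loadtxt('array_out.txt', delimiter=','))             # read it back
```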
### Tip
For more on file reading and writing, especially tabular or spreadsheet-like data, see the later chapters involving pandas and DataFrame objects.
# Linear Algebra
Linear algebra, like matrix multiplication, decompositions, determinants, and other square matrix math, is an important part of any array library. Unlike some languages like MATLAB, multiplying two two-dimensional arrays with * is an element-wise product instead of a matrix dot product. As such, there is a function dot, both an array method, and a function in the numpy namespace, for matrix multiplication:
In [194]: x = np.array([[1., 2., 3.], [4., 5., 6.]])
In [195]: y = np.array([[6., 23.], [-1, 7], [8, 9]])
In [196]: x
Out[196]:
array([[ 1.,  2.,  3.],
       [ 4.,  5.,  6.]])

In [197]: y
Out[197]:
array([[  6.,  23.],
       [ -1.,   7.],
       [  8.,   9.]])
In [198]: x.dot(y) # equivalently np.dot(x, y)
Out[198]:
array([[ 28., 64.],
[ 67., 181.]])
A matrix product between a 2D array and a suitably sized 1D array results in a 1D array:
In [199]: np.dot(x, np.ones(3))
Out[199]: array([ 6., 15.])
numpy.linalg has a standard set of matrix decompositions and things like inverse and determinant. These are implemented under the hood using the same industry-standard Fortran libraries used in other languages like MATLAB and R, such as BLAS, LAPACK, or possibly (depending on your NumPy build) the Intel MKL:
In [201]: from numpy.linalg import inv, qr
In [202]: X = randn(5, 5)
In [203]: mat = X.T.dot(X)
In [204]: inv(mat)
Out[204]:
array([[ 3.0361, -0.1808, -0.6878, -2.8285, -1.1911],
[-0.1808, 0.5035, 0.1215, 0.6702, 0.0956],
[-0.6878, 0.1215, 0.2904, 0.8081, 0.3049],
[-2.8285, 0.6702, 0.8081, 3.4152, 1.1557],
[-1.1911, 0.0956, 0.3049, 1.1557, 0.6051]])
In [205]: mat.dot(inv(mat))
Out[205]:
array([[ 1., 0., 0., 0., -0.],
[ 0., 1., -0., 0., 0.],
[ 0., -0., 1., 0., 0.],
[ 0., -0., -0., 1., -0.],
[ 0., 0., 0., 0., 1.]])
In [206]: q, r = qr(mat)
In [207]: r
Out[207]:
array([[ -6.9271, 7.389 , 6.1227, -7.1163, -4.9215],
[ 0. , -3.9735, -0.8671, 2.9747, -5.7402],
[ 0. , 0. , -10.2681, 1.8909, 1.6079],
[ 0. , 0. , 0. , -1.2996, 3.3577],
[ 0. , 0. , 0. , 0. , 0.5571]])
See Table 4-7 for a list of some of the most commonly-used linear algebra functions.
### Note
The scientific Python community had long hoped for a matrix multiplication infix operator that would provide a syntactically nicer alternative to np.dot; Python 3.5 eventually added the @ operator for exactly this purpose.
Table 4-7. Commonly-used numpy.linalg functions
| Function | Description |
| --- | --- |
| diag | Return the diagonal (or off-diagonal) elements of a square matrix as a 1D array, or convert a 1D array into a square matrix with zeros on the off-diagonal |
| dot | Matrix multiplication |
| trace | Compute the sum of the diagonal elements |
| det | Compute the matrix determinant |
| eig | Compute the eigenvalues and eigenvectors of a square matrix |
| inv | Compute the inverse of a square matrix |
| pinv | Compute the Moore-Penrose pseudo-inverse of a matrix |
| qr | Compute the QR decomposition |
| svd | Compute the singular value decomposition (SVD) |
| solve | Solve the linear system Ax = b for x, where A is a square matrix |
| lstsq | Compute the least-squares solution to Ax = b |
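A short hedged sketch (not from the book's transcript) of a few functions from Table 4-7:

```python
import numpy as np
from numpy.linalg import det, solve

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
print(det(A))         # determinant: 3*2 - 1*1 = 5 (up to floating point)
print(np.trace(A))    # 5.0: sum of the diagonal
print(solve(A, b))    # [2. 3.]: the x solving A x = b
```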
# Random Number Generation
The numpy.random module supplements the built-in Python random with functions for efficiently generating whole arrays of sample values from many kinds of probability distributions. For example, you can get a 4 by 4 array of samples from the standard normal distribution using normal:
In [208]: samples = np.random.normal(size=(4, 4))
In [209]: samples
Out[209]:
array([[ 0.1241, 0.3026, 0.5238, 0.0009],
[ 1.3438, -0.7135, -0.8312, -2.3702],
[-1.8608, -0.8608, 0.5601, -1.2659],
[ 0.1198, -1.0635, 0.3329, -2.3594]])
Python’s built-in random module, by contrast, only samples one value at a time. As you can see from this benchmark, numpy.random is well over an order of magnitude faster for generating very large samples:
In [210]: from random import normalvariate
In [211]: N = 1000000
In [212]: %timeit samples = [normalvariate(0, 1) for _ in xrange(N)]
1 loops, best of 3: 1.33 s per loop
In [213]: %timeit np.random.normal(size=N)
10 loops, best of 3: 57.7 ms per loop
See Table 4-8 for a partial list of functions available in numpy.random. I’ll give some examples of leveraging these functions’ ability to generate large arrays of samples all at once in the next section.
Table 4-8. Partial list of numpy.random functions
| Function | Description |
| --- | --- |
| seed | Seed the random number generator |
| permutation | Return a random permutation of a sequence, or return a permuted range |
| shuffle | Randomly permute a sequence in place |
| rand | Draw samples from a uniform distribution |
| randint | Draw random integers from a given low-to-high range |
| randn | Draw samples from a normal distribution with mean 0 and standard deviation 1 (MATLAB-like interface) |
| binomial | Draw samples from a binomial distribution |
| normal | Draw samples from a normal (Gaussian) distribution |
| beta | Draw samples from a beta distribution |
| chisquare | Draw samples from a chi-square distribution |
| gamma | Draw samples from a gamma distribution |
| uniform | Draw samples from a uniform [0, 1) distribution |
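A brief sketch (added for illustration) of a few of these functions; seeding makes the draws reproducible:

```python
import numpy as np

np.random.seed(12345)
print(np.random.permutation(5))            # a shuffled arange(5)
print(np.random.randint(0, 10, size=4))    # 4 random integers in [0, 10)
print(np.random.uniform(size=3))           # 3 draws from Uniform[0, 1)
print(np.random.binomial(10, 0.5, size=3)) # 3 Binomial(10, 0.5) draws
```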
# Example: Random Walks
An illustrative application of utilizing array operations is in the simulation of random walks. Let’s first consider a simple random walk starting at 0 with steps of 1 and -1 occurring with equal probability. A pure Python way to implement a single random walk with 1,000 steps using the built-in random module:
import random
position = 0
walk = [position]
steps = 1000
for i in xrange(steps):
    step = 1 if random.randint(0, 1) else -1
    position += step
    walk.append(position)
See Figure 4-4 for an example plot of the first 100 values on one of these random walks.
You might make the observation that walk is simply the cumulative sum of the random steps and could be evaluated as an array expression. Thus, I use the np.random module to draw 1,000 coin flips at once, set these to 1 and -1, and compute the cumulative sum:
In [215]: nsteps = 1000
In [216]: draws = np.random.randint(0, 2, size=nsteps)
In [217]: steps = np.where(draws > 0, 1, -1)
In [218]: walk = steps.cumsum()
From this we can begin to extract statistics like the minimum and maximum value along the walk’s trajectory:
In [219]: walk.min()
Out[219]: -3

In [220]: walk.max()
Out[220]: 31
A more complicated statistic is the first crossing time, the step at which the random walk reaches a particular value. Here we might want to know how long it took the random walk to get at least 10 steps away from the origin 0 in either direction. np.abs(walk) >= 10 gives us a boolean array indicating where the walk has reached or exceeded 10, but we want the index of the first 10 or -10. Turns out this can be computed using argmax, which returns the first index of the maximum value in the boolean array (True is the maximum value):
In [221]: (np.abs(walk) >= 10).argmax()
Out[221]: 37
Note that using argmax here is not always efficient because it always makes a full scan of the array. In this special case once a True is observed we know it to be the maximum value.
## Simulating Many Random Walks at Once
If your goal was to simulate many random walks, say 5,000 of them, you can generate all of them with minor modifications to the above code. The numpy.random functions, if passed a 2-tuple for the size, will generate a 2D array of draws, and we can compute the cumulative sum across the rows to compute all 5,000 random walks in one shot:
In [222]: nwalks = 5000
In [223]: nsteps = 1000
In [224]: draws = np.random.randint(0, 2, size=(nwalks, nsteps)) # 0 or 1
In [225]: steps = np.where(draws > 0, 1, -1)
In [226]: walks = steps.cumsum(1)
In [227]: walks
Out[227]:
array([[ 1, 0, 1, ..., 8, 7, 8],
[ 1, 0, -1, ..., 34, 33, 32],
[ 1, 0, -1, ..., 4, 5, 4],
...,
[ 1, 2, 1, ..., 24, 25, 26],
[ 1, 2, 3, ..., 14, 13, 14],
[ -1, -2, -3, ..., -24, -23, -22]])
Now, we can compute the maximum and minimum values obtained over all of the walks:
In [228]: walks.max()
Out[228]: 138
In [229]: walks.min()
Out[229]: -133
Out of these walks, let’s compute the minimum crossing time to 30 or -30. This is slightly tricky because not all 5,000 of them reach 30. We can check this using the any method:
In [230]: hits30 = (np.abs(walks) >= 30).any(1)
In [231]: hits30
Out[231]: array([False, True, False, ..., False, True, False], dtype=bool)
In [232]: hits30.sum() # Number that hit 30 or -30
Out[232]: 3410
We can use this boolean array to select out the rows of walks that actually cross the absolute 30 level and call argmax across axis 1 to get the crossing times:
In [233]: crossing_times = (np.abs(walks[hits30]) >= 30).argmax(1)
In [234]: crossing_times.mean()
Out[234]: 498.88973607038122
Feel free to experiment with distributions for the steps other than equal-sized coin flips. You need only use a different random number generation function, such as normal, to generate normally distributed steps with some mean and standard deviation:
In [235]: steps = np.random.normal(loc=0, scale=0.25,
.....: size=(nwalks, nsteps))
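Carrying that through, the rest of the analysis works exactly as before (a sketch reusing nwalks and nsteps from above; the crossing level of 10 is an arbitrary choice that suits the 0.25 step scale):
walks = steps.cumsum(1)
hits10 = (np.abs(walks) >= 10).any(1)                     # walks that ever move 10 away from 0
crossing_times = (np.abs(walks[hits10]) >= 10).argmax(1)  # first crossing step for those walks
hits10.sum(), crossing_times.mean()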
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23008303344249725, "perplexity": 3918.991239490292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710890.97/warc/CC-MAIN-20221202014312-20221202044312-00392.warc.gz"}
|
http://math.stackexchange.com/questions/112317/how-can-i-find-the-expected-value-due-to-the-trials-instead-of-simply-expected-v
|
# How can I find the expected value due to the trials instead of simply expected value of trials of a geometric distribution?
Suppose in a game, if I win in the $j$th round, I gain $+\$2^{j-1}$ and if I don't win in the $j$th, I lose $-\$2^{j-1}$. If I lose, I will keep playing until I win. Once I win, I leave the game. Otherwise, I continue to play until 30 rounds and leave the game even if I don't win. In other words, I will just stop at the $30$th round. Each round is independent and the probability of winning in each round is $\frac{9}{13}$.
I let $X$ be a random variable of my winnings. I want to find my expected winnings.
Since it works in a way that I would stop at my first win, I have a feeling that $X$ should be distributed over the Geometric Distribution. The game is either a win or a loss, so it is pretty much like a Bernoulli trial. But since I am not looking for the expected number of rounds played, the Bernoulli trials cannot be just $0$ or $1$. So I thought I could modify it to become this way: $${ X }_{ j }=\left\{\begin{matrix} +2^{j-1} & \text{if win}\\ -2^{j-1} & \text{if lose} \end{matrix}\right.$$ Then, $E(X)=E(X_1+X_2+\cdots +X_{30})=E(X_1)+E(X_2)+\cdots +E(X_{30})$
However, because I thought this is a Geometric Distribution, the expected value would just be $\frac{1}{p}=\frac{13}{9}$. But I don't think the expected winnings are $\frac{13}{9}$; that seems wrong.
Is what I have done correct?
So, instead of the usual finding of the expected number of trials of a standard geometric distribution, how can I find an expected number of another factor due to the trials (in this case, the expected number of winnings from the trials)?
Edit:
What I attempted to do was to make use of an indicator function to determine the expectation. But it doesn't seem successful.
The ideas in your post can be used to produce a nice solution, which will be given later. But the expected length of the game, which is almost but not quite $\frac{13}{9}$, because of the cutoff at $30$, has I think not much bearing on the problem.
First Solution: Let's do some calculations. If we win on the first round, we win $1$ dollar and leave. If we lose on the first and win on the second, we lost $1$ dollar but won $2$, for a net of $1$. If we lose on the first two rounds, and win on the third, we have lost $1+2$ dollars, and won $4$, for a net of $1$.
Reasoning in the same way, we can see that we either leave with $1$ dollar, or lose a whole bunch of money, namely $1+2+4+\cdots+2^{29}$, which is $2^{30}-1$. The reason that if we win, our net gain is $1$, is that if we win on the $i$-th round, we have lost $1+2+\cdots +2^{i-2}$ and won $2^{i-1}$. Since $1+2+\cdots +2^{i-2}=2^{i-1}-1$, our net gain is $1$.
The probability we lose an enormous amount of money is $\left(\frac{4}{13}\right)^{30}$ ($30$ losses in a row). Now the expectation is easy to find. It is $$(1)\left(1-\left(\frac{4}{13}\right)^{30}\right)- (2^{30}-1)\left(\frac{4}{13}\right)^{30}.$$ There is a bit of cancellation. The expectation simplifies to $$1-2^{30}\left(\frac{4}{13}\right)^{30}.$$
Second Solution: We give a much more attractive solution based on the idea of your post. For $j=1$ to $30$, let $X_j$ be the amount "won" on the $j$-th trial. Then the total amount $X$ won is $X_1+\cdots+X_{30}$. Thus, by the linearity of expectation, $$E(X)=\sum_{j=1}^{30} E(X_j).$$ The $X_j$ are not independent, but that is irrelevant.
Let $p=4/13$. We win $2^{j-1}$ at stage $j$ with probability $p^{j-1}(1-p)$, and lose $2^{j-1}$ with probability $p^{j-1}p$. So $$E(X_j)=2^{j-1}p^{j-1}(1-2p).$$ Now sum, from $j=1$ to $j=30$. The sum is $$(1-2p)\frac{1-(2p)^{30}}{1-2p}, \quad\text{that is,}\quad 1-\left(\frac{8}{13}\right)^{30}.$$
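A quick way to confirm numerically that the two closed forms, $1-2^{30}\left(\frac{4}{13}\right)^{30}$ and $1-\left(\frac{8}{13}\right)^{30}$, agree is a few lines of exact rational arithmetic, for instance in Python (an illustrative sketch):
from fractions import Fraction

p = Fraction(4, 13)                    # probability of losing a round
lhs = sum(Fraction(2)**(j - 1) * p**(j - 1) * (1 - 2*p) for j in range(1, 31))
rhs = 1 - Fraction(8, 13)**30          # closed form from either solution
print(lhs == rhs)                      # True
print(float(lhs))                      # about 0.9999995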
Thanks! I think if we look at it this way, then the expectation will be $(2^{30}-1)\left(\frac{4}{13}\right)^{30}+1(1-\left(\frac{4}{13}\right)^{30})$, is this right? Does that mean that I will not be able to use the linearity method to solve for this problem? – xenon Feb 23 '12 at 6:26
@xEnOn: I earlier made a sign typo, and so have you. The second part is right, but the first should have a $-$ sign in front, since we lose $2^{30}-1$. I think my post at this moment is (after a few corrections!) fairly typo-free. There is a smoother way to do it using indicator functions, but it is late, and my brain has stopped working. – André Nicolas Feb 23 '12 at 6:44
oh yea...there should be a minus sign in front because that is a loss. Thanks! And I think what I attempted to do earlier was to use what you said as the indicator functions method. I should have used this term in my question though. I didn't know of the name of the method. Thanks for telling. It will be interesting to know how I could use the indicator function way because there could be chances where the net profit doesn't cancel out so nicely to just $\$1$. – xenon Feb 23 '12 at 7:07
Would you have time to show how this problem could be done with an indicator function later in the day? I kept trying for many hours but still couldn't figure out how I should do it. :( Thanks for your help. – xenon Feb 23 '12 at 12:09
@xEnOn: I wrote out the much nicer solution that goes along the line you tried. Had done it before, but made an arithmetical error, was too tired to figure out what was happening. An inessential variant uses $Y_j=1$ if we win on the $j$-th trial, $-1$ if we lose, and $X=\sum 2^{j-1}Y_j$. – André Nicolas Feb 23 '12 at 19:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9836899638175964, "perplexity": 554.9901072871313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775080.16/warc/CC-MAIN-20141217075255-00055-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://pseudomonad.blogspot.com/2011/06/theory-update-85.html
|
## Monday, June 6, 2011
### Theory Update 85
Back at the start of our M Theory lessons, when we were discussing twistor amplitudes, associahedra, motivic cohomology and Kepler's law, we were always thinking about ternary analogues of Stone duality. Recall that a special object in the (usual) category of topological spaces is specified by a two point lattice, because every space contains the empty set (a zero) and the whole space itself (a one).
In a sense, this is the most basic arrow ($0 \rightarrow 1$) of category theory, since much mathematical industry is motivated by classical topology. The requirements of M Theory*, however, are different. Recall, once more, that if we cannot begin with space, which after all is an emergent concept, we cannot begin with the classical symmetries that act upon space, or with simple generalisations such as traditional supersymmetry. The imposition of such symmetries from the outset simply does not make any sense. The groups that physicists like, such as $SU(2)$ and $SU(3)$, are very basic categories, with one object and nice properties, so we should not fear that they will disappear into the mist, never to be recovered.
There are a number of ternary analogues for the arrow, but what really is the ternary analogue of its self-dual property (i.e., true triality)? We have drawn triangles and cubes and globule triangles. We could draw Kan extensions. As a minimum, we expect three dualities for the sides of the ternary triangle (let us call them S, T and U), but what truly three dimensional element appears for a self ternary arrow? For a start, the Gray tensor product of Crans will be busy generating higher dimensional arrows for us. Since a classical space, properly described, ought to be an infinite dimensional category, we would like to make use of arrow generation in its definition.
But M Theory* requires even more. The dualities S, T and U do not merely reflect the properties of a classical space. Already, quantum information takes precedence. We let our lone arrow stand for the noncommutative world, and look further into the nonassociative one for the meaning of ternary. And here at last, categorical weakness is forced upon us, unbid. The associahedra provide the simplest possible object (a $1$-operad) for describing nonassociativity (of alphabets). There exists a vast collection of such combinatorial structures, of higher and higher information dimension.
*This term, as always, will refer here to the correct form of M Theory, and not to some offshoot of crackpot stringy physics.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160178065299988, "perplexity": 970.3982480414976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867374.98/warc/CC-MAIN-20180526073410-20180526093410-00094.warc.gz"}
|
http://blog.stata.com/category/statistics/
|
### Archive
Archive for the ‘Statistics’ Category
## A simulation-based explanation of consistency and asymptotic normality
Overview
In the frequentist approach to statistics, estimators are random variables because they are functions of random data. The finite-sample distributions of most of the estimators used in applied work are not known, because the estimators are complicated nonlinear functions of random data. These estimators have large-sample convergence properties that we use to approximate their behavior in finite samples.
Two key convergence properties are consistency and asymptotic normality. A consistent estimator gets arbitrarily close in probability to the true value. The distribution of an asymptotically normal estimator gets arbitrarily close to a normal distribution as the sample size increases. We use a recentered and rescaled version of this normal distribution to approximate the finite-sample distribution of our estimators.
I illustrate the meaning of consistency and asymptotic normality by Monte Carlo simulation (MCS). I use some of the Stata mechanics I discussed in Monte Carlo simulations using Stata.
Consistent estimator
A consistent estimator gets arbitrarily close in probability to the true value as you increase the sample size. In other words, the probability that a consistent estimator is outside a neighborhood of the true value goes to zero as the sample size increases. Figure 1 illustrates this convergence for an estimator $$\widehat{\theta}$$ at sample sizes 100, 1,000, and 5,000, when the true value is 0. As the sample size increases, the density is more tightly distributed around the true value. As the sample size becomes infinite, the density collapses to a spike at the true value.
Figure 1: Densities of an estimator for sample sizes 100, 1,000, 5,000, and $$\infty$$
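In symbols, with $$\widehat{\theta}_N$$ denoting the estimator computed from a sample of size $$N$$ and $$\theta_0$$ the true value, this convergence in probability means that for every $$\varepsilon>0$$
$\lim_{N \rightarrow \infty} \Pr\left( \left| \widehat{\theta}_N - \theta_0 \right| > \varepsilon \right) = 0$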
I now illustrate that the sample average is a consistent estimator for the mean of an independently and identically distributed (i.i.d.) random variable with a finite mean and a finite variance. In this example, the data are i.i.d. draws from a $$\chi^2$$ distribution with 1 degree of freedom. The true value is 1, because the mean of a $$\chi^2(1)$$ is 1.
Code block 1 implements an MCS of the sample average for the mean from samples of size 1,000 of i.i.d. $$\chi^2(1)$$ variates.
Code block 1: mean1000.do
clear all
set seed 12345
postfile sim m1000 using sim1000, replace

forvalues i = 1/1000 {
quietly capture drop y
quietly set obs 1000
quietly generate y = rchi2(1)
quietly summarize y
quietly post sim (r(mean))
}
postclose sim
Line 1 clears Stata, and line 2 sets the seed of the random number generator. Line 3 uses postfile to create a place in memory named sim, in which I store observations on the variable m1000, which will be the new dataset sim1000. Note that the keyword using separates the name of the new variable from the name of the new dataset. The replace option specifies that sim1000.dta be replaced, if it already exists.
Lines 5 and 11 use forvalues to repeat the code in lines 6–10 1,000 times. Each time through the forvalues loop, line 6 drops y, line 7 sets the number of observations to 1,000, line 8 generates a sample of size 1,000 of i.i.d. $$\chi^2(1)$$ variates, line 9 estimates the mean of y in this sample, and line 10 uses post to store the estimated mean in what will be the new variable m1000. Line 12 writes everything stored in sim to the new dataset sim1000.dta. See Monte Carlo simulations using Stata for more details about using post to implement an MCS in Stata.
In example 1, I run mean1000.do and then summarize the results.
Example 1: Estimating the mean from a sample of size 1,000
. do mean1000
. clear all
. set seed 12345
. postfile sim m1000 using sim1000, replace
.
. forvalues i = 1/1000 {
2. quietly capture drop y
3. quietly set obs 1000
4. quietly generate y = rchi2(1)
5. quietly summarize y
6. quietly post sim (r(mean))
7. }
. postclose sim
.
.
end of do-file
. use sim1000, clear
. summarize m1000
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
m1000 | 1,000 1.00017 .0442332 .8480308 1.127382
The mean of the 1,000 estimates is close to 1. The standard deviation of the 1,000 estimates is 0.0442, which measures how tightly the estimator is distributed around the true value of 1.
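As a rough cross-check of the logic outside Stata, the same Monte Carlo can be sketched in a few lines of NumPy (purely illustrative; the seed and dimensions mirror the do-file above):
import numpy as np

rng = np.random.default_rng(12345)
# 1,000 replications, each the sample mean of 1,000 chi-squared(1) draws
means = rng.chisquare(df=1, size=(1000, 1000)).mean(axis=1)
print(means.mean(), means.std(ddof=1))   # near 1, with spread near sqrt(2/1000), about 0.045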
Code block 2 contains mean100000.do, which implements the analogous MCS with a sample size of 100,000.
Code block 2: mean100000.do
clear all
// no seed, just keep drawing
postfile sim m100000 using sim100000, replace
forvalues i = 1/1000 {
quietly capture drop y
quietly set obs 100000
quietly generate y = rchi2(1)
quietly summarize y
quietly post sim (r(mean))
}
postclose sim
Example 2 runs mean100000.do and summarizes the results.
Example 2: Estimating the mean from a sample of size 100,000
. do mean100000
. clear all
. // no seed, just keep drawing
. postfile sim m100000 using sim100000, replace
.
. forvalues i = 1/1000 {
2. quietly capture drop y
3. quietly set obs 100000
4. quietly generate y = rchi2(1)
5. quietly summarize y
6. quietly post sim (r(mean))
7. }
. postclose sim
.
.
end of do-file
. use sim100000, clear
. summarize m100000
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
m100000 | 1,000 1.000008 .0043458 .9837129 1.012335
The standard deviation of 0.0043 indicates that the estimator with a sample size of 100,000 is much more tightly distributed around the true value of 1 than the estimator with a sample size of 1,000.
Example 3 merges the two datasets of estimates and plots the densities of the estimator for the two sample sizes in figure 2. The distribution of the estimator for the sample size of 100,000 is much tighter around 1 than the estimator for the sample size of 1,000.
Example 3: Densities of sample-average estimator for 1,000 and 100,000
. merge 1:1 _n using sim1000
Result # of obs.
-----------------------------------------
not matched 0
matched 1,000 (_merge==3)
-----------------------------------------
. kdensity m1000, n(500) generate(x_1000 f_1000) kernel(gaussian) nograph
. label variable f_1000 "N=1000"
. kdensity m100000, n(500) generate(x_100000 f_100000) kernel(gaussian) nograph
. label variable f_100000 "N=100000"
. graph twoway (line f_1000 x_1000) (line f_100000 x_100000)
Figure 2: Densities of the sample-average estimator for sample sizes 1,000 and 100,000
The sample average is a consistent estimator for the mean of an i.i.d. $$\chi^2(1)$$ random variable because a weak law of large numbers applies. This theorem specifies that the sample average converges in probability to the true mean if the data are i.i.d., the mean is finite, and the variance is finite. Other versions of this theorem weaken the i.i.d. assumption or the moment assumptions; see Cameron and Trivedi (2005, sec. A.3), Wasserman (2003, sec. 5.3), and Wooldridge (2010, 41–42) for details.
Asymptotic normality
So the good news is that the distribution of a consistent estimator becomes arbitrarily tight around the true value as the sample size grows. The bad news is that the distribution of the estimator changes with the sample size, as illustrated in figures 1 and 2.
If I knew the distribution of my estimator for every sample size, I could use it to perform inference using this finite-sample distribution, also known as the exact distribution. But the finite-sample distribution of most of the estimators used in applied research is unknown. Fortunately, the distributions of a recentered and rescaled version of these estimators gets arbitrarily close to a normal distribution as the sample size increases. Estimators for which a recentered and rescaled version converges to a normal distribution are said to be asymptotically normal. We use this large-sample distribution to approximate the finite-sample distribution of the estimator.
Figure 2 shows that the distribution of the sample average becomes increasingly tight around the true value as the sample size increases. Instead of looking at the distribution of the estimator $$\widehat{\theta}_N$$ for sample size $$N$$, let's look at the distribution of $$\sqrt{N}(\widehat{\theta}_N - \theta_0)$$, where $$\theta_0$$ is the true value for which $$\widehat{\theta}_N$$ is consistent.
Example 4 estimates the densities of the recentered and rescaled estimators, which are shown in figure 3.
Example 4: Densities of the recentered and rescaled estimator
. generate double m1000n = sqrt(1000)*(m1000 - 1)
. generate double m100000n = sqrt(100000)*(m100000 - 1)
. kdensity m1000n, n(500) generate(x_1000n f_1000n) kernel(gaussian) nograph
. label variable f_1000n "N=1000"
. kdensity m100000n, n(500) generate(x_100000n f_100000n) kernel(gaussian) ///
> nograph
. label variable f_100000n "N=100000"
. graph twoway (line f_1000n x_1000n) (line f_100000n x_100000n)
Figure 3: Densities of the recentered and rescaled estimator for sample sizes 1,000 and 100,000
The densities of the recentered and rescaled estimators in figure 3 are indistinguishable from each other and look close to a normal density. The Lindeberg–Lévy central limit theorem guarantees that the distribution of the recentered and rescaled sample average of i.i.d. random variables with finite mean $$\mu$$ and finite variance $$\sigma^2$$ gets arbitrarily close to a normal distribution with mean 0 and variance $$\sigma^2$$ as the sample size increases. In other words, the distribution of $$\sqrt{N}(\widehat{\theta}_N-\mu)$$ gets arbitrarily close to a $$N(0,\sigma^2)$$ distribution as $$N\rightarrow\infty$$, where $$\widehat{\theta}_N=1/N\sum_{i=1}^N y_i$$ and $$y_i$$ are realizations of the i.i.d. random variable. This convergence in distribution justifies our use of the distribution $$\widehat{\theta}_N\sim N(\mu,\frac{\sigma^2}{N})$$ in practice.
Given that $$\sigma^2=2$$ for the $$\chi^2(1)$$ distribution, in example 5, we add a plot of a normal density with mean 0 and variance 2 for comparison.
Example 5: Densities of the recentered and rescaled estimator
. twoway (line f_1000n x_1000n) ///
> (line f_100000n x_100000n) ///
> (function normalden(x, sqrt(2)), range(-4 4)) ///
> ,legend( label(3 "Normal(0, 2)") cols(3))
We see that the densities of recentered and rescaled estimators are indistinguishable from the density of a normal distribution with mean 0 and variance 2, as predicted by the theory.
Figure 4: Densities of the recentered and rescaled estimates and a Normal(0,2)
Other versions of the central limit theorem weaken the i.i.d. assumption or the moment assumptions; see Cameron and Trivedi (2005, sec. A.3), Wasserman (2003, sec. 5.3), and Wooldridge (2010, 41–42) for details.
Done and undone
I used MCS to illustrate that the sample average is consistent and asymptotically normal for data drawn from an i.i.d. process with finite mean and variance.
Many method-of-moments estimators, maximum likelihood estimators, and M-estimators are consistent and asymptotically normal under assumptions about the true data-generating process and the estimators themselves. See Cameron and Trivedi (2005, sec. 5.3), Newey and McFadden (1994), Wasserman (2003, chap. 9), and Wooldridge (2010, chap. 12) for discussions.
References
Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. Cambridge: Cambridge University Press.
Newey, W. K., and D. McFadden. 1994. Large sample estimation and hypothesis testing. In Handbook of Econometrics, ed. R. F. Engle and D. McFadden, vol. 4, 2111–2245. Amsterdam: Elsevier.
Wasserman, L. A. 2003. All of Statistics: A Concise Course in Statistical Inference. New York: Springer.
Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.
## Fitting distributions using bayesmh
This post was written jointly with Yulia Marchenko, Executive Director of Statistics, StataCorp.
As of update 03 Mar 2016, bayesmh provides a more convenient way of fitting distributions to the outcome variable. By design, bayesmh is a regression command, which models the mean of the outcome distribution as a function of predictors. There are cases when we do not have any predictors and want to model the outcome distribution directly. For example, we may want to fit a Poisson distribution or a binomial distribution to our outcome. This can now be done by specifying one of the four new distributions supported by bayesmh in the likelihood() option: dexponential(), dbernoulli(), dbinomial(), or dpoisson(). Previously, the suboption noglmtransform of bayesmh‘s option likelihood() was used to fit the exponential, binomial, and Poisson distributions to the outcome variable. This suboption continues to work but is now undocumented.
For examples, see Beta-binomial model, Bayesian analysis of change-point problem, and Item response theory under Remarks and examples in [BAYES] bayesmh.
We have also updated our earlier “Bayesian binary item response theory models using bayesmh” blog entry to use the new dbernoulli() specification when fitting 3PL, 4PL, and 5PL IRT models.
## How to generate random numbers in Stata
Overview
I describe how to generate random numbers and discuss some features added in Stata 14. In particular, Stata 14 includes a new default random-number generator (RNG) called the Mersenne Twister (Matsumoto and Nishimura 1998), a new function that generates random integers, the ability to generate random numbers from an interval, and several new functions that generate random variates from nonuniform distributions.
Random numbers from the uniform distribution
In the example below, we use runiform() to create Read more…
## Vector autoregression—simulation, estimation, and inference in Stata
$$\newcommand{\epsb}{{\boldsymbol{\epsilon}}} \newcommand{\mub}{{\boldsymbol{\mu}}} \newcommand{\thetab}{{\boldsymbol{\theta}}} \newcommand{\Thetab}{{\boldsymbol{\Theta}}} \newcommand{\etab}{{\boldsymbol{\eta}}} \newcommand{\Sigmab}{{\boldsymbol{\Sigma}}} \newcommand{\Phib}{{\boldsymbol{\Phi}}} \newcommand{\Phat}{\hat{{\bf P}}}$$Vector autoregression (VAR) is a useful tool for analyzing the dynamics of multiple time series. VAR expresses a vector of observed variables as a function of its own lags.
Simulation
Let’s begin by simulating a bivariate VAR(2) process using the following specification,
$\begin{bmatrix} y_{1,t}\\ y_{2,t} \end{bmatrix} = \mub + {\bf A}_1 \begin{bmatrix} y_{1,t-1}\\ y_{2,t-1} \end{bmatrix} + {\bf A}_2 \begin{bmatrix} y_{1,t-2}\\ y_{2,t-2} \end{bmatrix} + \epsb_t$
where $$y_{1,t}$$ and $$y_{2,t}$$ are the observed series at time $$t$$, $$\mub$$ is a $$2 \times 1$$ vector of intercepts, $${\bf A}_1$$ and $${\bf A}_2$$ are $$2\times 2$$ parameter matrices, and $$\epsb_t$$ is a $$2\times 1$$ vector of innovations that is uncorrelated over time. I assume a $$N({\bf 0},\Sigmab)$$ distribution for the innovations $$\epsb_t$$, where $$\Sigmab$$ is a $$2\times 2$$ covariance matrix.
I set my sample size to 1,100 and Read more…
## Testing model specification and using the program version of gmm
This post was written jointly with Joerg Luedicke, Senior Social Scientist and Statistician, StataCorp.
The command gmm is used to estimate the parameters of a model using the generalized method of moments (GMM). GMM can be used to estimate the parameters of models that have more identification conditions than parameters, overidentified models. The specification of these models can be evaluated using Hansen’s J statistic (Hansen, 1982).
We use gmm to estimate the parameters of a Poisson model with an endogenous regressor. More instruments than regressors are available, so the model is overidentified. We then use estat overid to calculate Hansen’s J statistic and test the validity of the overidentification restrictions.
## Bayesian binary item response theory models using bayesmh
This post was written jointly with Yulia Marchenko, Executive Director of Statistics, StataCorp.
Overview
Item response theory (IRT) is used for modeling the relationship between the latent abilities of a group of subjects and the examination items used for measuring their abilities. Stata 14 introduced a suite of commands for fitting IRT models using maximum likelihood; see, for example, the blog post Spotlight on irt by Rafal Raciborski and the [IRT] Item Response Theory manual for more details. In this post, we demonstrate how to fit Bayesian binary IRT models by using the redefine() option introduced for the bayesmh command in Stata 14.1. We also use the likelihood option dbernoulli() available as of the update on 03 Mar 2016 for fitting Bernoulli distribution. If you are not familiar with the concepts and jargon of Bayesian statistics, you may want to watch the introductory videos on the Stata Youtube channel before proceeding.
We use the abridged version of the mathematics and science data from DeBoeck and Wilson (2004), masc1. The dataset includes 800 student responses to 9 test questions intended to measure mathematical ability.
The irt suite fits IRT models using data in the wide form – one observation per subject with items recorded in separate variables. To fit IRT models using bayesmh, we need data in the long form, where items are recorded as multiple observations per subject. We thus reshape the dataset in a long form: we have a single binary response variable, y, and two index variables, item and id, which identify the items and subjects, respectively. This allows us to Read more…
## regress, probit, or logit?
In a previous post I illustrated that the probit model and the logit model produce statistically equivalent estimates of marginal effects. In this post, I compare the marginal effect estimates from a linear probability model (linear regression) with marginal effect estimates from probit and logit models.
My simulations show that when the true model is a probit or a logit, using a linear probability model can produce inconsistent estimates of the marginal effects of interest to researchers. The conclusions hinge on the probit or logit model being the true model.
Simulation results
For all simulations below, I use a sample size of 10,000 and 5,000 replications. The true data-generating processes (DGPs) are constructed using Read more…
We often use probit and logit models to analyze binary outcomes. A case can be made that the logit model is easier to interpret than the probit model, but Stata’s margins command makes any estimator easy to interpret. Ultimately, estimates from both models produce similar results, and using one or the other is a matter of habit or preference.
I show that the estimates from a probit and logit model are similar for the computation of a set of effects that are of interest to researchers. I focus on the effects of changes in the covariates on the probability of a positive outcome for continuous and discrete covariates. I evaluate these effects on average and at the mean value of the covariates. In other words, I study the average marginal effects (AME), the average treatment effects (ATE), the marginal effects at the mean values of the covariates (MEM), and the treatment effects at the mean values of the covariates (TEM).
First, I present the results. Second, I discuss the code used for the simulations.
Results
In Table 1, I present the results of a simulation with 4,000 replications when the true data generating process (DGP) satisfies the assumptions of a probit model. I show the Read more…
## Using mlexp to estimate endogenous treatment effects in a heteroskedastic probit model
I use features new to Stata 14.1 to estimate an average treatment effect (ATE) for a heteroskedastic probit model with an endogenous treatment. In 14.1, we added new prediction statistics after mlexp that margins can use to estimate an ATE.
I am building on a previous post in which I demonstrated how to use mlexp to estimate the parameters of a probit model with an endogenous treatment and used margins to estimate the ATE for the model Using mlexp to estimate endogenous treatment effects in a probit model. Currently, no official commands estimate the heteroskedastic probit model with an endogenous treatment, so in this post I show how mlexp can be used to extend the models estimated by Stata.
Heteroskedastic probit model
For binary outcome $$y_i$$ and regressors $${\bf x}_i$$, the probit model assumes
$y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0)$
The indicator function $${\bf 1}(\cdot)$$ outputs 1 when its input is true and outputs 0 otherwise. The error $$\epsilon_i$$ is standard normal.
Assuming that the error has constant variance may not always be wise. Suppose we are studying a certain business decision. Large firms, because they have the resources to take chances, may exhibit more variation in the factors that affect their decision than small firms.
In the heteroskedastic probit model, regressors $${\bf w}_i$$ determine the variance of $$\epsilon_i$$. Following Harvey (1976), we have
$\mbox{Var}\left(\epsilon_i\right) = \left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber$
Heteroskedastic probit model with treatment
In this section, I review the potential-outcome framework used to define an ATE and extend it for the heteroskedastic probit model. For each treatment level, there is an outcome that we would observe if a person were to select that treatment level. When the outcome is binary and there are two treatment levels, we can specify how the potential outcomes $$y_{0i}$$ and $$y_{1i}$$ are generated from the regressors $${\bf x}_i$$ and the error terms $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$:
$\begin{eqnarray*} y_{0i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_0 + \epsilon_{0i} > 0) \cr y_{1i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_1 + \epsilon_{1i} > 0) \end{eqnarray*}$
We assume a heteroskedastic probit model for the potential outcomes. The errors are normal with mean $$0$$ and conditional variance generated by regressors $${\bf w}_i$$. In this post, we assume equal variance of the potential outcome errors.
$\mbox{Var}\left(\epsilon_{0i}\right) = \mbox{Var}\left(\epsilon_{1i}\right) = \left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber$
The heteroskedastic probit model for potential outcomes $$y_{0i}$$ and $$y_{1i}$$ with treatment $$t_i$$ assumes that we observe the outcome
$y_i = (1-t_i) y_{0i} + t_i y_{1i} \nonumber$
So we observe $$y_{1i}$$ under the treatment ($$t_{i}=1$$) and $$y_{0i}$$ when the treatment is withheld ($$t_{i}=0$$).
The treatment $$t_i$$ is determined by regressors $${\bf z}_i$$ and error $$u_i$$:
$t_i = {\bf 1}({\bf z}_i{\boldsymbol \psi} + u_i > 0) \nonumber$
The treatment error $$u_i$$ is normal with mean zero, and we allow its variance to be determined by another set of regressors $${\bf v}_i$$:
$\mbox{Var}\left(u_i\right) = \left\{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)\right\}^2 \nonumber$
Heteroskedastic probit model with endogenous treatment
In the previous post, I described how to model endogeneity for the treatment $$t_i$$ by correlating the outcome errors $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$ with the treatment error $$u_i$$. We use the same framework for modeling endogeneity here. The variance of the errors may change depending on the heteroskedasticity regressors $${\bf w}_i$$ and $${\bf v}_i$$, but their correlation remains constant. The errors $$\epsilon_{0i}$$, $$\epsilon_{1i}$$, and $$u_i$$ are trivariate normal with correlation
$\left[\begin{matrix} 1 & \rho_{01} & \rho_{t} \cr \rho_{01} & 1 & \rho_{t} \cr \rho_{t} & \rho_{t} & 1 \end{matrix}\right] \nonumber$
Now we have all the pieces we need to write the log likelihood of the heteroskedastic probit model with an endogenous treatment. The form of the likelihood is similar to what was given in the previous post. Now the inputs to the bivariate normal cumulative distribution function, $$\Phi_2$$, are standardized by dividing by the conditional standard deviations of the errors.
The log likelihood for observation $$i$$ is
$\begin{eqnarray*} \ln L_i = & & {\bf 1}(y_i =1 \mbox{ and } t_i = 1) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i=1)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr & & {\bf 1}(y_i=1 \mbox{ and } t_i=0) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i = 0)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} \end{eqnarray*}$
The data
We will simulate data from a heteroskedastic probit model with an endogenous treatment and then estimate the parameters of the model with mlexp. Then, we will use margins to estimate the ATE.
. set seed 323
. set obs 10000
number of observations (_N) was 0, now 10,000
. generate x = .8*rnormal() + 4
. generate b = rpoisson(1)
. generate z = rnormal()
. matrix cm = (1, .3,.7 \ .3, 1, .7 \ .7, .7, 1)
. drawnorm ey0 ey1 et, corr(cm)
We simulate a random sample of 10,000 observations. The treatment and outcome regressors are generated in a similar manner to their creation in the last post. As in the last post, we generate the errors with drawnorm to have correlation $$0.7$$.
. generate g = runiform()
. generate h = rnormal()
. quietly replace ey0 = ey0*exp(.5*g)
. quietly replace ey1 = ey1*exp(.5*g)
. quietly replace et = et*exp(.1*h)
. generate t = .5*x - .1*b + .5*z - 2.4 + et > 0
. generate y0 = .6*x - .8 + ey0 > 0
. generate y1 = .3*x - 1.3 + ey1 > 0
. generate y = (1-t)*y0 + t*y1
The uniform variable g is generated as a regressor for the outcome error variance, while h is a regressor for the treatment error variance. We scale the errors by using the variance regressors so that they are heteroskedastic, and then we generate the treatment and outcome indicators.
Estimating the model parameters
Now, we will use mlexp to estimate the parameters of the heteroskedastic probit model with an endogenous treatment. As in the previous post, we use the cond() function to calculate different values of the likelihood based on the different values of $$y$$ and $$t$$. We use the factor-variable operator ibn on $$t$$ in equation y to allow for a different intercept at each level of $$t$$. An interaction between $$t$$ and $$x$$ is also specified in equation y. This allows for a different coefficient on $$x$$ at each level of $$t$$.
. mlexp (ln(cond(t, ///
> cond(y,binormal({y: i.t#c.x ibn.t}/exp({g:g}), ///
> {t: x b z _cons}/exp({h:h}),{rho}), ///
> binormal(-{y:}/exp({g:}),{t:}/exp({h:}),-{rho})), ///
> cond(y,binormal({y:}/exp({g:}),-{t:}/exp({h:}),-{rho}), ///
> binormal(-{y:}/exp({g:}),-{t:}/exp({h:}),{rho}) ///
> )))), vce(robust)
initial: log pseudolikelihood = -13862.944
alternative: log pseudolikelihood = -16501.619
rescale: log pseudolikelihood = -13858.877
rescale eq: log pseudolikelihood = -11224.877
Iteration 0: log pseudolikelihood = -11224.877 (not concave)
Iteration 1: log pseudolikelihood = -10644.625
Iteration 2: log pseudolikelihood = -10074.998
Iteration 3: log pseudolikelihood = -9976.6027
Iteration 4: log pseudolikelihood = -9973.0988
Iteration 5: log pseudolikelihood = -9973.0913
Iteration 6: log pseudolikelihood = -9973.0913
Maximum likelihood estimation
Log pseudolikelihood = -9973.0913 Number of obs = 10,000
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
t#c.x |
0 | .6178115 .0334521 18.47 0.000 .5522467 .6833764
1 | .2732094 .0365742 7.47 0.000 .2015253 .3448936
|
t |
0 | -.8403294 .1130197 -7.44 0.000 -1.061844 -.6188149
1 | -1.215177 .1837483 -6.61 0.000 -1.575317 -.8550371
-------------+----------------------------------------------------------------
g |
g | .4993187 .0513297 9.73 0.000 .3987143 .5999232
-------------+----------------------------------------------------------------
t |
x | .4985802 .0183033 27.24 0.000 .4627065 .5344539
b | -.1140255 .0132988 -8.57 0.000 -.1400908 -.0879603
z | .4993995 .0150844 33.11 0.000 .4698347 .5289643
_cons | -2.402772 .0780275 -30.79 0.000 -2.555703 -2.249841
-------------+----------------------------------------------------------------
h |
h | .1011185 .0199762 5.06 0.000 .0619658 .1402713
-------------+----------------------------------------------------------------
/rho | .7036964 .0326734 21.54 0.000 .6396577 .7677351
------------------------------------------------------------------------------
Our parameter estimates are close to their true values.
Estimating the ATE
The ATE of $$t$$ is the expected value of the difference between $$y_{1i}$$ and $$y_{0i}$$, the average difference between the potential outcomes. Using the law of iterated expectations, we have
$\begin{eqnarray*} E(y_{1i}-y_{0i})&=& E\left\{ E\left(y_{1i}-y_{0i}|{\bf x}_i,{\bf w}_i\right)\right\} \cr &=& E\left\lbrack\Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{ \exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}- \Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{ \exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}\right\rbrack \cr \end{eqnarray*}$
This can be estimated as a mean of predictions.
Now, we estimate the ATE by using margins. We specify the normal probability expression in the expression() option. We use the expression function xb() to get the linear predictions for the outcome equation and the outcome error variance equation. We can now predict these linear forms after mlexp in Stata 14.1. We specify r.t so that margins will take the difference of the expression under t=1 and t=0. We specify vce(unconditional) to obtain standard errors for the population ATE rather than the sample ATE; we specified vce(robust) for mlexp so that we could specify vce(unconditional) for margins. The contrast(nowald) option is specified to omit the Wald test for the difference.
. margins r.t, expression(normal(xb(y)/exp(xb(g)))) ///
> vce(unconditional) contrast(nowald)
Contrasts of predictive margins
Expression : normal(xb(y)/exp(xb(g)))
--------------------------------------------------------------
| Unconditional
| Contrast Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
t |
(1 vs 0) | -.4183043 .0202635 -.4580202 -.3785885
--------------------------------------------------------------
We estimate that the ATE of $$t$$ on $$y$$ is $$-0.42$$. So taking the treatment decreases the probability of a positive outcome by $$0.42$$ on average over the population.
We will compare this estimate to the average difference of $$y_{1}$$ and $$y_{0}$$ in the sample. We can do this because we simulated the data. In practice, only one potential outcome is observed for every observation, and this average difference cannot be computed.
. generate diff = y1 - y0
. sum diff
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
diff | 10,000 -.4164 .5506736 -1 1
In our sample, the average difference of $$y_{1}$$ and $$y_{0}$$ is also $$-0.42$$.
Conclusion
I have demonstrated how to use mlexp to estimate the parameters of a model that is not otherwise available in Stata: the heteroskedastic probit model with an endogenous treatment. See [R] mlexp for more details about mlexp. I have also demonstrated how to use margins to estimate the ATE for the heteroskedastic probit model with an endogenous treatment. See [R] margins for more details about margins.
Reference
Harvey, A. C. 1976. Estimating regression models with multiplicative heteroscedasticity. Econometrica 44: 461-465.
## Understanding the generalized method of moments (GMM): A simple example
$$\newcommand{\Eb}{{\bf E}}$$This post was written jointly with Enrique Pinzon, Senior Econometrician, StataCorp.
The generalized method of moments (GMM) is a method for constructing estimators, analogous to maximum likelihood (ML). GMM uses assumptions about specific moments of the random variables instead of assumptions about the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency. The assumptions are called moment conditions.
GMM generalizes the method of moments (MM) by allowing the number of moment conditions to be greater than the number of parameters. Using these extra moment conditions makes GMM more efficient than MM. When there are more moment conditions than parameters, the estimator is said to be overidentified. GMM can efficiently combine the moment conditions when the estimator is overidentified.
We illustrate these points by estimating the mean of a $$\chi^2(1)$$ by MM, ML, a simple GMM estimator, and an efficient GMM estimator. This example builds on Efficiency comparisons by Monte Carlo simulation and is similar in spirit to the example in Wooldridge (2001).
GMM weights and efficiency
GMM builds on the ideas of expected values and sample averages. Moment conditions are expected values that specify the model parameters in terms of the true moments. The sample moment conditions are the sample equivalents to the moment conditions. GMM finds the parameter values that are closest to satisfying the sample moment conditions.
The mean of a $$\chi^2$$ random variable with $$d$$ degrees of freedom is $$d$$, and its variance is $$2d$$. Two moment conditions for the mean are thus
$\begin{eqnarray*} \Eb\left[Y - d \right]&=& 0 \\ \Eb\left[(Y - d )^2 - 2d \right]&=& 0 \end{eqnarray*}$
The sample moment equivalents are
$\begin{eqnarray} 1/N\sum_{i=1}^N (y_i - \widehat{d} )&=& 0 \tag{1} \\ 1/N\sum_{i=1}^N\left[(y_i - \widehat{d} )^2 - 2\widehat{d}\right] &=& 0 \tag{2} \end{eqnarray}$
We could use either sample moment condition (1) or sample moment condition (2) to estimate $$d$$. In fact, below we use each one and show that (1) provides a much more efficient estimator.
When we use both (1) and (2), there are two sample moment conditions and only one parameter, so we cannot solve this system of equations. GMM finds the parameters that get as close as possible to solving weighted sample moment conditions.
Uniform weights and optimal weights are two ways of weighting the sample moment conditions. The uniform weights use an identity matrix to weight the moment conditions. The optimal weights use the inverse of the covariance matrix of the moment conditions.
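To make "as close as possible" concrete: the estimator minimizes the quadratic form built from the stacked sample moment conditions and a weight matrix. A minimal sketch of that criterion with uniform (identity) weights, written in Python purely as an illustration of the mechanics (the seed and search bounds are arbitrary), is:
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(12345)
y = rng.chisquare(df=1, size=500)

def Q(d, W=np.eye(2)):
    # Stack the two sample moment conditions evaluated at d ...
    g = np.array([np.mean(y - d), np.mean((y - d)**2 - 2*d)])
    return g @ W @ g  # ... and form the weighted quadratic form g'Wg

fit = minimize_scalar(Q, bounds=(0.01, 5), method="bounded")
print(fit.x)          # uniformly weighted GMM estimate of d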
We begin by drawing a sample of size 500 and use gmm to estimate the parameters using sample moment condition (1), which we illustrate is the same as the sample average.
. drop _all
. set obs 500
number of observations (_N) was 0, now 500
. set seed 12345
. generate double y = rchi2(1)
. gmm (y - {d}) , instruments( ) onestep
Step 1
Iteration 0: GMM criterion Q(b) = .82949186
Iteration 1: GMM criterion Q(b) = 1.262e-32
Iteration 2: GMM criterion Q(b) = 9.545e-35
note: model is exactly identified
GMM estimation
Number of parameters = 1
Number of moments = 1
Initial weight matrix: Unadjusted Number of obs = 500
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .9107644 .0548098 16.62 0.000 .8033392 1.01819
------------------------------------------------------------------------------
Instruments for equation 1: _cons
. mean y
Mean estimation Number of obs = 500
--------------------------------------------------------------
| Mean Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
y | .9107644 .0548647 .8029702 1.018559
--------------------------------------------------------------
The sample moment condition is the product of an observation-level error function that is specified inside the parentheses and an instrument, which is a vector of ones in this case. The parameter $$d$$ is enclosed in curly braces {}. We specify the onestep option because the number of parameters is the same as the number of moment conditions, which is to say that the estimator is exactly identified. When it is, each sample moment condition can be solved exactly, and there are no efficiency gains in optimally weighting the moment conditions.
We now illustrate that we could use the sample moment condition obtained from the variance to estimate $$d$$.
. gmm ((y-{d})^2 - 2*{d}) , instruments( ) onestep
Step 1
Iteration 0: GMM criterion Q(b) = 5.4361161
Iteration 1: GMM criterion Q(b) = .02909692
Iteration 2: GMM criterion Q(b) = .00004009
Iteration 3: GMM criterion Q(b) = 5.714e-11
Iteration 4: GMM criterion Q(b) = 1.172e-22
note: model is exactly identified
GMM estimation
Number of parameters = 1
Number of moments = 1
Initial weight matrix: Unadjusted Number of obs = 500
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .7620814 .1156756 6.59 0.000 .5353613 .9888015
------------------------------------------------------------------------------
Instruments for equation 1: _cons
While we cannot say anything definitive from only one draw, we note that this estimate is further from the truth and that the standard error is much larger than those based on the sample average.
Now, we use gmm to estimate the parameters using uniform weights.
. matrix I = I(2)
. gmm ( y - {d}) ( (y-{d})^2 - 2*{d}) , instruments( ) winitial(I) onestep
Step 1
Iteration 0: GMM criterion Q(b) = 6.265608
Iteration 1: GMM criterion Q(b) = .05343812
Iteration 2: GMM criterion Q(b) = .01852592
Iteration 3: GMM criterion Q(b) = .0185221
Iteration 4: GMM criterion Q(b) = .0185221
GMM estimation
Number of parameters = 1
Number of moments = 2
Initial weight matrix: user Number of obs = 500
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .7864099 .1050692 7.48 0.000 .5804781 .9923418
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons
The first set of parentheses specifies the first sample moment condition, and the second set of parentheses specifies the second sample moment condition. The options winitial(I) and onestep specify uniform weights.
Finally, we use gmm to estimate the parameters using two-step optimal weights. The weights are calculated using first-step consistent estimates.
. gmm ( y - {d}) ( (y-{d})^2 - 2*{d}) , instruments( ) winitial(I)
Step 1
Iteration 0: GMM criterion Q(b) = 6.265608
Iteration 1: GMM criterion Q(b) = .05343812
Iteration 2: GMM criterion Q(b) = .01852592
Iteration 3: GMM criterion Q(b) = .0185221
Iteration 4: GMM criterion Q(b) = .0185221
Step 2
Iteration 0: GMM criterion Q(b) = .02888076
Iteration 1: GMM criterion Q(b) = .00547223
Iteration 2: GMM criterion Q(b) = .00546176
Iteration 3: GMM criterion Q(b) = .00546175
GMM estimation
Number of parameters = 1
Number of moments = 2
Initial weight matrix: user Number of obs = 500
GMM weight matrix: Robust
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .9566219 .0493218 19.40 0.000 .8599529 1.053291
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons
All four estimators are consistent. Below we run a Monte Carlo simulation to see their relative efficiencies. We are most interested in the efficiency gains afforded by optimal GMM. We include the sample average, the sample variance, and the ML estimator discussed in Efficiency comparisons by Monte Carlo simulation. Theory tells us that the optimally weighted GMM estimator should be more efficient than the sample average but less efficient than the ML estimator.
The code below for the Monte Carlo builds on Efficiency comparisons by Monte Carlo simulation, Maximum likelihood estimation by mlexp: A chi-squared example, and Monte Carlo simulations using Stata. Click gmmchi2sim.do to download this code.
. clear all
. set seed 12345
. matrix I = I(2)
. postfile sim d_a d_v d_ml d_gmm d_gmme using efcomp, replace
. forvalues i = 1/2000 {
2. quietly drop _all
3. quietly set obs 500
4. quietly generate double y = rchi2(1)
5.
. quietly mean y
6. local d_a = _b[y]
7.
. quietly gmm ( (y-{d=`d_a'})^2 - 2*{d}) , instruments( ) ///
8. if e(converged)==1 {
9. local d_v = _b[d:_cons]
10. }
11. else {
12. local d_v = .
13. }
14.
. quietly mlexp (ln(chi2den({d=`d_a'},y)))
15. if e(converged)==1 {
16. local d_ml = _b[d:_cons]
17. }
18. else {
19. local d_ml = .
20. }
21.
. quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( ) ///
> winitial(I) onestep conv_maxiter(200)
22. if e(converged)==1 {
23. local d_gmm = _b[d:_cons]
24. }
25. else {
26. local d_gmm = .
27. }
28.
. quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( ) ///
29. if e(converged)==1 {
30. local d_gmme = _b[d:_cons]
31. }
32. else {
33. local d_gmme = .
34. }
35.
. post sim (`d_a') (`d_v') (`d_ml') (`d_gmm') (`d_gmme')
36.
. }
. postclose sim
. use efcomp, clear
. summarize
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
d_a | 2,000 1.00017 .0625367 .7792076 1.22256
d_v | 1,996 1.003621 .1732559 .5623049 2.281469
d_ml | 2,000 1.002876 .0395273 .8701175 1.120148
d_gmm | 2,000 .9984172 .1415176 .5947328 1.589704
d_gmme | 2,000 1.006765 .0540633 .8224731 1.188156
The simulation results indicate that the ML estimator is the most efficient (d_ml, std. dev. 0.0395), followed by the efficient GMM estimator (d_gmme, std. dev. 0.0541), followed by the sample average (d_a, std. dev. 0.0625), followed by the uniformly weighted GMM estimator (d_gmm, std. dev. 0.1415), and finally by the estimator based on the sample-variance moment condition (d_v, std. dev. 0.1732).
The estimator based on the sample-variance moment condition does not converge for 4 of 2,000 draws; this is why there are only 1,996 observations on d_v when there are 2,000 observations for the other estimators. These convergence failures occurred even though we used the sample average as the starting value of the nonlinear solver.
For a better idea about the distributions of these estimators, we graph the densities of their estimates.
Figure 1: Densities of the estimators
The density plots illustrate the efficiency ranking that we found from the standard deviations of the estimates.
The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance.
In each of the overidentified cases, the GMM estimator uses a weighted average of two sample moment conditions to estimate the mean. The first sample moment condition is the sample average. The second moment condition is the sample variance. As the Monte Carlo results showed, the sample variance provides a much less efficient estimator for the mean than the sample average.
The GMM estimator that places equal weights on the efficient and the inefficient estimator is much less efficient than a GMM estimator that places much less weight on the less efficient estimator.
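A stylized calculation makes this concrete. It is only a sketch: it treats the two moment-based estimators as uncorrelated, which is not exactly true here (the off-diagonal entry of the weight matrix shown below accounts for that correlation). If $\hat{d}_1$ and $\hat{d}_2$ are unbiased estimators of $d$ with variances $v_1 \ll v_2$, a weighted combination has
\[
\text{Var}\left[ w\,\hat{d}_1 + (1-w)\,\hat{d}_2 \right] = w^2 v_1 + (1-w)^2 v_2 ,
\]
which is minimized at $w^\ast = v_2/(v_1+v_2)$, close to 1 when $v_1 \ll v_2$. Equal weights give $(v_1+v_2)/4$, which is worse than using $\hat{d}_1$ alone whenever $v_2 > 3v_1$; in the simulation above, $v_2/v_1 \approx (0.173/0.0625)^2 \approx 7.7$, so the equally weighted combination loses to the sample average, just as the table shows.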
We display the weight matrix from our optimal GMM estimator to see how the sample moments were weighted.
. quietly gmm ( y - {d}) ( (y-{d})^2 - 2*{d}) , instruments( ) winitial(I)
. matlist e(W), border(rows)
-------------------------------------
| 1 | 2
| _cons | _cons
-------------+-----------+-----------
1 | |
_cons | 1.621476 |
-------------+-----------+-----------
2 | |
_cons | -.2610053 | .0707775
-------------------------------------
The diagonal elements show that the sample-mean moment condition receives more weight than the less efficient sample-variance moment condition.
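For reference, the second-step weight matrix in two-step GMM is the inverse of an estimate of the covariance matrix of the sample moment conditions (standard GMM practice, not something specific to this example):
\[
\widehat{W} = \widehat{S}^{-1}, \qquad
\widehat{S} = \frac{1}{N} \sum_{i=1}^{N} g(y_i,\hat{d})\, g(y_i,\hat{d})' ,
\qquad
g(y_i,d) = \begin{pmatrix} y_i - d \\ (y_i-d)^2 - 2d \end{pmatrix} .
\]
The much noisier variance-based moment gets the small diagonal weight, and the negative off-diagonal entry accounts for the correlation between the two moments.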
Done and undone
We used a simple example to illustrate how GMM exploits having more equations than parameters to obtain a more efficient estimator. We also illustrated that optimally weighting the different moments provides important efficiency gains over an estimator that uniformly weights the moment conditions.
Our cursory introduction to GMM is best supplemented with a more formal treatment like the one in Cameron and Trivedi (2005) or Wooldridge (2010).
Graph code appendix
use efcomp
local N = _N
kdensity d_a, n(`N') generate(x_a den_a) nograph
kdensity d_v, n(`N') generate(x_v den_v) nograph
kdensity d_ml, n(`N') generate(x_ml den_ml) nograph
kdensity d_gmm, n(`N') generate(x_gmm den_gmm) nograph
kdensity d_gmme, n(`N') generate(x_gmme den_gmme) nograph
twoway (line den_a x_a, lpattern(solid)) ///
(line den_v x_v, lpattern(dash)) ///
(line den_ml x_ml, lpattern(dot)) ///
(line den_gmm x_gmm, lpattern(dash_dot)) ///
(line den_gmme x_gmme, lpattern(shortdash))
References
Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.
Wooldridge, J. M. 2001. Applications of generalized method of moments estimation. Journal of Economic Perspectives 15(4): 87-100.
Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7624198794364929, "perplexity": 2582.6772045911143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860113541.87/warc/CC-MAIN-20160428161513-00073-ip-10-239-7-51.ec2.internal.warc.gz"}
|
http://www.lilypond.org/doc/v2.18/Documentation/usage/lilypond-output-in-other-programs.html
|
## 4.4 LilyPond output in other programs
This section shows methods to integrate text and music, different than the automated method with `lilypond-book`.
### Many quotes from a large score
If you need to quote many fragments from a large score, you can also use the clip systems feature, see Extracting fragments of music.
### Inserting LilyPond output into OpenOffice and LibreOffice
LilyPond notation can be added to OpenOffice.org and LibreOffice with OOoLilyPond.
### Inserting LilyPond output into other programs
To insert LilyPond output in other programs, use `lilypond` instead of `lilypond-book`. Each example must be created individually and added to the document; consult the documentation for that program. Most programs will be able to insert LilyPond output in ‘PNG’, ‘EPS’, or ‘PDF’ formats.
To reduce the white space around your LilyPond score, use the following options
```\paper{
indent=0\mm
line-width=120\mm
oddFooterMarkup=##f
bookTitleMarkup = ##f
scoreTitleMarkup = ##f
}
{ c1 }
```
To produce useful image files:

EPS

```
lilypond -dbackend=eps -dno-gs-load-fonts -dinclude-eps-fonts myfile.ly
```

PNG

```
lilypond -dbackend=eps -dno-gs-load-fonts -dinclude-eps-fonts --png myfile.ly
```

A transparent PNG
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9497371315956116, "perplexity": 11223.564057914526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131305143.93/warc/CC-MAIN-20150323172145-00161-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://www.ques10.com/p/50009/a-simply-supported-wooden-beam-of-span-13-m-having/
|
A simply supported wooden beam of span 1.3 m, having a cross-section 150 mm wide and 250 mm deep, carries a point load W at the centre. The permissible stresses are 7 N/mm² in bending and 1 N/mm² in shear.
Calculate the safe load W.
somd-2
Given:
For a rectangular S.S wooden beam -
b = 150mm, d = 250mm, L = 1.3m ,
Central point load = 'W' N
$\sigma_{b} = 7 \, N/mm^{2}$ and $q_{max} = 1 \, N/mm^{2}$
Solution:
M = Max. B.M. = $\frac{WL}{4} = \frac{W \times 1.3}{4} = 0.325W \, N \cdot m = 325W \, N \cdot mm$
S = Max. S.F. = Reaction = $\frac{W}{2} = 0.5W \, N$
For rectangular section,
A = b x d = 150 x 250 = 37500 $mm^{2}$
I = $\frac{bd^{3}}{12} = \frac{150 \times 250^{3}}{12} = 195.31 \times 10^{6} mm^{4}$
$y_{max} = d/2 = \frac{250}{2} = 125 mm$
Value of 'W' for bending stress criteria
$\frac{M}{I} = \frac{\sigma_{b}}{y_{max}} \quad \therefore M = \frac{\sigma_{b} \, I}{y_{max}}$
$\therefore 325W = \frac{7 \times 195.31 \times 10^{6}}{125}$
$\therefore W = 33653.41 \, N = 33.65 \, kN \quad \ldots (A)$
Value of 'W' for shear stress criteria
$q_{max} = \frac{1.5 \, S}{A}$
$\therefore 1 = \frac{1.5 \times 0.5W}{37500}$
$\therefore W = 50000 \, N = 50 \, kN \quad \ldots (B)$
Safe value of W = minimum of (A) and (B) = 33.65 kN
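A quick numerical check of the two criteria (a sketch in Python; the variable names are ours, not part of the original solution):

```python
# Safe central point load W for a simply supported rectangular timber beam,
# checking the bending and shear criteria separately.
b, d, L = 150.0, 250.0, 1300.0        # width, depth, span (mm)
sigma_allow, q_allow = 7.0, 1.0       # permissible bending / shear stress (N/mm^2)

I = b * d**3 / 12                     # second moment of area (mm^4)
y_max = d / 2                         # distance to extreme fibre (mm)
A = b * d                             # cross-sectional area (mm^2)

# Bending: M = W*L/4 must not exceed sigma_allow * I / y_max
W_bending = 4 * sigma_allow * I / (y_max * L)
# Shear: q_max = 1.5 * (W/2) / A must not exceed q_allow
W_shear = q_allow * A / (1.5 * 0.5)

print(W_bending / 1e3, W_shear / 1e3)             # ~33.65 kN and 50.0 kN
print("safe W =", min(W_bending, W_shear) / 1e3)  # ~33.65 kN
```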
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4175631105899811, "perplexity": 6846.812234821431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147234.52/warc/CC-MAIN-20200228135132-20200228165132-00185.warc.gz"}
|
http://math.stackexchange.com/users/11995/zbigniew?tab=questions
|
Zbigniew

Reputation: 227 (next privilege: 250 rep)
Impact: ~6k people reached

# 19 Questions

### Asymptotic stability of semi-trivial solution and existence of a nontrivial solution
jul 1 '11 at 1:31 Zbigniew 227

### Linear map in Hilbert space.
apr 25 '14 at 18:12 Davide Giraudo 98k

### $\arctan$ relation
jan 29 at 9:28 Zbigniew 227

### Maximal distance of a segment
jan 21 at 3:05 Zbigniew 227

### Confusion about Theorem structure of Structure theorem for Gaussian measures
dec 23 at 9:09 Zbigniew 227

### Conditions of expression to be positive
mar 23 at 19:00 barto 6,600

### convergence in law of Cauchy random variables.
feb 17 '14 at 16:09 Zbigniew 227
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7768405675888062, "perplexity": 9350.641961500765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163421.31/warc/CC-MAIN-20160205193923-00287-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://isotc211.geolexica.org/concepts/2039/
|
ISO TC 211 Geographic information/Geomatics
# ISO/TC 211 Geolexica
## Concept “coordinate dimension <coordinate geometry>”
Term ID
2039
source
ISO 19107:2019, (E), 3.17
eng
### coordinate dimension <coordinate geometry>
number of separate decisions needed to describe a position in a coordinate system
Note to entry: The coordinate dimension represents the number of choices made, and constraints can restrict choices. A barycentric coordinate has (n+1) offsets, but the underlying space is of dimension n. Homogeneous coordinates (wx, wy, wz, w) are actually 3-dimensional because the choice of "w" does not affect the position, i.e. (wx, wy, wz, w) = (x, y, z, 1) → (x, y, z), which is not affected by w. The dimension will be at most the count of the numbers in the coordinate, but it can be less if the coordinates are constrained in some manner.
ORIGIN: ISO/TC 211 Glossary of Terms - English (last updated: 2020-06-02)
JSON
/api/concepts/2039.json
SKOS in JSON-LD
/api/concepts/2039.jsonld
SKOS in RDF
/api/concepts/2039.ttl
info
• status: valid
• classification: preferred
• date accepted: 2019-12-02
Review
last review performed:
(2019-12-02)
status:
final
decision:
accepted
decision event:
Publication of document ISO 19107:2019(E)
notes:
This entry supersedes the entry for coordinate dimension in ISO 19107:2003(E)
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078134894371033, "perplexity": 7353.458167206056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00615.warc.gz"}
|
http://tex.stackexchange.com/questions/12638/how-to-change-catcode-in-a-macro
|
# How to change catcode in a macro
I want to change the catcode in a macro, but it seems it doesn't work. Can anyone help me?
\def\A{\catcode`\|=0 |bf{test}}
|bf{test} would not work as expected.
-
Li: A tip: If you indent lines by 4 spaces, then they are marked as a code sample. You can also highlight the code and click the "code" button (with "101010" on it). – lockstep Mar 4 '11 at 9:21
This doesn't work because the | was already read as part of the argument of \def\A and therefore has its catcode already before the included \catcode is executed.
You need to move the catcode change out of the macro:
\begingroup
\catcode`\|=0
\gdef\A{|bf{test}}
\endgroup
There are also other ways to do it: e-TeX provides \scantokens, which re-reads its content so that the catcodes are reapplied, and there is a trick to do it using \lowercase.
Note that in this example it actually makes no difference whether \ or | is used. It would matter if you also changed the catcode of \ to something else. If you tell us more about your exact application, more specific answers can be given.
Also note that your code example would make the catcode change active for the rest of the group \A is used in, which is most likely not what you intend.
-
Thank you. I know that in general I need to use this code outside of a macro. But I wish I could write a macro which reads a stream beginning with \ (such as \test), then stores test (without the \) somewhere. So, if I use \begingroup \catcode`\|=0 \catcode`\\=12 |@tfor|B:=\test|input|do{\if \|B |relax |else do something|fi} – Kuang-Li Huang Mar 4 '11 at 16:12
@Kuang-Li Huang: You can convert a macro like \test, read by your macro as argument #1, into test using \expandafter\@gobble\string#1. Note that \string returns the following token (e.g. a macro) as its string representation, e.g. the macro \test as the string "\test". The \@gobble then removes the backslash. The \expandafter is required to expand \string before \@gobble. Alternatively you can change the \escapechar variable, which tells \string which character to place for the backslash. – Martin Scharrer Mar 4 '11 at 16:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503605365753174, "perplexity": 2450.461008628525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776425157.62/warc/CC-MAIN-20140707234025-00018-ip-10-180-212-248.ec2.internal.warc.gz"}
|
https://www.lmfdb.org/Variety/Abelian/Fq/2/49/aba_kh
|
Properties
Label: 2.49.aba_kh
Base field: $\F_{7^{2}}$
Dimension: $2$
Ordinary: Yes
$p$-rank: $2$
Principally polarizable: Yes
Contains a Jacobian: Yes
Invariants
Base field: $\F_{7^{2}}$
Dimension: $2$
L-polynomial: $( 1 - 13 x + 49 x^{2} )^{2}$
Frobenius angles: $\pm0.121037718324$, $\pm0.121037718324$
Angle rank: $1$ (numerical)
Jacobians: 2
This isogeny class is not simple.
Newton polygon
This isogeny class is ordinary.
$p$-rank: $2$ Slopes: $[0, 0, 1, 1]$
Point counts
This isogeny class contains the Jacobians of 2 curves, and hence is principally polarizable:
• $y^2=2ax^6+2ax^5+ax^4+5ax^3+4ax^2+4ax+2a$
• $y^2=ax^6+ax^3+a$
$r$ 1 2 3 4 5 6 7 8 9 10 $A(\F_{q^r})$ 1369 5433561 13774308496 33230186580969 79798428896628649 191585480762348015616 459988518428635068448489 1104428436772447798142978889 2651731098423960721055865252496 6366805832081790770045470407222201
$r$ 1 2 3 4 5 6 7 8 9 10 $C(\F_{q^r})$ 24 2260 117078 5764324 282497064 13841594206 678225995016 33232953514564 1628413753008822 79792267189587700
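As a quick consistency check (standard facts about abelian varieties over finite fields, not data shown on this page): the point count over the base field is the L-polynomial evaluated at $1$, and the curve count is $q + 1 - t$ with $t$ the Frobenius trace, here $t = 2 \cdot 13 = 26$ from the squared quadratic factor:
$A(\F_{7^{2}}) = P(1) = (1 - 13 + 49)^{2} = 37^{2} = 1369$ and $C(\F_{7^{2}}) = 49 + 1 - 26 = 24$,
matching the $r = 1$ entries of the tables above.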
Decomposition and endomorphism algebra
Endomorphism algebra over $\F_{7^{2}}$
The isogeny class factors as 1.49.an$^{2}$ and its endomorphism algebra is $\mathrm{M}_{2}(\Q(\sqrt{-3}))$.
All geometric endomorphisms are defined over $\F_{7^{2}}$.
Base change
This isogeny class is not primitive. It is a base change from the following isogeny classes over subfields of $\F_{7^{2}}$.
Subfield Primitive Model $\F_{7}$ 2.7.a_an
Twists
Below is a list of all twists of this isogeny class.
Twist Extension Degree Common base change 2.49.a_act $2$ (not in LMFDB) 2.49.ba_kh $2$ (not in LMFDB) 2.49.al_cu $3$ (not in LMFDB) 2.49.ac_abt $3$ (not in LMFDB) 2.49.e_dy $3$ (not in LMFDB) 2.49.n_eq $3$ (not in LMFDB) 2.49.w_il $3$ (not in LMFDB) 2.49.a_ct $4$ (not in LMFDB) 2.49.ay_jh $6$ (not in LMFDB) 2.49.aw_il $6$ (not in LMFDB) 2.49.ap_eu $6$ (not in LMFDB) 2.49.an_eq $6$ (not in LMFDB) 2.49.aj_cy $6$ (not in LMFDB) 2.49.ae_dy $6$ (not in LMFDB) 2.49.a_ax $6$ (not in LMFDB) 2.49.a_dq $6$ (not in LMFDB) 2.49.c_abt $6$ (not in LMFDB) 2.49.j_cy $6$ (not in LMFDB) 2.49.l_cu $6$ (not in LMFDB) 2.49.p_eu $6$ (not in LMFDB) 2.49.y_jh $6$ (not in LMFDB) 2.49.a_adq $12$ (not in LMFDB) 2.49.a_x $12$ (not in LMFDB)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9565844535827637, "perplexity": 8337.494162738105}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00370.warc.gz"}
|
http://mathematica.stackexchange.com/questions/23633/defining-and-solving-systems-of-equations-using-matrix-tables
|
# Defining and Solving Systems of Equations Using Matrix Tables
I've defined a system of equations, but I been unable to get Mathematica to solve for the individual variables created by matrix tables.
Table[tair[j,i],{j,y},{i,x}]==c*Table[Sum[t[j,i],{j,j+1,y}]-Sum[tair[j, i],{j,j+1,y}],{j,y},{i,x}]
Table[tout[i],{i,x}]==c*Table[Sum[t[j,i],{j,1,y}]-Sum[tair[j,i],{j,1,y}],{i, x}]
tout[1]=16
tout[2]=16
tair[2,1]=0
tair[2,2]=0
C is a known constant. To confirm my math, I've used values of 2 for y and x and I've written out the 6 resulting equations in Mathematica, defining variables explicitly rather than through tables:
tair11,tair12,tair21,tair22,t11,t12,t21,t22,tout1,tout2
Using the Solve function, I get the right answer. But since I want to be able to increase the order of y and x to say, 100, a manual enumeration process is not practicable. My question: what must I do syntactically to get Mathematica to solve for these variables? Even a simple example not related to my code would be most helpful.
-
Like this? Array[C, 5] /. First[Solve[Thread[HilbertMatrix[5].Array[C, 5] == Range[5]], Array[C, 5]]] – Guess who it is. Apr 19 '13 at 18:02
This is intriguing, thanks. I'm not familiar with the thread command, let me do some homework there. – Thermoguy Apr 19 '13 at 18:27
Thread doesn't seem to help me solve for the values :( – Thermoguy Apr 19 '13 at 23:08
Finally figured out my problem, thanks – Thermoguy Apr 19 '13 at 23:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7282645106315613, "perplexity": 965.9358955079596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928780.77/warc/CC-MAIN-20150521113208-00061-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.ijert.org/acoustic-echo-cancellation-using-variable-step-size-nlms-algorithms
|
# Acoustic Echo Cancellation using Variable Step-Size NLMS Algorithms
GOPALAIAH
Research Scholar, Dayananda Sagar College of Engineering
Bangalore, India [email protected]
Dr.K. SURESH
Professor
Sri Darmasthala Manjunatheswara Institute of Technology Ujire, India
[email protected]
Abstract—The purpose of a variable step-size normalized LMS filter is to solve the dilemma of fast convergence rate versus low excess MSE. In the past two decades, many VSS-NLMS algorithms have been presented and have claimed good convergence and tracking properties. This paper summarizes several promising algorithms and gives a performance comparison via extensive simulation. Simulation results demonstrate that Benesty's NPVSS and our GSER have the best performance in both time-invariant and time-varying systems.
Index Terms—Adaptive filters, normalized least mean square (NLMS), variable step-size NLMS, regularization parameter.
1. INTRODUCTION
Adaptive filtering algorithms have been widely employed in many signal processing applications. Among them, the normalized least mean square (NLMS) adaptive filter is the most popular due to its simplicity. The stability of the basic NLMS is controlled by a fixed step-size constant, which also governs the rate of convergence, the speed of tracking, and the amount of steady-state excess mean-square error (MSE).
In practice, the NLMS is usually implemented by adding to the squared norm of the input vector a small positive number, commonly called the regularization parameter. For the basic NLMS algorithm, the role of this parameter is to prevent the associated denominator from getting too close to zero, so as to keep the filter from diverging. Since the performance of regularized NLMS is affected by the overall step-size parameter, the regularization parameter has an effect on the convergence properties and the excess MSE as well; i.e., too large a regularization parameter may slow down the adaptation of the filter in certain applications.
There are conflicting objectives between fast convergence and low excess MSE for NLMS with a fixed regularization parameter. In the past two decades, many variable step-size NLMS (VSS-NLMS) algorithms have been proposed to solve this dilemma of the conventional NLMS. For example, Kwong used the power of the instantaneous error to introduce a variable step-size LMS (VSSLMS) filter [6]. This VSSLMS has a larger step size when the error is large, and a smaller step size when the error is small. Later, Aboulnasr pointed out that the VSSLMS algorithm is fairly sensitive to the accompanying noise, and presented a modified VSSLMS (MVSS) algorithm to alleviate the influence of uncorrelated disturbance. The step-size update of MVSS is adjusted by utilizing an estimate of the autocorrelation of errors at adjacent time samples. Recently, Shin, Sayed, and Song used the norm of the filter coefficient error vector as a criterion for the optimal variable step size, and proposed a variable step-size affine projection algorithm (VS-APA) as well as a variable step-size NLMS (VS-NLMS). Lately, Benesty proposed a nonparametric VSS NLMS algorithm (NPVSS), which does not need as much parameter tuning as many other variable step-size algorithms.
Another type of VSS algorithm has a time-varying regularization parameter, which is fixed in the conventional regularized NLMS filters. By making the regularization parameter gradient-adaptive, Mandic presented a generalized normalized gradient descent (GNGD) algorithm. Mandic claimed that the GNGD adapts its learning rate according to the dynamics of the input signals, and that its performance is bounded from below by the performance of the NLMS. Very recently, Mandic introduced another scheme with a hybrid filter structure to further improve the steady-state misadjustment of the GNGD. Choi, Shin, and Song then proposed a robust regularized NLMS (RR-NLMS) filter, which uses a normalized gradient to update the regularization parameter. While most variable step-size algorithms need to tune several parameters for better performance, we recently presented an almost tuning-free generalized square-error-regularized NLMS algorithm (GSER). Our GSER exhibits very good performance, with fast convergence, quick tracking, and low misalignment.
The purpose of this paper is to provide a fair comparison among these VSS algorithms. In Section II, we summarize the algorithms. Section III illustrates the simulation results. Conclusions are given in Section IV.
2. VARIABLE STEP-SIZE ALGORITHMS
In this section, we summarize several variable step-size adaptive filtering algorithms, including the VSSLMS, MVSS, VS-APA, VS-NLMS, NPVSS, GNGD, RR-NLMS, and GSER algorithms.
Let d(n) be the desired response signal of the adaptive filter
d(n) = x^T(n) h(n) + v(n) ,  (1)
where h(n) denotes the coefficient vector of the unknown system with length M,
h(n) = [h_0(n), h_1(n), …, h_{M-1}(n)]^T ,  (2)
x(n) is the input vector
x(n) = [x(n), x(n-1), …, x(n-M+1)]^T ,  (3)
and v(n) is the system noise that is independent of x(n).
Let the adaptive filter have the same structure and the same order as the unknown system, and denote the coefficient vector of the filter at iteration n by w(n). We express the a priori estimation error as
e(n) = d(n) - x^T(n) w(n) .  (4)
A. VSSLMS algorithm
Kwong used the squared instantaneous a priori estimation error to update the step size as
μ(n+1) = α μ(n) + γ e²(n) ,  (5)
where 0 < α < 1, γ > 0, and μ(n+1) is restricted to a pre-decided interval [μ_min, μ_max]. The filter coefficient vector update recursion is given by
w(n+1) = w(n) + μ(n) e(n) x(n) .  (6)
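As an illustration of update rules (5)-(6), the following Python sketch implements the VSSLMS recursion for a system-identification setup; the parameter values are placeholders for illustration only and are not the values used in the simulations reported below.

```python
import numpy as np

def vsslms(x, d, M, alpha=0.97, gamma=4.8e-4, mu_min=1e-4, mu_max=0.1):
    """Sketch of the VSSLMS filter of equations (5)-(6).
    x: input signal, d: desired signal, M: filter length."""
    w = np.zeros(M)                    # adaptive coefficient vector w(n)
    mu = mu_max                        # initial step size
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]  # input vector [x(n), ..., x(n-M+1)]
        e[n] = d[n] - w @ xn           # a priori error, eq. (4)
        mu = alpha * mu + gamma * e[n] ** 2   # step-size update, eq. (5)
        mu = np.clip(mu, mu_min, mu_max)      # restrict to [mu_min, mu_max]
        w = w + mu * e[n] * xn         # coefficient update, eq. (6)
    return w, e
```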
B. MVSS algorithm
Aboulnasr utilized an estimate of the autocorrelation of e(n) at adjacent time samples to update the variable step size as
μ(n+1) = α μ(n) + γ p²(n) ,  (7)
where
p(n) = β p(n-1) + (1-β) e(n) e(n-1) ,  (8)
and 0 < β < 1.
C. VS-APA and VS-NLMS
Shin, Sayed, and Song proposed a variable step-size affine projection algorithm (VS-APA), which employed an error vector, instead of the scalar error used in VSSLMS [6] and MVSS, to adjust the variable step size. The coefficient vector update recursion is given by
w(n+1) = w(n) + μ(n) X(n) (X^T(n)X(n) + ε₁I)^{-1} e(n) ,  (9)
where ε₁ is a small positive number, I is a unit matrix of size K × K, X(n) is an M × K input matrix defined as
X(n) = [x(n), x(n-1), …, x(n-K+1)] ,  (10)
and
e(n) = [e(n), e(n-1), …, e(n-K+1)]^T .  (11)
The variable step size μ(n) is obtained from p(n), where the constant involved is a positive number proportional to K, μ_max < 2, and p(n) is an M × 1 vector recursively given by
p(n) = α p(n-1) + (1-α) X(n) (X^T(n)X(n) + ε₁I)^{-1} e(n) .  (13)
A variable step-size NLMS (VS-NLMS) is obtained as a special case of VS-APA by choosing K = 1.
D. NPVSS algorithm
Benesty argued that many variable step-size algorithms may not work reliably because they need to set several parameters which are not easy to tune in practice, and proposed a nonparametric variable step-size NLMS algorithm (NPVSS). The filter coefficient vector update recursion is given as that of (6), and the variable step size is updated using the noise power, where ε₃ and ε₄ are positive numbers, σ_v² is the power of the system noise, and the power of the error signal is estimated recursively.
E. GNGD algorithm
The GNGD belongs to the family of time-varying regularized VSS algorithms. The filter coefficient vector is updated with a gradient-adaptive regularization term, where c is a fixed step size and the regularization parameter ε(n) is recursively calculated; the adaptation gain is a parameter that needs tuning, and the initial value ε(0) has to be set as well.
F. RR-NLMS algorithm
Choi's RR-NLMS algorithm is a modified version of GNGD. The regularization parameter is updated using a normalized gradient, where sgn(x) represents the sign function and ε_min is a parameter that needs tuning.
G. GSER algorithm
The GSER updates w(n) with a step size regularized by the estimated power of the error signal; a positive parameter makes the filter more general.
3. SIMULATION RESULTS
In this section, we present comparison results from several experiments with the VSSLMS, MVSS, VS-APA, VS-NLMS, NPVSS, GNGD, RR-NLMS, and GSER algorithms. The adaptive filter is used to identify a 128-tap acoustic echo system h_o(n). We have used the normalized squared coefficient error (NSCE) to evaluate the performance of the algorithms. The NSCE is defined as follows.
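The standard definition of this quantity, which we assume is the one used here, is

NSCE(n) = 10 log10( ||h_o - w(n)||² / ||h_o||² )  [dB],

i.e., the squared coefficient error normalized by the squared norm of the true echo path.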
We have run extensive simulations, and the results are reasonably consistent. In this section, we show some simulation results with the following parameter setup: α = β = 0.99, γ = 5×10⁻⁵, ε₁ = 0.1, ε₂ = 10⁻⁴, ε₃ = 20, ε₄ = 10⁻³, μ_min = 10⁻⁴, ε_min = 10⁻³, μ_max = 1, c = 1, one further constant equal to 0.15, and K = 4. We assume that the power of the system noise, σ_v², is available for the NPVSS algorithm.
1. Time-Invariant System
The reference input, x(n), is either a zero-mean, unit-variance white Gaussian signal or a second-order AR process. The power of the echo system is about 1. An independent white Gaussian signal with zero mean and variance 0.001 is added to the system output. Figures 1 and 2 are the results for white Gaussian signal input and AR process input, respectively. The NSCE curves are ensemble averages over 20 independent runs. As can be seen, VS-NLMS has the worst performance. GNGD and RR-NLMS have similar convergence speed in the early period; GNGD exhibits very limited performance in the later phase, while RR-NLMS keeps adapting to a lower NSCE. However, we notice that RR-NLMS is outperformed by the rest of the algorithms in this category. VSSLMS and MVSS have the same performance in the simulation. VS-APA, NPVSS and GSER are in the best group, with fast convergence speed and low NSCE.
2. Time-Varying System
Tracking a time-varying system is an important issue in adaptive signal processing. We compare RR-NLMS, VS-APA, NPVSS and GSER in a scenario in which the acoustic echo system h_o(n) is changed to its negative value at sample 35,000. The additive zero-mean white Gaussian noise, v(n), has variance either 0.01 or 0.001. Figures 3 and 4 are the results for white Gaussian signal input with 30-dB signal-to-noise ratio (SNR) and 20-dB SNR, respectively. All algorithms have fast tracking performance. RR-NLMS has the worst NSCE. VS-APA achieves the lowest NSCE when the SNR is 30 dB. However, the NSCE of VS-APA is 5 dB worse than that of NPVSS and GSER. Notice that VS-APA exhibits a slow convergence rate. NPVSS has a slightly better NSCE than GSER in the 20-dB SNR case. It should be noted that NPVSS assumes that σ_v² is available in the simulation. Figures 5 and 6 are the results for AR process input with 30-dB SNR and 20-dB SNR, respectively. RR-NLMS has the worst NSCE and shows slower tracking behavior compared to the white Gaussian signal input case. VS-APA still has problems in the low-SNR situation: the NSCE of VS-APA is 10 dB worse than that of its competing algorithms. GSER has the fastest tracking and convergence speed in the 30-dB SNR case.
4. CONCLUSIONS
Many variable step-size NLMS algorithms have been proposed to achieve fast convergence rate, rapid tracking, and low misalignment in the past two decades. This paper summarized several promising algorithms and presented a performance comparison by means of extensive simulation. According to the simulation, Benesty's NPVSS and our GSER have the best performance in both time-invariant and time-varying systems.
REFERENCES
1. T. Aboulnasr and K Mayyas, A robust variable step-size LMS-type algorithm: analysis and simulations, IEEE Transactions on Signal Processing, Vol. 45, No. 3, pp. 631 639, March 1997.
2. M. T. Akhtar, M. Abe, and M. Kawamata, A new variable step size LMS algorithm-based method for improved online secondary path modeling in active noise control systems, IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, No. 2, pp. 720-726, March 2006.
3. J. Benesty et al., A nonparametric VSS NLMS algorithm, IEEE Signal Processing Letters, Vol. 13, No. 10, pp 581-584, Oct. 2006.
4. Y. S. Choi, H. C. Shin, and W. J. Song, Robust regularization for normalized LMS algorithms, IEEE Transactions on Circuits and Systems II, Express Briefs, Vol. 53, No. 8, pp. 627631, Aug. 2006.
5. Y. S. Choi, H. C. Shin, and W. J. Song, Adaptive regularization matrix for affine projection algorithm, IEEE Transactions on Circuits and Systems II, Express Briefs, Vol. 54, No. 12, pp. 10871091, Dec. 2007.
6. R. H. Kwong and E. W. Johnston, A variable step size LMS algorithm, IEEE Transactions on Signal Processing, Vol. 40, pp. 1633 – 1642, July 1992.
7. J. Lee, H. C. Huang, and Y. N. Yang, The generalized square-error-regularized LMS algorithm, Proceedings of WCECS 2008, pp. 157-160, Oct. 2008.
8. D. P. Mandic et al., Collaborative adaptive learning using hybrid filters, Proceedings of 2007 IEEE ICASSP, pp. III 921-924, April 2007.
9. D. P. Mandic, A generalized normalized gradient descent algorithm, IEEE Signal Processing Letters, Vol. 11, No. 2, pp. 115-118, Feb. 2004.
10. H. C. Shin, A. H. Sayed, and W. J. Song, Variable step-size NLMS and affine projection algorithms, IEEE Signal Processing Letters, Vol. 11, No. 2, pp. 132 – 135, Feb. 2004.
11. J. M. Valin and I. B. Collings, Interference-normalized least mean square algorithm, IEEE Signal Processing Letters, Vol. 14, No. 12, pp. 988-991, Dec. 2007.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9036243557929993, "perplexity": 2675.8119987524337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494449.56/warc/CC-MAIN-20190220044622-20190220070622-00428.warc.gz"}
|
https://jayryablon.wordpress.com/2013/02/
|
# Lab Notes for a Scientific Revolution (Physics)
## February 22, 2013
### Two New Papers: Grand Unified SU(8) Gauge Theory Based on Baryons which are Yang-Mills Magnetic Monopoles . . . and . . . Predicting the Neutron and Proton Masses Based on Baryons which are Yang-Mills Magnetic Monopoles and Koide Mass Triplets
I have not had the chance to make my readers aware of two new recent papers. The first is at Grand Unified SU(8) Gauge Theory Based on Baryons which are Yang-Mills Magnetic Monopoles and has been accepted for publication by the Journal of Modern Physics, and will appear in their April 2013 “Special Issue on High Energy Physics.” The second is at Predicting the Neutron and Proton Masses Based on Baryons which are Yang-Mills Magnetic Monopoles and Koide Mass Triplets and is presently under review.
The latter paper on the neutron and proton masses fulfills a goal that I have had for 42 years, which I have spoken about previously in the blog, of finding a way to predict the proton and neutron masses based on the masses of the fermions, specifically the electron and the up and down quark (and, as you will see, the Fermi vev). Between this latter paper and my earlier paper at Predicting the Binding Energies of the 1s Nuclides with High Precision, Based on Baryons which are Yang-Mills Magnetic Monopoles, I have made six distinct, independent predictions with accuracy ranging from parts in 10,000 for the neutron plus proton mass sum, to an exact relationship for the proton minus neutron mass difference, parts per 100,000 for the 3He binding energy, parts per million for the 3H and 4He binding energies, and parts per ten million for the 2H binding energy (based on the proton minus neutron mass difference being made exact). I have also proposed, in the binding energies paper, a new approach to nuclear fusion, known as "resonant fusion," in which one bathes hydrogen in gamma radiation at certain specified frequencies that should catalyze the fusion process.
In addition, the neutron and proton mass paper appears to also provide a seventh prediction for part of the determinant of the CKM generational mixing matrix. And the GUT paper establishes the theoretical foundation for exactly three fermion generations and the observed mixing patterns, answering Rabi’s question “who ordered this?”.
All of this, in turn, is based on my foundational paper Why Baryons Are Yang-Mills Magnetic Monopoles. Taken together, these four papers place nuclear physics on a new foundation, with empirical support from multiple independent data points. The odds against six independent parts-per-10^6 concurrences being mere coincidence are one in 10^36, and I now actually have about ten independent data points of very tight empirical support. If you want to start learning nuclear physics as it will be taught around the world in another decade, this is where you need to start.
Best to all,
Jay
Blog at WordPress.com.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106796383857727, "perplexity": 1345.5142632911427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607314.32/warc/CC-MAIN-20200122161553-20200122190553-00526.warc.gz"}
|
http://paperity.org/p/77586818/loop-corrections-to-the-antibrane-potential
|
# Loop corrections to the antibrane potential
Journal of High Energy Physics, Jul 2016
Antibranes provide some of the most generic ways to uplift Anti-de Sitter flux compactifications to de Sitter, and there is a growing body of evidence that antibranes placed in long warped throats such as the Klebanov-Strassler warped deformed conifold solution have a brane-brane-repelling tachyon. This tachyon was first found in the regime of parameters in which the backreaction of the antibranes is large, and its existence was inferred from a highly nontrivial cancellation of certain terms in the inter-brane potential. We use a brane effective action approach, similar to that proposed by Michel, Mintun, Polchinski, Puhm and Saad in [29], to analyze antibranes in Klebanov-Strassler when their backreaction is small, and find a regime of parameters where all perturbative contributions to the action can be computed explicitly. We find that the cancellation found at strong coupling is also present in the weak-coupling regime, and we establish its existence to all loops. Our calculation indicates that the spectrum of the antibrane worldvolume theory is not gapped, and may generically have a tachyon. Hence uplifting mechanisms involving antibranes remain questionable even when backreaction is small.
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2FJHEP07%282016%29132.pdf
Iosif Bena, Johan Blåbäck, David Turton. Loop corrections to the antibrane potential, Journal of High Energy Physics, 2016, 132, DOI: 10.1007/JHEP07(2016)132
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751664161682129, "perplexity": 1889.86100885665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893530.89/warc/CC-MAIN-20180124070239-20180124090239-00620.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/dcds.2011.29.109
|
# American Institute of Mathematical Sciences
January 2011, 29(1): 109-140. doi: 10.3934/dcds.2011.29.109
## Detectable canard cycles with singular slow dynamics of any order at the turning point
1 Hasselt University, Campus Diepenbeek, Agoralaan-Gebouw D, B-3590 Diepenbeek, Belgium 2 Hasselt University, Campus Diepenbeek, Agoralaan gebouw D, B-3590 Diepenbeek, Belgium
Received January 2010 Revised April 2010 Published September 2010
This paper deals with the study of limit cycles that appear in a class of planar slow-fast systems, near a "canard'' limit periodic set of FSTS-type. Limit periodic sets of FSTS-type are closed orbits, composed of a Fast branch, an attracting Slow branch, a Turning point, and a repelling Slow branch. Techniques to bound the number of limit cycles near a FSTS-l.p.s. are based on the study of the so-called slow divergence integral, calculated along the slow branches. In this paper, we extend the technique to the case where the slow dynamics has singularities of any (finite) order that accumulate to the turning point, and in which case the slow divergence integral becomes unbounded. Bounds on the number of limit cycles near the FSTS-l.p.s. are derived by examining appropriate derivatives of the slow divergence integral.
Citation: P. De Maesschalck, Freddy Dumortier. Detectable canard cycles with singular slow dynamics of any order at the turning point. Discrete & Continuous Dynamical Systems - A, 2011, 29 (1) : 109-140. doi: 10.3934/dcds.2011.29.109
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6819310784339905, "perplexity": 4854.848078459361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219691.59/warc/CC-MAIN-20200924163714-20200924193714-00402.warc.gz"}
|
https://socialsci.libretexts.org/Under_Construction/Purgatory/Book%3A_American_Government_(OpenStax)/03%3A_American_Federalism/3.02%3A_The_Divisions_of_Power
|
# 3.2: The Divisions of Power
## Learning Objectives
By the end of this section, you will be able to:
• Explain the concept of federalism
• Discuss the constitutional logic of federalism
• Identify the powers and responsibilities of federal, state, and local governments
Modern democracies divide governmental power in two general ways; some, like the United States, use a combination of both structures. The first and more common mechanism shares power among three branches of government—the legislature, the executive, and the judiciary. The second, federalism, apportions power between two levels of government: national and subnational. In the United States, the term federal government refers to the government at the national level, while the term states means governments at the subnational level.
## FEDERALISM DEFINED AND CONTRASTED
Federalism is an institutional arrangement that creates two relatively autonomous levels of government, each possessing the capacity to act directly on behalf of the people with the authority granted to it by the national constitution.
See John Kincaid. 1975. “Federalism.” In Civitas: A Framework for Civil Education, eds. Charles Quigley and Charles Bahmueller. Calabasas, CA: Center for Civic Education, 391–392; William S. Riker. 1975. “Federalism.” In Handbook of Political Science, eds. Fred Greenstein and Nelson Polsby. Reading, MA: Addison-Wesley, 93–172.
Although today’s federal systems vary in design, five structural characteristics are common to the United States and other federal systems around the world, including Germany and Mexico.
First, all federal systems establish two levels of government, with both levels being elected by the people and each level assigned different functions. The national government is responsible for handling matters that affect the country as a whole, for example, defending the nation against foreign threats and promoting national economic prosperity. Subnational, or state governments, are responsible for matters that lie within their regions, which include ensuring the well-being of their people by administering education, health care, public safety, and other public services. By definition, a system like this requires that different levels of government cooperate, because the institutions at each level form an interacting network. In the U.S. federal system, all national matters are handled by the federal government, which is led by the president and members of Congress, all of whom are elected by voters across the country. All matters at the subnational level are the responsibility of the fifty states, each headed by an elected governor and legislature. Thus, there is a separation of functions between the federal and state governments, and voters choose the leader at each level.
Garry Willis, ed. 1982. The Federalist Papers by Alexander Hamilton, James Madison and John Jay. New York: Bantam Books, 237.
The second characteristic common to all federal systems is a written national constitution that cannot be changed without the substantial consent of subnational governments. In the American federal system, the twenty-seven amendments added to the Constitution since its adoption were the result of an arduous process that required approval by two-thirds of both houses of Congress and three-fourths of the states. The main advantage of this supermajority requirement is that no changes to the Constitution can occur unless there is broad support within Congress and among states. The potential drawback is that numerous national amendment initiatives—such as the Equal Rights Amendment (ERA), which aims to guarantee equal rights regardless of sex—have failed because they cannot garner sufficient consent among members of Congress or, in the case of the ERA, the states.
Third, the constitutions of countries with federal systems formally allocate legislative, judicial, and executive authority to the two levels of government in such a way as to ensure each level some degree of autonomy from the other. Under the U.S. Constitution, the president assumes executive power, Congress exercises legislative powers, and the federal courts (e.g., U.S. district courts, appellate courts, and the Supreme Court) assume judicial powers. In each of the fifty states, a governor assumes executive authority, a state legislature makes laws, and state-level courts (e.g., trial courts, intermediate appellate courts, and supreme courts) possess judicial authority.
While each level of government is somewhat independent of the others, a great deal of interaction occurs among them. In fact, the ability of the federal and state governments to achieve their objectives often depends on the cooperation of the other level of government. For example, the federal government’s efforts to ensure homeland security are bolstered by the involvement of law enforcement agents working at local and state levels. On the other hand, the ability of states to provide their residents with public education and health care is enhanced by the federal government’s financial assistance.
Another common characteristic of federalism around the world is that national courts commonly resolve disputes between levels and departments of government. In the United States, conflicts between states and the federal government are adjudicated by federal courts, with the U.S. Supreme Court being the final arbiter. The resolution of such disputes can preserve the autonomy of one level of government, as illustrated recently when the Supreme Court ruled that states cannot interfere with the federal government’s actions relating to immigration.
Arizona v. United States, 567 U.S. __ (2012).
In other instances, a Supreme Court ruling can erode that autonomy, as demonstrated in the 1940s when, in United States v. Wrightwood Dairy Co., the Court enabled the federal government to regulate commercial activities that occurred within states, a function previously handled exclusively by the states.
United States v. Wrightwood Dairy Co., 315 U.S. 110 (1942).
Finally, subnational governments are always represented in the upper house of the national legislature, enabling regional interests to influence national lawmaking.
Ronald L. Watts. 1999. Comparing Federal Systems, 2nd ed. Kingston, Ontario: McGill-Queen’s University, 6–7; Daniel J. Elazar. 1992. Federal Systems of the World: A Handbook of Federal, Confederal and Autonomy Arrangements. Harlow, Essex: Longman Current Affairs.
In the American federal system, the U.S. Senate functions as a territorial body by representing the fifty states: Each state elects two senators to ensure equal representation regardless of state population differences. Thus, federal laws are shaped in part by state interests, which senators convey to the federal policymaking process.
The governmental design of the United States is unusual; most countries do not have a federal structure. Aside from the United States, how many other countries have a federal system?
Division of power can also occur via a unitary structure or confederation (Figure). In contrast to federalism, a unitary system makes subnational governments dependent on the national government, where significant authority is concentrated. Before the late 1990s, the United Kingdom’s unitary system was centralized to the extent that the national government held the most important levers of power. Since then, power has been gradually decentralized through a process of devolution, leading to the creation of regional governments in Scotland, Wales, and Northern Ireland as well as the delegation of specific responsibilities to them. Other democratic countries with unitary systems, such as France, Japan, and Sweden, have followed a similar path of decentralization.
In a confederation, authority is decentralized, and the central government’s ability to act depends on the consent of the subnational governments. Under the Articles of Confederation (the first constitution of the United States), states were sovereign and powerful while the national government was subordinate and weak. Because states were reluctant to give up any of their power, the national government lacked authority in the face of challenges such as servicing the war debt, ending commercial disputes among states, negotiating trade agreements with other countries, and addressing popular uprisings that were sweeping the country. As the brief American experience with confederation clearly shows, the main drawback with this system of government is that it maximizes regional self-rule at the expense of effective national governance.
## FEDERALISM AND THE CONSTITUTION
The Constitution contains several provisions that direct the functioning of U.S. federalism. Some delineate the scope of national and state power, while others restrict it. The remaining provisions shape relationships among the states and between the states and the federal government.
The enumerated powers of the national legislature are found in Article I, Section 8. These powers define the jurisdictional boundaries within which the federal government has authority. In seeking not to replay the problems that plagued the young country under the Articles of Confederation, the Constitution’s framers granted Congress specific powers that ensured its authority over national and foreign affairs. To provide for the general welfare of the populace, it can tax, borrow money, regulate interstate and foreign commerce, and protect property rights, for example. To provide for the common defense of the people, the federal government can raise and support armies and declare war. Furthermore, national integration and unity are fostered with the government’s powers over the coining of money, naturalization, postal services, and other responsibilities.
The last clause of Article I, Section 8, commonly referred to as the elastic clause or the necessary and proper clause, enables Congress “to make all Laws which shall be necessary and proper for carrying” out its constitutional responsibilities. While the enumerated powers define the policy areas in which the national government has authority, the elastic clause allows it to create the legal means to fulfill those responsibilities. However, the open-ended construction of this clause has enabled the national government to expand its authority beyond what is specified in the Constitution, a development also motivated by the expansive interpretation of the commerce clause, which empowers the federal government to regulate interstate economic transactions.
The powers of the state governments were never listed in the original Constitution. The consensus among the framers was that states would retain any powers not prohibited by the Constitution or delegated to the national government.
Jack Rakove. 2007. James Madison and the Creation of the American Republic. New York: Pearson; Samuel H. Beer. 1998. To Make a Nation: The Rediscovery of American Federalism. Cambridge, MA: Harvard University Press.
However, when it came time to ratify the Constitution, a number of states requested that an amendment be added explicitly identifying the reserved powers of the states. What these Anti-Federalists sought was further assurance that the national government’s capacity to act directly on behalf of the people would be restricted, which the first ten amendments (Bill of Rights) provided. The Tenth Amendment affirms the states’ reserved powers: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” Indeed, state constitutions had bills of rights, which the first Congress used as the source for the first ten amendments to the Constitution.
Some of the states’ reserved powers are no longer exclusively within state domain, however. For example, since the 1940s, the federal government has also engaged in administering health, safety, income security, education, and welfare to state residents. The boundary between intrastate and interstate commerce has become indefinable as a result of broad interpretation of the commerce clause. Shared and overlapping powers have become an integral part of contemporary U.S. federalism. These concurrent powers range from taxing, borrowing, and making and enforcing laws to establishing court systems (Figure).
Elton E. Richter. 1929. “Exclusive and Concurrent Powers in the Federal Constitution,” Notre Dame Law Review 4, No. 8: 513–542. scholarship.law.nd.edu/cgi/vi...6&context=ndlr
Article I, Sections 9 and 10, along with several constitutional amendments, lay out the restrictions on federal and state authority. The most important restriction Section 9 places on the national government prevents measures that cause the deprivation of personal liberty. Specifically, the government cannot suspend the writ of habeas corpus, which enables someone in custody to petition a judge to determine whether that person’s detention is legal; pass a bill of attainder, a legislative action declaring someone guilty without a trial; or enact an ex post facto law, which criminalizes an act retroactively. The Bill of Rights affirms and expands these constitutional restrictions, ensuring that the government cannot encroach on personal freedoms.
The states are also constrained by the Constitution. Article I, Section 10, prohibits the states from entering into treaties with other countries, coining money, and levying taxes on imports and exports. Like the federal government, the states cannot violate personal freedoms by suspending the writ of habeas corpus, passing bills of attainder, or enacting ex post facto laws. Furthermore, the Fourteenth Amendment, ratified in 1868, prohibits the states from denying citizens the rights to which they are entitled by the Constitution, due process of law, or the equal protection of the laws. Lastly, three civil rights amendments—the Fifteenth, Nineteenth, and Twenty-Sixth—prevent both the states and the federal government from abridging citizens’ right to vote based on race, sex, and age. This topic remains controversial because states have not always ensured equal protection.
The supremacy clause in Article VI of the Constitution regulates relationships between the federal and state governments by declaring that the Constitution and federal law are the supreme law of the land. This means that if a state law clashes with a federal law found to be within the national government’s constitutional authority, the federal law prevails. The intent of the supremacy clause is not to subordinate the states to the federal government; rather, it affirms that one body of laws binds the country. In fact, all national and state government officials are bound by oath to uphold the Constitution regardless of the offices they hold. Yet enforcement is not always that simple. In the case of marijuana use, which the federal government defines to be illegal, twenty-three states and the District of Columbia have nevertheless established medical marijuana laws, others have decriminalized its recreational use, and four states have completely legalized it. The federal government could act in this area if it wanted to. For example, in addition to the legalization issue, there is the question of how to treat the money from marijuana sales, which the national government designates as drug money and regulates under laws regarding its deposit in banks.
Various constitutional provisions govern state-to-state relations. Article IV, Section 1, referred to as the full faith and credit clause or the comity clause, requires the states to accept court decisions, public acts, and contracts of other states. Thus, an adoption certificate or driver’s license issued in one state is valid in any other state. The movement for marriage equality has put the full faith and credit clause to the test in recent decades. In light of Baehr v. Lewin, a 1993 ruling in which the Hawaii Supreme Court asserted that the state’s ban on same-sex marriage was unconstitutional, a number of states became worried that they would be required to recognize those marriage certificates.
Baehr v. Lewin. 1993. 74 Haw. 530.
To address this concern, Congress passed and President Clinton signed the Defense of Marriage Act (DOMA) in 1996. The law declared that “No state (or other political subdivision within the United States) need recognize a marriage between persons of the same sex, even if the marriage was concluded or recognized in another state.” The law also barred federal benefits for same-sex partners.
DOMA clearly made the topic a state matter. It denoted a choice for states, which led many states to take up the policy issue of marriage equality. Scores of states considered legislation and ballot initiatives on the question. The federal courts took up the issue with zeal after the U.S. Supreme Court in United States v. Windsor struck down the part of DOMA that outlawed federal benefits.
United States v. Windsor, 570 U.S. __ (2013).
That move was followed by upwards of forty federal court decisions that upheld marriage equality in particular states. In 2014, the Supreme Court decided not to hear several key case appeals from a variety of states, all of which were brought by opponents of marriage equality who had lost in the federal courts. The outcome of not hearing these cases was that federal court decisions in four states were affirmed, which, when added to other states in the same federal circuit districts, brought the total number of states permitting same-sex marriage to thirty.
Adam Liptak, “Supreme Court Delivers Tacit Win to Gay Marriage,” New York Times, 6 October, 2014.
Then, in 2015, the Obergefell v. Hodges case had a sweeping effect when the Supreme Court clearly identified a constitutional right to marriage based on the Fourteenth Amendment.
Obergefell v. Hodges, 576 U.S. ___ (2015).
The privileges and immunities clause of Article IV asserts that states are prohibited from discriminating against out-of-staters by denying them such guarantees as access to courts, legal protection, property rights, and travel rights. The clause has not been interpreted to mean there cannot be any difference in the way a state treats residents and non-residents. For example, individuals cannot vote in a state in which they do not reside, tuition at state universities is higher for out-of-state residents, and in some cases individuals who have recently become residents of a state must wait a certain amount of time to be eligible for social welfare benefits. Another constitutional provision prohibits states from establishing trade restrictions on goods produced in other states. However, a state can tax out-of-state goods sold within its borders as long as state-made goods are taxed at the same level.
## THE DISTRIBUTION OF FINANCES
Federal, state, and local governments depend on different sources of revenue to finance their annual expenditures. In 2014, total revenue (or receipts) reached $3.2 trillion for the federal government, $1.7 trillion for the states, and $1.2 trillion for local governments.

Data reported by http://www.usgovernmentrevenue.com/federal_revenue. State and local government figures are estimated.

Two important developments have fundamentally changed the allocation of revenue since the early 1900s. First, the ratification of the Sixteenth Amendment in 1913 authorized Congress to impose income taxes without apportioning them among the states on the basis of population, a burdensome provision that Article I, Section 9, had imposed on the national government.

Pollock v. Farmers’ Loan & Trust Co., 158 U.S. 601 (1895).

With this change, the federal government’s ability to raise revenue significantly increased and so did its ability to spend. The second development relates to federal grants, that is, transfers of federal money to state and local governments. These transfers, which do not have to be repaid, are designed to support the activities of the recipient governments, but also to encourage them to pursue federal policy objectives they might not otherwise adopt. The expansion of the federal government’s spending power has enabled it to transfer more grant money to lower government levels, which has accounted for an increasing share of their total revenue.

See Robert Jay Dilger, “Federal Grants to State and Local Governments: A Historical Perspective on Contemporary Issues,” Congressional Research Service, Report 7-5700, 5 March 2015.

The sources of revenue for federal, state, and local governments are detailed in Figure. Although the data reflect 2013 results, the patterns we see in the figure give us a good idea of how governments have funded their activities in recent years. For the federal government, 47 percent of 2013 revenue came from individual income taxes and 34 percent from payroll taxes, which combine Social Security tax and Medicare tax. For state governments, 50 percent of revenue came from taxes, while 30 percent consisted of federal grants. Sales tax—which includes taxes on purchased food, clothing, alcohol, amusements, insurance, motor fuels, tobacco products, and public utilities, for example—accounted for about 47 percent of total tax revenue, and individual income taxes represented roughly 35 percent. Revenue from service charges (e.g., tuition revenue from public universities and fees for hospital-related services) accounted for 11 percent.

The tax structure of states varies. Alaska, Florida, Nevada, South Dakota, Texas, Washington, and Wyoming do not have individual income taxes. Figure illustrates yet another difference: Fuel tax as a percentage of total tax revenue is much higher in South Dakota and West Virginia than in Alaska and Hawaii. However, most states have done little to prevent the erosion of the fuel tax’s share of their total tax revenue between 2007 and 2014 (notice that for many states the dark blue dots for 2014 are to the left of the light blue numbers for 2007). Fuel tax revenue is typically used to finance state highway transportation projects, although some states do use it to fund non-transportation projects.

The most important sources of revenue for local governments in 2013 were taxes, federal and state grants, and service charges.
For local governments the property tax, a levy on residential and commercial real estate, was the most important source of tax revenue, accounting for about 74 percent of the total. Federal and state grants accounted for 37 percent of local government revenue. State grants made up 87 percent of total local grants. Charges for hospital-related services, sewage and solid-waste management, public city university tuition, and airport services are important sources of general revenue for local governments.

Intergovernmental grants are important sources of revenue for both state and local governments. When economic times are good, such grants help states, cities, municipalities, and townships carry out their regular functions. However, during hard economic times, such as the Great Recession of 2007–2009, intergovernmental transfers provide much-needed fiscal relief as the revenue streams of state and local governments dry up. During the Great Recession, tax receipts dropped as business activities slowed, consumer spending dropped, and family incomes decreased due to layoffs or work-hour reductions. To offset the adverse effects of the recession on the states and local governments, federal grants increased by roughly 33 percent during this period.

Jeffrey L. Barnett et al. 2014. 2012 Census of Governments: Finance-State and Local Government Summary Report, Appendix Table A-1. December 17. Washington, DC: United States Census Bureau, 2.

In 2009, President Obama signed the American Recovery and Reinvestment Act (ARRA), which provided immediate economic-crisis management assistance such as helping local and state economies ride out the Great Recession and shoring up the country’s banking sector. A total of $274.7 billion in grants, contracts, and loans was allocated to state and local governments under the ARRA.
Dilger, “Federal Grants to State and Local Governments,” 4.
The bulk of the stimulus funds apportioned to state and local governments was used to create and protect existing jobs through public works projects and to fund various public welfare programs such as unemployment insurance.
James Feyrer and Bruce Sacerdote. 2011. “Did the Stimulus Stimulate? Real Time Estimates of the Effects of the American Recovery and Reinvestment Act” (Working Paper No. 16759), Cambridge, MA: National Bureau of Economic Research. http://www.nber.org/papers/w16759.pdf
How are the revenues generated by our tax dollars, fees we pay to use public services and obtain licenses, and monies from other sources put to use by the different levels of government? A good starting point to gain insight on this question as it relates to the federal government is Article I, Section 8, of the Constitution. Recall, for instance, that the Constitution assigns the federal government various powers that allow it to affect the nation as a whole. A look at the federal budget in 2014 (Figure) shows that the three largest spending categories were Social Security (24 percent of the total budget); Medicare, Medicaid, the Children’s Health Insurance Program, and marketplace subsidies under the Affordable Care Act (24 percent); and defense and international security assistance (18 percent). The rest was divided among categories such as safety net programs (11 percent), including the Earned Income Tax Credit and Child Tax Credit, unemployment insurance, food stamps, and other low-income assistance programs; interest on federal debt (7 percent); benefits for federal retirees and veterans (8 percent); and transportation infrastructure (3 percent).
Data reported by the Center on Budget and Policy Priorities. 2015. “Policy Basics: Where Do Our Federal Tax Dollars Go?” March 11. http://www.cbpp.org/research/policy-...tax-dollars-go
It is clear from the 2014 federal budget that providing for the general welfare and national defense consumes much of the government’s resources—not just its revenue, but also its administrative capacity and labor power.
Figure compares recent spending activities of local and state governments. Educational expenditures constitute a major category for both. However, whereas the states spend comparatively more than local governments on university education, local governments spend even more on elementary and secondary education. That said, nationwide, state funding for public higher education has declined as a percentage of university revenues; this is primarily because states have taken in lower amounts of sales taxes as internet commerce has increased. Local governments allocate more funds to police protection, fire protection, housing and community development, and public utilities such as water, sewage, and electricity. And while state governments allocate comparatively more funds to public welfare programs, such as health care, income support, and highways, both local and state governments spend roughly similar amounts on judicial and legal services and correctional services.
Federalism is a system of government that creates two relatively autonomous levels of government, each possessing authority granted to them by the national constitution. Federal systems like the one in the United States are different from unitary systems, which concentrate authority in the national government, and from confederations, which concentrate authority in subnational governments.
The U.S. Constitution allocates powers to the states and federal government, structures the relationship between these two levels of government, and guides state-to-state relationships. Federal, state, and local governments rely on different sources of revenue to enable them to fulfill their public responsibilities.
Which statement about federal and unitary systems is most accurate?
1. In a federal system, power is concentrated in the states; in a unitary system, it is concentrated in the national government.
2. In a federal system, the constitution allocates powers between states and federal government; in a unitary system, powers are lodged in the national government.
3. Today there are more countries with federal systems than with unitary systems.
4. The United States and Japan have federal systems, while Great Britain and Canada have unitary systems.
Which statement is most accurate about the sources of revenue for local and state governments?
1. Taxes generate well over one-half the total revenue of local and state governments.
2. Property taxes generate the most tax revenue for both local and state governments.
3. Between 30 and 40 percent of the revenue for local and state governments comes from grant money.
4. Local and state governments generate an equal amount of revenue from issuing licenses and certificates.
What key constitutional provisions define the scope of authority of the federal and state governments?
What are the main functions of federal and state governments?
## Glossary
bill of attainder
a legislative action declaring someone guilty without a trial; prohibited under the Constitution
concurrent powers
shared state and federal powers that range from taxing, borrowing, and making and enforcing laws to establishing court systems
devolution
a process in which powers from the central government in a unitary system are delegated to subnational units
elastic clause
the last clause of Article I, Section 8, which enables the national government “to make all Laws which shall be necessary and proper for carrying” out all its constitutional responsibilities
ex post facto law
a law that criminalizes an act retroactively; prohibited under the Constitution
federalism
an institutional arrangement that creates two relatively autonomous levels of government, each possessing the capacity to act directly on the people with authority granted by the national constitution
full faith and credit clause
found in Article IV, Section 1, of the Constitution, this clause requires states to accept court decisions, public acts, and contracts of other states; also referred to as the comity provision
privileges and immunities clause
found in Article IV, Section 2, of the Constitution, this clause prohibits states from discriminating against out-of-staters by denying such guarantees as access to courts, legal protection, and property and travel rights
unitary system
a centralized system of government in which the subnational government is dependent on the central government, where substantial authority is concentrated
writ of habeas corpus
a petition that enables someone in custody to petition a judge to determine whether that person’s detention is legal
This page titled 3.2: The Divisions of Power is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29542481899261475, "perplexity": 5124.673849581702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710719.4/warc/CC-MAIN-20221130024541-20221130054541-00542.warc.gz"}
|
https://www.queryxchange.com/q/20_367742/jointly-sufficient-statistics-of-a-multi-parameter-exponential-family/
|
# Jointly sufficient statistics of a multi-parameter exponential family
by Xiaomi Last Updated September 20, 2018 02:19 AM
Let $f_X$ be a joint density function that comes from an $s$-parameter exponential family with sufficient statistics $(T_1, T_2, \dots, T_s)$ so that the density $f_X$ can be expressed as
$$f_{X|\theta}(x) = h(x) \exp \left(\sum_{i=1}^s T_i(x)\eta_i(\theta) - A(\theta) \right)$$
I have two questions:
1. Expressed in this form, is it correct to say the statistics $T_1,\dots,T_s$ are jointly sufficient, as opposed to independently sufficient?
2. Based on them being jointly sufficient, is it correct to say any unbiased estimator $\tau (X)$ such that $E[\tau(X)|T_i,T_j] = \theta_k$, for some $i,j,k$, must be UMVUE of $\theta_k \in \theta$?
I'm trying to understand the difference between a statistic being sufficient on its own and a set of statistics being jointly sufficient.
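For concreteness, here is a standard illustration of the setup (an added example, not part of the original question): for an i.i.d. sample $x = (x_1, \dots, x_n)$ from $N(\mu, \sigma^2)$ with $\theta = (\mu, \sigma^2)$, the joint density has the $s = 2$ exponential-family form above with
$$T_1(x) = \sum_{i=1}^n x_i, \qquad T_2(x) = \sum_{i=1}^n x_i^2, \qquad \eta_1(\theta) = \frac{\mu}{\sigma^2}, \qquad \eta_2(\theta) = -\frac{1}{2\sigma^2}.$$
Here the pair $(T_1, T_2)$ is sufficient for $\theta$, but neither statistic is sufficient by itself when both parameters are unknown; "jointly sufficient" refers to this property of the whole vector $(T_1, \dots, T_s)$ rather than of each $T_i$ separately.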
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8056963682174683, "perplexity": 361.7940409156897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512434.71/warc/CC-MAIN-20181019191802-20181019213302-00314.warc.gz"}
|
http://mathhelpforum.com/math-software/93378-convolution-under-mathematica-6-a.html
|
# Math Help - Convolution under Mathematica 6
1. ## Convolution under Mathematica 6
Hello everybody,
I'd like your help with the following problem. Since Mathematica 6 does not have any command for convolution, I defined one myself from the definition of convolution. I need to compute the convolution y(t) = x(t)*h(t).
However, when I try to run the code, nothing happens. The file is attached; thanks in advance.
2. Originally Posted by new guy
Hello everybody,
I'd like your help with the following problem. Since Mathematica 6 does not have any command for convolution, I defined one myself from the definition of convolution. I need to compute the convolution y(t) = x(t)*h(t).
However, when I try to run the code, nothing happens. The file is attached; thanks in advance.
How does it know what $\Lambda[.]$ denotes (it looks like a function or array to me)?
CB
3. Originally Posted by CaptainBlack
How does it know what $\Lambda[.]$ denotes (it looks like a function or array to me)?
CB
You're right, of course. I've just added it, as you can see in the file attached to this comment. However, even after adding the definition of $\Lambda[.]$, nothing happens!
4. It's better if you cut and paste your code so that we can paste it directly into Mathematica. Select it, then use Cell/Convert To/Raw Input Form like I did below. Now you can just cut and paste it directly into your Mathematica and, if you want, convert it back to standard form.
\[CapitalLambda][t_] :=Piecewise[{{1-Abs[t],Abs[t]<= 1},{0,Abs[t]>= 1}}];
t0 = 9;
s1 = t0 - 6;
x[t_] := Exp[s1*t];
h[t_] := \[CapitalLambda][t/10];
myConvol[t_]=Integrate[x[t - \[Tau]]*h[\[Tau]],
{\[Tau], -Infinity, Infinity}]
5. Originally Posted by shawsend
It's better if you cut and paste your code so that we can paste it directly into Mathematica. Select it, then use Cell/Convert To/Raw Input Form like I did below. Now you can just cut and paste it directly into your Mathematica and, if you want, convert it back to standard form.
\[CapitalLambda][t_] :=Piecewise[{{1-Abs[t],Abs[t]<= 1},{0,Abs[t]>= 1}}];
t0 = 9;
s1 = t0 - 6;
x[t_] := Exp[s1*t];
h[t_] := \[CapitalLambda][t/10];
myConvol[t_]=Integrate[x[t - \[Tau]]*h[\[Tau]],
{\[Tau], -Infinity, Infinity}]
I'll do that. Thanks again for the help!
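As a rough numerical cross-check (an added sketch, not part of the original thread; it assumes NumPy and SciPy are available and copies the parameter values t0 = 9, s1 = 3, and the width-20 triangle from the Mathematica snippet above), the same convolution integral can be evaluated in Python:

import numpy as np
from scipy.integrate import quad

def tri(t):
    # Unit triangle ("Lambda") function: 1 - |t| on [-1, 1], zero elsewhere.
    return max(0.0, 1.0 - abs(t))

s1 = 9 - 6                      # t0 = 9, s1 = t0 - 6
x = lambda t: np.exp(s1 * t)    # x(t) = e^(s1*t)
h = lambda t: tri(t / 10.0)     # h(t) = Lambda(t/10), nonzero only on [-10, 10]

def my_convol(t):
    # y(t) = integral of x(t - tau) * h(tau) dtau; h limits the range to [-10, 10]
    val, _ = quad(lambda tau: x(t - tau) * h(tau), -10.0, 10.0)
    return val

print(my_convol(0.0))           # value of the convolution at t = 0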
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7197189927101135, "perplexity": 1367.326158062773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657041.90/warc/CC-MAIN-20150417045737-00149-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://ficuspublishers.com/j7bei/how-many-unpaired-electrons-does-v2%2B-have-f464ee
|
Sodium has 11 electrons. In writing its electron configuration, the first two electrons go in the 1s orbital; since 1s can hold only two electrons, the next two go in the 2s orbital, the next six fill the 2p orbital, and the last electron goes in 3s, giving 1s2 2s2 2p6 3s1. Neon's configuration ends in 2s2 2p6, so it has eight valence electrons. An orbital holds at most two electrons, and an orbital containing only one electron is said to be unpaired; a p subshell can hold up to six electrons. Atoms with unpaired electrons are paramagnetic, while atoms whose electrons are all paired are diamagnetic and are slightly repelled by an external magnetic field. Valence electrons are the currency of chemical bonds, and atoms with a relatively empty outer shell tend to give up electrons.

The electron configuration of vanadium is [Ar] 3d3 4s2. Hund's rule says the three 3d electrons occupy separate orbitals before any pairing occurs, so a neutral V atom has 3 unpaired electrons. Transition-metal atoms lose their 4s electrons first when they form cations, so V2+ is [Ar] 3d3 and also has 3 unpaired electrons, all in the 3d sublevel. Cr3+ is likewise a d3 ion, so Cr3+ and V2+ have the same electron configuration.

Manganese has atomic number 25, so its atoms have 25 protons and a neutral atom has 25 electrons: 1s2 2s2 2p6 3s2 3p6 3d5 4s2. Mn2+ is [Ar] 3d5, with one electron in each of the five 3d orbitals, giving 5 unpaired electrons; this is why Mn(II) shows the maximum paramagnetic character among the bivalent ions of the 3d transition series. Mn3+ has 22 electrons and the configuration [Ar] 3d4, so it has 4 unpaired electrons. For iron, the neutral atom is 1s2 2s2 2p6 3s2 3p6 3d6 4s2; removing the two 4s electrons gives Fe2+ ([Ar] 3d6) with 4 unpaired electrons, and removing one further 3d electron gives Fe3+ ([Ar] 3d5) with 5 unpaired electrons. Both iron ions are paramagnetic. The unpaired d electrons of the transition metals explain why they and many of their compounds show paramagnetic behaviour, and the participation of those electrons in strong metallic bonding is why the enthalpies of atomisation of the transition metals are high. In complexes, metal ions with 4 to 7 d electrons can be either high spin or low spin: weaker ligands tend to give high-spin complexes, whereas stronger ligands tend to give low-spin complexes.

Problem: which of these ions has the smallest number of unpaired electrons, V2+, Cr2+, Ni2+, Co2+, or Fe3+? Counting d electrons: V2+ is d3 (3 unpaired), Cr2+ is d4 (4), Co2+ is d7 (3), Fe3+ is d5 (5), and Ni2+ is d8 (2), so Ni2+ has the smallest number. Among V2+, Co2+, Rb+, and Sn2+, the ions Rb+ and Sn2+ have no unpaired electrons, and Mg2+ likewise has none; V2+ and Co2+ each have 3.

Zinc ([Ar] 3d10 4s2) has no unpaired electrons, so it is diamagnetic in the ground state. Zinc forms only one cation, Zn2+, because its [Ar] 3d10 configuration, with a completely filled 3d subshell, is more stable than that of Zn+ or Zn3+. Half-filled and fully filled subshells have extra stability, which is also why the correct electron configurations of chromium and copper are [Ar] 3d5 4s1 and [Ar] 3d10 4s1. The electron configuration of scandium is 1s2 2s2 2p6 3s2 3p6 4s2 3d1, and gadolinium, [Xe] 4f7 5d1 6s2, has eight unpaired electrons. For the quantum numbers n = 4 and l = 3 (the 4f subshell), up to 14 electrons in an atom can share that designation.

For main-group atoms: boron has three valence electrons, carbon four, and sulfur six. Oxygen (2s2 2p4) has 2 unpaired electrons, sulfur (3p4) has 2 unpaired electrons and is paramagnetic, a bromine atom (4p5) has 1 unpaired electron, and arsenic, in group V-A, has 3 unpaired electrons in its outermost p subshell; the F2 molecule has 0 unpaired electrons. Unpaired valence electrons are the ones that form covalent bonds: of oxygen's six valence electrons, the two unpaired ones form bonds (for example to hydrogen and carbon) while the remaining four stay as two lone pairs, and in a molecule of ammonia each hydrogen atom needs one more electron to complete its valence energy shell. A single bond is one sigma bond, while a double bond consists of one sigma bond and one pi bond. In molecular orbital terms, when two hydrogen atoms bond, a σ1s bonding molecular orbital forms along with a σ1s* antibonding molecular orbital. Periodic trends for size and ionization energy run in opposite directions: arranged from smallest to largest atomic size, F < S < Si < Sr, while from smallest to largest ionization energy the order is Sr < Si < S < F; among Na, Mg, and Al, aluminum has the lowest third ionization energy.
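A small sketch in Python (added for illustration; the ion-to-d-electron mapping below is an assumption based on the usual rule that the 4s electrons are removed first) that counts unpaired electrons for a free, high-spin d^n ion by filling the five d orbitals singly before pairing:

def unpaired_d(n_d):
    # Hund's rule for five degenerate d orbitals: singly occupy first, then pair.
    return n_d if n_d <= 5 else 10 - n_d

ions = {"V2+": 3, "Cr2+": 4, "Cr3+": 3, "Mn2+": 5, "Mn3+": 4,
        "Fe2+": 6, "Fe3+": 5, "Co2+": 7, "Ni2+": 8, "Zn2+": 10}

for ion, n_d in ions.items():
    print(f"{ion}: d^{n_d} -> {unpaired_d(n_d)} unpaired electron(s)")
# Ni2+ (d^8, 2 unpaired) has the fewest of V2+, Cr2+, Ni2+, Co2+, Fe3+.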
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6619186401367188, "perplexity": 2676.299901703508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00573.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Coset_Space
|
# Definition:Coset Space
## Definition
Let $G$ be a group, and let $H$ be a subgroup of $G$.
### Left Coset Space
The left coset space (of $G$ modulo $H$) is the quotient set of $G$ by left congruence modulo $H$, denoted $G / H^l$.
It is the set of all the left cosets of $H$ in $G$.
### Right Coset Space
The right coset space (of $G$ modulo $H$) is the quotient set of $G$ by right congruence modulo $H$, denoted $G / H^r$.
It is the set of all the right cosets of $H$ in $G$.
### Note
If we are (as is usual) concerned at a particular time with only the left or the right coset space, then the superscript is usually dropped and the notation $G / H$ is used for both the left and right coset space.
If, in addition, $H$ is a normal subgroup of $G$, then $G / H^l = G / H^r$ and the notation $G / H$ is then unambiguous anyway.
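As a simple illustration (an added example, not part of the original definition): let $G$ be the additive group of integers $\mathbb Z$ and let $H = 3 \mathbb Z$, the subgroup of multiples of $3$. Then:
$$G / H = \{3 \mathbb Z, \ 1 + 3 \mathbb Z, \ 2 + 3 \mathbb Z\}$$
Because $\mathbb Z$ is abelian, $H$ is normal in $G$, so the left and right coset spaces coincide and the notation $G / H$ is unambiguous. By contrast, for a subgroup that is not normal, such as $H = \{e, (1 \ 2)\}$ in the symmetric group $S_3$, some left and right cosets differ as sets, so $G / H^l$ and $G / H^r$ are different quotient sets.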
## Also known as
Some sources call this the left quotient set and right quotient set respectively.
Some sources use:
$G \mathrel \backslash H$ for $G / H^l$
$G / H$ for $G / H^r$
This notation is rarely encountered, and can be a source of confusion.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9949989914894104, "perplexity": 231.92025854318305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517156.63/warc/CC-MAIN-20191209013904-20191209041904-00544.warc.gz"}
|
https://anjaz.ch/2015/10/
|
## POSH: Using ~ for your home directory
Out of habit I used
cd ~
in the PowerShell ISE today. Luckily for me, PowerShell then showed me that you can actually use ~ for your home directory, and that you can even change which folder it points to. Just enter the following command:
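# Re-point the FileSystem provider's Home property; the path below is just a placeholder, so use your own folder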
(get-psprovider 'FileSystem').home = "P:\ath\to\Home"
As soon as you have done this, you'll never have to break out of the habit to get to your main folder.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7470749020576477, "perplexity": 4833.426634891034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371830894.88/warc/CC-MAIN-20200409055849-20200409090349-00405.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/search.php?author_id=15434&sr=posts
|
## Search found 101 matches
Mon Mar 16, 2020 11:04 pm
Forum: *Enzyme Kinetics
Topic: catalysts
Replies: 4
Views: 60
### Re: catalysts
The catalyst would have an effect in whichever step it was involved in. It would affect both the forward and reverse rates.
Thu Mar 12, 2020 6:31 pm
Forum: Reaction Mechanisms, Reaction Profiles
Topic: comparing rates
Replies: 3
Views: 21
### Re: comparing rates
Yes, the rates would be the same because one mole of reactant results in one mole of product.
Thu Mar 12, 2020 6:29 pm
Forum: Arrhenius Equation, Activation Energies, Catalysts
Topic: Arrhenius equation
Replies: 5
Views: 24
### Re: Arrhenius equation
A is the pre-exponential (frequency) factor: it reflects how frequently molecules collide in the correct orientation, while the exponential term exp(-Ea/RT) gives the fraction of collisions with at least the activation energy.
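A minimal numeric sketch of how A and Ea enter k = A*exp(-Ea/(R*T)) (an added illustration; the values of A and Ea below are made up, not from the post):

import math

R = 8.314            # gas constant, J/(mol*K)
A = 1.0e13           # pre-exponential (frequency) factor, 1/s -- assumed value
Ea = 75_000.0        # activation energy, J/mol -- assumed value

for T in (298.0, 350.0):
    k = A * math.exp(-Ea / (R * T))
    print(f"T = {T:.0f} K -> k = {k:.3e} s^-1")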
Thu Mar 12, 2020 6:29 pm
Forum: Applying Le Chatelier's Principle to Changes in Chemical & Physical Conditions
Topic: Le Chatelier's Principle
Replies: 4
Views: 71
### Re: Le Chatelier's Principle
Le Chatelier's principle says that a system at equilibrium will adjust so as to counteract a change in its conditions. For example, if the temperature increases, the system will shift in the direction that counteracts the change.
Thu Mar 12, 2020 6:05 pm
Forum: Calculating Work of Expansion
Topic: isobaric compression
Replies: 7
Views: 127
### Re: isobaric compression
Using the formula PV = nRT, you can work out how V and P change by accounting for changes in the other variables (e.g., temperature).
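For example, a quick sketch of an isobaric change (added illustration; the numbers are assumptions): with P and n fixed, PV = nRT reduces to V2 = V1 * (T2 / T1), with temperatures in kelvin.

V1, T1 = 2.0, 300.0        # initial volume (L) and temperature (K) -- assumed values
T2 = 450.0                 # final temperature (K) -- assumed value
V2 = V1 * (T2 / T1)        # pressure and moles held constant
print(f"V2 = {V2:.2f} L")  # prints 3.00 L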
Thu Mar 12, 2020 6:03 pm
Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams
Topic: How do you know a cell can do work?
Replies: 7
Views: 43
### Re: How do you know a cell can do work?
A nonzero cell potential (Ecell, not Ecell naught) means the cell can do work.
Thu Mar 12, 2020 6:02 pm
Forum: Arrhenius Equation, Activation Energies, Catalysts
Topic: activation energy units
Replies: 3
Views: 20
### Re: activation energy units
The units for activation energy are joules per mole (often reported as kJ/mol). The joules give the energy, and the per-mole basis makes it standard for the reaction.
Thu Mar 12, 2020 6:01 pm
Forum: Thermodynamic Definitions (isochoric/isometric, isothermal, isobaric)
Topic: U vs H
Replies: 15
Views: 217
### Re: U vs H
Delta U is change in internal energy, while delta H is change in enthalpy. Under certain conditions, they can be equal to each other.
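One way to see those conditions (an added sketch; the numbers are assumptions): for reactions involving ideal gases at temperature T, delta H = delta U + delta(PV) = delta U + (delta n_gas)*R*T, so the two are equal when delta n_gas is zero.

R = 8.314                 # J/(mol*K)
T = 298.0                 # K
dU = -50_000.0            # change in internal energy, J/mol -- assumed value
for dn_gas in (0, 1, -2): # change in moles of gas
    dH = dU + dn_gas * R * T
    print(f"dn_gas = {dn_gas:+d} -> dH = {dH / 1000:.2f} kJ/mol")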
Thu Mar 12, 2020 6:01 pm
Forum: Ideal Gases
Topic: reversing reactions
Replies: 14
Views: 133
### Re: reversing reactions
When you reverse the reaction, you have to invert K. Therefore, for a forward reaction, the equilibrium constant is equal to K. For the reverse reaction, the equilibrium constant is equal to 1/K.
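A one-line illustration (added; the value of K is arbitrary):

K_forward = 4.2
K_reverse = 1.0 / K_forward       # reversing the reaction inverts K
print(K_forward, K_reverse)       # 4.2 and roughly 0.238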
Thu Mar 12, 2020 5:58 pm
Forum: Appications of the Nernst Equation (e.g., Concentration Cells, Non-Standard Cell Potentials, Calculating Equilibrium Constants and pH)
Topic: HW 6.57
Replies: 6
Views: 92
### Re: HW 6.57
If you write Ka out, it is [H+][A-]/[HA]; since [H+] and [A-] are roughly equal and [HA] stays near its initial value, you get [H+] by taking the square root of Ka times the initial acid concentration.
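A short numeric sketch of that approximation (added; the acid concentration and Ka are assumed values): with [H+] roughly equal to [A-] and [HA] close to its initial value, Ka is about x^2 / C, so x = sqrt(Ka * C).

import math

Ka = 1.8e-5       # acid dissociation constant -- assumed value
C = 0.10          # initial weak-acid concentration, mol/L -- assumed value
H = math.sqrt(Ka * C)
print(f"[H+] ~ {H:.2e} M, pH ~ {-math.log10(H):.2f}")   # about 1.34e-3 M, pH 2.87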
Thu Mar 12, 2020 5:57 pm
Forum: Concepts & Calculations Using First Law of Thermodynamics
Topic: q and delta H
Replies: 4
Views: 72
### Re: q and delta H
q is equal to delta H under the condition that pressure is constant.
Thu Mar 12, 2020 5:56 pm
Forum: Work, Gibbs Free Energy, Cell (Redox) Potentials
Topic: Enaught in Concentration Cells
Replies: 4
Views: 62
### Re: Enaught in Concentration Cells
The equation that relates E and delta G uses Ecell, not Ecell naught. As a result, even if Ecell naught is zero, Ecell need not be zero, so delta G does not have to be zero.
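A minimal sketch of the relation being used, delta G = -n * F * Ecell (added illustration; n and Ecell are assumed numbers):

F = 96485.0        # Faraday constant, C/mol
n = 2              # moles of electrons transferred -- assumed value
E_cell = 0.030     # cell potential in volts; nonzero even when the standard cell potential is 0 -- assumed value
dG = -n * F * E_cell
print(f"dG = {dG / 1000:.2f} kJ/mol")   # about -5.79 kJ/mol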
Thu Mar 12, 2020 5:53 pm
Forum: General Rate Laws
Topic: Fast and Slow Step Reactions
Replies: 5
Views: 42
### Re: Fast and Slow Step Reactions
The slow step is the rate-determining step. The fast step could be before or after the slow step.
Wed Mar 11, 2020 2:49 pm
Forum: Arrhenius Equation, Activation Energies, Catalysts
Topic: same equation?
Replies: 5
Views: 35
### Re: same equation?
The equations are the same, they’re just written differently.
Sun Mar 08, 2020 2:46 pm
Forum: General Rate Laws
Topic: Instantaneous and average reaction rate
Replies: 4
Views: 34
### Re: Instantaneous and average reaction rate
Rate laws are given for instantaneous rates.
Sun Mar 08, 2020 2:45 pm
Forum: First Order Reactions
Topic: how to know actual order
Replies: 4
Views: 73
### Re: how to know actual order
First order means the exponent is equal to one. The exponent is based on the rate order, not the coefficient.
Sun Mar 08, 2020 2:44 pm
Forum: First Order Reactions
Topic: rate constants
Replies: 19
Views: 178
### Re: rate constants
Yes, rate constants are always positive. Rates and concentrations are always positive, and the negative signs for disappearing reactants are written into the rate expressions, so k itself stays positive.
Sun Mar 08, 2020 2:42 pm
Forum: First Order Reactions
Topic: Graph
Replies: 9
Views: 87
### Re: Graph
For a first-order reaction, the plot of ln[A] versus time is linear and the slope is -k.
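(A quick numerical illustration of this point, not from the original thread; it is a sketch with made-up values k = 0.30 s^-1 and [A]0 = 1.0 M, using numpy to fit the line.)

```python
# Simulate first-order decay [A] = [A]0 * exp(-k*t) and confirm that
# a plot of ln[A] vs. t is a straight line with slope -k.
import numpy as np

k_true = 0.30          # assumed rate constant, 1/s
A0 = 1.0               # assumed initial concentration, M
t = np.linspace(0, 10, 50)
A = A0 * np.exp(-k_true * t)

slope, intercept = np.polyfit(t, np.log(A), 1)  # linear fit of ln[A] vs t
print(f"fitted slope = {slope:.3f}  (expected -k = {-k_true})")
print(f"fitted intercept = {intercept:.3f}  (expected ln[A]0 = {np.log(A0):.3f})")
```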
Sun Mar 08, 2020 2:41 pm
Forum: Arrhenius Equation, Activation Energies, Catalysts
Topic: activation energy/ energy barrier
Replies: 6
Views: 57
### Re: activation energy/ energy barrier
The activation energy is how much energy the reaction requires to proceed and this is for both endergonic and exergonic reactions. When there is not enough energy to overcome this barrier, the reaction will not proceed.
Sun Mar 08, 2020 3:27 am
Forum: General Rate Laws
Topic: Slow and Fast step
Replies: 2
Views: 59
### Re: Slow and Fast step
It will most likely be given to you, but you can also work backwards from the rate law to determine the slow step.
Sun Mar 08, 2020 3:26 am
Forum: Kinetics vs. Thermodynamics Controlling a Reaction
Topic: instantaneous rate
Replies: 16
Views: 117
### Re: instantaneous rate
Instantaneous rate is more accurate for a specific time, while average gives you the overall rate of the reaction.
Sun Mar 08, 2020 3:25 am
Forum: Kinetics vs. Thermodynamics Controlling a Reaction
Topic: derivations
Replies: 9
Views: 87
### Re: derivations
I would focus on the derivations that were given in class, specifically those for 1, 2, and 0 order.
Sun Mar 08, 2020 3:24 am
Forum: Method of Initial Rates (To Determine n and k)
Topic: units of T
Replies: 11
Views: 369
### Re: units of T
Make sure units are consistent in all your calculations. Generally, time should be reported in seconds.
Sun Mar 08, 2020 3:23 am
Forum: General Rate Laws
Topic: Units
Replies: 3
Views: 59
### Re: Units
For units, make sure you use the units of the quantities that go in. The overall order of the reaction determines the units of the rate constant.
Thu Feb 06, 2020 11:20 am
Forum: Heat Capacities, Calorimeters & Calorimetry Calculations
Topic: Cp and Cv
Replies: 4
Views: 51
### Re: Cp and Cv
Cp is the heat capacity at constant pressure. Cv is the heat capacity at constant volume. According to PV = nRT, if temperature is changing, that means one other variable also has to change and this could be number of moles, pressure, or volume. It is important to know which ones are constant when c...
Thu Feb 06, 2020 11:18 am
Forum: Calculating Work of Expansion
Topic: units
Replies: 9
Views: 69
### Re: units
w is given in joules. Some problems may represent it as kJ so it is important to pay attention.
Thu Feb 06, 2020 11:17 am
Forum: Thermodynamic Definitions (isochoric/isometric, isothermal, isobaric)
Topic: Constant pressure
Replies: 19
Views: 138
### Re: Constant pressure
Yes, they would be referring to a constant external pressure.
Thu Feb 06, 2020 11:17 am
Forum: Heat Capacities, Calorimeters & Calorimetry Calculations
Topic: Textbook question 4A.13
Replies: 5
Views: 56
### Re: Textbook question 4A.13
The reaction is transferring heat to its surroundings, so the system itself is losing heat. Therefore, it will have a negative value.
Thu Feb 06, 2020 11:16 am
Forum: Thermodynamic Systems (Open, Closed, Isolated)
Topic: Insulated system
Replies: 5
Views: 44
### Re: Insulated system
Delta S = q / T. An insulated system has no change in temperature, so the value of T in the denominator is not going to change. However, even though the initial and final temperatures of the system are the same, the temperature can change throughout the progress of the reaction. Therefore, there can still be h...
Thu Feb 06, 2020 11:12 am
Forum: Third Law of Thermodynamics (For a Unique Ground State (W=1): S -> 0 as T -> 0) and Calculations Using Boltzmann Equation for Entropy
Topic: HW Question 4H.9
Replies: 1
Views: 22
### Re: HW Question 4H.9
Vibrational activity would have more degeneracy than nonvibrational activity, so Sc>Sb. Monatomic gases have more degeneracy than diatomic gases because there are more possible states for them to exist in. Therefore, the final answer is Sb < Sc < Sa.
Thu Feb 06, 2020 11:10 am
Forum: Entropy Changes Due to Changes in Volume and Temperature
Topic: Heat capacity
Replies: 4
Views: 38
### Re: Heat capacity
If you are not given a specific heat, you can use the generic values for an ideal gas. They depend on both the type of molecule and the conditions. For a monatomic ideal gas, Cv = (3/2)R and Cp = (5/2)R.
Thu Feb 06, 2020 11:08 am
Forum: Entropy Changes Due to Changes in Volume and Temperature
Topic: 4F12
Replies: 3
Views: 22
### Re: 4F12
R is the gas constant and C is a heat capacity.
Thu Feb 06, 2020 11:05 am
Forum: Entropy Changes Due to Changes in Volume and Temperature
Topic: Question 4.37
Replies: 2
Views: 30
### Re: Question 4.37
In general, if delta G is negative, then the reaction is spontaneous. Simply knowing change in entropy or heat does not directly tell you if a reaction is spontaneous or not.
Thu Feb 06, 2020 11:02 am
Forum: Entropy Changes Due to Changes in Volume and Temperature
Topic: Entropy Decreasing, Temperature Increasing
Replies: 6
Views: 56
### Re: Entropy Decreasing, Temperature Increasing
At a higher temperature, entropy is already higher than it would be at a lower temperature. As a result, the change in entropy will be lower.
Mon Jan 20, 2020 6:34 pm
Forum: Non-Equilibrium Conditions & The Reaction Quotient
Topic: q vs k
Replies: 9
Views: 45
### Re: q vs k
Q represents the current state of the system but K is the equilibrium state. Eventually, Q will go towards K.
Mon Jan 20, 2020 6:32 pm
Forum: Non-Equilibrium Conditions & The Reaction Quotient
Topic: definition of a buffer
Replies: 8
Views: 74
### Re: definition of a buffer
A buffer is a solution that resists changes in pH and it’s made by mixing a weak acid/base with its conjugate.
Mon Jan 20, 2020 6:31 pm
Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation)
Topic: Delta H
Replies: 10
Views: 312
### Re: Delta H
If delta H is negative, it is exothermic. If delta H is positive, it is endothermic.
Mon Jan 20, 2020 6:30 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: Writing K Expression
Replies: 6
Views: 46
### Re: Writing K Expression
You include gases and aqueous species in the K expression; pure solids and liquids are left out.
Mon Jan 20, 2020 6:28 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: ICE tables
Replies: 5
Views: 44
### Re: ICE tables
You use an ICE table to observe how concentrations change at equilibrium.
Wed Jan 15, 2020 1:27 pm
Forum: Ideal Gases
Topic: Combined gas law
Replies: 3
Views: 38
### Re: Combined gas law
You actually use the ideal gas law (PV = nRT) to convert from liters of gas to moles; the combined gas law relates P, V, and T of a gas between two sets of conditions.
Wed Jan 15, 2020 1:26 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: increasing yield
Replies: 2
Views: 22
### Re: increasing yield
Remove products from the reaction so Q is less than K.
Wed Jan 15, 2020 1:24 pm
Forum: Phase Changes & Related Calculations
Topic: Autoprotolysis
Replies: 15
Views: 134
### Re: Autoprotolysis
It is when a molecule transfers a proton to another one of the same molecule.
Wed Jan 15, 2020 1:23 pm
Forum: Applying Le Chatelier's Principle to Changes in Chemical & Physical Conditions
Topic: Le Chatelier's Principle
Replies: 6
Views: 28
### Re: Le Chatelier's Principle
If the change in pressure comes from a change in volume, it affects the concentrations, so the equilibrium shifts. If the change in pressure was caused by the addition of an inert gas, the volume hasn't changed, so the concentrations have not changed.
Wed Jan 15, 2020 1:23 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: When to omit "x-term"
Replies: 6
Views: 51
### Re: When to omit "x-term"
You omit the -x term when the equilibrium constant is very small (less than about 10^-3) because the change is too small to have a significant effect on the concentration of reactants.
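(As a rough illustration of why the approximation is safe when K is small — my own sketch, with an assumed Ka of 1e-5 and initial concentration 0.10 M, not values from the homework.)

```python
# Compare the exact equilibrium [H+] for HA <-> H+ + A- with the
# small-x approximation x ~ sqrt(Ka*C).
import math

Ka = 1.0e-5   # assumed equilibrium constant
C = 0.10      # assumed initial concentration, M

# Exact: Ka = x^2 / (C - x)  ->  x^2 + Ka*x - Ka*C = 0
x_exact = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
x_approx = math.sqrt(Ka * C)

print(f"exact x       = {x_exact:.6e} M")
print(f"approximate x = {x_approx:.6e} M")
print(f"relative error = {abs(x_approx - x_exact) / x_exact:.2%}")
```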
Fri Jan 10, 2020 4:25 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: Exercise 5G.1
Replies: 4
Views: 40
### Re: Exercise 5G.1
C is asking about the equilibrium constant, which is a fixed value at a given temperature. D is asking about equilibrium concentrations, which depend on the initial conditions.
Fri Jan 10, 2020 2:52 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: Pressure and Volume
Replies: 8
Views: 48
### Re: Pressure and Volume
K itself does not change. Whether the equilibrium position shifts depends on whether the pressure change comes from a change in volume (the concentrations change, so the system responds) or from the addition of an inert gas (the concentrations are unchanged, so nothing shifts).
Fri Jan 10, 2020 2:50 pm
Forum: Ideal Gases
Topic: ICE table
Replies: 7
Views: 383
### Re: ICE table
Set up an ICE table with initial, change, and equilibrium rows and use x to represent an unknown change.
Fri Jan 10, 2020 2:49 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: Le Chatelier's Principle
Replies: 7
Views: 79
### Re: Le Chatelier's Principle
Le Chatelier's Principle states that if a system at equilibrium is disturbed, it will shift to counteract the disturbance.
Fri Jan 10, 2020 2:46 pm
Forum: Equilibrium Constants & Calculating Concentrations
Topic: 5I.13
Replies: 2
Views: 31
### Re: 5I.13
Kc uses concentrations, while Kp uses partial pressures.
Fri Jan 10, 2020 2:44 pm
Forum: Non-Equilibrium Conditions & The Reaction Quotient
Topic: Why Q would be greater than K
Replies: 5
Views: 30
### Re: Why Q would be greater than K
Q would be greater than K if there are too many products. An external change could cause the system to have Q greater than K.
Sun Dec 08, 2019 2:52 pm
Forum: Conjugate Acids & Bases
Topic: Bronsted vs Conjugate
Replies: 3
Views: 99
### Re: Bronsted vs Conjugate
A Bronsted acid donates protons and a Bronsted base accepts them. A conjugate acid is what a base becomes after it accepts a proton, and a conjugate base is what an acid becomes after donating one.
Sun Dec 08, 2019 2:51 pm
Forum: Polyprotic Acids & Bases
Topic: Polyproptic Acids/Bases
Replies: 4
Views: 88
### Re: Polyproptic Acids/Bases
Polyprotic acids can donate more than one proton (multiple acidic H's), and polyprotic bases can accept more than one.
Sun Dec 08, 2019 2:50 pm
Forum: Identifying Acidic & Basic Salts
Topic: salt solutions
Replies: 6
Views: 100
### Re: salt solutions
You need to know whether they are neutral, acidic, or basic.
Sun Dec 08, 2019 2:49 pm
Forum: Biological Examples
Topic: b12?
Replies: 3
Views: 122
### Re: b12?
Cobalt is the coordinating metal in the middle.
Sun Dec 08, 2019 2:49 pm
Forum: Significant Figures
Topic: pH sig figs
Replies: 9
Views: 199
### Re: pH sig figs
The number of sig figs starts after the decimal place for pH. So, if there are two sig figs you would use 2.46.
Sun Dec 01, 2019 3:02 am
Forum: Calculating pH or pOH for Strong & Weak Acids & Bases
Topic: strong or weak base?
Replies: 13
Views: 127
### Re: strong or weak base?
A strong base completely dissociates in water and there are only a few. A weak base does not completely dissociate in water.
Sun Dec 01, 2019 3:00 am
Forum: Conjugate Acids & Bases
Topic: Product of Acid and Base
Replies: 5
Views: 66
### Re: Product of Acid and Base
The reaction between an acid and base is a neutralization reaction. The products of this reaction are a salt and water.
Sun Dec 01, 2019 2:58 am
Forum: Bronsted Acids & Bases
Topic: Bronsted Acid and base
Replies: 8
Views: 78
### Re: Bronsted Acid and base
A bronsted acid donates protons while a bronsted base accepts protons.
Sun Dec 01, 2019 2:56 am
Forum: *Molecular Orbital Theory (Bond Order, Diamagnetism, Paramagnetism)
Topic: Sigma and Pi Bonds
Replies: 12
Views: 509
### Re: Sigma and Pi Bonds
A triple bond would have 1 sigma bond and 2 pi bonds.
Sun Dec 01, 2019 2:54 am
Forum: Shape, Structure, Coordination Number, Ligands
Topic: Shape
Replies: 1
Views: 28
### Re: Shape
The shape of a coordination compound that has 3 ligands is trigonal planar.
Mon Nov 25, 2019 1:06 am
Forum: Determining Molecular Shape (VSEPR)
Topic: Bent vs linear
Replies: 56
Views: 754
### Re: Bent vs linear
Bent has lone pairs while linear does not.
Mon Nov 25, 2019 1:06 am
Forum: Bond Lengths & Energies
Topic: Hydrogen Bonding and Dispersion
Replies: 5
Views: 142
### Re: Hydrogen Bonding and Dispersion
Dispersion occurs between two nonpolar atoms/molecules. Hydrogen bonding occurs when an H is bonded to a highly electronegative atom.
Mon Nov 25, 2019 1:05 am
Forum: Hybridization
Topic: Sigma and Pi Bonds
Replies: 5
Views: 48
### Re: Sigma and Pi Bonds
A sigma bond is the first bond that forms, from the end-on overlap of two orbitals. Pi bonds are all the remaining bonds, and they are formed by side-by-side orbital overlap.
Mon Nov 25, 2019 1:04 am
Forum: Hybridization
Topic: Are terminal atoms hybridized?
Replies: 2
Views: 42
### Re: Are terminal atoms hybridized?
Each atom is hybridized based on the number of electron domains it has.
Mon Nov 25, 2019 1:04 am
Forum: Shape, Structure, Coordination Number, Ligands
Topic: Transition Metals
Replies: 7
Views: 74
### Re: Transition Metals
Yes, all transition metals can form coordination compounds.
Sun Nov 17, 2019 11:11 pm
Forum: Ionic & Covalent Bonds
Topic: Strength and Length
Replies: 18
Views: 179
### Re: Strength and Length
The shorter the bond, the stronger it is.
Sun Nov 17, 2019 11:11 pm
Forum: Hybridization
Topic: Hybridization
Replies: 6
Views: 51
### Re: Hybridization
Hybridization is when 2 or more orbitals combine. For example, if a carbon has 4 bonds, all 4 of those bonds are the same. All the electrons cannot be in the same orbital, so in order for them to be equal, the orbitals have to hybridize.
Sun Nov 17, 2019 11:09 pm
Forum: Determining Molecular Shape (VSEPR)
Topic: Naming the Molecular Shapes
Replies: 7
Views: 64
### Re: Naming the Molecular Shapes
The molecular shape names of the electron domains have roots that you can memorize. Then, try to memorize how you remove electrons to figure out the ones where there are lone pairs.
Sun Nov 17, 2019 10:49 pm
Forum: Lewis Structures
Topic: Oxygen
Replies: 9
Views: 241
### Re: Oxygen
Oxygen can have triple bonds, but it is usually most stable with two.
Sun Nov 17, 2019 10:48 pm
Forum: Trends in The Periodic Table
Topic: Trend for Polarizability
Replies: 5
Views: 132
### Re: Trend for Polarizability
Larger molecules are more polarizable than smaller ones, so the general trend is decreases across the row and increases down the column. Anions are more polarizable than cations because they are larger.
Sun Nov 10, 2019 11:56 pm
Forum: Electronegativity
Topic: Noble Gases
Replies: 19
Views: 645
### Re: Noble Gases
No, noble gases are generally not included in the electronegativity trend, so neon is not considered more electronegative than fluorine.
Sun Nov 10, 2019 11:56 pm
Forum: Bond Lengths & Energies
Topic: Size
Replies: 9
Views: 88
### Re: Size
Larger size means larger bond length.
Sun Nov 10, 2019 11:55 pm
Forum: Dipole Moments
Topic: Dipole dipole forces
Replies: 2
Views: 36
### Re: Dipole dipole forces
A dipole-dipole force occurs when both molecules naturally have dipoles (like HI). An induced dipole occurs when a molecule does not have a permanent dipole, and the dipole is induced by a momentary shift in the electron cloud.
Sun Nov 10, 2019 11:54 pm
Forum: Ionic & Covalent Bonds
Topic: Water molecules and ionic substances
Replies: 5
Views: 69
### Re: Water molecules and ionic substances
Hydrogen bonding and the polarity of the water molecule allow ionic substances to dissolve in water.
Sun Nov 10, 2019 11:53 pm
Forum: *Liquid Structure (Viscosity, Surface Tension, Liquid Crystals, Ionic Liquids)
Topic: Viscosity
Replies: 15
Views: 304
### Re: Viscosity
A liquid has high viscosity when it has strong intermolecular forces, so it tends to be sticky and slow-moving.
Mon Nov 04, 2019 3:21 pm
Forum: Polarisability of Anions, The Polarizing Power of Cations
Topic: 2D.5 - Electronegativity
Replies: 4
Views: 83
### Re: 2D.5 - Electronegativity
You would have to follow the electronegativity trend. You would have to look at the electronegativity difference of O and S, not S and C. O is more electronegative than S, so there would be a greater difference between O and C compared to S and C, so CO2 would be more ionic than CS2.
Mon Nov 04, 2019 12:33 pm
Forum: Ionic & Covalent Bonds
Topic: Covalent and Ionic Bonds
Replies: 6
Views: 63
### Re: Covalent and Ionic Bonds
You can use the electronegativity difference between the two atoms to determine whether it has more ionic or covalent character. If the difference is greater than 1.5, it is ionic. If it is between 0.5 and 1.5, it is polar covalent. If it is less than 0.5, it is nonpolar covalent.
Mon Nov 04, 2019 12:28 pm
Forum: Ionic & Covalent Bonds
Topic: Ionic Solids
Replies: 2
Views: 44
### Re: Ionic Solids
In an ionic solid, there is an anion with a negative charge and a cation with a positive charge. The electrostatic attraction between the charged ions holds the solid together.
Mon Nov 04, 2019 12:27 pm
Forum: Octet Exceptions
Replies: 8
Views: 80
### Re: Radicals: Homework Problem #2C1
NO2- has 18 electrons. Since 18 is an even number, there aren't going to be any unpaired electrons, so it would not be a radical. NO2 on the other hand would have an unpaired electron, so it would be a radical.
Mon Nov 04, 2019 12:25 pm
Forum: Properties of Light
Topic: General Concept
Replies: 3
Views: 72
### Re: General Concept
The dual nature of light refers to its properties as both a wave and a particle. Formulas like c = λv are based on light's wave-like properties. The E = hv formula treats light as quantized photons.
Sun Oct 27, 2019 4:30 pm
Forum: Bond Lengths & Energies
Topic: C-C bond lengths
Replies: 4
Views: 46
### Re: C-C bond lengths
There was a discrepancy in these numbers because C-C bonds have resonance. So, they are partial single/double bonds.
Sun Oct 27, 2019 4:29 pm
Forum: Ionic & Covalent Bonds
Topic: Valence Electrons
Replies: 16
Views: 202
### Re: Valence Electrons
You look at the number of electrons in the outermost shell. This is the number of valence electrons.
Sun Oct 27, 2019 4:27 pm
Forum: Lewis Structures
Topic: Dots vs Lines in Lewis Structures
Replies: 6
Views: 72
### Re: Dots vs Lines in Lewis Structures
When drawing a bond, both dots and lines are synonymous. Lines are usually clearer in depicting the bonds. Unbonded electrons are always shown as dots.
Sun Oct 27, 2019 4:16 pm
Forum: Electron Configurations for Multi-Electron Atoms
Topic: Electron Shielding
Replies: 3
Views: 31
### Re: Electron Shielding
Electron shielding refers to the blocking of valence shell electron attraction by the nucleus due to the presence of inner-shell electrons. Penetration describes the proximity to which an electron can approach to the nucleus. The principal quantum number gives you an idea of how close the electron i...
Sun Oct 27, 2019 4:09 pm
Forum: Trends in The Periodic Table
Topic: 1F.3
Replies: 3
Views: 46
### Re: 1F.3
You need to look at the number of protons and effective nuclear charge to determine the order.
Sun Oct 20, 2019 1:38 pm
Forum: Electron Configurations for Multi-Electron Atoms
Topic: Short Hand
Replies: 11
Views: 98
### Re: Short Hand
You put the last noble gas configuration in brackets and then continue using spdf normally after that.
Sun Oct 20, 2019 1:36 pm
Forum: Properties of Light
Topic: Balmer v. Lyman Series
Replies: 10
Views: 120
### Re: Balmer v. Lyman Series
The Balmer series is when the electron falls to the n = 2 energy level. The Lyman series is when the electron falls to the n = 1 energy level.
Sun Oct 20, 2019 1:35 pm
Topic: Energy of photons
Replies: 4
Views: 164
### Re: Energy of photons
E = hv calculates the energy of an individual photon.
E = 1/2mv^2 calculates the kinetic energy of a particle. Since photons have no mass, this equation is not applicable.
Sun Oct 20, 2019 1:34 pm
Forum: Heisenberg Indeterminacy (Uncertainty) Equation
Topic: Focus 1B.25 & 27 Homework
Replies: 3
Views: 46
### Re: Focus 1B.25 & 27 Homework
The Heisenberg equation states that the product of the uncertainty in momentum and the uncertainty in position has to be greater than or equal to h/4π. This means there is a minimum amount of uncertainty at all times.
Sun Oct 20, 2019 1:31 pm
Forum: Properties of Light
Topic: Unit for Wavelength
Replies: 34
Views: 297
### Re: Unit for Wavelength
The standard unit for wavelength is m, and this is the output of equations relating to light. However, nanometers (10^-9 m) are also a common unit of measurement.
Sat Oct 12, 2019 4:24 pm
Forum: Properties of Electrons
Topic: Electron After Excited State
Replies: 7
Views: 66
### Re: Electron After Excited State
An electron jumps to a higher energy state when it absorbs energy. It falls back to its original state when it emits that energy.
Sat Oct 12, 2019 4:17 pm
Forum: Properties of Electrons
Topic: Atomic Spectra Post Module
Replies: 4
Views: 44
### Re: Atomic Spectra Post Module
The energy transition from n=5 to n=1 releases more energy than the transition from n=4 to n=2. More energy means higher frequency and shorter wavelength (E = hv and c = λv), so the electron going from n=5 to n=1 emits the higher-energy, shorter-wavelength photon.
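(A small sketch to back this up numerically; it assumes the hydrogen-atom energy levels E_n = -13.6 eV / n^2, which are not stated in the original post.)

```python
# Compare the photon energies and wavelengths for the n=5 -> n=1 and
# n=4 -> n=2 transitions in hydrogen.
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per eV

def photon(n_i, n_f):
    dE = 13.6 * eV * (1 / n_f**2 - 1 / n_i**2)  # energy released, J
    lam = h * c / dE                            # wavelength, m
    return dE, lam

for ni, nf in [(5, 1), (4, 2)]:
    dE, lam = photon(ni, nf)
    print(f"n={ni} -> n={nf}: E = {dE:.3e} J, wavelength = {lam * 1e9:.1f} nm")
```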
Sat Oct 12, 2019 4:13 pm
Forum: Properties of Light
Topic: Wave Properties of Electrons
Replies: 3
Views: 49
### Re: Wave Properties of Electrons
The wave nature of electrons affects the electron's energy levels. Electrons form standing waves and each energy level has to have a whole number of wavelengths for the standing wave to exist. So, the wave property of electrons dictates how there are fixed energy levels for each electron.
Thu Oct 10, 2019 11:43 am
Forum: Einstein Equation
Topic: Energy Problem
Replies: 4
Views: 145
### Re: Energy Problem
Use the formula E=hv. We know that h is planck's constant (6.626*10^-34 m^2kg/s) and that v is frequency (1.09 x 10^15 s-1). If we plug these into the equation, we get E = (6.626*10^-34) (1.09 x 10^15) = 7.22 x 10^-19 J. Using the minimum frequency will also give the minimum amount of energy.
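(Quick arithmetic check of the value above, as a sketch:)

```python
# E = h * v with the frequency given in the problem
h = 6.626e-34                 # Planck constant, J s
v = 1.09e15                   # frequency, s^-1
E = h * v
print(f"E = {E:.3e} J")       # ~7.22e-19 J, matching the value above
```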
Thu Oct 10, 2019 11:37 am
Forum: Significant Figures
Topic: Do we use molar mass ?
Replies: 8
Views: 90
### Re: Do we use molar mass ?
Use the number of significant digits given to you in the problem. Make sure you round your answer at the end to avoid losing precision.
Thu Oct 03, 2019 11:51 pm
Forum: Significant Figures
Topic: When to round for sig figs?
Replies: 12
Views: 137
### Re: When to round for sig figs?
Round at the end of your calculation so you don't lose precision.
Thu Oct 03, 2019 11:50 pm
Forum: Limiting Reactant Calculations
Topic: Limiting Reactants or Reagents Module
Replies: 2
Views: 54
### Re: Limiting Reactants or Reagents Module
I believe he wanted you to focus on the reactants for the purpose of finding the limiting reactant.
Thu Oct 03, 2019 11:42 pm
Forum: SI Units, Unit Conversions
Topic: 2.Mass Percentage and Decimal Rounding:
Replies: 10
Views: 100
### Re: 2.Mass Percentage and Decimal Rounding:
I would use the same number of significant figures as what is given in the problem.
Thu Oct 03, 2019 11:38 pm
Forum: General Science Questions
Topic: Rusty on High School Chem [ENDORSED]
Replies: 169
Views: 133706
### Re: Rusty on High School Chem[ENDORSED]
The best thing you can do is to review problems that focus on fundamentals. Make sure you check your answers and try to figure out what you can improve on!
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8462029099464417, "perplexity": 6685.996590116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151197.83/warc/CC-MAIN-20200714181325-20200714211325-00383.warc.gz"}
|
http://math.stackexchange.com/questions/177893/iterated-prisoners-dilemma-with-discount-rate-and-infinite-game-averages
|
# Iterated prisoners dilemma with discount rate and infinite game averages
Suppose we have two players who are perfectly rational (with their perfect rationality common knowledge) playing a game. On round one both players play in a prisoners dilemma type game. With payoffs (1,1) for mutual cooperation, (.01,.01) for mutual defection and (1.01,0) and (0,1.01) for the situations where player 1 and player 2 defect respectively. They may talk before all rounds.
After playing a round they flip a coin. If the result is tails, the game ends; if the result is heads, they play another round with payoffs multiplied by $.9^n$, where $n$ is the number of rounds that have already been played.
I'm confused by the following two seemingly plausible arguments. Argument 1: The game is clearly equivalent to the following game. Flip a coin until you get tails and call the number of coin flips n, keeping n hidden from the players. For n rounds the players play the prisoners dilemma with payoffs as before (that is, multiplied by $.9^n$ where n is the round number). Both players reason as follows: "This game is nothing more than an iterated prisoners dilemma where I don't know the number of rounds. With probability 1 the game is a finite iterated prisoners dilemma (and because the payoff is bounded, we can ignore the cases where it is not), and for any fixed finite number of rounds the iterated prisoners dilemma has the unique Nash equilibrium 'always defect'. So my optimal strategy, regardless of n, is to defect; thus I should defect."
Argument 2: I will tell my opponent that I am going to play a grim trigger strategy; that is, I will cooperate until he defects, after which I will always defect. He will reply that he is also playing a grim trigger strategy. Given that there is always a 50% chance of a next round worth .9 of the payoff of the current one, defecting on any round gains me at most .01 now but costs me, in expectation, at least .99*.45 > .01 in the next round alone, so I never have any incentive to defect. Neither does he. Thus we will always cooperate.
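(To make Argument 2's arithmetic concrete, here is a small sketch — my own illustration, not part of the original question — comparing the one-round gain from defecting with the expected discounted loss from triggering permanent defection, using the payoffs and probabilities given above.)

```python
# Continuation probability 0.5 per round, payoffs scaled by 0.9 each round,
# so the effective discount factor between rounds is 0.5 * 0.9 = 0.45.
delta = 0.5 * 0.9

gain_now = 1.01 - 1.00          # one-shot gain from defecting this round

# Expected future loss from triggering mutual defection instead of cooperation:
# in every later round you get 0.01 instead of 1.00 (relative to the current
# round's scale), i.e. a per-round loss of 0.99, summed as a geometric series.
loss_future = 0.99 * delta / (1 - delta)

print(f"immediate gain from defecting : {gain_now:.4f}")
print(f"expected future loss          : {loss_future:.4f}")
print("defection pays off" if gain_now > loss_future else "grim trigger cooperation is better")
```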
There is clearly a contradiction between these two arguments. I'm wary of the first argument's claim that if we use a hidden but well-defined random variable to choose which of infinitely many games to play, all of which have the same move in their unique Nash equilibria, then that same move is the Nash equilibrium move of the averaged game (which, though seductive, seems close enough to infinity not to be inherently trustworthy). The second argument seems to me to rely on some symmetries which I'm not sure are valid assumptions.
I'm curious about two questions. Which of the above arguments is fallacious, and where is the fallacy? And what is the Nash equilibrium (or what are the Nash equilibria), and is it unique?
-
+1, great question. I think "he will always defect" should say "I will always defect"? – joriki Aug 2 '12 at 5:55
But in any finitely repeated prisoners dilemma, there is a unique Nash equilibrium outcome in which both players always defect. So what is the difference? Here is the proof that in the finitely repeated prisoners dilemma, always defect is the unique Nash equilibrium outcome. Say Ann and Bob play the game for a finite number of rounds. Let $m$ be the last period in which any player cooperates in some Nash equilibrium. Since after $m$ both players defect anyway, defecting in period $m$ carries no future cost, so the player who cooperated in period $m$ would prefer to defect instead, and we do not have a Nash equilibrium. If the game is infinite, or of unknown but arbitrarily long finite length, the round $m$ may not exist.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7039827108383179, "perplexity": 600.3888269178433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049282327.67/warc/CC-MAIN-20160524002122-00219-ip-10-185-217-139.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/127603-integration-parts-print.html
|
# Integration by parts
• February 7th 2010, 07:17 AM
confusedgirl
Integration by parts
Integrate (x^3 dx) / square root of (1-x^2)
I have to integrate it by parts. I already tried it but my answer was 0=0. Is that possible?
• February 7th 2010, 07:22 AM
Henryt999
how?
Quote:
Originally Posted by confusedgirl
Integrate (x^3 dx) / square root of (1-x^2)
I have to integrate it by parts. I already tried it but my answer was 0=0. Is that possible?
Curious how did you end up with 0?
• February 7th 2010, 07:27 AM
confusedgirl
let u = x^3
du = 3x^2 dx
dv = dx / sq rt of (1-x^2)
v= arcsin x
= x^3 arcsin x - integral of 3x^2 arcsin x dx
u = arcsin x
du = dx / sq rt of (1-x^2)
dv = 3x^2 dx
v = x^3
= x^3 arcsin x - x^3 arcsin x + integral of x^3 dx / sq rt (1-x^2)
0=0
• February 7th 2010, 08:29 AM
Pulock2009
First find the integral of sin^-1(x),
then keep integrating, treating x^3 as the part to differentiate every time.
• February 7th 2010, 08:34 AM
Soroban
Hello, confusedgirl!
Watch very carefully . . .
Quote:
$I \;=\;\int\frac{x^3\,dx}{\sqrt{1-x^2}}$
We have: . $I \;=\;\int(x^2)\left[x(1-x^2)^{-\frac{1}{2}}dx\right]$
$\text{By parts: }\;\begin{array}{ccccccc}u &=& x^2 && dv &=& x(1-x^2)^{-\frac{1}{2}}dx \\ du &=& 2x\,dx && v &=& -(1-x^2)^{\frac{1}{2}} \end{array}$
Then: . $I \;=\;-x^2(1-x^2)^{\frac{1}{2}} + \int 2x(1-x^2)^{\frac{1}{2}}dx$
. . For the second integral, use "normal" substitution.
. . We have: . $J \;=\;\int (1-x^2)^{\frac{1}{2}}(2x\,dx)$
. . Let: . $u \,=\,1-x^2\quad\Rightarrow\quad du \,=\,-2x\,dx \quad\Rightarrow\quad 2x\,dx \,=\,-du$
. . Substitute: . $J \;=\;\int u^{\frac{1}{2}}(-du) \;=\;-\int u^{\frac{1}{2}}\,du \:=\:-\tfrac{2}{3}u^{\frac{3}{2}} + C \;=\;-\tfrac{2}{3}(1-x^2)^{\frac{3}{2}} + C$
Therefore: . $I \;=\;-x^2(1-x^2)^{\frac{1}{2}} - \tfrac{2}{3}(1-x^2)^{\frac{3}{2}} + C$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
. . This answer is often simplified beyond all recognition.
Here's how they do it . . .
We have: . $-x^2(1-x^2)^{\frac{1}{2}} - \tfrac{2}{3}(1-x^2)^{\frac{3}{2}}$
Factor: . $-\tfrac{1}{3}(1-x^2)^{\frac{1}{2}}\bigg[3x^2 + 2(1-x^2)\bigg]$ . . . .
hope you followed that!
. . . . $=\;-\tfrac{1}{3}(1-x^2)^{\frac{1}{2}}\bigg[3x^2 + 2 - 2x^2\bigg]$
. . . . $=\;-\tfrac{1}{3}(1-x^2)^{\frac{1}{2}}(x^2+2)$ . . . . ta-DAA!
• February 7th 2010, 10:15 AM
hi confusedgirl,
$\int{\frac{x^3}{\sqrt{1-x^2}}}dx$
$y=1-x^2\ \Rightarrow\ x^2=1-y$
$\frac{dy}{dx}=-2x\ \Rightarrow\ dy=-2xdx\ \Rightarrow\ x^3dx=-\frac{1}{2}x^2dy=\frac{1}{2}(y-1)dy$
$\frac{1}{2}\int{\frac{(y-1)}{\sqrt{y}}}dy=\frac{1}{2}\int{\left(\sqrt{y}-\frac{1}{\sqrt{y}}\right)}dy=\frac{1}{2}\int{\left (y^{\frac{1}{2}}-y^{-\frac{1}{2}}\right)}dy$
This can then be integrated using the power rule.
To integrate by parts...
$\frac{1}{2}\int{\frac{(y-1)}{\sqrt{y}}}dy$
$u=y-1\ du=dy$
$dv=y^{-\frac{1}{2}}\,dy\ \Rightarrow\ v=\int{y^{-\frac{1}{2}}}dy=2\sqrt{y}$
$\frac{1}{2}\int{u}dv=\frac{1}{2}\left(uv-\int{v}du\right)=\frac{1}{2}\left((y-1)2\sqrt{y}-\int{2\sqrt{y}}dy\right)$
$=\frac{1}{2}\left((y-1)2\sqrt{y}-\frac{4}{3}y^{\frac{3}{2}}\right)+C$, and substituting back $y=1-x^2$ gives
$\frac{1}{2}\left(-x^2(2)\sqrt{1-x^2}-\frac{4}{3}\left(1-x^2\right)\sqrt{1-x^2}\right)+C$
$=-x^2\sqrt{1-x^2}-\frac{2}{3}\left(1-x^2\right)\sqrt{1-x^2}+C=\sqrt{1-x^2}\left(-x^2-\frac{2}{3}+\frac{2}{3}x^2\right)+C$
$=-\frac{1}{3}\sqrt{1-x^2}\left(x^2+2\right)+C$
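(If you want to double-check the antiderivative, here is a small sketch using sympy — assuming it is available; this is not part of the original thread.)

```python
# Verify that -1/3 * sqrt(1-x^2) * (x^2 + 2) differentiates back to the integrand.
import sympy as sp

x = sp.symbols('x')
integrand = x**3 / sp.sqrt(1 - x**2)
F = -sp.Rational(1, 3) * sp.sqrt(1 - x**2) * (x**2 + 2)   # claimed antiderivative

print(sp.simplify(sp.diff(F, x) - integrand))   # expect 0
print(sp.simplify(sp.integrate(integrand, x)))  # sympy's own antiderivative, for comparison
```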
• February 8th 2010, 03:17 AM
confusedgirl
thanks :)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9866877794265747, "perplexity": 3990.418118390348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453553.36/warc/CC-MAIN-20151124205413-00210-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://www.guyanastandard.com/2019/09/10/man-who-fractured-ex-male-lovers-nose-gets-18-months-jail/
|
One day after a man was remanded to prison for fracturing his ex-lover's nose, he was moments ago sentenced to 18 months' imprisonment after he was unable to compensate the victim.
The sentence was handed down this afternoon by Principal Magistrate Faith McGusty in the Georgetown Magistrate’s Courts.
Thirty-five-year-old Leon Murray was yesterday slapped with two charges.
The first charge stated that on September 5, 2019, at Georgetown, Murray maliciously damaged one spectacle valued $30,000, the property of Shawn Solomon.
The other charge stated that, on the same day, and at the same location, Murray inflicted grievous bodily harm on Shawn Solomon.
Murray denied the first charge and pleaded guilty to the latter charge.
Murray, in addressing the court, stated, “I am a straight-up person so I’ll tell the court like it is because this is my ex. Since we done me and this fella is good friends but since I get a woman he start calling the girl names.”
Murray added “The day me and my woman deh in the room and he started calling her a whore and telling me how she’s not good for me. I warned him but he did not listen so we start to fight and he pelt me; so I get wild and pick up a brick and pelt him back.”
Facts presented by Police Prosecutor Quinn Harris stated that on the day in question, around 10:30 hrs, the victim went home from work. He and Murray later became involved in a heated argument.
The argument escalated into a fight, and Murray armed himself with a brick and pelted the victim, causing him to suffer a fractured nose. The injured man was rushed to the Georgetown Public Hospital Corporation (GPHC) where he was treated.
The matter was then reported to the police and Murray was later arrested.
Yesterday, when the matter was called, the victim stated that he would like $60,000 in compensation to dismiss the matter. However, when the matter was called this afternoon, Murray was unable to provide the money.
Hence, he was sentenced for the offence.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8241609334945679, "perplexity": 5747.840689647311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572556.54/warc/CC-MAIN-20190916120037-20190916142037-00440.warc.gz"}
|
https://math.stackexchange.com/questions/3513066/matrix-exponential-is-continuous
|
# Matrix exponential is continuous
I want to prove that the function $$\exp\colon M_n(\mathbb{C})\to \mathrm{GL}_n(\mathbb{C})$$ is continuous with respect to the standard matrix norm $$\lVert A\rVert=\sup_{\lVert x\rVert=1}\lVert Ax\rVert.$$ Wikipedia says that it follows from the inequality $$\lVert e^{X+Y}-e^X\rVert\leqslant \lVert Y\rVert e^{\lVert X\rVert}e^{\lVert Y\rVert},$$ and I understand why, but I don't quite follow how to get this inequality. Could somebody explain that?
Let $$p:M_n(\Bbb C )^k\to M_n(\Bbb C ),\quad (X_1,X_2,\ldots ,X_k)\mapsto X_1\cdot X_2\cdots X_k\tag1$$ be the ordered product of a vector of matrices, and $$c_X:\{X,Y\}^k\to \{0,\ldots ,k\}\tag2$$ is the function that counts the number of coordinates of a vector in $$\{X,Y\}^k$$ that are equal to $$X$$. Then we have that $$(X+Y)^k=\sum_{v\in \{X,Y\}^k}p(v)=\sum_{j=0}^k\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)\tag3$$ And if $$X$$ and $$Y$$ commute then $$\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)=\binom{k}{j}X^jY^{k-j}\tag4$$ Then from $$\mathrm{(3)}$$ we have that \begin{align*} \|(X+Y)^k-X^k\|&\leqslant \left\|\sum_{j=0}^k\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)-X^k\right\|\\ &=\left\|\sum_{j=0}^{k-1}\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)\right\|\\ &\leqslant \sum_{j=0}^{k-1}\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}\|p(v)\|\\ &\leqslant \sum_{j=0}^{k-1}\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}\|X\|^j\|Y\|^{k-j}\\ &=\sum_{j=0}^{k-1}\binom{k}{j}\|X\|^j\|Y\|^{k-j}\\ &=\sum_{j=0}^{k-1}\binom{k}{j}\|X\|^j\|Y\|^{k-j}+\|X\|^k-\|X\|^k\\ &=(\|X\|+\|Y\|)^k-\|X\|^k\tag5 \end{align*} where in the third inequality we used implicitly the inequality $$\|AB\|\leqslant \|A\|\|B\|$$ for any square matrices $$A$$ and $$B$$. Then you have that \begin{align*} \|e^{X+Y}-e^X\|&=\left\|\sum_{k\geqslant 0}\frac{(X+Y)^k-X^k}{k!}\right\|\\ &\leqslant \sum_{k\geqslant 0}\frac{\|(X+Y)^k-X^k\|}{k!}\\ &\leqslant \sum_{k\geqslant 0}\frac{(\|X\|+\|Y\|)^k-\|X\|^k}{k!}\\ &=e^{\|X\|+\|Y\|}-e^{\|X\|}\\ &=e^{\|X\|}(e^{\|Y\|}-1)\tag6 \end{align*} And $$e^c-1\leqslant ce^c\iff \sum_{k\geqslant 0}\frac{c^{k+1}}{(k+1)!}\leqslant \sum_{k\geqslant 0}\frac{c^{k+1}}{k!}\tag7$$ clearly holds for $$c\geqslant 0$$. Then $$\mathrm{(6)}$$ and $$\mathrm{(7)}$$ prove your inequality.
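(A numerical sanity check of the final inequality — my own sketch, assuming numpy and scipy are available; it is not part of the proof above.)

```python
# Check ||e^{X+Y} - e^X|| <= ||Y|| e^{||X||} e^{||Y||} in the operator (spectral) norm
# for a few random complex matrices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
for _ in range(5):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Y = 1e-2 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    lhs = np.linalg.norm(expm(X + Y) - expm(X), 2)   # spectral norm
    rhs = np.linalg.norm(Y, 2) * np.exp(np.linalg.norm(X, 2)) * np.exp(np.linalg.norm(Y, 2))
    print(f"lhs = {lhs:.3e}  <=  rhs = {rhs:.3e}  : {lhs <= rhs}")
```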
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9959403872489929, "perplexity": 232.2506234979935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410284.51/warc/CC-MAIN-20200530165307-20200530195307-00346.warc.gz"}
|
http://math.stackexchange.com/questions/40533/what-is-the-relation
|
# What is the relation?
The curve $y = x^2$ bounds two regions $A$ and $B$:
$A$ is further bounded by the line $x= a$, $a\gt 0$. $A$ is rotated around the $x$-axis, which gives Volume $A = Va$.
$B$ is further bounded by the line $y=b$, $b\gt 0$. $B$ is rotated around the $y$-axis, which gives Volume $B = Vb$.
What are the relations between $a$ and $b$, when $Vb = Va$?
..................
I have come to the solution that:
$Vb = (\pi b^2 )/ 2$
$Va = (\pi a^5) / 5$
so the relation between them is:
$2.5b^2 = a^5$
Is that the final solution or is it more?
-
If your calculations are correct this is what you should have found.
-
Its homework right? – Tau May 21 '11 at 20:10
I was thinking maybe a simplification of my answer? – aka May 21 '11 at 20:29
No i've got my final exam on monday, so was thinking of training on old exams and other diverse stuff. – aka May 21 '11 at 20:31
Unless stated in the question that you have to produce some special answer, your answer should be accepted. Good luck on your test. – Tau May 21 '11 at 22:01
No it just says, what are the relations between a and b. when Va=Vb! Thank you Tau! – aka May 22 '11 at 14:41
Without a drawing or a more detailed description, I cannot be certain. But under the reasonable interpretation of what you wrote, your conclusion is absolutely correct. Maybe, since $a$ and $b$ are positive, it might be slightly better to say that $$b=a^2\sqrt{\frac{2a}{5}}$$
-
Who did you get your solution? – aka May 21 '11 at 20:29
@aka: I am not sure of the meaning of your question. I drew the parabola, decided what the regions $A$ and $B$ were supposed to be, calculated the volumes of the solids, in each case by slicing, so for $A$ I integrated with respect to $x$ and for $B$ with respect to $y$. The integrations were trivial, as you discovered. – André Nicolas May 21 '11 at 20:47
The solution given above by user6312 is a simplification of your solution, in terms of expressing the relation b as a function of a: multiply both sides of your solution by 2/5 (the reciprocal of 2.5 = 5/2), then take the square root of both sides. – amWhy May 21 '11 at 20:50
I'm sorry i mean, how* ! – aka May 22 '11 at 14:31
@aka: I tried to describe it briefly in a comment above. I drew the diagram, and then calculated as explained in the very detailed explanation by @Arturo Magidin. I quickly obtained the expressions you wrote down, so took it for granted that you had done it the same way, so that a detailed working out of the details was not needed by you. Wrote down an alternate formula expressing $b$ in terms of $a$, in case you met this sort of question in a multiple choice test. – André Nicolas May 22 '11 at 15:05
If we take the region bounded by the $y$-axis, the $x$-axis, the line $x=a$ (with $a\gt 0$), and the parabola $y=x^2$, and rotate it about the $x$-axis, the volume of the resulting solid of revolution is easily computed (using, for example, discs perpendicular to the $x$-axis) to be $$\text{Volume A} = \int_0^a \pi(x^2)^2\,dx = \frac{\pi}{5}x^5\Bigm|_0^a = \frac{\pi a^5}{5}.$$ If the region bounded by the $y$-axis, the $x$ axis, the line $y=b$ (with $b\gt 0$), and the parabola $y=x^2$ is revolved around the $y$-axis, then using discs perpendicular to the $y$-axis we obtain the volume to be: $$\text{Volume B} = \int_0^b \pi (\sqrt{y})^2\,dy = \frac{\pi}{2}y^2\Bigm|_0^b = \frac{\pi b^2}{2}.$$ So your computations are correct there.
If the two volumes are the same, then we must have $$\text{Volume A} = \frac{\pi a^5}{5} = \frac{\pi b^2}{2} = \text{Volume B};$$ there are many ways to express this: you can solve for one of $a$ or $b$ in terms of the other: $$b = \sqrt{\frac{2a^5}{5}} = a^{5/2}\sqrt{\frac{2}{5}},$$ or, if you want to express $a$ in terms of $b$ instead, $$a = \sqrt[5]{\frac{5}{2}b^2} = b^{2/5}\sqrt[5]{\frac{5}{2}}.$$ Or you can simply express this relation by saying, say $$2a^5 = 5b^2.$$
Note. If $a\lt 0$, then the volume of $A$ can be computed the same way, but the integral would go from $a$ to $0$, so that the volume would be $-\frac{\pi a^5}{5}$; to account for both possibilities, both $a\gt 0$ and $a\lt 0$, you can simply write that the volume is $\frac{\pi|a|^5}{5}$. For solid $B$, however, it makes no sense to talk about $b\lt 0$, because then we don't have a finite area "enclosed" by the curves in question.
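(As a quick sanity check of the relation $2a^5 = 5b^2$, here is a small sketch — not part of the original answer — that picks an arbitrary $a$, computes $b$ from the relation, and verifies with sympy that the two volumes agree.)

```python
# Verify that 2a^5 = 5b^2 makes the two volumes of revolution equal.
import sympy as sp

a = sp.Rational(3, 2)                  # arbitrary positive a (assumption)
b = sp.sqrt(sp.Rational(2, 5) * a**5)  # b chosen so that 2a^5 = 5b^2

x, y = sp.symbols('x y', positive=True)
Va = sp.integrate(sp.pi * (x**2)**2, (x, 0, a))        # discs about the x-axis
Vb = sp.integrate(sp.pi * (sp.sqrt(y))**2, (y, 0, b))  # discs about the y-axis

print(sp.simplify(Va - Vb))   # expect 0
```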
-
I did exacly the same calculations, was just a bit confused by the simplification you made in the last steps... but i think it is as you say... 2a^5 = 5b^2 2a^5/5 = b^2 then take square root we get: sqroot ((2a^5)/5) = b b = a^2* sqroot(2a/5) if we want a: a^5 = (5b^2/2) a = 5root(5b^2/2) a = b^(2/5) * 5root(5/2) – aka May 22 '11 at 14:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330353736877441, "perplexity": 229.7314450797586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051036499.80/warc/CC-MAIN-20160524005036-00070-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://benknoble.github.io/blog/2018/06/22/til-reading-refactoring-and-rpatterns/
|
# Junk Drawer
For all those little papers scattered across your desk
22 Jun 2018 in Blog
I couldn’t leave the alliteration hanging in a title containing the word ‘patterns’…
# Today I Learned
1. I was not alone in the IXIA troubles
3. Refactoring without changing the functional result is fun
4. I use the right patterns
## Icky Ixia
I pinged a colleague in our Mountain View office about the Ixia, and he let our team know that we had done well so far. He had tech support help him when he was where we are now, and he got us rolling with them.
On that note, I’d like to add that my colleague Merit McMannis was very pleased with my contributions to the Ixia.
But then, I was pulled off of that and put on…
## Build Times
Let me just say one thing about builds, their scripts, and their logs. I expect to be able to jump into a build and know what’s going on at a high level, even if I don’t know the gory details.
This means that
1. A build should be composed of understandable steps on a high level, like a recipe
2. Build scripts should reflect on a high level those steps
3. Logs should output what step is being performed, what actions compose that step, and failures/failure points if any
We can solve 2 with well-refactored scripts, using function names as high- (and low-) level components. We can actually solve 3 with this as well: all of my bash scripts start with something like this–
log() {
  printf '%s\n' "$@"
  # >&2                               # stderr variant
  # printf "$0: "'%s\n' "$@"          # variant with script name
}
It helps me output simple information when I need it. But we also have to be choosy about what information that is! (If I have to scan through the output of another ./configure --whatever --feature=foobar, I might go mad).
Reflecting 1 in all of this means documentation and a clear understanding of what the process is–in fact, this necessitates 2 and 3 be solved already.
I’m dissecting this today because I have been reassigned to work on improving a build time. Apparently, someone’s feature merge caused a 3x increase in time-to-build. We suspect there’s a duplicate compilation going on, and we’d like to turn that into a make target to do it in parallel, but we can’t find it. Or, well, I can’t find it.
I sifted through a bajillion-line error log, knowing already at least what script was the primary builder and what portions of that were possibly causing the problem. I couldn’t find it, between the configure output, make output, errors that get thrown away, and just general lack of informative detail. I’m sure the information is useful to someone, somewhere. For me, though, not so much. Part of the output was a bash script being run in debug mode (set -x)!
Fortunately, I have free rein to refactor in an effort to diagnose and solve the issue. It might kill the git blame a little bit, but dang does that code look cleaner.
## Functional Refactoring
My definition of functional refactoring is
Refactoring which doesn’t alter the function
That is, changing
if [ "$var" = true ]
then
  # do a thing
  sed 's/func/function/' /really/long/file/name
  cp /really/long/file/name /other/name
fi
to the equivalent
do_a_thing() {
  local source=/really/long/file/name
  local dest=/other/name
  local sub='s/func/function/'
  sed "$sub" "$source"
  cp "$source" "$dest"
}

main() {
  if [[ "$var" = true ]]; then
do_a_thing
fi
}
main
is a valid refactoring. I do this all the time because it helps me pull blocks of code into functions. I even turn developer comments into function names when I don’t know what the code does or why. Theoretically, this is all purely mechanical, it shouldn’t change the net result.
So, I did that all day today. It’s funny how much better things read that way.
## Patterns Patterns Patterns
Not much to say here, just a quote from the colleague who assigned me this task:
there’s a lot of ‘crufty’ code here, and based on your coding exercise, you have proper patterns ingrained already
(and reducing build times by 3x is a super awesome project, imo)
well, maybe not quite 3x reduction…but we can round up
So I’d say my first week finished strong.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43111658096313477, "perplexity": 4531.652999405535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670770.21/warc/CC-MAIN-20191121101711-20191121125711-00473.warc.gz"}
|
https://study.com/academy/answer/a-guitarist-is-tuning-a-guitar-she-hears-a-low-beat-every-4-0-seconds-what-is-the-difference-between-the-correct-frequency-and-measured-frequency-of-the-guitar-string.html
|
# A guitarist is tuning a guitar. She hears a low beat every 4.0 seconds. What is the difference...
## Question:
A guitarist is tuning a guitar. She hears a low beat every 4.0 seconds. What is the difference between the correct frequency and measured frequency of the guitar string?
## The Beat Frequency
If two wave sources with slightly differing frequencies {eq}\displaystyle {\nu_1} {/eq} and {eq}\displaystyle {\nu_2} {/eq} generate waves at the same time and these waves superpose then an interference effect in time will occur. The intensity is found to oscillate in time with a frequency {eq}\displaystyle {\nu} {/eq} called the beat frequency. It is given by,
{eq}\displaystyle {\nu = \pm (\nu_1-\nu_2)} \qquad (1) {/eq}.
This can be utilized for tuning musical instruments with the help of a known reference frequency. By sounding the two frequencies together and adjusting the instrument till the beats disappear the correct frequency is attained.
The guitarist hears one beat every 4 seconds. Therefore the beat frequency is {eq}\displaystyle {\frac{1}{4}\ Hz} {/eq}.
Since the beat frequency is the difference between the two different frequencies which are sounded together it follows that the correct frequency and the measured frequency differ by 0.25 Hz.
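(The arithmetic as a short sketch:)

```python
# One beat every 4.0 s -> beat frequency, which equals the frequency difference.
beat_period = 4.0                  # seconds per beat
beat_frequency = 1 / beat_period   # Hz
print(beat_frequency)              # 0.25 Hz = |f_correct - f_measured|
```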
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643551707267761, "perplexity": 2007.4090563112165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00511.warc.gz"}
|
http://forum.zkoss.org/question/48007/how-to-position-image-after-label-text-on-each-tab-of-tabbox/
|
# How to position image after label text on each tab of tabbox?
I have the following code segment to display text and image on tab:
<tabs>
<tab label="text1" image="images/image1.gif" />
<tab label="text2" image="images/image2.gif" />
<tab label="text3" image="images/image3.gif" />
<tab label="text4" image="images/image4.gif" />
</tabs>
<tabpanels>
<tabpanel />
<tabpanel />
</tabpanels>
Presently image is follwed by text. But, I want the reverse.
That is, text followed by image.
How can we achieve right alignment of the image on tab?
Thank you
## 4 Replies
SimonPai
The order of text and image is hardcoded in the domContent_ function in Tab.js. You can use client side programming to override this function. For example:
<zk xmlns:w="http://www.zkoss.org/2005/zk/client">
//...
<tab label="text1" image="images/image1.gif">
<attribute w:name="domContent_">
function () {
var label = zUtl.encodeXML(this.getLabel()),
img = this.getImage();
if (!img) return label;
img = '<img src="' + img + '" align="absmiddle" class="' + this.getZclass() + '-img"/>';
//return label ? img + ' ' + label: img; <- original
return label ? label + ' ' + img: img;
}
</attribute>
</tab>
//...
</zk>
This should give you the desired layout.
Regards,
Simon
Thank you, Simon :)
The code results in the following error:
org.zkoss.zk.ui.metainfo.PropertyNotFoundException: Method, setDomContent_, not found for class org.zkoss.zul.Tab
As am new to Clientside programming, I am missing something.
Is there any detailed documentation available online?
Thank you.
SimonPai
Oops, my bad. I forgot to put my code within <![CDATA[ ... ]]> before posting it here.
<zk xmlns:w="http://www.zkoss.org/2005/zk/client">
//...
<tab label="text1" image="images/image1.gif">
<attribute w:name="domContent_"><![CDATA[
function () {
var label = zUtl.encodeXML(this.getLabel()),
img = this.getImage();
if (!img) return label;
img = '<img src="' + img + '" align="absmiddle" class="' + this.getZclass() + '-img"/>';
return label ? label + ' ' + img: img;
}
]]></attribute>
</tab>
//...
</zk>
In addition, if you read the source code of Tab.js and mold/tab.js, you'll realize how the Tab component gets translated into HTML elements.
This will help you determine how to work with client side programming and also understand it better.
Simon
Thank you, Simon.
It's perfect :)
I will refer to the source code of Tab.js and mold/tab.js to understand it better.
Thank you.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25597164034843445, "perplexity": 22828.74318174413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202303.66/warc/CC-MAIN-20190320064940-20190320090940-00178.warc.gz"}
|
https://greprepclub.com/forum/k-j-8107.html
|
k > j > 0
GMAT Club Legend
k > j > 0 [#permalink] 22 Nov 2017, 18:57
Expert's post
Question Stats:
55% (00:51) correct 44% (01:57) wrong based on 18 sessions
k > j > 0
Quantity A: The time it takes to read k words at j words per minute
Quantity B: The time it takes to read (k + 10) words at (j + 10) words per minute
A. Quantity A is greater.
B. Quantity B is greater.
C. The two quantities are equal.
D. The relationship cannot be determined from the information given.
Drill 1
Question: 9
Page: 522-523
Sandy
Intern
Re: k > j > 0 [#permalink] 24 Nov 2017, 06:26
Quantity A = k/j, which could be any number greater than 1.
Quantity B = (k+10)/(j+10), which is still greater than 1 but closer to 1; for instance 4/3 > 14/13.
Therefore A must be the answer.
GMAT Club Legend
Re: k > j > 0 [#permalink] 10 Dec 2017, 13:19
Expert's post
Explanation
Plug in numbers and use the rate formula (amount = rate × time) to check the quantities. If j = 1 and k = 2, then Quantity A is 2 minutes and Quantity B is $$\frac{12}{11}$$ minutes.
Quantity A is greater, so eliminate choices (B) and (C). Any acceptable set of values gives the same outcome; select choice A.
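A short algebraic check (my addition, not part of the original explanation, using only the condition $$k > j > 0$$) shows why every acceptable set of values gives the same outcome:
$$\frac{k}{j}-\frac{k+10}{j+10}=\frac{k(j+10)-j(k+10)}{j(j+10)}=\frac{10(k-j)}{j(j+10)}>0,$$
so the reading time in Quantity A always exceeds the reading time in Quantity B.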
Sandy
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27830439805984497, "perplexity": 6727.167162335774}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743714.57/warc/CC-MAIN-20181117144031-20181117170031-00391.warc.gz"}
|
http://www.acmerblog.com/hdu-1554-Pairs-of-integers-2108.html
|
2013-12-12
# Pairs of integers
You are to find all pairs of integers such that their sum is equal to the given integer number N and the second number results from the first one by striking out one of its digits. The first integer always has at least two digits and starts with a non-zero digit. The second integer always has one digit less than the first integer and may start with a zero digit.
The input file consists of several test cases.
Each test case contains a single integer N (10 ≤ N ≤ 10^9), as described above.
The output consists of several blocks, one for each test case.
On the first line of a block write the total number of different pairs of integers that satisfy the problem statement. On the following lines write all those pairs. Write one pair on a line in ascending order of the first integer in the pair. Each pair must be written in the following format:
X + Y = N
Here X, Y, and N, must be replaced with the corresponding integer numbers. There should be exactly one space on both sides of ‘+’ and ‘=’ characters.
Sample Input
302
Sample Output
5
251 + 51 = 302
275 + 27 = 302
276 + 26 = 302
281 + 21 = 302
301 + 01 = 302
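As an aside (not part of the original post), here is a brute-force sketch of the pairing rule itself: it enumerates every candidate X and checks whether N − X is X with one digit struck out. It is far too slow for N near the 10^9 limit, but it makes the rule and the required output format explicit.
#include <cstdio>
#include <string>
#include <utility>
#include <vector>
using namespace std;

int main() {
    long long N;
    while (scanf("%lld", &N) == 1) {
        vector<pair<string, string>> found;
        for (long long X = 10; X <= N; ++X) {          // X has at least two digits
            long long Y = N - X;
            string xs = to_string(X), ys = to_string(Y);
            if (ys.size() >= xs.size()) continue;       // Y must have one digit fewer than X
            while (ys.size() + 1 < xs.size())           // Y may carry leading zeros
                ys.insert(ys.begin(), '0');
            for (size_t i = 0; i < xs.size(); ++i) {    // try striking out each digit of X
                string t = xs;
                t.erase(i, 1);
                if (t == ys) { found.push_back({xs, ys}); break; }
            }
        }
        printf("%d\n", (int)found.size());
        for (auto &p : found)
            printf("%s + %s = %lld\n", p.first.c_str(), p.second.c_str(), N);
    }
    return 0;
}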
#include<cstdio>
#include<cstring>
#include<iostream>
#define MAXN 5005
using namespace std;
int f[MAXN][3],n,ans;
char s[MAXN];
// counts palindromic substrings of s with an interval DP over the gap k = j - i,
// keeping only the last three layers of the table (rolling index k % 3)
void getf()
{
n=strlen(s),ans=0;
for(int k=0;k<n;++k)
for(int i=0;i+k<n;++i)
{
int j=i+k,now=k%3,pre2=(now-2+3)%3; // pre2 indexes the layer two levels above f(i,j); f[i+1][pre2] here means f(i+1,j-1)
f[i][now]=(k==0 || (k==1 && s[i]==s[j]) || (i+1<=j-1 && s[i]==s[j] && f[i+1][pre2]));
// k==0 and k==1 need special handling to prevent i+1 > j-1
ans+=f[i][now];
}
}
int main()
{
while(~scanf("%s",s))
{
getf();
cout<<ans<<endl;
}
}
1. There is a small problem: in the dynamic-programming code later on,
int dp[n+1][W+1];
gives a compile error saying the expression must have a constant value. How should this be fixed?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4763261377811432, "perplexity": 804.8171990004871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540798.71/warc/CC-MAIN-20161202170900-00464-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1843961/i-am-looking-for-a-mathematical-equation-to-warp-an-image
|
# I am looking for a mathematical equation to warp an image [closed]
Theoretically, I know that to warp an image, each pixel $(x,y)$ in the source image is transformed to $(x', y')$ using mapping functions (i.e. $x'=f(x,y)$ and $y'=g(x,y)$). But what mathematical equations can I use for these mapping functions? For example, I found the following on a website for warping an image:
$X' = X + [\sin(aX) + \cos(cY)] \cdot d$ where $a$, $b$, $c$ and $d$ are random values,
and $Y'$ is defined in the same way.
My question: where do equations like this come from? Is there any systematic technique to generate such equations and then get a warped image like the one below?
My question is about how to build equations that represent mapping functions for complicated warping effects, for example one that produces the warping image above, and then how to determine the values of the coefficients ("parameters") of these equations, definitely not by trial and error.
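To make the idea concrete (this is my own illustration, not from the question or any answer): a minimal C++ sketch that applies a sinusoidal warp of the kind quoted above to a grayscale buffer, using inverse mapping so every destination pixel receives a value. The formula and the parameter values a, c, d are arbitrary choices for the demo.
#include <cmath>
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<unsigned char> px;              // row-major grayscale, w*h values
    unsigned char at(int x, int y) const {      // clamp-to-edge sampling
        if (x < 0) x = 0;
        if (x >= w) x = w - 1;
        if (y < 0) y = 0;
        if (y >= h) y = h - 1;
        return px[y * w + x];
    }
};

// For each output pixel (x, y), sample the source at the warped location
// given by x' = x + (sin(a*x) + cos(c*y)) * d (and symmetrically for y').
Image warp(const Image& src, double a, double c, double d) {
    Image dst{src.w, src.h, std::vector<unsigned char>(src.w * src.h)};
    for (int y = 0; y < src.h; ++y)
        for (int x = 0; x < src.w; ++x) {
            int sx = (int)std::lround(x + (std::sin(a * x) + std::cos(c * y)) * d);
            int sy = (int)std::lround(y + (std::sin(a * y) + std::cos(c * x)) * d);
            dst.px[y * dst.w + x] = src.at(sx, sy);
        }
    return dst;
}

int main() {
    Image img{64, 64, std::vector<unsigned char>(64 * 64)};
    for (int y = 0; y < 64; ++y)                // simple diagonal gradient test image
        for (int x = 0; x < 64; ++x)
            img.px[y * 64 + x] = (unsigned char)(2 * (x + y));
    Image out = warp(img, 0.3, 0.2, 3.0);
    std::printf("center pixel before=%d after=%d\n", img.at(32, 32), out.at(32, 32));
    return 0;
}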
## closed as off-topic by Daniel W. Farlow, M. Vinay, JonMark Perry, user91500, Claude LeiboviciJun 30 '16 at 5:00
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Daniel W. Farlow, JonMark Perry, user91500, Claude Leibovici
If this question can be reworded to fit the rules in the help center, please edit the question.
• See adrianboeing.blogspot.com.br/2011/02/… and gamedev.stackexchange.com/questions/90592/… for an example. – lhf Jun 29 '16 at 17:58
• Actually the sky is the limit. Any $x^\prime=f(x,y),y^\prime=g(x,y)$ will transform the image into a 'warped' image. If $f$ and $g$ are composed of sinusoidal functions (sine and cosine) then you will have a ripple effect of some sort. But the possibilities are limitless. First one must decide upon what kind of distortion one would like, then find an equation to do that. As an example, in face recognition, the position of certain fixed points on the face are recorded. One might wish a function which would distort one person's face so it would match the characteristic points of another's face. – John Wayland Bales Jun 29 '16 at 18:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5517736077308655, "perplexity": 857.7421119408444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999787.0/warc/CC-MAIN-20190625031825-20190625053825-00007.warc.gz"}
|
https://www.ncbi.nlm.nih.gov/pubmed/2342578?dopt=Abstract
|
Nature. 1990 May 31;345(6274):458-60.
# Telomeres shorten during ageing of human fibroblasts.
### Author information
1
Department of Biochemistry, McMaster University, Hamilton, Ontario, Canada.
### Abstract
The terminus of a DNA helix has been called its Achilles' heel. Thus, to prevent possible incomplete replication and instability of the termini of linear DNA, eukaryotic chromosomes end in characteristic repetitive DNA sequences within specialized structures called telomeres. In immortal cells, loss of telomeric DNA due to degradation or incomplete replication is apparently balanced by telomere elongation, which may involve de novo synthesis of additional repeats by a novel DNA polymerase called telomerase. Such a polymerase has been recently detected in HeLa cells. It has been proposed that the finite doubling capacity of normal mammalian cells is due to a loss of telomeric DNA and eventual deletion of essential sequences. In yeast, the est1 mutation causes gradual loss of telomeric DNA and eventual cell death, mimicking senescence in higher eukaryotic cells. Here, we show that the amount and length of telomeric DNA in human fibroblasts do in fact decrease as a function of serial passage during ageing in vitro and possibly in vivo. It is not known whether this loss of DNA has a causal role in senescence.
PMID:
2342578
DOI:
10.1038/345458a0
[Indexed for MEDLINE]
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8251911401748657, "perplexity": 7350.896290611309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672548.33/warc/CC-MAIN-20191017022259-20191017045759-00436.warc.gz"}
|
https://physicshelpforum.com/threads/intensity-of-a-wave-in-electromagnetism-compared-to-quantum-mechanical-waves.12353/
|
# Intensity of a wave in electromagnetism compared to quantum mechanical waves
#### fergal
Hi,
Why is it that in electromagnetism we can compute the intensity of a wave by taking the square of the amplitude, but we can't do the same for quantum mechanical waves?
My best guess was that it's because of the complex numbers in quantum mechanics, but I'm really unsure.
Any help??
Thanks
#### HallsofIvy
Perhaps you have these backwards. The amplitude of an electromagnetic wave (or any wave) is the intensity- there is no need to square. The difference with "quantum mechanical waves" is that the probability a particle exists at a given point is the square of the amplitude at that point.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9726634621620178, "perplexity": 527.8656197263098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810617.95/warc/CC-MAIN-20200408041431-20200408071931-00462.warc.gz"}
|
https://cs.stackexchange.com/questions/102363/how-many-pairs-of-brackets-are-sufficient-to-make-brainfuck-turing-complete
|
# How many pairs of brackets are sufficient to make Brainfuck Turing complete?
Brainfuck is a Turing complete programming language that uses only 8 symbols (6 if you ignore I/O).
The two most notable ones that push it to Turing completeness are [ and ], essentially Brainfuck's label and goto.
Normally, programs in Brainfuck use multiple sets of [], but I was wondering exactly how many pairs of these brackets have to be used to make Brainfuck Turing complete?
More simply, what is the least number of brackets that you'd need to simulate an n-state Turing machine (give the number of brackets for 1-, 2-, and 3-state Turing machines)?
Notes:
We are assuming an infinite tape and no computational limits.
It is a 2-symbol Turing Machine
• "how many pairs of these brackets have to be used?" Can you clarify "have to be used". For example, what if I ask BrainF to count to $2^{1000000}$? – Apass.Jack Jan 4 at 4:08
• @Apass.Jack the minimum number of brackets – MilkyWay90 Jan 4 at 4:13
• Oh, do you meant the minimum number of brackets to simulate an $n$-state Turing machine as a function of $n$? Anyway, can you give a nontrivial example that is as simple as possible? – Apass.Jack Jan 4 at 4:16
• @Apass.Jack Okay, I'm coming up with a buggy BF program which works for a one-state Turing Machine – MilkyWay90 Jan 4 at 4:28
• @Apass.Jack Nevermind, it is way too hard for me. Basically make a BF interpreter for my programming language, Turing Machine But Way Worse, when it uses only two possible symbols (0 and 1) and remove the I/O and halting aspect completely – MilkyWay90 Jan 4 at 4:38
This is a further development of @ais523's answer, reducing it to only two sets of brackets, and also using a more compact cell placement based on Golomb ruler theory. ais523 has made a compiler for this construction, as well as this TIO session showing a sample resulting BF program running with debug tracing of the TWM counters.
Like the original, this starts with a program in The Waterfall Model, with some restrictions that don't lose generality:
1. All counters have the same self-reset value $$R$$; that is, the TWM trigger map $$f$$ has the property that $$f(x,x)=R$$ for all $$x$$.
2. There is a single halting counter $$h$$.
3. The number $$c$$ of counters is $$(p-1)/2$$ for some prime number $$p$$.
# Golomb ruler
We combine the Erdős–Turán construction with the permutation function of a Welch–Costas array in order to get a Golomb ruler with the necessary properties.
(I'm sure this combined construction cannot be a new idea but we just found and fit together these two pieces from Wikipedia.)
Let $$r$$ be a primitive root of $$p=2c+1$$. Define the function
$$g(k)=4ck - ((r^k-1)\bmod(2c+1)), k=0,\ldots,2c-1.$$
1. $$g$$ is a Golomb ruler of order $$2c$$. That is, the difference $$g(i)-g(j)$$ is unique for every pair of distinct numbers $$i,j \in \{0,\ldots,2c-1\}$$.
2. $$g(k)\bmod(2c)$$ takes on every value $$0,\ldots,2c-1$$ exactly once.
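A small numerical check of these two properties (my addition, not part of the answer), for one concrete case with $$c=5$$, $$p=11$$ and primitive root $$r=2$$:
#include <cstdio>
#include <set>
#include <vector>

int main() {
    const long long c = 5, p = 2 * c + 1, r = 2;   // example values for the demo
    std::vector<long long> g;
    long long rk = 1;                              // r^k mod p
    for (long long k = 0; k < 2 * c; ++k) {
        g.push_back(4 * c * k - ((rk - 1) % p));
        rk = rk * r % p;
    }
    // Property 1: all pairwise differences are distinct (Golomb ruler).
    std::set<long long> diffs;
    bool golomb = true;
    for (size_t i = 0; i < g.size(); ++i)
        for (size_t j = 0; j < g.size(); ++j)
            if (i != j && !diffs.insert(g[i] - g[j]).second) golomb = false;
    // Property 2: g(k) mod 2c covers every residue 0..2c-1 exactly once.
    std::set<long long> residues;
    for (long long v : g) residues.insert(((v % (2 * c)) + 2 * c) % (2 * c));
    std::printf("golomb: %s, residues covered: %zu of %lld\n",
                golomb ? "yes" : "no", residues.size(), 2 * c);
    return 0;
}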
# Tape structure
For each TWM counter $$x\in \{0,\ldots,c-1\}$$, we assign two BF tape cell positions, a fallback cell $$u(x)$$ and a value cell $$v(x)$$:
$$u(x)=g(k_1), \quad v(x)=g(k_2), \qquad \text{with } g(k_1) \equiv g(k_2) \equiv x \pmod{c}.$$
By the second property of $$g$$ there are exactly two distinct $$k_1,k_2$$ values to choose from.
A fallback cell's content will most of the time be kept at $$0$$, except when its counter has just been visited, when it will be at $$2R$$, twice the counter self-reset value. A value cell will be kept at twice the value of the corresponding TWM counter.
All other cells that can be reached by the BF program execution (a finite number) will be kept at odd values, so that they always test as nonzero. After initialization this is automatic because all cell adjustments are by even amounts.
If desired, all cell positions can be shifted rightwards by a constant in order to avoid moving to the left of the initial BF tape position.
# BF program structure
Let $$H = v(h)-u(h)$$ be the distance between the halting counter's value and fallback cells, and let $$N$$ be a number large enough that $$cN+1 \geq v((x+1)\bmod c) - u(x)$$ for all counters $$x$$. Then the basic BF program structure is
initialization [ >$$\times (H+cN+1)$$ [ <$$\times c$$ ] adjustments <$$\times H$$ ]
## Initialization
The initialization phase sets all cells reachable by the program to their initial values, in a state as if the last counter had just been visited and the just active cell was its fallback cell $$u(c-1)$$:
1. Value cells are initialized to twice the initial content of the corresponding TWM counter, except that counter $$0$$ is pre-decremented.
2. Fallback cells are set to $$0$$, except cell $$u(c-1)$$, which is set to $$2R$$.
3. All other cells reachable by the program (a finite number) are set to $$1$$.
Then the tape pointer is moved to position $$u(c-1)-H$$ (an always non-zero cell) before we reach the program's first [.
## Beginning of outer loop
At the beginning of an iteration of the outer loop, the tape pointer will be at either $$u(x)-H$$ or $$v(x)-H$$ for a counter $$x$$.
Let $$y=((x+1)\bmod c)$$ be the next counter to visit.
The movement >$$\times (H+cN+1)$$ places the tape pointer on a position that is $$\equiv y\pmod c$$ and not to the left of $$v(y)$$.
The inner loop [ <$$\times c$$ ] now searches leftwards in steps of $$c$$ for a zero cell. If counter $$y$$ is zero, then it will stop at the (zero) value cell $$v(y)$$; otherwise it will find the fallback cell $$u(y)$$.
Whichever cell is found becomes the new active cell.
The adjustment phase adjusts various cells on the tape based on their position relative to the active cell. This section contains only +->< commands and so these adjustments happen unconditionally. However, because all counter-related cells are in a Golomb ruler pattern, any adjustments that are not proper for the current active cell will miss all the important cells and adjust some irrelevant cell instead (while keeping it odd).
Separate code must thus be included in the program for each possible required pair of active and adjusted cell, except for an active cell's self-adjustment, which, because adjustment is based solely on relative position, must be shared between all of them.
1. Adjust the previous counter's fallback cell $$u(x)$$ by $$-2R$$.
2. Adjust the current counter's fallback cell $$u(y)$$ by $$2R$$, except if the current active cell is $$v(h)$$ and so we should halt.
3. Adjust the next counter's value cell $$v((y+1)\bmod c)$$ by $$-2$$ (decrementing the counter).
4. When the active cell is a value cell $$v(y)$$ (so the counter $$y$$ has reached zero), adjust all value cells $$v(z)$$ by $$2f(y,z)$$ from the TWM trigger map. $$v(y)$$ itself becomes adjusted by $$2R$$.
The first and second adjustments above are made necessary by the fact that all active cells must adjust themselves by the same value, which is $$2R$$ for value cells, and thus also for fallback cells. This requires preparing and cleaning up the fallback cells to ensure they get back to $$0$$ in both the value and fallback branches.
## End of outer loop
The movement <$$\times H$$ represents that at the end of the adjustment phase, the tape pointer is moved $$H$$ places to the left of the active cell.
For all active cells other than the halting counter's value cell $$v(h)$$, this is an irrelevant cell, and so odd and non-zero, and the outer loop continues for another iteration.
For $$v(h)$$, the pointer is instead placed on its corresponding fallback cell $$u(h)$$, for which we have made an exception above to keep it zero, and so the program exits through the final ] and halts.
I'm not 100% sure that it's impossible to do this with two sets of brackets. However, if the cells of the BF tape allow unbounded values, three sets of brackets are enough. (For simplicity, I'll also assume that we can move the tape head left past its starting point, although because this construction only uses a finite region of the tape, we could lift that restriction by adding sufficiently many > commands at the start of the program.) The construction below requires assuming Artin's conjecture to be able to compile arbitrarily large programs; however, even if Artin's conjecture is false, it will still be possible to show Turing-completeness indirectly via translating an interpreter for a Turing-complete language into BF using the construction below, and running arbitrary programs via giving them as input to that interpreter.
The Turing-complete language that we're compiling into unbounded BF is The Waterfall Model, which is one of the simplest known computational models. For people who don't know it already, it consists of a number of counters (and initial values for them), and a function $$f$$ from pairs of counters to integers; program execution consists of repeatedly subtracting 1 from every counter, then if any counter $$x$$ is 0, adding $$f(x,y)$$ to each counter $$y$$ (the program is written in such a way that this never happens to multiple counters simultaneously). There's a Turing-completeness proof for this language behind my link. Without loss of generality, we'll assume that all counters have the same self-reset value (i.e. $$f(x,x)$$ is the same for all $$x$$); this is a safe assumption because for any specific $$x$$, adding the same constant to each $$f(x,y)$$ will not change the behaviour of the program.
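To make the model concrete, here is a minimal interpreter sketch for The Waterfall Model as just described (my addition; the three-counter program inside it is a made-up toy, not one produced by the construction, and counter 2 plays the role of the halting counter):
#include <cstdio>
#include <vector>

int main() {
    // counters[i] = current value; trigger[x][y] = f(x,y), added to counter y
    // when counter x reaches zero. Counter 2 halts instead of triggering.
    std::vector<long long> counters = {4, 7, 9};
    std::vector<std::vector<long long>> trigger = {
        {6, 1, 0},
        {2, 6, 0},
        {0, 0, 0},   // the halting counter never triggers
    };
    const int halt = 2;
    for (int step = 0; step < 30; ++step) {
        for (auto &c : counters) --c;               // subtract 1 from every counter
        int zeroed = -1;
        for (size_t x = 0; x < counters.size(); ++x)
            if (counters[x] == 0) zeroed = (int)x;  // programs ensure at most one zero
        if (zeroed == halt) { std::puts("halt"); break; }
        if (zeroed >= 0)
            for (size_t y = 0; y < counters.size(); ++y)
                counters[y] += trigger[zeroed][y];  // apply the trigger map f(x,y)
        std::printf("step %d: %lld %lld %lld\n",
                    step, counters[0], counters[1], counters[2]);
    }
    return 0;
}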
Let $$p$$ be the number of counters; without loss of generality (assuming Artin's conjecture), assume that $$p$$ has a primitive root 2. Let $$q$$ be $$p(1+s+s^2)$$, where $$s$$ is the lowest power of 2 greater than $$p$$. Without loss of generality, $$2q$$ will be less than $$2^p$$ ($$2q$$ is bounded polynomially, $$2^p$$ grows exponentially, so any sufficiently large $$p$$ will work).
The tape arrangement is as follows: we number each counter with an integer $$0 \leq i < p$$ (and without loss of generality, we assume that there's a single halt counter and number it $$2$$). The value of most counters is stored on tape cell $$2^i$$, with the exception of counter 0, which is stored on tape cell $$2q$$. For each odd-numbered tape cell from cell -1 up to and including $$2^{p+1}+2p+1$$, that tape cell always holds 1, unless it's immediately to the left of a counter, in which case it always holds 0. Even-numbered tape cells that aren't being used as counters have irrelevant values (which might or might not be 0); and odd-numbered tape cells outside the stated range also have irrelevant values. Note that setting the tape into an appropriate initial state requires initialising only finitely many tape elements to constant values, meaning that we can do it with a sequence of <>+- instructions (in fact, only >+ are needed), thus no brackets. At the end of this initialisation, we move the tape pointer to cell -1.
The general shape of our program will look like this:
initialisation [>>>[ >$$\times(2^p-1)$$ [ <$$\times(2p)$$ ]>-] adjustment <<<]
The initialisation puts the tape into the expected shape and the pointer on cell -1. This is not the cell to the left of a counter (0 is not a power of 2), so it has value 1, and we enter the loop. The loop invariant for this outermost loop is that the tape pointer is (at the start and end of each loop iteration) three cells to the left of a counter; it can be seen that the loop will thus only exit if we're three cells to the left of counter 2 (each other counter has a 1 three cells to its left, as to have a 0 there would imply that two counters' tape positions were 2 cells apart; the only two powers of 2 that differ by 2 are $$2^1$$ and $$2^2$$, and $$q$$'s binary representation changes from strings of $$0$$s to strings of $$1$$s or vice versa at least four times and thus cannot be 1 away from a power of 2).
The second loop repeatedly loops over the counters, decrementing them. The loop invariant is that the tape pointer is always pointing to a counter; thus the loop will exit when some counter becomes 0. The decrement is just -; the way we get from one counter to the next is more complex. The basic idea is that moving $$2^p-1$$ spaces to the right from $$2^x$$ will place us on an odd-numbered cell $$2^p+2^x-1$$, which is to the right of any counter ($$2^p$$ is the last counter, $$2^x-1$$ is positive because $$x$$ is positive); modulo $$2p$$, this value is congruent to (by Fermat's Little Theorem) $$2^x+1$$. The innermost loop repeatedly moves left by $$2p$$ spaces, also not changing the index of the tape cell modulo $$2p$$, and must eventually find the cell congruent to $$2^x+1$$ modulo $$2p$$ that has the value (which will be the cell to the left of some counter); because of our primitive root requirement there's exactly one such cell ($$2q-1$$ is congruent to $$-1$$ modulo $$2p$$, and $$2^{\log_{2,p}(r)+1}-1$$ is congruent to $$2r-1$$ for any other $$r$$, where $$\log_{2,p}$$ is the discrete logarithm to base 2 modulo $$p$$). Additionally, it can be seen that the position of the tape pointer modulo $$2p$$ increases by $$2$$ each time round the middle loop. Thus, the tape pointer must cycle between all $$p$$ counters (in order of their values modulo $$2p$$). Thus, every $$p$$ iterations, we decrease every counter (as required). If the loop breaks partway through an iteration, we'll resume the decrease when we re-enter the loop (because the rest of the outermost loop makes no net change to the tape pointer position).
When a counter does hit 0, the middle loop breaks, taking us to the "adjustment" code. This is basically just an encoding of $$f$$; for every pair $$(x,y)$$, it adds $$f(x,y)$$ to the tape element which is the same distance left/right of the current tape pointer as counter $$y$$'s tape location is left/right of counter $$x$$'s tape location (and then removes the tape pointer back to where it started). Whenever $$x\neq y$$, this distance turns out to be unique:
• The difference between two powers of 2 is a binary number consisting of a string of 1 or more $$1$$s followed by a string of 0 or more $$0$$s (with the place values of the start of the number, and the start of the $$0$$ string, depending on the larger and smaller respectively of $$x$$ and $$y$$); thus all those differences are distinct.
• As for the difference of a power of 2 and $$q$$, it must contain at least two transitions between strings of $$1$$s and $$0$$s ($$q$$ contains at least four such transitions, the subtraction can only remove 2), thus is distinct from all differences of two powers of two, and these differences are obviously also distinct from each other.
For $$x=y$$, we obviously find that the distance moved is 0. But because all $$f(x,y)$$ are equal, we can just use this as the adjustment for the current cell. And it can be seen that the adjustment code thus implements the "when a counter hits 0" effect for every counter; all the cells that actually represent counters will be adjusted by the correct amount, and all the other adjustments will affect non-counter even-numbered cells (the difference between two even numbers is even), which have no effect on the program's behaviour.
Thus, we now have a working compilation of any program in The Waterfall Model to BF (including halt behaviour, but not including I/O, which isn't needed for Turing-completeness) using only three pairs of brackets, and thus three pairs of brackets are enough for Turing-completeness.
• Nice job! I see you worked on this in TNB! – MilkyWay90 Jan 4 at 5:20
• I think you need s to be at least p+2. When s=p+1, q is 1 less than a power of 2. – Ørjan Johansen Jan 5 at 2:32
• I think I found a much simpler (as in requiring no prime number theory) counter placement: 2p*2^i+2i. – Ørjan Johansen Jan 5 at 4:56
• @ØrjanJohansen: Right, I think I mentioned that construction in #esoteric (some time after I wrote this post)? All you actually need is a Golomb ruler for which each element is distinct modulo the number of elements, and there are various ways to construct those (although finding optimal ones is hard; the longest I've found (via brute force) is [0, 1, 3, 7, 20, 32, 42, 53, 58] for p=9). – ais523 Jan 5 at 17:04
• Oh so you did (just before I said my brain refused to be in math mode, so there's my excuse :P). I guess I found out k=0 was sufficient, then. I think Wikipedia's Erdős–Turán construction gives a polynomially growing (and presumably O()-optimal?) one if you use just the first half of the elements (the other half repeats (mod p)). – Ørjan Johansen Jan 5 at 18:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 144, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.798644483089447, "perplexity": 1192.9225593149572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330962.67/warc/CC-MAIN-20190826022215-20190826044215-00082.warc.gz"}
|
https://www.hepdata.net/record/138726
|
Search for dark matter produced in association with a single top quark and an energetic $W$ boson in $\sqrt{s}=$ 13 TeV $pp$ collisions with the ATLAS detector
The collaboration
CERN-EP-2022-146, 2022.
Abstract (data abstract)
Version of twMeT HEPData record as of 09.12.2022.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9864782094955444, "perplexity": 5558.147484958684}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00633.warc.gz"}
|
https://www.jiskha.com/questions/1477473/f-x-varies-inversely-with-x-and-f-x-10-when-x-20-what-is-the-inverse-variation
|
# Algebra
f(x) varies inversely with x and f(x) = –10 when x = 20. What is the inverse variation equation?
1. well, f(x) = k/x
so, plug in your numbers to find k.
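Carrying the hint one step further (a worked line that is not in the original reply): k = x·f(x) = 20·(−10) = −200, so the inverse variation equation is f(x) = −200/x.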
## Similar Questions
1. ### variation
Can you please check my answers? Thanxs! Write an equation that expresses the relationship. Use k as the constant of variation. 20. f varies jointly as b and the square of c. -I got: f=kbc^2 22. r varies jointly as the square of s
2. ### MATH
if p varies inversely as the square of q and p=8 when q=4.find q when p=32
3. ### Precalculus
m varies directly as the cube of n and inversely as g
4. ### Mathematics
M varies directly as n and inversely as p. if M=3, when n=2,and p=1, find M in terms of n and p
1. ### math
p varies directly as the square of q and inversely as r when p=36, q=3 and r=4 calculate q when p=200 and r=2
2. ### Maths
X varies directly as the product of u and v and inversely as their sum. If x=3 when u=3 and v =1, what is the value of x if u=3 and v=3
3. ### math
If y varies inversely as x, and y = 12 when x = 6, what is K, the variation constant? A. 1⁄3 C. 72 B. 2 D. 144
4. ### math
If p varies inversely with q, and p=2 when q=1, find the equation that relates p and q.
1. ### Inverse Variation
Help me please.. Explain also :( 1. E is inversely proportional to Z and Z = 4 when E = 6. 2. P varies inversely as Q and Q = 2/3 when P = 1/2. 3.R is inversely proportional to the square of I and I = 25 when R = 100. 4. F varies
2. ### math
Suppose f varies inversely with g and that f=20 when g=4. What is the value of f when g=10?
3. ### Math
A Number P Varies Directly As q And Partly Inversely As Q ^ 2, Given That P=11 When q=2 and p=25.16 when q=5. calculate the value of p when q=7
4. ### mathematics
a varies directly as the cube of b and inversely as the product of c and d
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926445484161377, "perplexity": 2234.391881226718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154277.15/warc/CC-MAIN-20210801221329-20210802011329-00081.warc.gz"}
|
http://mathhelpforum.com/algebra/98309-expanding-expressions-print.html
|
# Expanding expressions
• August 16th 2009, 11:41 PM
fvaras89
Expanding expressions
Im not too sure how to expand these two expressions
1/ (x+2y)(x-2y)^2
2/ (1-x)(1+x+x^2+x^3)
The small numbers are squared and cubed.. not sure how to write them here (Worried)
Would be good if you can put steps also so i can understand where it comes from..(Happy)
Thanks
• August 17th 2009, 12:01 AM
songoku
Hi fvaras89
Example
To type square : x^2
To type cube : x^3
or you can learn it here : http://www.mathhelpforum.com/math-he...-tutorial.html
1. $(x+1)^2 = (x+1) * (x+1) = x^2 + x + x + 1 = x^2 + 2x + 1$
2. $(x+1)\cdot(x^2+3x+5) = x\cdot x^2 + x\cdot 3x + x\cdot 5 + 1\cdot x^2 + 1\cdot 3x + 1\cdot 5 = x^3 + 4x^2 + 8x + 5$
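Applying the same term-by-term multiplication to the second expression from the original question (a worked line added here, not part of the reply above):
$(1-x)(1+x+x^2+x^3) = 1 + x + x^2 + x^3 - x - x^2 - x^3 - x^4 = 1 - x^4$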
• August 17th 2009, 12:08 AM
fvaras89
Thanks, but im really still not sure what to do.. any more hints? (Crying)
• August 17th 2009, 12:20 AM
songoku
Hi fvaras89
Can you expand (x+5)^2 ? :)
• August 17th 2009, 12:59 AM
fvaras89
Is that how you would expand it?
x^2 + 25x + 25
• August 17th 2009, 01:02 AM
songoku
Hi fvaras89
Almost right. How can you get 25x ? :)
• August 17th 2009, 01:04 AM
fvaras89
oh woops hahah..
so would it be x^2 + 25?
• August 17th 2009, 01:17 AM
songoku
Hi fvaras89
Still no. :)
$(x+a) \cdot (y+b) = x \cdot y + x \cdot b + a \cdot y + a \cdot b$
• August 18th 2009, 12:10 AM
jgv115
Do you have a text book that you are working from? I doubt your teacher would give you a task if you're not clear.
I'll use songoku's question:
$(x+5)^2$
since the power is 2 it's multiplying itself.
for example $2^2$
= $2 * 2$
now we do the same for this..
$(x+5)(x+5)$
now we multiply the first number/letter of the first bracket by the first number/letter of the second bracket
$x * x = x^2$
now we multiply the first letter of the first bracket by the second number/ letter
$x * 5 = 5x$
now we multiply the second letter/number by the first number of the second bracket
$5 * x = 5x$
now the last step... yep that's right, the second number/letter of the first bracket by the second number/letter of the second bracket.
$5 * 5 = 25$
do you get the pattern?
Now put them together
$x^2 + 5x + 5x + 25$
simplify..
$x^2 + 10x + 25$
now that you've learnt how to do it properly now I tell you the shortcut
when you have 2 brackets multiplying each other and they are IDENTICAL. They have to be identical.
The formula:
$(a+b)^2 = a^2 + 2ab + b^2$ can be used.. for example:
$(x+5)^2$
x will be "a" and 5 will be "b"
now just substitute:
$a^2 + 2ab + b^2$
$x^2 + 2*x*5 + 5^2$
$x^2 + 10x + 25$
Was that the same answer as before? it sure was.
Do you understand better now?
• May 5th 2010, 08:37 AM
NatalieB
help
i don't understand 6(x+4)
7(3x-9)
x(x+7)
as soon as possible, thanks
• May 5th 2010, 10:30 AM
Wilmer
WHAT are you asked to do?
• May 5th 2010, 10:39 AM
NatalieB
Sorry i am new to this...
i am asked to expand these algebraic expressions
5(x+5)
9(8x-9)
x(x+8)
and i really don't understand them (Wondering) so please could you help
thanks Natalie B (Rofl)
• May 5th 2010, 12:31 PM
Wilmer
Quote:
Originally Posted by NatalieB
i am asked to expand these alebraic expressions
5(x+5)
9(8x-9)
x(x+8)
Means multiply what's inside brackets by what's outside.
Take 5(x+5)
5 times x = 5x
5 times 5 = 25
So answer is 5x + 25
Let's see you do the other 2: GO Natalie GO (Rock)
• May 5th 2010, 10:11 PM
coolnice
• May 6th 2010, 08:10 AM
NatalieB
Okayy so is this correct ...?
9(8x-9)
9 times 8x=72x
9 times 9 = 81
so its 72x-81
P.S
how do you add the quote in at the top of the page ??? (Rofl)(Thinking)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184224367141724, "perplexity": 4324.304010906481}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163054976/warc/CC-MAIN-20131204131734-00095-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=0207&L=LATEX-L&D=0&H=N&S=b&P=1776326
|
[email protected]
Re: latex/3480: Support for UTF-8 missing in inputenc.sty
Dominique Unruh <[log in to unmask]>
Thu, 5 Dec 2002 23:57:12 +0100, text/plain (78 lines)

Short info on what this discussion is about: We were discussing the possibility of adding UTF-8 inputenc support to LaTeX. The existing package ucs.sty is deemed too big/resource consuming for inclusion into the kernel. This discussion is now moved onto LATEX-L.

Frank wrote:
> it seems important to me to follow up the question Chris has posted
> about what are input and what are output (font) encodings.

Yes, I do understand this difference. But when adding UTF-8 support, it is probably even unwise to load all supported UTF sequences. Therefore I proposed to add to the fontenc an information, which Unicode range is to be loaded for this fontencoding. To clarify this, here an example: if we have code like the following:

\usepackage[utf8]{inputenc}
\usepackage[T2A]{fontenc}

the file t2aenc.def could contain a line like:

\FontencUnicodeRange{"400-"4FF}

and \AtBeginDocument UTF-8 sequences would only be loaded for the ranges given by the fontencodings, thus taking the need from the user to decide by himself, which sequences to load. In case no UTF-8 is needed, the \FontencUnicodeRange's are ignored. Of course, the fontencoding->Unicode-Range mappings could also be in some extra file, thus removing the need to change the existing fontencodings.

> commands, eg instead of
> \DeclareInputText{164}{\textcurrency}
> we probably need something like
> [...]
> \DeclareUTFeightInputText{}{\textcurrency}

Code for this can be extracted from utf8.def as with ucs.sty. Interested people could have a look at the following macros in this file (unfortunately mostly undocumented (yet)):

\utf@viii@map{number} constructs the UTF-8 sequence formed \u8-n-BCD where n is the first character of the sequence (as decimal number), and BCD are the (one, two or three) further characters (as characters). Here the macro's content gets just number, but the macros can easily be changed to define it to anything given (e.g. \textcurrency).

\utf@viii@undef{number}{char}{char}{char} calculates the Unicode number for some UTF-8 sequence (given again as number, char, char, char, with \@nil instead of the chars for shorter sequences.)

A UTF-8 sequence starter would then have to be defined approximately as (here the example for the sequence starter "E3 = 227):

\def\^^E3#1#2{\ifx\csname u8-227-#1#2\endcsname\relax \utf@viii@undef{227}#1#2\@nil\else \csname u8-227-#1#2\endcsname\fi}

\utf@viii@make does the job of defining such macros (containing some additional code)

Chris wrote:
> I tried to understand Dominique's approach and to compare it with
> David's but both, as on CTAN, consist of undocumented code ... so
> I gave up.

Have you looked at David's code? My code is documented (though only partly). The comments can be found in utf8.dtx, or in the files in the CVS archive (see http://www.unruh.de/DniQ/latex/unicode/). I don't know David's code, could you give me a CTAN location?

DniQ.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9241955876350403, "perplexity": 5215.691061495924}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00141.warc.gz"}
|
http://math.stackexchange.com/questions/275611/rotating-co-ordinates-in-3d?answertab=votes
|
# Rotating co-ordinates in 3D
Suppose I have 3 axes, $x$, $y$, and $z$ such that $x$ is horizontal, $y$ is vertical, and $z$ goes in/out of the computer screen where $+$ve values stick out and $-$ve values are sunken in.
Suppose I have a spherical co-ordinate system where $r$ is the radius from the origin $(x, y, z) = (0, 0, 0)$, $\theta$ is the rotation about the $x$ axis and $\phi$ is the rotation about the $y$ axis, such that $(r,\theta,\phi)=(1,0,0) \mapsto (x, y, z) = (0, 0, 1)$. Assume that I rotate about $y$ first, then about $x$.
I have a number of points, each of which has an associated $r, \theta, \phi$, and a world $\Theta,\Phi$. Imagine the points exist in space and we're looking at them from a camera, and depending on how the world $\Theta,\Phi$ changes, the camera is looking at the origin from a different point.
Basically, I'm trying to render some objects on a computer in a roughly spherical arrangement, and allow the user to rotate the view.
Sort of like those animated, interactive keyword clouds you sometimes see on the Internet where you move the mouse and the keywords move around in a spherical arrangement.
First I calculate each point's ($p_i$) original $(x_i, y_i, z_i)$ like so:
\begin{align} x_i &= r_i \sin(\phi_i) \cos(\theta_i) \\ y_i &= r_i \sin(\theta_i) \\ z_i &= r_i \cos(\phi_i) \cos(\theta_i) \end{align}
Then I rotate the original $(x_i,y_i,z_i)$ by the world angles, first about the $y$ axis by $\Phi$:
\begin{align} x_i &:= \cos(\Phi) \, x_i - \sin(\Phi) \, z_i \\ y_i &:= y_i \\ z_i &:= \sin(\Phi) \, x_i + \cos(\Phi) \, z_i \end{align}
Then about the $x$ axis by $\Theta$:
\begin{align} x_i &:= x_i \\ y_i &:= \cos(\Theta) \, y_i + \sin(\Theta) \, z_i \\ z_i &:= \cos(\Theta) \, z_i - \sin(\Theta) \, y_i \end{align}
I then render each point $p_i$ at co-ordinates $(x_i, y_i, z_i)$.
The problem is, as long as I only apply one of the two world rotations - either $\Theta$ or $\Phi$, everything is rendered correctly. As soon as I apply both rotations together, as the interactive image is displayed with $\Theta$ and $\Phi$ changing, the objects get distorted, as though they're being sheared into a plane, and then back to their original spherical arrangement again.
I'm not entirely sure what I'm doing wrong.
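Not from the original thread, but a minimal sketch of applying the same two rotations with temporaries, so that no coordinate is overwritten while its old value is still needed; overwriting $x_i$ before computing $z_i$ is a common cause of exactly this kind of shearing, although I cannot confirm that is what happened in the asker's code.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Rotate about the y axis by Phi, then about the x axis by Theta,
// computing into temporaries so the inputs of each step stay intact.
Vec3 rotate(Vec3 p, double Theta, double Phi) {
    // rotation about y
    double x1 = std::cos(Phi) * p.x - std::sin(Phi) * p.z;
    double y1 = p.y;
    double z1 = std::sin(Phi) * p.x + std::cos(Phi) * p.z;
    // rotation about x
    Vec3 r;
    r.x = x1;
    r.y = std::cos(Theta) * y1 + std::sin(Theta) * z1;
    r.z = std::cos(Theta) * z1 - std::sin(Theta) * y1;
    return r;
}

int main() {
    Vec3 p{0.0, 0.0, 1.0};                 // the point (r, theta, phi) = (1, 0, 0)
    Vec3 q = rotate(p, 0.5, 1.0);
    std::printf("%.3f %.3f %.3f\n", q.x, q.y, q.z);
    return 0;
}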
your $r,\theta,\phi$ seem to be different from the standard math usage. Perhaps if you link to or add a figure it could help. – Maesumi Jan 11 '13 at 4:03
@Maesumi $x$ is left/right, $y$ is up/down, $z$ is in/out. $\theta$ is rotation about the $x$ axis, and $\phi$ is rotation about the $y$ axis. $y$-rotation is applied first, then $x$-rotation. I don't have a figure illustrating this, but it shouldn't be too complicated? – user1002358 Jan 11 '13 at 4:20
I am not sure if this is a typo but on your second set of equations you have $x_i$ in terms of $x_i$ and $z_i$, while $z_i$ is in terms of $x_i$ and $y_i$ instead of $z_i$. – Maesumi Jan 11 '13 at 4:45
@Maesumi Thanks! You're right - but it was just a typo in my question. My code had the correct equations. – user1002358 Jan 11 '13 at 4:58
That is what I thought. Now in your second paragraph you say $\theta$ is rotation angle about $x$ axis. This is very confusing to me. So let's make sure we are talking about the same thing. Your first set of equations look like spherical coordinate formulas. Except for your formula for $y_i$. Are you using spherical coordinates? e.g. as in here. If so let me know how you relabel the first picture there to get yours. – Maesumi Jan 11 '13 at 5:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888754487037659, "perplexity": 504.54096042159483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459875.44/warc/CC-MAIN-20151124205419-00138-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://www.lessonplanet.com/teachers/right-around-5th
|
# Right Around
For this estimation worksheet, 5th graders study the possible answers and write a dividend that would make the estimated quotient true. Students complete 15 fill-in-the-blank questions, choosing their answers from the box of dividends provided at the bottom of the page.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368930220603943, "perplexity": 1591.6666322759072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685850.32/warc/CC-MAIN-20170919145852-20170919165852-00643.warc.gz"}
|
https://math.stackexchange.com/questions/71000/certain-permutations-of-the-set-of-all-pythagorean-triples
|
# Certain permutations of the set of all Pythagorean triples
The fact that the set of all primitive Pythagorean triples naturally has the structure of a ternary rooted tree may have first been published in 1970:
http://www.jstor.org/stable/3613860
I learned of it from Joe Roberts' book Elementary number theory: A problem oriented approach, published in 1977 by MIT Press. In 2005 it was published again by Robert C. Alperin:
http://www.math.sjsu.edu/~alperin/pt.pdf
He seems to say in a footnote on the first page that he didn't know that someone had discovered this before him until a referee pointed it out. That surprised me. Maybe because of Roberts' book, I had thought this was known to all who care about such things.
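For readers who have not seen the tree, here is a small sketch of my own (using the three matrices of Berggren's construction mentioned in the comments below) that expands the root (3, 4, 5) two levels down; each matrix sends a primitive triple to a distinct child triple.
#include <cstdio>

// The three matrices of the classical construction; each maps a primitive
// triple (a, b, c) to a distinct child triple, giving a ternary tree
// rooted at (3, 4, 5).
const int M[3][3][3] = {
    {{ 1,-2, 2}, { 2,-1, 2}, { 2,-2, 3}},
    {{ 1, 2, 2}, { 2, 1, 2}, { 2, 2, 3}},
    {{-1, 2, 2}, {-2, 1, 2}, {-2, 2, 3}},
};

void expand(const int t[3], int depth) {
    std::printf("%*s(%d, %d, %d)\n", 2 * depth, "", t[0], t[1], t[2]);
    if (depth == 2) return;                  // print the root and two levels
    for (int m = 0; m < 3; ++m) {
        int child[3];
        for (int i = 0; i < 3; ++i)
            child[i] = M[m][i][0] * t[0] + M[m][i][1] * t[1] + M[m][i][2] * t[2];
        expand(child, depth + 1);
    }
}

int main() {
    const int root[3] = {3, 4, 5};
    expand(root, 0);
    return 0;
}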
If we identify a Pythagorean triple with a rational point (or should I say a set of eight points?) on the circle of unit radius centered at $0$ in the complex plane, then we can say the set of all nodes in that tree is permuted by any linear fractional transformation that leaves the circle invariant. The function $f(z) = (az+b)/(bz+a)$, where $a$ and $b$ are real, leaves the circle invariant and fixes $1$ and $-1$. There are also LFTs that leave the circle invariant and don't fix those two points.
Are there any interesting relationships between the geometry of the LFT's action on the circle and that of the permutation of the nodes in the tree?
(Would it be too far-fetched if this reminds me of the recent discovery by Ruedi Suter of rotational symmetries in some well-behaved subsets of Young's lattice?)
• As beautifully written (literally) as the book by Joe Roberts is, the fact that something is mentioned there doesn't mean all who care about number theory are supposed to know it. It's not the universal bible of number theory. – KCd Oct 9 '11 at 17:54
• That the primitive Pythagorean triples are generated from (3,4,5) by the action of 3 integral matrices was found before 1970. It is in the paper "Pytagoreiska triangular" by B. Berggren, which appeared in Tidskrift för elementär matematik, fysik och kemi 17 (1934), 129--139. But I haven't been able to see a copy of this article directly. – KCd Oct 9 '11 at 18:07
• @KCd: Of course, no one would think that because it's in that book, it's universally known. Nonetheless, having read it in that book (without remembering where one read it) might be the cause of an impression that it's universally known. (The cause of an impression, as opposed to information from which the impression is inferred.) – Michael Hardy Oct 9 '11 at 21:57
• It appears that maybe the fact that each Pythagorean triple shows up eight times on the circle might mean the answer is not as pleasant as I had hoped. However, I've found that by applying inverse matrices, you can find each Pythagorean triple occurring many times in the tree as well. – Michael Hardy Oct 10 '11 at 19:35
• I have created a new Wikipedia article titled Tree of primitive Pythagorean triples. – Michael Hardy Oct 12 '11 at 15:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8436899185180664, "perplexity": 374.4701141105651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517639.17/warc/CC-MAIN-20190418121317-20190418143317-00252.warc.gz"}
|
http://wiki.stat.ucla.edu/socr/index.php?title=SOCR_EduMaterials_ModelerActivities_NormalBetaModelFit
|
# SOCR EduMaterials ModelerActivities NormalBetaModelFit
## SOCR Educational Materials - Activities - SOCR Normal and Beta Distribution Model Fit Activity
### Summary
This activity describes the process of SOCR model fitting in the case of using Normal or Beta distribution models. Model fitting is the process of determining the parameters of an analytical model so that we obtain optimal parameter estimates according to some criterion. There are many strategies for parameter estimation; the differences between most of them lie in the underlying cost functions and in the optimization strategies applied to maximize/minimize the cost function.
### Goals
The aims of this activity are to:
• motivate the need for (analytical) modeling of natural processes
• illustrate how to use the SOCR Modeler to fit models to real data
• present applications of model fitting
### Background & Motivation
Suppose we are given the sequence of numbers {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} and asked to find the best (Continuous) Uniform Distribution that fits that data. In this case, there are two parameters that need to be estimated - the minimum (m) and the maximum (M) of the data. These parameters determine exactly the support (domain) of the continuous distribution and we can explicitly write the density for the (best fit) continuous uniform distribution as:
$f(x) = \frac{1}{M-m}$ for $m \le x \le M$, and $f(x) = 0$ for $x \notin [m:M]$.
Having this model distribution, we can use its analytical form, f(x), to compute probabilities of events, critical functional values and, in general, do inference on the native process without acquiring additional data. Hence, a good strategy for model fitting is extremely useful in data analysis and statistical inference. Of course, any inference based on models is only going to be as good as the data and the optimization strategy used to generate the model.
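As a small illustration of the uniform-fit example outside of SOCR (a plain-Python sketch under my own naming, not the SOCR Modeler itself): the optimal parameter estimates are simply the sample minimum and maximum, and the fitted density can then be used for inference.

```python
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Best-fit continuous uniform: the parameter estimates are just the
# sample minimum and maximum.
m, M = min(data), max(data)      # m = 1, M = 10
density = 1.0 / (M - m)          # f(x) = 1/(M - m) = 1/9 on [m:M]

# Use the fitted model for inference, e.g. P(2 <= X <= 5):
print(m, M, density, (5 - 2) * density)   # 1 10 0.111... 0.333...
```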
Let's look at another motivational example. This time, suppose we have recorded the following (sample) measurements from some process: {1.2, 1.4, 1.7, 3.4, 1.5, 1.1, 1.7, 3.5, 2.5}. Taking a bin size of 1, we can easily calculate the frequency histogram for this sample, {6, 1, 2}, as there are 6 observations in the interval [1:2), 1 measurement in the interval [2:3) and 2 measurements in the interval [3:4).
We can now ask about the best Beta distribution model fit to the histogram of the data!
Most of the time when we study natural processes using probability distributions, it makes sense to fit distribution models to the frequency histogram of a sample, not the actual sample. This is because our general goals are to model the behavior of the native process, understand its distribution and quantify likelihoods of various events of interest (e.g., in terms of the example above, we may be interested in the probability of observing an outcome in the interval [1.50:2.15) or the chance that an observation exceeds 2.8).
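Before turning to the SOCR exercises, here is a rough stand-in for this workflow using Python with NumPy/SciPy. It is only an illustrative sketch under my own assumptions: it recovers the {6, 1, 2} histogram with bin size 1 and fits a Beta model with its support fixed to [1:4) directly to the raw sample, whereas the SOCR Modeler fits to the frequency histogram and may use a different parameterization, so the resulting estimates need not match those quoted in Exercise 1.

```python
import numpy as np
from scipy import stats

sample = [1.2, 1.4, 1.7, 3.4, 1.5, 1.1, 1.7, 3.5, 2.5]

# Frequency histogram with bin size 1 over [1:4): recovers {6, 1, 2}.
counts, edges = np.histogram(sample, bins=[1, 2, 3, 4])
print(counts)                              # [6 1 2]

# Fit a (generalized) Beta model with its support fixed to [1, 4].
a, b, loc, scale = stats.beta.fit(sample, floc=1, fscale=3)
model = stats.beta(a, b, loc=loc, scale=scale)

# Use the analytic model for inference on the native process.
print(model.cdf(2.15) - model.cdf(1.50))   # P(1.50 <= X < 2.15)
print(model.sf(2.8))                       # P(X > 2.8)
```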
### Exercises
#### Exercise 1
Let's first solve the challenge presented in the background section, where we calculated the frequency histogram of a sample to be {6, 1, 2}. Go to the SOCR Modeler and click on the Data tab. Paste in the two columns of data. Column 1 contains {1, 2, 3} - these are the bin labels for the sample values and correspond to measurements in the intervals [1:2), [2:3) and [3:4). The second column contains the actual frequency counts of measurements within each of these 3 histogram bins - the values {6, 1, 2}. Now press the Graphs tab. You should see an image like the one below. Then choose Beta_Fit_Modeler from the drop-down list of models in the top-left and click the Estimate Parameters check-box, also on the top-left. The graph now shows the best Beta distribution model fit to the frequency histogram {6, 1, 2}. Click the Results tab to see the actual estimates of the two parameters of the corresponding Beta distribution (Left Parameter = 0.0446428571428572; Right Parameter = 0.11607142857142871; Left Limit = 1.0; Right Limit = 3.0).
You can also see how the (general) Beta distribution degenerates to this shape by going to SOCR Distributions, selecting the (Generalized) Beta Distribution from the top-left and setting the 4 parameters to the 4 values we computed above. Notice how the shape of the Beta distribution changes with each change of the parameters. This is also a good demonstration of why we fit the distribution model to the frequency histogram in the first place - precisely to obtain an analytic model for studying the general process without acquiring more data. Notice how we can compute the probability of any event of interest once we have an analytical model for the distribution of the process. For example, this figure depicts the probability that a random observation from this process exceeds 2.8 (the right limit). This probability is computed to be 0.756.
#### Exercise 2
Go to the SOCR Modeler, select the Graphs tab and click the "Scale Up" check-box. Then select Normal_Model_Fit from the drop-down list of models and begin clicking on the graph panel. The graph panel lets you construct a histogram of interest by hand. Notice that you are not entering random measurements; rather, you are directly building the frequency counts of the histogram. Try to make the histogram bins form a unimodal, bell-shaped and symmetric graph. Observe that as you click, new histogram bins appear and the model fit updates. Now click the Estimate Parameters check-box on the top-left and see the best-fit Normal curve appear superimposed on the manually constructed histogram. Under the Results tab you can find the maximum likelihood estimates of the mean and the standard deviation of the best Normal distribution fit to this specific frequency histogram.
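As a back-of-the-envelope check on what such a Normal fit does, one can compute the estimates by hand from any frequency histogram by treating each bin as count-many observations located at its center. Under that simplifying assumption (and with made-up bin centers and counts, not the ones you click in), the maximum likelihood estimates are the count-weighted mean and the population standard deviation:

```python
import math

# Hypothetical hand-built histogram: bin centers and frequency counts
# (not the ones you will click in).
centers = [1, 2, 3, 4, 5]
counts = [2, 5, 9, 5, 2]

# Treat each bin as `count` observations at its center; the Normal MLEs
# are then the count-weighted mean and the (population) standard deviation.
n = sum(counts)
mu = sum(c * k for c, k in zip(centers, counts)) / n
var = sum(k * (c - mu) ** 2 for c, k in zip(centers, counts)) / n
print(mu, math.sqrt(var))   # 3.0 and about 1.06 for this symmetric histogram
```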
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945499062538147, "perplexity": 570.430598655033}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824293.62/warc/CC-MAIN-20171020173404-20171020193404-00465.warc.gz"}
|
https://git.rud.is/hrbrmstr/splashr/blame/commit/fc77b059c20cc140e0782f3d532772e0dacec9ea/man/splash_active.Rd
|
Tools to Work with the 'Splash' JavaScript Rendering Service in R
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/splashr.r
\name{splash_active}
\alias{splash_active}
\title{Test if a Splash server is up}
\usage{
splash_active(splash_obj = splash_local)
}
\arguments{
\item{splash_obj}{A splash connection object}
}
\value{
\code{TRUE} if the Splash server is running, otherwise \code{FALSE}
}
\description{
Test if a Splash server is up
}
\examples{
\dontrun{
sp <- splash()
splash_active(sp)
}
}
\seealso{
Other splash_info_functions: \code{\link{splash_debug}},
\code{\link{splash_history}}, \code{\link{splash_perf_stats}},
\code{\link{splash_version}}
}
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28814786672592163, "perplexity": 16260.002404050967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00209.warc.gz"}
|
http://etna.mcs.kent.edu/volumes/2001-2010/vol16/abstract.php?vol=16&pages=165-185
|
## Analysis of two-dimensional FETI-DP preconditioners by the standard additive Schwarz framework
Susanne C. Brenner
### Abstract
FETI-DP preconditioners for two-dimensional elliptic boundary value problems with heterogeneous coefficients are analyzed by the standard additive Schwarz framework. It is shown that the condition number of the preconditioned system for both second order and fourth order problems is bounded by $C\big(1+\ln(H/h)\big)^2$, where $H$ is the maximum of the diameters of the subdomains, $h$ is the mesh size of a quasiuniform triangulation, and the positive constant $C$ is independent of $h$, $H$, the number of subdomains and the coefficients of the boundary value problems on the subdomains. The sharpness of the bound for second order problems is also established.
Full Text (PDF) [264 KB]
### Key words
FETI-DP, additive Schwarz, domain decomposition, heterogeneous coefficients.
### AMS subject classifications
65N55, 65N30.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.951435387134552, "perplexity": 465.03203524478425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210362.19/warc/CC-MAIN-20180815220136-20180816000136-00312.warc.gz"}
|
https://math.meta.stackexchange.com/tags/profile-page/hot
|
# Tag Info
38
The "people reached" is calculated as sum of views of non-deleted questions-threads to which you contributed a 'significant' post, where significant means (I think this is still current We're working on a new stat to help convey the reach of your posts here ): asked the question gave a 'significant' answer, meaning that it has a positive score (...
17
This is a network wide change announced here. Site moderators are unable to revert changes to the software, so math.se can't get it back unless it comes back for everyone (which seems unlikely).
15
I think this might be an issue with Gravatar and not Stack Exchange. I'll explain my reason for thinking this later in this answer. First off, there is possibly a way to get your old gravatar back, adapted from Caleb's answer on Meta Stack Exchange: Go to the Wayback Machine. Enter the URL for your profile page. If you're lucky, it's been archived there. ...
10
As @ArcticChar notes in the comments, it's under the profile button/dropdown: This is a recent change: We’ve shipped some changes to the user profile navigation and this feature isn't even properly mentioned, but the responsible developer commented about it here: Oooh, sorry it's not in the demo @Glorfindel. It shows you your Teams profiles, network ...
9
You got a recent vote on a CW answer to a very popular question, which brought your answer to a +5 score. This makes all the views on the question eligible for the impact counter, which is why you have such a large impact figure. If someone downvotes that answer, the number will drop by a few hundred thousand. See also: Massive change in 'People Reached' ...
9
I have never accessed Meta through my profile. Here is another way: click on the rightmost icon on the top bar of the site, where you have all your other communities, and Meta.
8
The Community user is mostly a token in a database. The edits done by the Community user are for the most part suggested edits that were proposed and approved by users. In rare cases SE starts some automated mass edit, and that is also attributed to this user. Other than that, actions by deleted user accounts get transferred to the Community user. It is also ...
8
The words "they", "them", "their" are also commonly used as singular gender-neutral forms. See https://en.wikipedia.org/wiki/Singular_they Thus, the "them" there is intentional and not in error.
7
A brief answer is that you should go to your profile and click on the number of consecutive days. (Remember that you'll find this in the profile tab, not in the activity tab.) You can find more details here: Statistics of visits on M.SE Implementation date for the consecutive days calendar How can I see which days I visited site? The tag-info for the ...
7
Thanks for the report! As I mentioned in the comments, this was a side-effect of upgrading to MathJax 2.7.5. The core of the issue is described in detail over on GitHub. Many thanks to Davide Cervone for pointing me in the right direction and suggesting a fix. It's deployed in production now. Please let me know if you see any other oddities.
7
is there a way to flag a user profile directly? There isn't, only posts and comments can be flagged. If not, what should one do instead? Besides using the contact link, you can also flag one of your posts and explain the situation (include a link to the problematic profile). This may, especially on weekends, lead to a faster reaction than the contact ...
7
The following old bug report on Meta SE seems to be identical to what you observed: Activity log entry before registration date. Though that report is marked status-completed, it seems that there is a regression of this bug on at least two other SE sites: Anime & Manga SE: According to the activity tab of my profile, I performed some actions on a site ...
6
You can use the advanced search functionality: searching for e.g. user:me is:q derivative will return all of your posts that are questions and contain the word 'derivative':
6
Some of the new styles on the page caused the MathJax output to be rendered in the incorrect position (and also hid it). You could see that as a flicker of the page: the rendered MathJax appeared briefly (in the wrong position) and then suddenly disappeared. I also added the missing re-rendering when changing the sorting & filtering.
6
The information labeled private in your profile (including your email address) is visible only to you, to the moderators (those with a diamond sign), and some of SE employees (namely, those with a diamond on Math site). You can check this yourself by viewing your profile from another browser or computer on which you are not logged in to SE (or from ...
6
The overall count includes your votes on deleted posts. The tab with details of your votes doesn't. Back in the day, the overall count did not include votes on deleted posts. That was changed last year: "Votes cast" should include votes on deleted contributions. One reason is that the votes cast by a user on deleted posts are still contributions ...
6
It is a somewhat lengthy method, but it lets you see Last Seen, profile views and member since. Open this and click on Copy Snippet To Answer, then replace the second line with const params = "?order=desc&site=Mathematics&filter=!40D72h-7nG92Z1_td"; Then run the snippet and you can see all three features. For reference-
5
Go to your profile page on the main site (not on meta or it won't work), then click on Edit Profile & Settings in the navbar.
5
You can find several places where this is discussed. The main announcement is Some changes to the profile while we make it responsive (on Meta Stack Exchange). Specifically, John Omielan's answer is about this issue and contains links to some other places where it was raised/discussed. Also if you browse the linked questions, you can find other related posts ...
4
The use-case seems marginal to me, so I do not see a need to have it on the profile, but if you want to know, just use search: "user:me wiki:yes" will return all your CW posts; you can further refine the query in various ways. Side note in view of the comments: you would also notice large numbers of up-votes arriving via badges.
4
We have now deployed some changes to the affected code, and it should work a lot better today (actually, Nick Craver had already done the work to change this - we just hadn't deployed it yet).
4
This statistic appears to be programmed to display the highest rank available between weekly, monthly, quarterly, yearly and all-time (with a bias towards the longer-term in case of ties). You can check by clicking on the profiles of the leaders on the reputation leagues.
4
Coincidentally, we had a similar question earlier this month on Meta Stack Exchange: Why is the 'Next badge', Curious, the same as the Newest badge in this profile?. I can repeat my answer verbatim: That's status-bydesign. This is how that widget looks for the owner (except that this is a different badge): Until they've made a choice for which badge ...
4
When you click your own profile, the default tab is the activity tab. You should go to the "Profile" tab, where you can see the "visited $x$ days, $y$ consecutive" text. Click this text to show the calendar.
3
Yes, it was a bug. The bug has now been fixed, as claimed over here. As you can see from the edit history, the profile picture error was sorted out just an hour ago. When I click on your photo I can now see your purple picture. The community profile picture issue has also been sorted out.
3
I'm a Senior Product Designer at Stack Overflow. Thanks for reporting the issue. The Fifty Shades of Math.SE will be resolved with the next production build.
3
Quoting Haney's answer from Meta Stack Exchange: Thanks for bringing this up. We made a change to our SQL server configuration that caused our logins to fail, which meant the data driving our sites was inaccessible (thus the errors). We've fixed that error and things should now be working properly. Sorry for the inconvenience! A quote from Greg Bray, Site ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2011820524930954, "perplexity": 1646.7001382349652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056902.22/warc/CC-MAIN-20210919220343-20210920010343-00067.warc.gz"}
|
https://emc3.lmd.jussieu.fr/en/publications/peer-reviewed-papers-1/lmd_EMC32012_bib.html
|
# lmd_EMC32012.bib
@comment{{This file has been generated by bib2bib 1.95}}
@comment{{Command line: /usr/bin/bib2bib --quiet -c 'not journal:"Discussions"' -c 'not journal:"Polymer Science"' -c year=2012 -c $type="ARTICLE" -oc lmd_EMC32012.txt -ob lmd_EMC32012.bib /home/WWW/LMD/public/Publis_LMDEMC3.link.bib}} @article{2012QJRMS.138.2182S, author = {{Sane}, Y. and {Bonazzola}, M. and {Rio}, C. and {Chambon}, P. and {Fiolleau}, T. and {Musat}, I. and {Hourdin}, F. and {Roca}, R. and {Grandpeix}, J.-Y. and {Diedhiou}, A.}, title = {{An analysis of the diurnal cycle of precipitation over Dakar using local rain-gauge data and a general circulation model}}, journal = {Quarterly Journal of the Royal Meteorological Society}, year = 2012, month = oct, volume = 138, pages = {2182-2195}, doi = {10.1002/qj.1932}, adsurl = {http://adsabs.harvard.edu/abs/2012QJRMS.138.2182S}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012GeoRL..3921801N, author = {{Nam}, C. and {Bony}, S. and {Dufresne}, J.-L. and {Chepfer}, H. }, title = {{The {\lsquo}too few, too bright{\rsquo} tropical low-cloud problem in CMIP5 models}}, journal = {\grl}, keywords = {Atmospheric Composition and Structure: Cloud/radiation interaction, Atmospheric Composition and Structure: Radiation: transmission and scattering, Global Change: Atmosphere (0315, 0325), Global Change: Earth system modeling (1225, 4316), Global Change: Global climate models (3337, 4928)}, year = 2012, month = nov, volume = 39, eid = {L21801}, pages = {21801}, abstract = {{Previous generations of climate models have been shown to under-estimate the occurrence of tropical low-level clouds and to over-estimate their radiative effects. This study analyzes outputs from multiple climate models participating in the Fifth phase of the Coupled Model Intercomparison Project (CMIP5) using the Cloud Feedback Model Intercomparison Project Observations Simulator Package (COSP), and compares them with different satellite data sets. Those include CALIPSO lidar observations, PARASOL mono-directional reflectances and CERES radiative fluxes at the top of the atmosphere. We show that current state-of-the-art climate models predict overly bright low-clouds, even for a correct low-cloud cover. The impact of these biases on the Earth' radiation budget, however, is reduced by compensating errors. Those include the tendency of models to under-estimate the low-cloud cover and to over-estimate the occurrence of mid- and high-clouds above low-clouds. Finally, we show that models poorly represent the dependence of the vertical structure of low-clouds on large-scale environmental conditions. The implications of this {\lsquo}too few, too bright low-cloud problem{\rsquo} for climate sensitivity and model development are discussed. }}, doi = {10.1029/2012GL053421}, adsurl = {http://adsabs.harvard.edu/abs/2012GeoRL..3921801N}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ACP....1210817L, author = {{Lacour}, J.-L. and {Risi}, C. and {Clarisse}, L. and {Bony}, S. and {Hurtmans}, D. and {Clerbaux}, C. and {Coheur}, P.-F.}, title = {{Mid-tropospheric {$\delta$}D observations from IASI/MetOp at high spatial and temporal resolution}}, journal = {Atmospheric Chemistry \& Physics}, year = 2012, month = nov, volume = 12, pages = {10817-10832}, abstract = {{In this paper we retrieve atmospheric HDO, H$_{2}$O concentrations and their ratio {$\delta$}D from IASI radiances spectra. 
Our method relies on an existing radiative transfer model (Atmosphit) and an optimal estimation inversion scheme, but goes further than our previous work by explicitly considering correlations between the two species. A global HDO and H$_{2}$O a priori profile together with a covariance matrix were built from daily LMDz-iso model simulations of HDO and H$_{2}$O profiles over the whole globe and a whole year. The retrieval parameters are described and characterized in terms of errors. We show that IASI is mostly sensitive to {$\delta$}D in the middle troposphere and allows retrieving {$\delta$}D for an integrated 3-6 km column with an error of 38{\permil} on an individual measurement basis. We examine the performance of the retrieval to capture the temporal (seasonal and short-term) and spatial variations of {$\delta$}D for one year of measurement at two dedicated sites (Darwin and Iza{\~n}a) and a latitudinal band from -60{\deg} to 60{\deg} for a 15 day period in January. We report a generally good agreement between IASI and the model and indicate the capabilities of IASI to reproduce the large scale variations of {$\delta$}D (seasonal cycle and latitudinal gradient) with good accuracy. In particular, we show that there is no systematic significant bias in the retrieved {$\delta$}D values in comparison with the model, and that the retrieved variability is similar to the one in the model even though there are certain local differences. Moreover, the noticeable differences between IASI and the model are briefly examined and suggest modeling issues instead of retrieval effects. Finally, the results further reveal the unprecedented capabilities of IASI to capture short-term variations in {$\delta$}D, highlighting the added value of the sounder for monitoring hydrological processes. }}, doi = {10.5194/acp-12-10817-2012}, adsurl = {http://adsabs.harvard.edu/abs/2012ACP....1210817L}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRD..117.5304R, author = {{Risi}, C. and {Noone}, D. and {Worden}, J. and {Frankenberg}, C. and {Stiller}, G. and {Kiefer}, M. and {Funke}, B. and {Walker}, K. and {Bernath}, P. and {Schneider}, M. and {Bony}, S. and {Lee}, J. and {Brown}, D. and {Sturm}, C.}, title = {{Process-evaluation of tropospheric humidity simulated by general circulation models using water vapor isotopic observations: 2. Using isotopic diagnostics to understand the mid and upper tropospheric moist bias in the tropics and subtropics}}, journal = {Journal of Geophysical Research (Atmospheres)}, keywords = {general circulation models, process-based evaluation, relative humidity, water isotopes, Atmospheric Composition and Structure: Cloud physics and chemistry, Atmospheric Composition and Structure: Troposphere: composition and chemistry, Atmospheric Processes: Global climate models (1626, 4928), Atmospheric Processes: Remote sensing (4337)}, year = 2012, month = mar, volume = 117, eid = {D05304}, pages = {5304}, abstract = {{Evaluating the representation of processes controlling tropical and subtropical tropospheric relative humidity (RH) in atmospheric general circulation models (GCMs) is crucial to assess the credibility of predicted climate changes. GCMs have long exhibited a moist bias in the tropical and subtropical mid and upper troposphere, which could be due to the mis-representation of cloud processes or of the large-scale circulation, or to excessive diffusion during water vapor transport. 
The goal of this study is to use observations of the water vapor isotopic ratio to understand the cause of this bias. We compare the three-dimensional distribution of the water vapor isotopic ratio measured from space and ground to that simulated by several versions of the isotopic GCM LMDZ. We show that the combined evaluation of RH and of the water vapor isotopic composition makes it possible to discriminate the most likely cause of RH biases. Models characterized either by an excessive vertical diffusion, an excessive convective detrainment or an underestimated in situ cloud condensation will all produce a moist bias in the free troposphere. However, only an excessive vertical diffusion can lead to a reversed seasonality of the free tropospheric isotopic composition in the subtropics compared to observations. Comparing seven isotopic GCMs suggests that the moist bias found in many GCMs in the mid and upper troposphere most frequently results from an excessive diffusion during vertical water vapor transport. This study demonstrates the added value of water vapor isotopic measurements for interpreting shortcomings in the simulation of RH by climate models. }}, doi = {10.1029/2011JD016623}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRD..117.5304R}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRD..117.5303R, author = {{Risi}, C. and {Noone}, D. and {Worden}, J. and {Frankenberg}, C. and {Stiller}, G. and {Kiefer}, M. and {Funke}, B. and {Walker}, K. and {Bernath}, P. and {Schneider}, M. and {Wunch}, D. and {Sherlock}, V. and {Deutscher}, N. and {Griffith}, D. and {Wennberg}, P.~O. and {Strong}, K. and {Smale}, D. and {Mahieu}, E. and {Barthlott}, S. and {Hase}, F. and {Garc{\'{\i}}A}, O. and {Notholt}, J. and {Warneke}, T. and {Toon}, G. and {Sayres}, D. and {Bony}, S. and {Lee}, J. and {Brown}, D. and {Uemura}, R. and {Sturm}, C.}, title = {{Process-evaluation of tropospheric humidity simulated by general circulation models using water vapor isotopologues: 1. Comparison between models and observations}}, journal = {Journal of Geophysical Research (Atmospheres)}, keywords = {general circulation models, process-based evaluation, relative humidity, water isotopes, Atmospheric Composition and Structure: Cloud physics and chemistry, Atmospheric Composition and Structure: Troposphere: composition and chemistry, Atmospheric Processes: Global climate models (1626, 4928), Atmospheric Processes: Regional modeling (4316), Paleoceanography: Global climate models (1626, 3337)}, year = 2012, month = mar, volume = 117, eid = {D05303}, pages = {5303}, abstract = {{The goal of this study is to determine how H$_{2}$O and HDO measurements in water vapor can be used to detect and diagnose biases in the representation of processes controlling tropospheric humidity in atmospheric general circulation models (GCMs). We analyze a large number of isotopic data sets (four satellite, sixteen ground-based remote-sensing, five surface in situ and three aircraft data sets) that are sensitive to different altitudes throughout the free troposphere. Despite significant differences between data sets, we identify some observed HDO/H$_{2}$O characteristics that are robust across data sets and that can be used to evaluate models. We evaluate the isotopic GCM LMDZ, accounting for the effects of spatiotemporal sampling and instrument sensitivity. We find that LMDZ reproduces the spatial patterns in the lower and mid troposphere remarkably well. 
However, it underestimates the amplitude of seasonal variations in isotopic composition at all levels in the subtropics and in midlatitudes, and this bias is consistent across all data sets. LMDZ also underestimates the observed meridional isotopic gradient and the contrast between dry and convective tropical regions compared to satellite data sets. Comparison with six other isotope-enabled GCMs from the SWING2 project shows that biases exhibited by LMDZ are common to all models. The SWING2 GCMs show a very large spread in isotopic behavior that is not obviously related to that of humidity, suggesting water vapor isotopic measurements could be used to expose model shortcomings. In a companion paper, the isotopic differences between models are interpreted in terms of biases in the representation of processes controlling humidity. }}, doi = {10.1029/2011JD016621}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRD..117.5303R}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JAtS...69.3788A, author = {{Arakelian}, A. and {Codron}, F.}, title = {{Southern Hemisphere Jet Variability in the IPSL GCM at Varying Resolutions}}, journal = {Journal of Atmospheric Sciences}, year = 2012, month = dec, volume = 69, pages = {3788-3799}, doi = {10.1175/JAS-D-12-0119.1}, adsurl = {http://adsabs.harvard.edu/abs/2012JAtS...69.3788A}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012GeoRL..3923202M, author = {{Madeleine}, J.-B. and {Forget}, F. and {Millour}, E. and {Navarro}, T. and {Spiga}, A.}, title = {{The influence of radiatively active water ice clouds on the Martian climate}}, journal = {\grl}, keywords = {Atmospheric Composition and Structure: Cloud/radiation interaction, Atmospheric Composition and Structure: Planetary atmospheres (5210, 5405, 5704), Atmospheric Processes: Clouds and cloud feedbacks, Planetary Sciences: Solid Surface Planets: Atmospheres (0343, 1060), Planetary Sciences: Solar System Objects: Mars}, year = 2012, month = dec, volume = 39, eid = {L23202}, pages = {23202}, abstract = {{Radiatively active water ice clouds (RAC) play a key role in shaping the thermal structure of the Martian atmosphere. In this paper, RAC are implemented in the LMD Mars Global Climate Model (GCM) and the simulated temperatures are compared to Thermal Emission Spectrometer observations over a full year. RAC change the temperature gradients and global dynamics of the atmosphere and this change in dynamics in turn implies large-scale adiabatic temperature changes. Therefore, clouds have both a direct and indirect effect on atmospheric temperatures. RAC successfully reduce major GCM temperature biases, especially in the regions of formation of the aphelion cloud belt where a cold bias of more than 10 K is corrected. Departures from the observations are however seen in the polar regions, and highlight the need for better modeling of cloud formation and evolution. }}, doi = {10.1029/2012GL053564}, adsurl = {http://adsabs.harvard.edu/abs/2012GeoRL..3923202M}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ClDy...39.2091K, author = {{Konsta}, D. and {Chepfer}, H. 
and {Dufresne}, J.-L.}, title = {{A process oriented characterization of tropical oceanic clouds for climate model evaluation, based on a statistical analysis of daytime A-train observations}}, journal = {Climate Dynamics}, year = 2012, month = nov, volume = 39, pages = {2091-2108}, abstract = {{This paper aims at characterizing how different key cloud properties (cloud fraction, cloud vertical distribution, cloud reflectance, a surrogate of the cloud optical depth) vary as a function of the others over the tropical oceans. The correlations between the different cloud properties are built from 2 years of collocated A-train observations (CALIPSO-GOCCP and MODIS) at a scale close to cloud processes; it results in a characterization of the physical processes in tropical clouds, that can be used to better understand cloud behaviors, and constitute a powerful tool to develop and evaluate cloud parameterizations in climate models. First, we examine a case study of shallow cumulus cloud observed simultaneously by the two sensors (CALIPSO, MODIS), and develop a methodology that allows to build global scale statistics by keeping the separation between clear and cloudy areas at the pixel level (250, 330 m). Then we build statistical instantaneous relationships between the cloud cover, the cloud vertical distribution and the cloud reflectance. The vertical cloud distribution indicates that the optically thin clouds (optical thickness {\lt}1.5) dominate the boundary layer over the trade wind regions. Optically thick clouds (optical thickness {\gt}3.4) are composed of high and mid-level clouds associated with deep convection along the ITCZ and SPCZ and over the warm pool, and by stratocumulus low level clouds located along the East coast of tropical oceans. The cloud properties are analyzed as a function of the large scale circulation regime. Optically thick high clouds are dominant in convective regions (CF {\gt} 80 \%), while low level clouds with low optical thickness ({\lt}3.5) are present in regimes of subsidence but in convective regimes as well, associated principally to low cloud fractions (CF {\lt} 50 \%). A focus on low-level clouds allows us to quantify how the cloud optical depth increases with cloud top altitude and with cloud fraction. }}, doi = {10.1007/s00382-012-1533-7}, adsurl = {http://adsabs.harvard.edu/abs/2012ClDy...39.2091K}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ACP....1210485F, author = {{Field}, R.~D. and {Risi}, C. and {Schmidt}, G.~A. and {Worden}, J. and {Voulgarakis}, A. and {LeGrande}, A.~N. and {Sobel}, A.~H. and {Healy}, R.~J.}, title = {{A Tropospheric Emission Spectrometer HDO/H$_{2}$O retrieval simulator for climate models}}, journal = {Atmospheric Chemistry \& Physics}, year = 2012, month = nov, volume = 12, pages = {10485-10504}, abstract = {{Retrievals of the isotopic composition of water vapor from the Aura Tropospheric Emission Spectrometer (TES) have unique value in constraining moist processes in climate models. Accurate comparison between simulated and retrieved values requires that model profiles that would be poorly retrieved are excluded, and that an instrument operator be applied to the remaining profiles. Typically, this is done by sampling model output at satellite measurement points and using the quality flags and averaging kernels from individual retrievals at specific places and times. 
This approach is not reliable when the model meteorological conditions influencing retrieval sensitivity are different from those observed by the instrument at short time scales, which will be the case for free-running climate simulations. In this study, we describe an alternative, ''categorical'' approach to applying the instrument operator, implemented within the NASA GISS ModelE general circulation model. Retrieval quality and averaging kernel structure are predicted empirically from model conditions, rather than obtained from collocated satellite observations. This approach can be used for arbitrary model configurations, and requires no agreement between satellite-retrieved and model meteorology at short time scales. To test this approach, nudged simulations were conducted using both the retrieval-based and categorical operators. Cloud cover, surface temperature and free-tropospheric moisture content were the most important predictors of retrieval quality and averaging kernel structure. There was good agreement between the {$\delta$}D fields after applying the retrieval-based and more detailed categorical operators, with increases of up to 30{\permil} over the ocean and decreases of up to 40{\permil} over land relative to the raw model fields. The categorical operator performed better over the ocean than over land, and requires further refinement for use outside of the tropics. After applying the TES operator, ModelE had {$\delta$}D biases of -8{\permil} over ocean and -34{\permil} over land compared to TES {$\delta$}D, which were less than the biases using raw model {$\delta$}D fields. }}, doi = {10.5194/acp-12-10485-2012}, adsurl = {http://adsabs.harvard.edu/abs/2012ACP....1210485F}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRD..11719205S, author = {{Sherwood}, S.~C. and {Risi}, C.}, title = {{The HDO/H$_{2}$O relationship in tropospheric water vapor in an idealized {\ldquo}last-saturation{\rdquo} model}}, journal = {Journal of Geophysical Research (Atmospheres)}, keywords = {atmospheric convection, climate, isotopes, water vapor, Atmospheric Composition and Structure: Cloud physics and chemistry, Atmospheric Composition and Structure: Troposphere: composition and chemistry, Global Change: Water cycles (1836), Atmospheric Processes: Convective processes, Atmospheric Processes: Idealized model}, year = 2012, month = oct, volume = 117, number = d16, eid = {D19205}, pages = {19205}, abstract = {{Previous model studies have shown that the isotopic composition of tropospheric water vapor is sensitive to atmospheric water transport processes, but compositional information is difficult to interpret due to the complexity of the models. Here an attempt is made to clarify the sensitivity by computing the relationship between tropospheric HDO (via {$\delta$}D) and H$_{2}$O (via specific humidity q) in an idealized model atmosphere based on a ''last-saturation'' framework that includes convection coupled to a steady large-scale circulation with prescribed horizontal mixing. Multiple physical representations of convection and mixing allow key structural as well as parametric uncertainties to be explored. This model has previously been shown to reproduce the essential aspects of the humidity distribution. Variations of{$\delta$}D or qindividually are dominated by local dynamics, but their relationship is preserved advectively, thus revealing conditions in regions of convection. 
The model qualitatively agrees with satellite observations, and reproduces some parametric sensitivities seen in previous GCM experiments. Sensitivity to model assumptions is greatest in the upper troposphere, apparently because in-situ evaporation and condensation processes in convective regions are more dominant in the budget there. In general, vapor recycling analogous to that in continental interiors emerges as the crucial element in explaining why{$\delta$}D exceeds that predicted by a simple Rayleigh process; such recycling involves coexistent condensation sinks and convective moisture sources, induced respectively by (for example) waves and small-scale convective mixing. The relative humidity distribution is much less sensitive to such recycling. }}, doi = {10.1029/2012JD018068}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRD..11719205S}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRD..11719201M, author = {{Mrowiec}, A.~A. and {Rio}, C. and {Fridlind}, A.~M. and {Ackerman}, A.~S. and {Del Genio}, A.~D. and {Pauluis}, O.~M. and {Varble}, A.~C. and {Fan}, J.}, title = {{Analysis of cloud-resolving simulations of a tropical mesoscale convective system observed during TWP-ICE: Vertical fluxes and draft properties in convective and stratiform regions}}, journal = {Journal of Geophysical Research (Atmospheres)}, keywords = {TWP-ICE, cloud-resolving modeling, convection parameterization, mesoscale convective system, tropical convection, updrafts and downdrafts, Atmospheric Processes: Convective processes, Atmospheric Processes: Regional modeling (4316), Atmospheric Processes: Tropical convection}, year = 2012, month = oct, volume = 117, number = d16, eid = {D19201}, pages = {19201}, abstract = {{We analyze three cloud-resolving model simulations of a strong convective event observed during the TWP-ICE campaign, differing in dynamical core, microphysical scheme or both. Based on simulated and observed radar reflectivity, simulations roughly reproduce observed convective and stratiform precipitating areas. To identify the characteristics of convective and stratiform drafts that are difficult to observe but relevant to climate model parameterization, independent vertical wind speed thresholds are calculated to capture 90\% of total convective and stratiform updraft and downdraft mass fluxes. Convective updrafts are fairly consistent across simulations (likely owing to fixed large-scale forcings and surface conditions), except that hydrometeor loadings differ substantially. Convective downdraft and stratiform updraft and downdraft mass fluxes vary notably below the melting level, but share similar vertically uniform draft velocities despite differing hydrometeor loadings. All identified convective and stratiform downdrafts contain precipitation below {\tilde}10 km and nearly all updrafts are cloudy above the melting level. Cold pool properties diverge substantially in a manner that is consistent with convective downdraft mass flux differences below the melting level. Despite differences in hydrometeor loadings and cold pool properties, convective updraft and downdraft mass fluxes are linearly correlated with convective area, the ratio of ice in downdrafts to that in updrafts is {\tilde}0.5 independent of species, and the ratio of downdraft to updraft mass flux is {\tilde}0.5-0.6, which may represent a minimum evaporation efficiency under moist conditions. 
Hydrometeor loading in stratiform regions is found to be a fraction of hydrometeor loading in convective regions that ranges from {\tilde}10\% (graupel) to {\tilde}90\% (cloud ice). These findings may lead to improved convection parameterizations. }}, doi = {10.1029/2012JD017759}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRD..11719201M}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JCli...25.6885T, author = {{Tobin}, I. and {Bony}, S. and {Roca}, R.}, title = {{Observational Evidence for Relationships between the Degree of Aggregation of Deep Convection, Water Vapor, Surface Fluxes, and Radiation}}, journal = {Journal of Climate}, year = 2012, month = oct, volume = 25, pages = {6885-6904}, doi = {10.1175/JCLI-D-11-00258.1}, adsurl = {http://adsabs.harvard.edu/abs/2012JCli...25.6885T}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012GeoRL..3920807B, author = {{Brient}, F. and {Bony}, S.}, title = {{How may low-cloud radiative properties simulated in the current climate influence low-cloud feedbacks under global warming?}}, journal = {\grl}, keywords = {Atmospheric Composition and Structure: Cloud/radiation interaction, Global Change: Atmosphere (0315, 0325), Global Change: Global climate models (3337, 4928), Atmospheric Processes: Clouds and cloud feedbacks}, year = 2012, month = oct, volume = 39, eid = {L20807}, pages = {20807}, abstract = {{The influence of cloud modelling uncertainties on the projection of the tropical low-cloud response to global warming is explored by perturbing model parameters of the IPSL-CM5A climate model in a range of configurations (realistic general circulation model, aqua-planet, single-column model). While the positive sign and the mechanism of the low-cloud response to climate warming predicted by the model are robust, the amplitude of the response can vary considerably depending on the model tuning parameters. Moreover, the strength of the low-cloud response to climate change exhibits a strong correlation with the strength of the low-cloud radiative effects simulated in the current climate. We show that this correlation primarily results from a local positive feedback (referred to as the {\ldquo}beta feedback{\rdquo}) between boundary-layer cloud radiative cooling, relative humidity and low-cloud cover. Based on this correlation and observational constraints, it is suggested that the strength of the tropical low-cloud feedback predicted by the IPSL-CM5A model in climate projections might be overestimated by about fifty percent. }}, doi = {10.1029/2012GL053265}, adsurl = {http://adsabs.harvard.edu/abs/2012GeoRL..3920807B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ClDy...39.1329G, author = {{Guimberteau}, M. and {Laval}, K. and {Perrier}, A. and {Polcher}, J. }, title = {{Global effect of irrigation and its impact on the onset of the Indian summer monsoon}}, journal = {Climate Dynamics}, keywords = {Irrigation, ORCHIDEE, Indian monsoon, Onset, Mississippi}, year = 2012, month = sep, volume = 39, pages = {1329-1348}, abstract = {{In a context of increased demand for food and of climate change, the water consumptions associated with the agricultural practice of irrigation focuses attention. In order to analyze the global influence of irrigation on the water cycle, the land surface model ORCHIDEE is coupled to the GCM LMDZ to simulate the impact of irrigation on climate. 
A 30-year simulation which takes into account irrigation is compared with a simulation which does not. Differences are usually not significant on average over all land surfaces but hydrological variables are significantly affected by irrigation over some of the main irrigated river basins. Significant impacts over the Mississippi river basin are shown to be contrasted between eastern and western regions. An increase in summer precipitation is simulated over the arid western region in association with enhanced evapotranspiration whereas a decrease in precipitation occurs over the wet eastern part of the basin. Over the Indian peninsula where irrigation is high during winter and spring, a delay of 6 days is found for the mean monsoon onset date when irrigation is activated, leading to a significant decrease in precipitation during May to July. Moreover, the higher decrease occurs in June when the water requirements by crops are maximum, exacerbating water scarcity in this region. A significant cooling of the land surfaces occurs during the period of high irrigation leading to a decrease of the land-sea heat contrast in June, which delays the monsoon onset. }}, doi = {10.1007/s00382-011-1252-5}, adsurl = {http://adsabs.harvard.edu/abs/2012ClDy...39.1329G}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRE..117.0J10C, author = {{Clancy}, R.~T. and {Sandor}, B.~J. and {Wolff}, M.~J. and {Smith}, M.~D. and {Lefèvre}, F. and {Madeleine}, J.-B. and {Forget}, F. and {Murchie}, S.~L. and {Seelos}, F.~P. and {Seelos}, K.~D. and {Nair}, H.~A. and {Toigo}, A.~D. and {Humm}, D. and {Kass}, D.~M. and {Kleinb{\"o}hl}, A. and {Heavens}, N.}, title = {{Extensive MRO CRISM observations of 1.27 {$\mu$}m O$_{2}$airglow in Mars polar night and their comparison to MRO MCS temperature profiles and LMD GCM simulations}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Planetary Sciences: Solid Surface Planets: Atmospheres (0343, 1060), Planetary Sciences: Solid Surface Planets: Aurorae and airglow, Planetary Sciences: Solid Surface Planets: Polar regions, Planetary Sciences: Solid Surface Planets: Remote sensing, Planetary Sciences: Solar System Objects: Mars}, year = 2012, month = aug, volume = 117, eid = {E00J10}, pages = {0}, abstract = {{The Martian polar night distribution of 1.27 {$\mu$}m (0-0) band emission from O$_{2}$singlet delta [O$_{2}$($^{1}${$\Delta$}$_{g}$)] is determined from an extensive set of Mars Reconnaissance Orbiter (MRO) Compact Reconnaissance Imaging Spectral Mapping (CRISM) limb scans observed over a wide range of Mars seasons, high latitudes, local times, and longitudes between 2009 and 2011. This polar nightglow reflects meridional transport and winter polar descent of atomic oxygen produced from CO$_{2}$photodissociation. A distinct peak in 1.27 {$\mu$}m nightglow appears prominently over 70-90NS latitudes at 40-60 km altitudes, as retrieved for over 100 vertical profiles of O$_{2}$($^{1}${$\Delta$}$_{g}$) 1.27 {$\mu$}m volume emission rates (VER). We also present the first detection of much ({\times}80 {\plusmn} 20) weaker 1.58 {$\mu$}m (0-1) band emission from Mars O$_{2}$($^{1}${$\Delta$}$_{g}$). Co-located polar night CRISM O$_{2}$($^{1}${$\Delta$}$_{g}$) and Mars Climate Sounder (MCS) (McCleese et al., 2008) temperature profiles are compared to the same profiles as simulated by the Laboratoire de Météorologie Dynamique (LMD) general circulation/photochemical model (e.g., Lefèvre et al., 2004). 
Both standard and interactive aerosol LMD simulations (Madeleine et al., 2011a) underproduce CRISM O$_{2}$($^{1}${$\Delta$}$_{g}$) total emission rates by 40\%, due to inadequate transport of atomic oxygen to the winter polar emission regions. Incorporation of interactive cloud radiative forcing on the global circulation leads to distinct but insufficient improvements in modeled polar O$_{2}$($^{1}${$\Delta$}$_{g}$) and temperatures. The observed and modeled anti-correlations between temperatures and 1.27 {$\mu$}m band VER reflect the temperature dependence of the rate coefficient for O$_{2}$($^{1}${$\Delta$}$_{g}$) formation, as provided in Roble (1995). }}, doi = {10.1029/2011JE004018}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRE..117.0J10C}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRD..11715112L, author = {{Lee}, J.-E. and {Risi}, C. and {Fung}, I. and {Worden}, J. and {Scheepmaker}, R.~A. and {Lintner}, B. and {Frankenberg}, C. }, title = {{Asian monsoon hydrometeorology from TES and SCIAMACHY water vapor isotope measurements and LMDZ simulations: Implications for speleothem climate record interpretation}}, journal = {Journal of Geophysical Research (Atmospheres)}, keywords = {Asian monsoon, amount effect, climate modeling, insolation, speleothem, water isotopes, Global Change: Abrupt/rapid climate change (4901, 8408), Global Change: Climate variability (1635, 3305, 3309, 4215, 4513), Global Change: Cryospheric change (0776), Global Change: Impacts of global change (1225, 4321), Global Change: Remote sensing (1855, 4337)}, year = 2012, month = aug, volume = 117, number = d16, eid = {D15112}, pages = {15112}, abstract = {{Observations show that heavy oxygen isotope composition in precipitation ({$\delta$}$^{18}$O$_{p}$) increases from coastal southeastern (SE) China to interior northwestern (NW) China during the wet season, contradicting expectations from simple Rayleigh distillation theory. Here we employ stable isotopes of precipitation and vapor from satellite measurements and climate model simulations to characterize the moisture processes that control Asian monsoon precipitation and relate these processes to speleothem paleoclimate records. We find that {$\delta$}$^{18}$O$_{p}$is low over SE China as a result of local and upstream condensation and that {$\delta$}$^{18}$O$_{p}$is high over NW China because of evaporative enrichment of$^{18}$O as raindrops fall through dry air. We show that {$\delta$}$^{18}$O$_{p}$at cave sites over southern China is weakly correlated with upstream precipitation in the core of the Indian monsoon region rather than local precipitation, but it is well-correlated with the {$\delta$}$^{18}$O$_{p}$over large areas of southern and central China, consistent with coherent speleothem {$\delta$}$^{18}$O$_{p}$variations over different parts of China. Previous studies have documented high correlations between speleothem {$\delta$}$^{18}$O$_{p}$and millennial timescale climate forcings, and we suggest that the high correlation between insolation and speleothem {$\delta$}$^{18}$O$_{p}$in southern China reflects the variations of hydrologic processes over the Indian monsoon region on millennial and orbital timescales. The {$\delta$}$^{18}$O$_{p}$in the drier part (north of {\tilde}30{\deg}N) of China, on the other hand, has consistently negative correlations with local precipitation and may capture local hydrologic processes related to changes in the extent of the Hadley circulation. 
}}, doi = {10.1029/2011JD017133}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRD..11715112L}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ACP....12.6775B, author = {{Browse}, J. and {Carslaw}, K.~S. and {Arnold}, S.~R. and {Pringle}, K. and {Boucher}, O.}, title = {{The scavenging processes controlling the seasonal cycle in Arctic sulphate and black carbon aerosol}}, journal = {Atmospheric Chemistry \& Physics}, year = 2012, month = aug, volume = 12, pages = {6775-6798}, abstract = {{The seasonal cycle in Arctic aerosol is typified by high concentrations of large aged anthropogenic particles transported from lower latitudes in the late Arctic winter and early spring followed by a sharp transition to low concentrations of locally sourced smaller particles in the summer. However, multi-model assessments show that many models fail to simulate a realistic cycle. Here, we use a global aerosol microphysics model (GLOMAP) and surface-level aerosol observations to understand how wet scavenging processes control the seasonal variation in Arctic black carbon (BC) and sulphate aerosol. We show that the transition from high wintertime concentrations to low concentrations in the summer is controlled by the transition from ice-phase cloud scavenging to the much more efficient warm cloud scavenging in the late spring troposphere. This seasonal cycle is amplified further by the appearance of warm drizzling cloud in the late spring and summer boundary layer. Implementing these processes in GLOMAP greatly improves the agreement between the model and observations at the three Arctic ground-stations Alert, Barrow and Zeppelin Mountain on Svalbard. The SO$_{4}$model-observation correlation coefficient (R) increases from: -0.33 to 0.71 at Alert (82.5{\deg} N), from -0.16 to 0.70 at Point Barrow (71.0{\deg} N) and from -0.42 to 0.40 at Zeppelin Mountain (78{\deg} N). The BC model-observation correlation coefficient increases from -0.68 to 0.72 at Alert and from -0.42 to 0.44 at Barrow. Observations at three marginal Arctic sites (Janiskoski, Oulanka and Karasjok) indicate a far weaker aerosol seasonal cycle, which we show is consistent with the much smaller seasonal change in the frequency of ice clouds compared to higher latitude sites. Our results suggest that the seasonal cycle in Arctic aerosol is driven by temperature-dependent scavenging processes that may be susceptible to modification in a future climate. }}, doi = {10.5194/acp-12-6775-2012}, adsurl = {http://adsabs.harvard.edu/abs/2012ACP....12.6775B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRD..11714105J, author = {{Jiang}, J.~H. and {Su}, H. and {Zhai}, C. and {Perun}, V.~S. and {Del Genio}, A. and {Nazarenko}, L.~S. and {Donner}, L.~J. and {Horowitz}, L. and {Seman}, C. and {Cole}, J. and {Gettelman}, A. and {Ringer}, M.~A. and {Rotstayn}, L. and {Jeffrey}, S. and {Wu}, T. and {Brient}, F. and {Dufresne}, J.-L. and {Kawai}, H. and {Koshiro}, T. and {Watanabe}, M. and {L{\'E}Cuyer}, T.~S. and {Volodin}, E.~M. and {Iversen}, T. and {Drange}, H. and {Mesquita}, M.~D.~S. and {Read}, W.~G. and {Waters}, J.~W. and {Tian}, B. and {Teixeira}, J. 
and {Stephens}, G.~L.}, title = {{Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA {\ldquo}A-Train{\rdquo} satellite observations}}, journal = {Journal of Geophysical Research (Atmospheres)}, keywords = {climate model, clouds, satellite observation, water vapor, Global Change: Atmosphere (0315, 0325), Global Change: Global climate models (3337, 4928), Global Change: Remote sensing (1855, 4337), Global Change: Water cycles (1836), Global Change: General or miscellaneous}, year = 2012, month = jul, volume = 117, number = d16, eid = {D14105}, pages = {14105}, abstract = {{Using NASA's A-Train satellite measurements, we evaluate the accuracy of cloud water content (CWC) and water vapor mixing ratio (H$_{2}$O) outputs from 19 climate models submitted to the Phase 5 of Coupled Model Intercomparison Project (CMIP5), and assess improvements relative to their counterparts for the earlier CMIP3. We find more than half of the models show improvements from CMIP3 to CMIP5 in simulating column-integrated cloud amount, while changes in water vapor simulation are insignificant. For the 19 CMIP5 models, the model spreads and their differences from the observations are larger in the upper troposphere (UT) than in the lower or middle troposphere (L/MT). The modeled mean CWCs over tropical oceans range from {\tilde}3\% to {\tilde}15{\times} of the observations in the UT and 40\% to 2{\times} of the observations in the L/MT. For modeled H$_{2}$Os, the mean values over tropical oceans range from {\tilde}1\% to 2{\times} of the observations in the UT and within 10\% of the observations in the L/MT. The spatial distributions of clouds at 215 hPa are relatively well-correlated with observations, noticeably better than those for the L/MT clouds. Although both water vapor and clouds are better simulated in the L/MT than in the UT, there is no apparent correlation between the model biases in clouds and water vapor. Numerical scores are used to compare different model performances in regards to spatial mean, variance and distribution of CWC and H$_{2}$O over tropical oceans. Model performances at each pressure level are ranked according to the average of all the relevant scores for that level. }}, doi = {10.1029/2011JD017237}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRD..11714105J}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ACP....12.6185V, author = {{Verma}, S. and {Boucher}, O. and {Shekar Reddy}, M. and {Upadhyaya}, H.~C. and {Le Van}, P. and {Binkowski}, F.~S. and {Sharma}, O.~P.}, title = {{Tropospheric distribution of sulphate aerosols mass and number concentration during INDOEX-IFP and its transport over the Indian Ocean: a GCM study}}, journal = {Atmospheric Chemistry \& Physics}, year = 2012, month = jul, volume = 12, pages = {6185-6196}, abstract = {{The sulphate aerosols mass and number concentration during the Indian Ocean Experiment (INDOEX) Intensive Field Phase-1999 (INDOEX-IFP) has been simulated using an interactive chemistry GCM. The model considers an interactive scheme for feedback from chemistry to meteorology with internally resolving microphysical properties of aerosols. In particular, the interactive scheme has the ability to predict both particle mass and number concentration for the Aitken and accumulation modes as prognostic variables. 
On the basis of size distribution retrieved from the observations made along the cruise route during IFP-1999, the model successfully simulates the order of magnitude of aerosol number concentration. The results show the southward migration of minimum concentrations, which follows ITCZ (Inter Tropical Convergence Zone) migration. Sulphate surface concentration during INDOEX-IFP at Kaashidhoo (73.46{\deg} E, 4.96{\deg} N) gives an agreement within a factor of 2 to 3. The measured aerosol optical depth (AOD) from all aerosol species at KCO was 0.37 {\plusmn} 0.11 while the model simulated sulphate AOD ranged from 0.05 to 0.11. As sulphate constitutes 29\% of the observed AOD, the model predicted values of sulphate AOD are hence fairly close to the measured values. The model thus has capability to predict the vertically integrated column sulphate burden. Furthermore, the model results indicate that Indian contribution to the estimated sulphate burden over India is more than 60\% with values upto 40\% over the Arabian Sea. }}, doi = {10.5194/acp-12-6185-2012}, adsurl = {http://adsabs.harvard.edu/abs/2012ACP....12.6185V}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JAtS...69.2090G, author = {{Grandpeix}, J.-Y. and {Lafore}, J.-P.}, title = {{Reply to Comments on A Density Current Parameterization Coupled with Emanuel's Convection Scheme. Part I: The Models'''}}, journal = {Journal of Atmospheric Sciences}, year = 2012, month = jun, volume = 69, pages = {2090-2096}, doi = {10.1175/JAS-D-11-0127.1}, adsurl = {http://adsabs.harvard.edu/abs/2012JAtS...69.2090G}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ERL.....7b4013B, author = {{Boucher}, O. and {Halloran}, P.~R. and {Burke}, E.~J. and {Doutriaux-Boucher}, M. and {Jones}, C.~D. and {Lowe}, J. and {Ringer}, M.~A. and {Robertson}, E. and {Wu}, P.}, title = {{Reversibility in an Earth System model in response to CO$_{2}$concentration changes}}, journal = {Environmental Research Letters}, year = 2012, month = jun, volume = 7, number = 2, eid = {024013}, pages = {024013}, abstract = {{We use the HadGEM2-ES Earth System model to examine the degree of reversibility of a wide range of components of the Earth System under idealized climate change scenarios where the atmospheric CO$_{2}$concentration is gradually increased to four times the pre-industrial level and then reduced at a similar rate from several points along this trajectory. While some modelled quantities respond almost immediately to the atmospheric CO$_{2}$concentrations, others exhibit a time lag relative to the change in CO$_{2}$. Most quantities also exhibit a lag relative to the global-mean surface temperature change, which can be described as a hysteresis behaviour. The most surprising responses are from low-level clouds and ocean stratification in the Southern Ocean, which both exhibit hysteresis on timescales longer than expected. We see no evidence of critical thresholds in these simulations, although some of the hysteresis phenomena become more apparent above 2 {\times} CO$_{2}$or 3 {\times} CO$_{2}$. Our findings have implications for the parametrization of climate impacts in integrated assessment and simple climate models and for future climate studies of geoengineering scenarios. }}, doi = {10.1088/1748-9326/7/2/024013}, adsurl = {http://adsabs.harvard.edu/abs/2012ERL.....7b4013B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012AMT.....5.1459R, author = {{Rossignol}, S. 
and {Chiappini}, L. and {Perraudin}, E. and {Rio}, C. and {Fable}, S. and {Valorso}, R. and {Doussin}, J.~F. }, title = {{Development of a parallel sampling and analysis method for the elucidation of gas/particle partitioning of oxygenated semi-volatile organics: a limonene ozonolysis study}}, journal = {Atmospheric Measurement Techniques}, year = 2012, month = jun, volume = 5, pages = {1459-1489}, abstract = {{The gas/particle partitioning behaviour of the semi-volatile fraction of secondary organic matter and the associated multiphase chemistry are key features to accurately evaluate climate and health impacts of secondary organic aerosol (SOA). However, today, the partitioning of oxygenated secondary species is rarely assessed in experimental SOA studies and SOA modelling is still largely based on estimated partitioning data. This paper describes a new analytical approach, solvent-free and easy to use, to explore the chemical composition of the secondary organic matter at a molecular scale in both gas and particulate phases. The method is based on thermal desorption (TD) of gas and particulate samples, coupled with gas chromatography (GC) and mass spectrometry (MS), with derivatisation on sampling supports. Gaseous compounds were trapped on Tenax TA adsorbent tubes pre-coated with pentafluorobenzylhydroxylamine (PFBHA) or N-Methyl-N-(t-butyldimethylsilyl)trifluoroacetamide (MTBSTFA). Particulate samples were collected onto quartz or Teflon-quartz filters and subsequently subjected to derivatisation with PFBHA or MTBSTFA before TD-GC/MS analysis. Method development and validation are presented for an atmospherically relevant range of organic acids and carbonyl and hydroxyl compounds. Application of the method to a limonene ozonolysis experiment conducted in the EUPHORE simulation chamber under simulated atmospheric conditions of low concentrations of limonene precursor and relative humidity, provides an overview of the method capabilities. Twenty-five compounds were positively or tentatively identified, nine being in both gaseous and particulate phases; and twelve, among them tricarboxylic acids, hydroxyl dicarboxylic acids and oxodicarboxylic acids, being detected for the first time. }}, doi = {10.5194/amt-5-1459-2012}, adsurl = {http://adsabs.harvard.edu/abs/2012AMT.....5.1459R}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ACP....12.5583D, author = {{Déandreis}, C. and {Balkanski}, Y. and {Dufresne}, J.~L. and {Cozic}, A.}, title = {{Radiative forcing estimates of sulfate aerosol in coupled climate-chemistry models with emphasis on the role of the temporal variability}}, journal = {Atmospheric Chemistry \& Physics}, year = 2012, month = jun, volume = 12, pages = {5583-5602}, abstract = {{This paper describes the impact on the sulfate aerosol radiative effects of coupling the radiative code of a global circulation model with a chemistry-aerosol module. With this coupling, temporal variations of sulfate aerosol concentrations influence the estimate of aerosol radiative impacts. Effects of this coupling have been assessed on net fluxes, radiative forcing and temperature for the direct and first indirect effects of sulfate. The direct effect respond almost linearly to rapid changes in concentrations whereas the first indirect effect shows a strong non-linearity. 
In particular, sulfate temporal variability causes a modification of the short wave net fluxes at the top of the atmosphere of +0.24 and +0.22 W m$^{-2}$for the present and preindustrial periods, respectively. This change is small compared to the value of the net flux at the top of the atmosphere (about 240 W m$^{-2}$). The effect is more important in regions with low-level clouds and intermediate sulfate aerosol concentrations (from 0.1 to 0.8 {$\mu$}g (SO$_{4}$) m$^{-3}$in our model). The computation of the aerosol direct radiative forcing is quite straightforward and the temporal variability has little effect on its mean value. In contrast, quantifying the first indirect radiative forcing requires tackling technical issues first. We show that the preindustrial sulfate concentrations have to be calculated with the same meteorological trajectory used for computing the present ones. If this condition is not satisfied, it introduces an error on the estimation of the first indirect radiative forcing. Solutions are proposed to assess radiative forcing properly. In the reference method, the coupling between chemistry and climate results in a global average increase of 8\% in the first indirect radiative forcing. This change reaches 50\% in the most sensitive regions. However, the reference method is not suited to run long climate simulations. We present other methods that are simpler to implement in a coupled chemistry/climate model and that offer the possibility to assess radiative forcing. }}, doi = {10.5194/acp-12-5583-2012}, adsurl = {http://adsabs.harvard.edu/abs/2012ACP....12.5583D}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRE..117.0J07M, author = {{Madeleine}, J.-B. and {Forget}, F. and {Spiga}, A. and {Wolff}, M.~J. and {Montmessin}, F. and {Vincendon}, M. and {Jouglet}, D. and {Gondet}, B. and {Bibring}, J.-P. and {Langevin}, Y. and {Schmitt}, B.}, title = {{Aphelion water-ice cloud mapping and property retrieval using the OMEGA imaging spectrometer onboard Mars Express}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Atmospheric Composition and Structure: Planetary atmospheres (5210, 5405, 5704), Biogeosciences: Remote sensing, Mathematical Geophysics: Spectral analysis (3205, 3280, 4319), Atmospheric Processes: Clouds and aerosols, Planetary Sciences: Solar System Objects: Mars}, year = 2012, month = may, volume = 117, eid = {E00J07}, pages = {0}, abstract = {{Mapping of the aphelion clouds over the Tharsis plateau and retrieval of their particle size and visible opacity are made possible by the OMEGA imaging spectrometer aboard Mars Express. Observations cover the period from MY26 L$_{s}$= 330{\deg} to MY29 L$_{s}$= 180{\deg} and are acquired at various local times, ranging from 8 AM to 6 PM. Cloud maps of the Tharsis region constructed using the 3.1 {$\mu$}m ice absorption band reveal the seasonal and diurnal evolution of aphelion clouds. Four distinct types of clouds are identified: morning hazes, topographically controlled hazes, cumulus clouds and thick hazes. The location and time of occurrence of these clouds are analyzed and their respective formation process is discussed. An inverse method for retrieving cloud particle size and opacity is then developed and can only be applied to thick hazes. The relative error of these measurements is less than 30\% for cloud particle size and 20\% for opacity. Two groups of particles can be distinguished. 
The first group is found over flat plains and is composed of relatively small particles, ranging in size from 2 to 3.5 {$\mu$}m. The second group is characterized by particle sizes of {\tilde}5 {$\mu$}m which appear to be quite constant over L$_{s}$and local time. It is found west of Ascraeus and Pavonis Mons, and near Lunae Planum. These regions are preferentially exposed to anabatic winds, which may control the formation of these particles and explain their distinct properties. The water ice column is equal to 2.9 pr.{$\mu$}m on average, and can reach 5.2 pr.{$\mu$}m in the thickest clouds of Tharsis. }}, doi = {10.1029/2011JE003940}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRE..117.0J07M}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012Icar..219..358K, author = {{Kerber}, L. and {Head}, J.~W. and {Madeleine}, J.-B. and {Forget}, F. and {Wilson}, L.}, title = {{The dispersal of pyroclasts from ancient explosive volcanoes on Mars: Implications for the friable layered deposits}}, journal = {\icarus}, year = 2012, month = may, volume = 219, pages = {358-381}, abstract = {{A number of voluminous, fine-grained, friable deposits have been mapped on Mars. The modes of origin for these deposits are debated. The feasibility for an origin by volcanic airfall for the friable deposits is tested using a global circulation model to simulate the dispersal of pyroclasts from candidate source volcanoes near each deposit. It is concluded that the Medusae Fossae Formation and Electris deposits are easily formed through volcanic processes, and that the Hellas deposits and south polar pitted deposits could have some contribution from volcanic sources in specific atmospheric regimes. The Arabia and Argyre deposits are not well replicated by modeled pyroclast dispersal, suggesting that these deposits were most likely emplaced by other means. }}, doi = {10.1016/j.icarus.2012.03.016}, adsurl = {http://adsabs.harvard.edu/abs/2012Icar..219..358K}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012Icar..219...25F, author = {{Fastook}, J.~L. and {Head}, J.~W. and {Marchant}, D.~R. and {Forget}, F. and {Madeleine}, J.-B.}, title = {{Early Mars climate near the Noachian-Hesperian boundary: Independent evidence for cold conditions from basal melting of the south polar ice sheet (Dorsa Argentea Formation) and implications for valley network formation}}, journal = {\icarus}, year = 2012, month = may, volume = 219, pages = {25-40}, abstract = {{Currently, and throughout much of the Amazonian, the mean annual surface temperatures of Mars are so cold that basal melting does not occur in ice sheets and glaciers and they are cold-based. The documented evidence for extensive and well-developed eskers (sediment-filled former sub-glacial meltwater channels) in the south circumpolar Dorsa Argentea Formation is an indication that basal melting and wet-based glaciation occurred at the South Pole near the Noachian-Hesperian boundary. We employ glacial accumulation and ice-flow models to distinguish between basal melting from bottom-up heat sources (elevated geothermal fluxes) and top-down induced basal melting (elevated atmospheric temperatures warming the ice). We show that under mean annual south polar atmospheric temperatures (-100 {\deg}C) simulated in typical Amazonian climate experiments and typical Noachian-Hesperian geothermal heat fluxes (45-65 mW/m$^{2}$), south polar ice accumulations remain cold-based. 
In order to produce significant basal melting with these typical geothermal heat fluxes, the mean annual south polar atmospheric temperatures must be raised from today's temperature at the surface (-100 {\deg}C) to the range of -50 to -75 {\deg}C. This mean annual polar surface atmospheric temperature range implies lower latitude mean annual temperatures that are likely to be below the melting point of water, and thus does not favor a ''warm and wet'' early Mars. Seasonal temperatures at lower latitudes, however, could range above the melting point of water, perhaps explaining the concurrent development of valley networks and open basin lakes in these areas. This treatment provides an independent estimate of the polar (and non-polar) surface temperatures near the Noachian-Hesperian boundary of Mars history and implies a cold and relatively dry Mars climate, similar to the Antarctic Dry Valleys, where seasonal melting forms transient streams and permanent ice-covered lakes in an otherwise hyperarid, hypothermal climate. }}, doi = {10.1016/j.icarus.2012.02.013}, adsurl = {http://adsabs.harvard.edu/abs/2012Icar..219...25F}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ACP....12.4585H, author = {{Huneeus}, N. and {Chevallier}, F. and {Boucher}, O.}, title = {{Estimating aerosol emissions by assimilating observed aerosol optical depth in a global aerosol model}}, journal = {Atmospheric Chemistry \& Physics}, year = 2012, month = may, volume = 12, pages = {4585-4606}, abstract = {{This study estimates the emission fluxes of a range of aerosol species and one aerosol precursor at the global scale. These fluxes are estimated by assimilating daily total and fine mode aerosol optical depth (AOD) at 550 nm from the Moderate Resolution Imaging Spectroradiometer (MODIS) into a global aerosol model of intermediate complexity. Monthly emissions are fitted homogenously for each species over a set of predefined regions. The performance of the assimilation is evaluated by comparing the AOD after assimilation against the MODIS observations and against independent observations. The system is effective in forcing the model towards the observations, for both total and fine mode AOD. Significant improvements for the root mean square error and correlation coefficient against both the assimilated and independent datasets are observed as well as a significant decrease in the mean bias against the assimilated observations. These improvements are larger over land than over ocean. The impact of the assimilation of fine mode AOD over ocean demonstrates potential for further improvement by including fine mode AOD observations over continents. The Angstr{\"o}m exponent is also improved in African, European and dusty stations. The estimated emission flux for black carbon is 15 Tg yr$^{-1}$, 119 Tg yr$^{-1}$for particulate organic matter, 17 Pg yr$^{-1}$for sea salt, 83 TgS yr$^{-1}$for SO$_{2}$and 1383 Tg yr$^{-1}$for desert dust. They represent a difference of +45 \%, +40 \%, +26 \%, +13 \% and -39 \% respectively, with respect to the a priori values. The initial errors attributed to the emission fluxes are reduced for all estimated species. }}, doi = {10.5194/acp-12-4585-2012}, adsurl = {http://adsabs.harvard.edu/abs/2012ACP....12.4585H}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JAMES...412001Z, author = {{Zhang}, M. and {Bretherton}, C.~S. and {Blossey}, P.~N. and {Bony}, S. and {Brient}, F. 
and {Golaz}, J.-C.}, title = {{The CGILS experimental design to investigate low cloud feedbacks in general circulation models by using single-column and large-eddy simulation models}}, journal = {Journal of Advances in Modeling Earth Systems}, keywords = {cloud feedbacks, Atmospheric Composition and Structure: Cloud/radiation interaction, Atmospheric Processes: Clouds and cloud feedbacks, Atmospheric Processes: Global climate models (1626, 4928)}, year = 2012, month = apr, volume = 4, eid = {M12001}, pages = {12001}, abstract = {{A surrogate climate change is designed to investigate low cloud feedbacks in the northeastern Pacific by using Single Column Models (SCMs), Cloud Resolving Models (CRMs), and Large Eddy Simulation models (LES), as part of the CGILS study (CFMIP-GASS Intercomparison of LES and SCM models). The constructed large-scale forcing fields, including subsidence and advective tendencies, and their perturbations in the warmer climate are shown to compare well with conditions in General Circulation Models (GCMs), but they are free from the impact of any GCM parameterizations. The forcing fields in the control climate are also shown to resemble the mean conditions in the ECMWF-Interim Reanalysis. Applications of the forcing fields in SCMs are presented. It is shown that the idealized design can offer considerable insight into the mechanisms of cloud feedbacks in the models. Caveats and advantages of the design are also discussed. }}, doi = {10.1029/2012MS000182}, adsurl = {http://adsabs.harvard.edu/abs/2012JAMES...412001Z}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012GeoRL..39.8805T, author = {{Tremoy}, G. and {Vimeux}, F. and {Mayaki}, S. and {Souley}, I. and {Cattani}, O. and {Risi}, C. and {Favreau}, G. and {Oi}, M.}, title = {{A 1-year long {$\delta$}$^{18}$O record of water vapor in Niamey (Niger) reveals insightful atmospheric processes at different timescales}}, journal = {\grl}, keywords = {Geochemistry: Stable isotope geochemistry (0454, 4870), Atmospheric Processes: Boundary layer processes, Atmospheric Processes: Climatology (1616, 1620, 3305, 4215, 8408), Atmospheric Processes: Convective processes, Atmospheric Processes: Instruments and techniques}, year = 2012, month = apr, volume = 39, eid = {L08805}, pages = {8805}, abstract = {{We present a 1-year long representative {$\delta$}$^{18}$O record of water vapor ({$\delta$}$^{18}$O$_{v}$) in Niamey (Niger) using the Wavelength Scanned-Cavity Ring Down Spectroscopy (WS-CRDS). We explore how local and regional atmospheric processes influence {$\delta$}$^{18}$O$_{v}$variability from seasonal to diurnal scale. At seasonal scale, {$\delta$}$^{18}$O$_{v}$exhibits a W-shape, associated with the increase of regional convective activity during the monsoon and the intensification of large scale subsidence North of Niamey during the dry season. During the monsoon season, {$\delta$}$^{18}$O$_{v}$records a broad range of intra-seasonal modes in the 25-40-day and 15-25-day bands that could be related to the well-known modes of the West African Monsoon (WAM). Strong {$\delta$}$^{18}$O$_{v}$modulations are also seen at the synoptic scale (5-9 days) during winter, driven by tropical-extra-tropical teleconnections through the propagation of a baroclinic wave train-like structure and intrusion of air originating from higher altitude and latitude. 
{$\delta$}$^{18}$O$_{v}$also reveals a significant diurnal cycle, which reflects mixing process between the boundary layer and the free atmosphere during the dry season, and records the propagation of density currents associated with meso-scale convective systems during the monsoon season. }}, doi = {10.1029/2012GL051298}, adsurl = {http://adsabs.harvard.edu/abs/2012GeoRL..39.8805T}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012ClDy...38.1675Z, author = {{Zhang}, H. and {Wang}, Z. and {Wang}, Z. and {Liu}, Q. and {Gong}, S. and {Zhang}, X. and {Shen}, Z. and {Lu}, P. and {Wei}, X. and {Che}, H. and {Li}, L.}, title = {{Simulation of direct radiative forcing of aerosols and their effects on East Asian climate using an interactive AGCM-aerosol coupled system}}, journal = {Climate Dynamics}, keywords = {AGCM, Aerosol, Radiative forcing, Climate effects, East Asian monsoon}, year = 2012, month = apr, volume = 38, pages = {1675-1693}, abstract = {{An interactive system coupling the Beijing Climate Center atmospheric general circulation model (BCC\_AGCM2.0.1) and the Canadian Aerosol Module (CAM) with updated aerosol emission sources was developed to investigate the global distributions of optical properties and direct radiative forcing (DRF) of typical aerosols and their impacts on East Asian climate. The simulated total aerosol optical depth (AOD), single scattering albedo, and asymmetry parameter were generally consistent with the ground-based measurements. Under all-sky conditions, the simulated global annual mean DRF at the top of the atmosphere was -2.03 W m$^{-2}$for all aerosols including sulfate, organic carbon (OC), black carbon (BC), dust, and sea salt; the global annual mean DRF was -0.23 W m$^{-2}$for sulfate, BC, and OC aerosols. The sulfate, BC, and OC aerosols led to decreases of 0.58{\deg} and 0.14 mm day$^{-1}$in the JJA means of surface temperature and precipitation rate in East Asia. The differences of land-sea surface temperature and surface pressure were reduced in East Asian monsoon region due to these aerosols, thus leading to the weakening of East Asian summer monsoon. Atmospheric dynamic and thermodynamic were affected due to the three types of aerosol, and the southward motion between 15{\deg}N and 30{\deg}N in lower troposphere was increased, which slowed down the northward transport of moist air carried by the East Asian summer monsoon, and moreover decreased the summer monsoon precipitation in south and east China. }}, doi = {10.1007/s00382-011-1131-0}, adsurl = {http://adsabs.harvard.edu/abs/2012ClDy...38.1675Z}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JCli...25.1847B, author = {{Brachet}, S. and {Codron}, F. and {Feliks}, Y. and {Ghil}, M. and {Le Treut}, H. and {Simonnet}, E.}, title = {{Atmospheric Circulations Induced by a Midlatitude SST Front: A GCM Study}}, journal = {Journal of Climate}, year = 2012, month = mar, volume = 25, pages = {1847-1853}, doi = {10.1175/JCLI-D-11-00329.1}, adsurl = {http://adsabs.harvard.edu/abs/2012JCli...25.1847B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012JGRD..117.3106B, author = {{Berkelhammer}, M. and {Risi}, C. and {Kurita}, N. and {Noone}, D.~C. 
}, title = {{The moisture source sequence for the Madden-Julian Oscillation as derived from satellite retrievals of HDO and H$_{2}$O}}, journal = {Journal of Geophysical Research (Atmospheres)}, keywords = {AURA, MJO, Tropical Climate, hydrology, isotopes, Atmospheric Composition and Structure: General or miscellaneous, Geochemistry: Stable isotope geochemistry (0454, 4870), Hydrology: Hydrometeorology, Hydrology: Water budgets}, year = 2012, month = feb, volume = 117, eid = {D03106}, pages = {3106}, abstract = {{A number of competing theories to explain the initiation mechanism, longevity and propagation characteristics of the Madden-Julian Oscillation (MJO) have been developed from observational analysis of the tropical climate and minimal dynamical models. Using the isotopic composition of atmospheric moisture from paired satellite retrievals of H$_{2}$O and HDO from the boundary layer and mid troposphere, we identify the different sources of moisture that feed MJO convection during its life cycle. These fluxes are then associated with specific dynamical processes. The HDO/H$_{2}$O isotope ratio data show that during the early phase of the MJO, the mid-troposphere is dominated by moisture evaporated from the ocean surface that was transported vertically undergoing minimal distillation. The contribution from the evaporative source diminishes during early convective activity but reappears during the peak of MJO activity along with an isotopically depleted flux, which is hypothesized to originate from easterly convergence. The contribution of different moisture sources as shown from the HDO/H$_{2}$O data is consistent with model results where the sustaining of deep convection requires a feedback between convergence, precipitation strength and evaporation. In the wake of an MJO event, the weak vertical isotopic gradient, depletion in boundary layer {$\delta$}D and the uniquely moist and depleted vapor in the mid troposphere all point toward a prominent presence of moisture originated from rainfall re-evaporation, which confirms the prediction that the transition from convective to stratiform rains is important to the moisture budget of the MJO. }}, doi = {10.1029/2011JD016803}, adsurl = {http://adsabs.harvard.edu/abs/2012JGRD..117.3106B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2012CliPa...8..205S, author = {{Shi}, C. and {Daux}, V. and {Zhang}, Q.-B. and {Risi}, C. and {Hou}, S.-G. and {Stievenard}, M. and {Pierre}, M. and {Li}, Z. and {Masson-Delmotte}, V.}, title = {{Reconstruction of southeast Tibetan Plateau summer climate using tree ring {$\delta$}$^{18}$O: moisture variability over the past two centuries}}, journal = {Climate of the Past}, year = 2012, month = feb, volume = 8, pages = {205-213}, abstract = {{A tree-ring {$\delta$}$^{18}$O chronology of Linzhi spruce, spanning from AD 1781 to 2005, was developed in Bomi, Southeast Tibetan Plateau (TP). During the period with instrumental data (AD 1961-2005), this record is strongly correlated with regional CRU (Climate Research Unit) summer cloud data, which is supported by a precipitation {$\delta$}$^{18}\$O simulation conducted with the isotope-enabled
atmospheric general circulation model LMDZiso. A reconstruction of a
regional summer cloud index, based upon the empirical relationship
between cloud and diurnal temperature range, was therefore achieved.
This index reflects regional moisture variability in the past 225 yr.
The climate appears drier and more stable in the 20th century than
previously. The drying trend in late 19th century of our reconstruction
is consistent with a decrease in the TP glacier accumulation recorded in
ice cores. An exceptional dry decade is documented in the 1810s,
possibly related to the impact of repeated volcanic eruptions on monsoon
flow.
}},
doi = {10.5194/cp-8-205-2012},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@article{2012QJRMS.138...56C,
author = {{Couvreux}, F. and {Rio}, C. and {Guichard}, F. and {Lothon}, M. and
{Canut}, G. and {Bouniol}, D. and {Gounou}, A.},
title = {{Initiation of daytime local convection in a semi-arid region analysed with high-resolution simulations and AMMA observations}},
journal = {Quarterly Journal of the Royal Meteorological Society},
year = 2012,
month = jan,
volume = 138,
pages = {56-71},
doi = {10.1002/qj.903},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@article{2012ClDy...38..379C,
author = {{Codron}, F.},
title = {{Ekman heat transport for slab oceans}},
journal = {Climate Dynamics},
keywords = {Slab ocean, Ekman, Heat transport},
year = 2012,
month = jan,
volume = 38,
pages = {379-389},
abstract = {{A series of schemes designed to include various representations of the
Ekman-driven heat fluxes in slab-ocean models is introduced. They work
by computing an Ekman mass flux, then deducing heat fluxes by the
surface flow and an opposite deep return flow. The schemes differ by the
computation of the return flow temperature: either diagnosed from the
SST or given by an active second layer. Both schemes conserve energy,
and use as few parameters as possible. Simulations in an aquaplanet
setting show that the schemes reproduce well the structure of the
meridional heat transport by the ocean. Compared to a diffusive
slab-ocean, the simulated SST is more flat in the tropics, and presents
a relative minimum at the equator, shifting the ITCZ into the summer
hemisphere. In a realistic setting with continents, the slab model
simulates correctly the mean state in many regions, especially in the
tropics. The lack of other dynamical features, such as barotropic gyres,
means that an optimal mean-state in regions such as the mid-latitudes
will require additional flux corrections.
}},
doi = {10.1007/s00382-011-1031-3},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
Contact information
EMC3 group
LMD/CNRS/UPMC
Case 99
Tour 45-55, 3ème étage
4 Place Jussieu
75252 Paris Cedex 05
FRANCE
Tel: 33 + 1 44 27 27 99
33 + 6 16 27 34 18 (Dr F. Cheruy)
Tel: 33 + 1 44 27 35 25 (Secretary)
Fax: 33 + 1 44 27 62 72
email: emc3 at lmd.jussieu.fr
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5559296607971191, "perplexity": 13315.194005657047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879374.66/warc/CC-MAIN-20210419111510-20210419141510-00133.warc.gz"}
|
https://mathhelpboards.com/threads/value-of-b-y-intercept-of-quadratic-graph.26570/
|
# Value of b, y-intercept of Quadratic graph
#### gazparkin
##### New member
Jul 1, 2019
17
Hi,
Can anyone help me understand how I get to the answer on this one?
The diagram shows a sketch of the graph of y = x^2 + ax + b
The graph crosses the x-axis at (2, 0) and (4, 0).
Work out the value of b.
#### Greg
##### Perseverance
Staff member
$$\displaystyle y=(x - 2)(x - 4)=x^2-6x+8$$
#### HallsofIvy
##### Well-known member
MHB Math Helper
Jan 29, 2012
1,151
The graph is of $$y= x^2+ ax+ b$$ and we are told that the graph goes through (2, 0). That means that when x= 2, y= 0. So we must have $$0= 2^2+ a(2)+ b= 4+ 2a+ b$$ or 2a+ b= -4. We are also told that the graph goes through (4, 0). That means that when x= 4, y= 0. So we must have $$0= 4^2+ a(4)+ b= 16+ 4a+ b$$ or 4a+ b= -16.
Solve the two equations, 2a+ b= -4 and 4a+ b= -16, for a and b.
#### gazparkin
##### New member
Jul 1, 2019
17
The graph is of $$y= x^2+ ax+ b$$ and we are told that the graph goes through (2, 0). That means that when x= 2, y= 0. So we must have $$0= 2^2+ a(2)+ b= 4+ 2a+ b$$ or 2a+ b= -4. We are also told that the graph goes through (4, 0). That means that when x= 4, y= 0. So we must have $$0= 4^2+ a(4)+ b= 16+ 4a+ b$$ or 4a+ b= -16.
Solve the two equations, 2a+ b= -4 and 4a+ b= -16, for a and b.
Thank you for this - really helped me understand.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6657461524009705, "perplexity": 646.4924378266287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00544.warc.gz"}
|
http://mathhelpforum.com/calculus/9770-limit.html
|
1. The Limit
Let $f(x)=5x^2+5x+4$ and let $g(h)=\frac{f(2+h)-f(2)}{h}$
Determine each of the following:
A. g(1)
B. g(0.1)
C. g(0.01)
You will notice that the values that you entered are getting closer and closer to a number $L$. This number is called the limit of $g(h)$ as $h$ approaches $0$ and is also called the derivative of f(x) at the point when x=2. Enter the value of the number $L$ ____?_____.
Thanks for all your help, as I am currently struggling with some of these new concepts.
2. Hello, qbkr21!
Let $f(x)=5x^2+5x+4$ and let $g(h)=\frac{f(2+h)-f(2)}{h}$
Determine each of the following: . $A.\;g(1)\qquad B.\;g(0.1)\qquad C.\;g(0.01)$
You will notice that the values that you entered are getting closer and closer to a number $L$.
This number is called the limit of $g(h)$ as $h \to 0$
and is also called the derivative of $f(x)$ at the point when $x=2.$
Enter the value of the number $L$ ____?_____.
First, we'll determine $g(h)$ . . . and do all the algebra first.
There are three stages to $g(h):$
. . (1) Find $f(2 + h)$ . . . and simplify.
. . (2) Subtract $f(2)$ . . . and simplify.
. . (3) Divide by $h$ . . . and simplify.
(1) $f(2+h) \:=\:5(2+h)^2 + 5(2+h) + 4$
. . . . . . . . . $= \:20 + 20h + 5h^2 + 10 + 5h + 4$
. . . . . . . . . $= \:5h^2 + 25h + 34$
(2) $f(2) \:=\:5(2^2) + 5(2) + 4\:=\:34$
Subtract: . $f(2+h) - f(2) \;=\;(5h^2 + 25h + 34) - 34 \;=\;5h^2 + 25h$
(3) Divide by $h:\;\;\frac{f(2+h) - f(2)}{h} \;= \;\frac{5h^2 + 25h}{h}$
. . .Factor and simplify: . $\frac{5\!\!\not{h}(h + 5)}{\not{h}}\;=\;5(h+5)$
There! . . . $g(h) \:=\:5(h+5)$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Now we can crank out the answers.
$\begin{array}{cccc}A. & g(1) & =\;5(1 + 5) & =\;30\\ B. & g(0.1) & =\;5(0.1 + 5) & =\;25.5\\ C. & g(0.01) & =\;5(0.01 + 5) & =\;25.05\end{array}$
As $h\to0$ (gets smaller and smaller), $g(h)$ approaches 25.
And that it the limit they're asking for: . $L \:=\:25$
3. Maybe this will help here.
(I am still not complete. But hopefully soon I will have everything made that I want to).
4. Thanks so much PerfectHacker you are the man!
5. Maybe you should check out PH's calc tutorial.
Anyway, take the derivative of your polynomial.
You get $f'(x)=10x+5$. When x=2, we have f'(x)=L.
Now, the concepts you're using are the nuts and bolts of the derivative.
Using $\frac{f(2+h)-f(2)}{h}$:
$\frac{5(2+h)^{2}+5(2+h)+4-(5(2)^{2}+5(2)+4)}{h}$
$=5(h+5)$
Now, if you enter in h=1, you get 30.
h=0.1, you get 25.5
h=0.01, you get 25.05
and so on.
Actually, if you let h approach 0, you are converging on this aforementioned number L. See what it is?. Does it jive with what we got by taking the derivative(the easy way) mentioned at the top.
$\lim_{h\rightarrow{0}}5(h+5)=L$
The closer h gets to 0, the closer you get to the slope(derivative) at that point.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9775016903877258, "perplexity": 448.3893595128623}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707773051/warc/CC-MAIN-20130516123613-00060-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2013-June/006935.html
|
# [gmx-developers] Question on SHAKE equations
Anton Feenstra k.a.feenstra at vu.nl
Tue Jun 11 11:36:17 CEST 2013
On 10/06/13 20:36, "Pablo García Risueño" wrote:
> Dear Anton
>
> Thank you very much for your explanations. They are very important for my
> research.
>
> I regret I still need to ask more. So what is the way of operationg for
> Gromacs when setting h_angles in the constraint block? Is it working with
> P-LINCS, or with the method of your JCC paper?
Constraints are handled by either SHAKE or LINCS for any other bond (or
angle), as chosen by the user. The only exception is SETTLE which is
defined explicitly in the SPC topology file.
Virtual sites (formerly called dummy atoms) are only *used* when
explicitly defined (by the user), or generated by pdb2gmx with the
-vsite option, IIRC. These constructions are quite distinct from
constraints, as v-sites have no mass but are only a point for an
interaction function (charge and/or LJ). (Note that for mass
conservation in a molecule, the mass of the v-site atom should be added
elsewhere.)
You can read all of this and some more, I think, in the paper and the
relevant section(s) in the Gromacs manual.
> Moreover, I cannot see why the C-N-C angle should be constrained when
> constraining the C1-N-H and the C2-N-H angles (see scheme below).
> H
> |
> C1--N--C2
>
> In principle C1-N-H and C2-N-H lie in different planes, they are two
> 'triangles' which share one edge (this edge is the H-N bond), but which
> can rotate around it in 3D, isn't this true?
Yes, but not for the protein backbone, which is planar around the amide
group (imposed by impropers). But also for e.g. a C-CH2-C group,
constraining all C-C-H angles (4 of them) would effectively also
constrain the C-C-C angle.
> I know of the limitations of LINCS and P-LINCS, but I was wondering if
> there would be any problem if inverting the constraint matrix by using
> direct Gaussian elimination (which should not carry any instability
> problem). I don't know if constraining all the H angles a la Gromacs is
> actually leading to some impossible geometry.
For this, I'm bumping back our discussion to the gmx-dev list.
>> Dear Pablo,
>>
>>
>> There are two issues: limitations in LINCS and couplings between H bond
>> angles and other bond angles. My guess is you are referring to the
second,
>> but for sake of being complete I will also go into the first.
>>
>> 1) LINCS solves constraints by using an approximation to the
inversion of
>> the constraints matrix; this relies on the assumption that this
matrix is
>> sparse. For bond constraints only, that is true. But when also (all)
>> angles are constrained, the matrix contains too many cross-terms. This
>> happens already in butane (with united atoms for all the - aliphatic -
>> hydrogens; so effectively only four atoms in the molecule) with three
>> bonds and two associated bond angles (if they are all constrained). For
>> more details, certainly on the technical and/or mathematical side,
>>
>> 2) in the case of, for example, a protein backbone, a problem can arise
>> when all H bond angles are constrained. At the N-H, we will
constrain both
>> C-N-H angles, effectively also constraining the C-N-C angle. This
has been
>> shown a long time ago to give wrong behaviour of protein dynamics (before
>> 1998 at least, but I do not have a reference).
>>
>> Regarding your question below, no, as far as I know Gromacs does not use
>> v-sites by default for H atom bond angles. However, pdb2gmx has an
option
>> to generate v-sites for H atoms in proteins and in several other
building
>> blocks.
>>
>>
>>
>> P.s., please call me Anton.
>>
>> Groetjes,
>>
>>
>> Anton.
>>
>>
>>
>> On 10 Jun 2013, at 16:55, "Pablo García Risueño"
>> <Risueno at physik.hu-berlin.de> wrote:
>>> Dear prof Feenstra
>>>
>>> Thank you very much for your reply. Then, I should understand that, in
>>> order to constrain H bond angles, Gromacs is not using P-LINCS, but the
>>> technique explained in J. Comput. Chem. 20 (8), 786-798 by default, is
>>> this correct? Isn't this having a non-negligible impact on observable
>>> quantities?
>>>
>>> I afraid I cannot understand completely the problem of the
>>> incompatibility
>>> between bond angle constraints yet. Why should this happen? I guess
>>> that
>>> we can freeze every H bond angle to a concrete value at the beginning
>>> of
>>> the simulation, and then to keep it throughout the simulation. Looking
>>> at
>>> the protein topologies, I'd swear that no incompatibility should arise,
>>> since a hydrogen atom is always at a extreme (e.g., there is never
>>> C-H-C).
>>> Isn't this true?
>>>
>>> Thank you very much for your help. Best regards
>>>
>>>
>>>
>>>> On 10/06/13 14:20, "Pablo García Risueño" wrote:
>>>>> Dear Berk
>>>>>
>>>>> the
>>>>> case of all H bond angles constrained. I was assuming that every H
>>>>> one
>>>>> single (arbitrary) constrained bond angle. This is, in the scheme
>>>>> below,
>>>>> e.g. the angle C1-C2-H would be constrained, but not the H-C2-C3
>>>>> angle.
>>>>> Isn't this true?
>>>>>
>>>>> H
>>>>> |
>>>>> C1--C2--C3
>>>>>
>>>>> In some literature I read that the vibration period of H bond angles
>>>>> is
>>>>> the same as that of heavy atoms bond lengths (the Gromacs option
>>>>> h-angles
>>>>> for constraints seems to assume this). Isn't this Gromacs option
>>>>> proceeding as stated above?
>>>>
>>>> Contrary to your expectation, all bond angles involving the H atom are
>>>> actually constrained. There is more on that in the paper cited below.
>>>>
>>>>> Can you give me some reference where the procedure of 'virtual sites'
>>>>> to
>>>>> constrain H bond angles is explained?
>>>>
>>>> Hello Pablo,
>>>>
>>>> Berk and I wrote a paper on that a while ago:
>>>> J. Comput. Chem. 20 (8), 786-798
>>>>
>>>>
>>>>> Thank you very much. Best regards
>>>>>
>>>>>
>>>>>> On 6/10/13 12:15 , David van der Spoel wrote:
>>>>>>> On 2013-06-10 10:57, "Pablo García Risueño" wrote:
>>>>>>>> Thank you very much for the answer. It is said that:
>>>>>>>>
>>>>>>>> "A practical issue is that not many people use shake, since LINCS
>>>>>>>> works in
>>>>>>>> parallel too."
>>>>>>>>
>>>>>>>> But I thought that SHAKE was used when bond angles are
>>>>>>>> constrained,
>>>>>>>> this
>>>>>>>> is when bond lengths of heavy atoms are constrained (since bond
>>>>>>>> lengths of
>>>>>>>> heavy atoms and bond angles of hydrogen have the same period).
>>>>>>>> Isn't
>>>>>>>> this
>>>>>>>> correct?
>>>>>>>>
>>>>>>> That's up to the user, but LINCS can be used too for angles when
>>>>>>> the
>>>>>>> number of iterations is increased a bit (although not too many
>>>>>>> people
>>>>>>> use angle constraints either). Nevertheless gromacs supports the
>>>>>>> shake
>>>>>>> algorithm and hence it should work.
>>>>>> But note that constraining all-angles will lead to incompatible
>>>>>> constraints, unless the molecule is very small.
>>>>>> Constraining all angles involving hydrogens has similar issues. We
>>>>>> use
>>>>>> virtual sites to remove H-angle vibrations.
>>>>>> So in practice the only molecule for which angle constraints are
>>>>>> used
>>>>>> is
>>>>>> water, but there SETTLE is algorithm of choice.
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> Berk
>>>>>>>> Thank you very much
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On 2013-06-07 19:26, "Pablo García Risueño" wrote:
>>>>>>>>>> Dear Gromacs developers
>>>>>>>>>>
>>>>>>>>>> I am studying the Gromacs implementation of the SHAKE algorithm.
>>>>>>>>>> As
>>>>>>>>>> far
>>>>>>>>>> as
>>>>>>>>>> I know, it corresponds to the cshake procedure, which lies in
>>>>>>>>>> the
>>>>>>>>>> shakef.c
>>>>>>>>>> file, and the equation (5.6) of the SHAKE paper
>>>>>>>>>> (see
>>>>>>>>>>
http://www.sciencedirect.com/science/article/pii/0021999177900985)
>>>>>>>>>> is
>>>>>>>>>> essentially in
>>>>>>>>>>
>>>>>>>>>> acor = omega*diff*m2[ll]/rrpr
>>>>>>>>>> lagr[ll] += acor;
>>>>>>>>>>
>>>>>>>>>> where the relationship between variables is:
>>>>>>>>>>
>>>>>>>>>> SHAKE var Gromacs var
>>>>>>>>>> omega=1
>>>>>>>>>> d^2 - (r')^2 diff
>>>>>>>>>> 1/(2(1/m_i+1/mj)) m2[ll]
>>>>>>>>>> \vec{r}_{ij}\cdot\vec{r}'_{ij} rrpr
>>>>>>>>>> g lagr[ll]
>>>>>>>>>>
>>>>>>>>>> being ll the constraint index. However, I think that the term
>>>>>>>>>> proportional
>>>>>>>>>> to g^2 in the equation (5.6) of SHAKE paper is not in the
>>>>>>>>>> Gromacs
>>>>>>>>>> code.
>>>>>>>>>> This term is relatively small, but I was assuming that it is not
>>>>>>>>>> negligible. Could some developer explain me whether this term is
>>>>>>>>>> neglected, or it is included in the calculations elsewhere?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I don't have the complete overview, so I will assume your
>>>>>>>>> observation is
>>>>>>>>> correct.
>>>>>>>>> Since the algorithm is iterative the second term may not be
>>>>>>>>> needed,
>>>>>>>>> although it might be converge in fewer iterations with it.
>>>>>>>>>
>>>>>>>>> A practical issue is that not many people use shake, since LINCS
>>>>>>>>> works
>>>>>>>>> in parallel too.
>>>>>>>>>
>>>>>>>>>> Thank you very much. Best regards
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Dr. Pablo García Risueño
>>>>>>>>>>
>>>>>>>>>> Institut für Physik und IRIS Adlershof, Humboldt Universität zu
>>>>>>>>>> Berlin,
>>>>>>>>>> Zum Grossen Windkanal 6, 12489 Berlin, Germany
>>>>>>>>>>
>>>>>>>>>> Tel. +49 030 209366369
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> David van der Spoel, Ph.D., Professor of Biology
>>>>>>>>> Dept. of Cell & Molec. Biol., Uppsala University.
>>>>>>>>> Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
>>>>>>>>> spoel at xray.bmc.uu.se http://folding.bmc.uu.se
>>>>>>>>> --
>>>>>>>>> gmx-developers mailing list
>>>>>>>>> gmx-developers at gromacs.org
>>>>>>>>> http://lists.gromacs.org/mailman/listinfo/gmx-developers
>>>>>>>>> Please don't post (un)subscribe requests to the list. Use the
>>>>>>>>> www interface or send it to gmx-developers-request at gromacs.org.
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Dr. Pablo García Risueño
>>>>>>>>
>>>>>>>> Institut für Physik und IRIS Adlershof, Humboldt Universität zu
>>>>>>>> Berlin,
>>>>>>>> Zum Grossen Windkanal 6, 12489 Berlin, Germany
>>>>>>>>
>>>>>>>> Tel. +49 030 209366369
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> gmx-developers mailing list
>>>>>> gmx-developers at gromacs.org
>>>>>> http://lists.gromacs.org/mailman/listinfo/gmx-developers
>>>>>> Please don't post (un)subscribe requests to the list. Use the
>>>>>> www interface or send it to gmx-developers-request at gromacs.org.
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Dr. Pablo García Risueño
>>>>>
>>>>> Institut für Physik und IRIS Adlershof, Humboldt Universität zu
>>>>> Berlin,
>>>>> Zum Grossen Windkanal 6, 12489 Berlin, Germany
>>>>>
>>>>> Tel. +49 030 209366369
>>>>>
>>>>
>>>>
>>>> --
>>>> Groetjes,
>>>>
>>>> Anton
>>>> _____________ >>>> | |
>>>> | _ _ ___,| K. Anton Feenstra
>>>> | / \ / \'| | | IBIVU/Bioinformatics - Vrije Universiteit Amsterdam
>>>> |( | )| | | De Boelelaan 1081 - 1081 HV Amsterdam - Netherlands
>>>> | \_/ \_/ | | | Tel +31 20 59 87783 - Fax +31 20 59 87653 - Room
>>>> | | Feenstra at few.vu.nl - www.few.vu.nl/~feenstra/
>>>> | | "Is This the Right Room for an Argument ?" (Monty
>>>> | | Python)
>>>> |_____________|________________________________________________
>>>> --
>>>> gmx-developers mailing list
>>>> gmx-developers at gromacs.org
>>>> http://lists.gromacs.org/mailman/listinfo/gmx-developers
>>>> Please don't post (un)subscribe requests to the list. Use the
>>>> www interface or send it to gmx-developers-request at gromacs.org.
>>>>
>>>
>>>
>>> --
>>>
>>> Dr. Pablo García Risueño
>>>
>>> Institut für Physik und IRIS Adlershof, Humboldt Universität zu Berlin,
>>> Zum Grossen Windkanal 6, 12489 Berlin, Germany
>>>
>>> Tel. +49 030 209366369
>>>
>>
>
>
> --
>
> Dr. Pablo García Risueño
>
> Institut für Physik und IRIS Adlershof, Humboldt Universität zu Berlin,
> Zum Grossen Windkanal 6, 12489 Berlin, Germany
>
> Tel. +49 030 209366369
>
--
Groetjes,
Anton
_____________ _______________________________________________________
| | |
| _ _ ___,| K. Anton Feenstra |
| / \ / \'| | | IBIVU/Bioinformatics - Vrije Universiteit Amsterdam |
|( | )| | | De Boelelaan 1081 - 1081 HV Amsterdam - Netherlands |
| \_/ \_/ | | | Tel +31 20 59 87783 - Fax +31 20 59 87653 - Room P136 |
| | Feenstra at few.vu.nl - www.few.vu.nl/~feenstra/ |
| | "Does All This Money Really Have To Go To Charity ?" |
| | (Rick) |
|_____________|_______________________________________________________|
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7141958475112915, "perplexity": 13144.00828019599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00084.warc.gz"}
|
http://math.stackexchange.com/users/17982/colin-tan
|
# Colin Tan
reputation
213
location: National University of Singapore · age: 29 · member for: 3 years · seen: Sep 29 at 19:55 · profile views: 270
I am a graduate student at the National University of Singapore studying complex geometry with To Wing Keung.
# 30 Questions
10 How to prove an inequality with square roots? 5 How to solve the inequality $2^x\ge a+bx$? 4 Does there exist bivariate polynomials $p$ and $q$ such that $p(x,y)^2 = q(x, y)^2 ( x^2 + y^2)$? 4 Does the homology, homotopy, and geometric realization functors of a simplicial group preserve colimits? 3 Homology of stunted infinite real projective space
# 534 Reputation
+15 Homology of stunted infinite real projective space +10 Differential Forms: High Level Approach to Real Analysis? +10 Generating function of $\binom{3n}{n}$ +10 Find the Maclaurin series for $\cos(2x)$ using the series for $\sin(2x)$.
3 Find the Maclaurin series for $\cos(2x)$ using the series for $\sin(2x)$. 3 Integrating $\int \limits_{-\infty}^{\infty} \dfrac{\sin^2(x)}{x^2} \operatorname d\!x$ 2 Differential Forms: High Level Approach to Real Analysis? 1 What is a co-dimension? 1 a curious definite integral
# 32 Tags
4 calculus × 6 2 differential-forms 3 integration 2 soft-question 3 complex-analysis 1 real-analysis × 3 3 taylor-expansion 1 general-topology × 2 3 contour-integration 1 terminology
# 7 Accounts
MathOverflow 2,392 rep 22052 Mathematics 534 rep 213 Buddhism 225 rep 16 Chinese Language 216 rep 3 Area 51 154 rep 2
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7318329215049744, "perplexity": 2013.9037885305129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444209.14/warc/CC-MAIN-20141017005724-00292-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/215706-inequalities-help-print.html
|
Inequalities help
• March 26th 2013, 07:33 PM
calmo11
1 Attachment(s)
Inequalities help
I need help with these two inequalities Attachment 27702
For c) I can get to x ≥ 5, but I know that it is true when x < 0 too, and I don't know how to show that algebraically.
For d) I can get to x > 3/2, but I also know that it is true when x < 1 and can't show it. Please help.
• March 26th 2013, 08:44 PM
Soroban
Re: Inequalities help
Hello, calmo11!
Quote:
$(c)\;\;e^{x^2} \:\ge\:e^{5x}$
We have: . $x^2 \:\ge\:5x \quad\Rightarrow\quad x^2 - 5x \:\ge\:0$
. . . . . . . . $x(x-5)\:\ge\:0$
It says: the product of two numbers is nonnegative.
This is true if both factors are nonnegative or both factors are nonpositive.
We must consider these two cases.
Both nonnegative:
. . $x \,\ge\, 0 \:\text{ and }\:x-5\,\ge\,0 \quad\Rightarrow\quad x \,\ge\,5$
. . Both are true if $x\,\ge\,5$
Both nonpositive:
. . $x\,\le\,0\:\text{ and }\:x-5 \,\le\,0 \quad\Rightarrow\quad x\,\le\,5$
. . Both are true if $x\,\le\,0$
Answer: . $(x\,\ge\,5)\,\text{ or }\,(x\,\le\,0)$
Quote:
$(d)\;\;|4x-5| \:>\:1$
The absolute inequality gives us two statements:
. . $[1]\;4x-5 \,>\,1\;\text{ or }\;[2]\;4x-5\,<\,-1$
Solve them separately:
. . $[1]\;4x - 5 \,>\,1 \quad\Rightarrow\quad 4x\,>\,6 \quad\Rightarrow\quad x \,>\,\tfrac{3}{2}$
. . $[2]\;4x-5 \,<\,-1 \quad\Rightarrow\quad 4x \,<\,4 \quad\Rightarrow\quad x \,<\,1$
Answer: . $(x\,<\,1)\:\text{ or }\:(x\,>\,\tfrac{3}{2})$
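For readers who want a machine check of both answers, here is a short sketch that assumes SymPy is available (an illustration, not part of the original replies):
from sympy import symbols, Abs, solve_univariate_inequality
x = symbols('x', real=True)
# (c): e^(x^2) >= e^(5x) reduces to x^2 - 5x >= 0, since e^t is increasing
print(solve_univariate_inequality(x**2 - 5*x >= 0, x))   # x <= 0 or x >= 5
# (d): |4x - 5| > 1
print(solve_univariate_inequality(Abs(4*x - 5) > 1, x))  # x < 1 or x > 3/2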
• March 26th 2013, 09:30 PM
calmo11
Re: Inequalities help
You are a genius! Why can't you teach me my uni course!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9584217667579651, "perplexity": 2064.8274348431096}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096780.24/warc/CC-MAIN-20150627031816-00234-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://projects.skill-lync.com/projects/Global-Maxima-of-Stalagmite-Function-41047
|
## Global Maxima of Stalagmite Function
GENETIC ALGORITHM: A genetic algorithm is a search heuristic that is inspired by Charles Darwin’s theory of natural evolution. This algorithm reflects the process of natural selection, where the fittest individuals are selected for reproduction in order to produce the offspring of the next generation.
A genetic algorithm is a heuristic search method used in artificial intelligence and computing. It finds optimized solutions to search problems based on the theory of natural selection and evolutionary biology, and it works well on large, complex search spaces and on both constrained and unconstrained optimization problems. A genetic algorithm makes use of techniques inspired by evolutionary biology, such as selection, mutation, inheritance and recombination, to solve a problem. The most common procedure is to create a group of individuals randomly from a given population. These individuals are evaluated with an evaluation (fitness) function provided by the programmer, which assigns each individual a score indicating how well it fits the given situation. The best two individuals are then used to create one or more offspring, after which random mutations are applied to the offspring. Depending on the needs of the application, the procedure continues until an acceptable solution is found or until a certain number of generations have passed.
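To make that loop concrete before the project's MATLAB code, here is a minimal Python sketch of one selection/crossover/mutation cycle; the toy fitness function, population size and mutation rate are illustrative assumptions, not part of this project.
import random

def fitness(ind):
    # toy objective with its peak at (0.3, 0.3); stands in for any evaluation function
    x, y = ind
    return -(x - 0.3) ** 2 - (y - 0.3) ** 2

def evolve(pop_size=50, generations=100, mutation_rate=0.1):
    pop = [(random.random(), random.random()) for _ in range(pop_size)]  # random initial population
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2]   # crossover: average the parents
            if random.random() < mutation_rate:              # mutation: small random jitter
                child[0] += random.gauss(0, 0.05)
                child[1] += random.gauss(0, 0.05)
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())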
The Code to maximize the stalagmite function is as follows:
clear all
close all
clc
% input parameters
x = linspace(0,0.6,150);
y = linspace(0,0.6,150);
[xx yy] = meshgrid(x,y);
num_case = 150;
for i = 1:length(x)
for j = 1:length(y)
input_vector(1) = xx(i,j);
input_vector(2) = yy(i,j);
F(i,j) = stalagmite(input_vector);
end
end
% study_1
tic
for i = 1:num_case
[inputs,Fopt(i)] = ga(@stalagmite,2);
xopt(i) = inputs(1);
yopt(i) = inputs(2);
end
study_time_1 = toc;
figure(1)
subplot(2,1,1);
hold on
surfc(xx,yy,F);
xlabel('x data');
ylabel('y data');
plot3(xopt,yopt,Fopt,'marker','o','markersize',5,'markerfacecolor','k')
title('Unbounded Inputs')
subplot(2,1,2)
plot(Fopt)
xlabel('Iterations')
ylabel('Function Maximum')
% Study 2
tic
for i = 1:num_case
[inputs,Fopt(i)] = ga(@stalagmite,2,[],[],[],[],[0;0],[1;1]);
xopt(i) = inputs(1);
yopt(i) = inputs(2);
end
study_time_2 = toc;
figure(2)
subplot(2,1,1)
hold on
surfc(xx, yy, F)
xlabel 'X data'
ylabel 'Y data'
plot3(xopt,yopt,Fopt,'marker','o','markersize',5,'markerfacecolor','k')
title('Bounded Inputs')
subplot(2,1,2)
plot(Fopt)
xlabel('Iterations')
ylabel('Function Maximum')
% Study 3
options = optimoptions('ga');
options = optimoptions(options,'PopulationSize',450);
tic
for i = 1:num_case
[inputs,Fopt(i)] = ga(@stalagmite,2,[],[],[],[],[0;0],[1;1],[],[],options);
xopt(i) = inputs(1);
yopt(i) = inputs(2);
end
study_time_3 = toc;
figure(3)
subplot(2,1,1)
hold on
surfc(xx, yy, F);
xlabel 'X data'
ylabel 'Y data'
plot3(xopt,yopt,Fopt,'marker','o','markersize',5,'markerfacecolor','k');
title('Bounded Inputs with Increased Population Size');
subplot(2,1,2);
plot(Fopt);
xlabel('Iterations');
ylabel('Function Maximum');
max_value = [Fopt];
In the above code, we run the genetic algorithm on our stalagmite function in three different scenarios:
1. Unbounded inputs, where the inputs do not have any restrictions, i.e., they are randomly distributed even outside the working space.
2. Bounded inputs, where the inputs are restricted and are not allowed to go beyond our working space.
3. Bounded inputs with an increased population size, where we keep the bounds from study 2 but raise the genetic algorithm's population size to 450.
Stalagmite Function:
function [F] = stalagmite(input_vector)
term_1 = (sin(5.1*pi*input_vector(1)+0.5))^6;
term_2 = (sin(5.1*pi*input_vector(2)+0.5))^6;
term_3 = exp((-4*log(2)*(((input_vector(1) - 0.0667)^2)/0.64)));
term_4 = exp((-4*log(2)*(((input_vector(2) - 0.0667)^2)/0.64)));
F = (term_1.*term_2.*term_3.*term_4);
F = 1/(1+F);
end
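For reference, below is a line-by-line Python rendering of the same function (my own translation, assuming the constants above); it can be handy for spot-checking values outside MATLAB.
import math

def stalagmite(v):
    # direct rendering of the MATLAB stalagmite function above
    term_1 = math.sin(5.1 * math.pi * v[0] + 0.5) ** 6
    term_2 = math.sin(5.1 * math.pi * v[1] + 0.5) ** 6
    term_3 = math.exp(-4 * math.log(2) * (v[0] - 0.0667) ** 2 / 0.64)
    term_4 = math.exp(-4 * math.log(2) * (v[1] - 0.0667) ** 2 / 0.64)
    f = term_1 * term_2 * term_3 * term_4
    return 1 / (1 + f)   # inverted so that minimizing the return value (as ga does) maximizes f

print(stalagmite([0.0667, 0.0667]))   # near the global peak, value close to 0.5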
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4831722378730774, "perplexity": 3584.131931784822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658988.30/warc/CC-MAIN-20190117143601-20190117165601-00542.warc.gz"}
|
http://cs.stackexchange.com/questions/6801/every-simple-undirected-graph-with-more-than-n-1n-2-2-edges-is-connected
|
# Every simple undirected graph with more than $(n-1)(n-2)/2$ edges is connected
If a graph with $n$ vertices has more than $\frac{(n-1)(n-2)}{2}$ edges then it is connected.
I am a bit confused about this question, since all I can prove is that for a graph to be connected you need $|E| \ge n-1$ edges.
-
hint: What if you have one isolated vertex (not connected to any other vertices) what is the maximum number of edges in the graph? – Joe Nov 22 '12 at 0:42
I am not sure what bothers you but as I see it you are confused about the following two facts
1. If a graph is connected then $e \geq n-1.$
2. If a graph has $e > \frac{(n-1)(n-2)}{2}$ edges then it is connected.
Notice that the implications in 1 and 2 are in opposite directions.
For a proof of 2. you can check out this link.
-
I think your problem might be to prove that you cannot construct an undirected graph with more than $\dfrac{(n-1)(n-2)}{2}$ edges that is not connected. You are thinking about it the wrong way. The $E = n - 1$ formula is about how few edges you can use to connect all the vertices.
Imagine you are an adversary trying to design a horrible highway system so that one town is disconnected. No matter how inefficiently you spend your roads, you'll still have to connect all the towns if there are so many roads.
Consider what the worst possible design could be, eg, the one that uses as many roads as possible but still leaves one town disconnected. How many edges does that have? What happens when you add one more edge to that?
-
1. As you mentioned, we have:
$G\text{ is connected} \Rightarrow |V|-1 \le |E|$
But the converse is not true, i.e. the statement
$G\text{ is connected} \Leftrightarrow |V|-1 \le |E|$
is false.
So you cannot use it for further reasoning. A sample counterexample is this graph ($K_t$ is a complete graph on $t$ vertices, and $\cup$ means disjoint union of graphs):
$G = K_{n-1} \cup K_1$
$G$ has ${n-1\choose 2}$ edges and $n$ nodes, and ${n-1\choose 2} > n-1$ for $n>4$.
2. On the other hand, to prove that:
${|V|-1 \choose 2} < |E| \Rightarrow G\text{ is connected}$
We can do it as follow:
Suppose not; then $G$ is the disjoint union of two graphs, $G = G_1 \cup G_2$, with $|G_1| = k$, $|G_2| = n-k$, $0 < k < n$. If we add all the edges between the vertices of $G_1$ and $G_2$ to make a graph $G''$, then $|E_{G''}| \le {n \choose 2}$ (because $G''$ has at most as many edges as the complete graph), but:
${n-1 \choose 2} + 1 + k\cdot (n-k) \le |E_G| + k\cdot (n-k) = |E_{G''}| \le {n \choose 2} \Rightarrow$
$(k-1)(n-k-1) + 1 \le 0$, which contradicts $0<k<n$.
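As an aside (my own sketch, not part of the answers above), a brute-force check in Python confirms for small $n$ that the largest disconnected simple graph has exactly ${n-1 \choose 2}$ edges, so the bound in the statement is tight:
from itertools import combinations

def is_connected(n, edges):
    # depth-first search from vertex 0 over the given edge set
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def max_edges_disconnected(n):
    # largest edge count over all disconnected simple graphs on n labelled vertices
    pairs = list(combinations(range(n), 2))
    best = 0
    for r in range(len(pairs) + 1):
        for edges in combinations(pairs, r):
            if not is_connected(n, edges):
                best = max(best, r)
    return best

for n in range(2, 6):
    print(n, max_edges_disconnected(n), (n - 1) * (n - 2) // 2)   # last two columns agree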
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5082381367683411, "perplexity": 213.11233436863824}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00246-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/maths-iq-challenge-3sum-to-infinity/
|
# MATHS iQ CHALLENGE - #3(sUm To InFiNiTy)
Calculus Level pending
The sum to infinity of -
( Please provide the solution and reason)
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698952436447144, "perplexity": 21976.15161838657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690035.53/warc/CC-MAIN-20170924152911-20170924172911-00646.warc.gz"}
|
http://hal.in2p3.fr/in2p3-00011963
|
# Measurement of Bose-Einstein Correlations in $e^{+}e^{-}\to W^{+}W^{-}$ Events at LEP
3 CMS
IP2I Lyon - Institut de Physique des 2 Infinis de Lyon
Abstract: Bose-Einstein correlations in W-pair production at LEP are investigated in a data sample of 629 pb$^{-1}$ collected by the L3 detector at $\sqrt{s} =$ 189–209 GeV. Bose-Einstein correlations between pions within a W decay are observed and found to be in good agreement with those in light-quark Z decay. No evidence is found for Bose-Einstein correlations between hadrons coming from different W's in the same event.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00011963
Contributor: Sylvie Flores
Submitted on : Monday, October 28, 2002 - 3:27:49 PM
Last modification on : Friday, September 10, 2021 - 1:50:05 PM
### Citation
P. Achard, O. Adriani, M. Aguilar-Benitez, J. Alcaraz, G. Alemanni, et al.. Measurement of Bose-Einstein Correlations in $e^{+}e^{-}\to W^{+}W^{-}$ Events at LEP. Physics Letters B, Elsevier, 2002, 547, pp.139-150. ⟨in2p3-00011963⟩
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5927248001098633, "perplexity": 8964.987790158384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00620.warc.gz"}
|
http://jeremykun.com/tag/random-variables/
|
# Martingales and the Optional Stopping Theorem
This is a guest post by my colleague Adam Lelkes.
The goal of this primer is to introduce an important and beautiful tool from probability theory, a model of fair betting games called martingales. In this post I will assume that the reader is familiar with the basics of probability theory. For those that need to refresh their knowledge, Jeremy’s excellent primers (1, 2) are a good place to start.
## The Geometric Distribution and the ABRACADABRA Problem
Before we start playing with martingales, let’s start with an easy exercise. Consider the following experiment: we throw an ordinary die repeatedly until the first time a six appears. How many throws will this take in expectation? The reader might recognize immediately that this exercise can be easily solved using the basic properties of the geometric distribution, which models this experiment exactly. We have independent trials, every trial succeeding with some fixed probability $p$. If $X$ denotes the number of trials needed to get the first success, then clearly $\Pr(X = k) = (1-p)^{k-1} p$ (since first we need $k-1$ failures which occur independently with probability $1-p$, then we need one success which happens with probability $p$). Thus the expected value of $X$ is
$\displaystyle E(X) = \sum_{k=1}^\infty k P(X = k) = \sum_{k=1}^\infty k (1-p)^{k-1} p = \frac1p$
by basic calculus. In particular, if success is defined as getting a six, then $p=1/6$ thus the expected time is $1/p=6$.
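A quick Monte Carlo sketch (using only Python's standard random module; this check is an addition, not part of the derivation) confirms the expectation of six throws:
import random

def rolls_until_six():
    # number of throws of a fair die until the first six
    count = 0
    while True:
        count += 1
        if random.randint(1, 6) == 6:
            return count

trials = 200000
print(sum(rolls_until_six() for _ in range(trials)) / trials)   # close to 6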
Now let us move on to a somewhat similar, but more interesting and difficult problem, the ABRACADABRA problem. Here we need two things for our experiment, a monkey and a typewriter. The monkey is asked to start bashing random keys on a typewriter. For simplicity’s sake, we assume that the typewriter has exactly 26 keys corresponding to the 26 letters of the English alphabet and the monkey hits each key with equal probability. There is a famous theorem in probability, the infinite monkey theorem, that states that given infinite time, our monkey will almost surely type the complete works of William Shakespeare. Unfortunately, according to astronomers the sun will begin to die in a few billion years, and the expected time we need to wait until a monkey types the complete works of William Shakespeare is orders of magnitude longer, so it is not feasible to use monkeys to produce works of literature.
So let’s scale down our goals, and let’s just wait until our monkey types the word ABRACADABRA. What is the expected time we need to wait until this happens? The reader’s first idea might be to use the geometric distribution again. ABRACADABRA is eleven letters long, the probability of getting one letter right is $\frac{1}{26}$, thus the probability of a random eleven-letter word being ABRACADABRA is exactly $\left(\frac{1}{26}\right)^{11}$. So if typing 11 letters is one trial, the expected number of trials is
$\displaystyle \frac1{\left(\frac{1}{26}\right)^{11}}=26^{11}$
which means $11\cdot 26^{11}$ keystrokes, right?
Well, not exactly. The problem is that we broke up our random string into eleven-letter blocks and waited until one block was ABRACADABRA. However, this word can start in the middle of a block. In other words, we considered a string a success only if the starting position of the word ABRACADABRA was divisible by 11. For example, FRZUNWRQXKLABRACADABRA would be recognized as success by this model but the same would not be true for AABRACADABRA. However, it is at least clear from this observation that $11\cdot 26^{11}$ is a strict upper bound for the expected waiting time. To find the exact solution, we need one very clever idea, which is the following:
## Let’s Open a Casino!
Do I mean that abandoning our monkey and typewriter and investing our time and money in a casino is a better idea, at least in financial terms? This might indeed be the case, but here we will use a casino to determine the expected wait time for the ABRACADABRA problem. Unfortunately we won’t make any money along the way (in expectation) since our casino will be a fair one.
Let’s do the following thought experiment: let’s open a casino next to our typewriter. Before each keystroke, a new gambler comes to our casino and bets $1 that the next letter will be A. If he loses, he goes home disappointed. If he wins, he bets all the money he won on the event that the next letter will be B. Again, if he loses, he goes home disappointed. (This won’t wreak havoc on his financial situation, though, as he only loses $1 of his own money.) If he wins again, he bets all the money on the event that the next letter will be R, and so on.
If a gambler wins, how much does he win? We said that the casino would be fair, i.e. the expected outcome should be zero. That means that if the gambler bets $1, he should receive $26 if he wins, since the probability of getting the next letter right is exactly $\frac{1}{26}$ (thus the expected value of the change in the gambler’s fortune is $\frac{25}{26}\cdot (-1) + \frac{1}{26}\cdot (+25) = 0$).
Let’s keep playing this game until the word ABRACADABRA first appears and let’s denote the number of keystrokes up to this time as $T$. As soon as we see this word, we close our casino. How much was the revenue of our casino then? Remember that before each keystroke, a new gambler comes in and bets $1, and if he wins, he will only bet the money he has received so far, so our revenue will be exactly $T$ dollars. How much will we have to pay for the winners? Note that the only winners in the last round are the players who bet on A. How many of them are there? There is one that just came in before the last keystroke and this was his first bet. He wins $26. There was one who came three keystrokes earlier and he made four successful bets (ABRA). He wins $26^4$ dollars. Finally there is the luckiest gambler who went through the whole ABRACADABRA sequence, his prize will be $26^{11}$ dollars. Thus our casino will have to give out $26^{11}+26^4+26$ dollars in total, which is just under the price of 200,000 WhatsApp acquisitions.
Now we will make one crucial observation: even at the time when we close the casino, the casino is fair! Thus in expectation our expenses will be equal to our income. Our income is $T$ dollars, the expected value of our expenses is $26^{11}+26^4+26$ dollars, thus $E(T)=26^{11}+26^4+26$. A beautiful solution, isn’t it? So if our monkey types at 150 characters per minute on average, we will have to wait around 47 million years until we see ABRACADABRA. Oh well.
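A quick computational sketch can sanity-check this. The two-letter test case below is my own addition: for the word ABA over a two-letter alphabet, the same casino reasoning gives $2^3 + 2^1 = 10$ expected keystrokes, which a simulation reproduces.
import random

def waiting_time(word="ABA", alphabet="AB"):
    # keystrokes until `word` first appears in a uniformly random letter stream
    count, recent = 0, ""
    while not recent.endswith(word):
        recent = (recent + random.choice(alphabet))[-len(word):]
        count += 1
    return count

trials = 100000
print(sum(waiting_time() for _ in range(trials)) / trials)   # close to 2**3 + 2 = 10
print(26**11 + 26**4 + 26)   # expected keystrokes until ABRACADABRA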
## Time to be More Formal
After giving an intuitive outline of the solution, it is time to formalize the concepts that we used, to translate our fairy tales into mathematics. The mathematical model of the fair casino is called a martingale, named after a class of betting strategies that enjoyed popularity in 18th century France. The gambler’s fortune (or the casino’s, depending on our viewpoint) can be modeled with a sequence of random variables. $X_0$ will denote the gambler’s fortune before the game starts, $X_1$ the fortune after one round and so on. Such a sequence of random variables is called a stochastic process. We will require the expected value of the gambler’s fortune to be always finite.
How can we formalize the fairness of the game? Fairness means that the gambler’s fortune does not change in expectation, i.e. the expected value of $X_n$, given $X_1, X_2, \ldots, X_{n-1}$ is the same as $X_{n-1}$. This can be written as $E(X_n | X_1, X_2, \ldots, X_{n-1}) = X_{n-1}$ or, equivalently, $E(X_n - X_{n-1} | X_1, X_2, \ldots, X_{n-1}) = 0$.
The reader might be less comfortable with the first formulation. What does it mean, after all, that the conditional expected value of a random variable is another random variable? Shouldn’t the expected value be a number? The answer is that in order to have solid theoretical foundations for the definition of a martingale, we need a more sophisticated notion of conditional expectations. Such sophistication involves measure theory, which is outside the scope of this post. We will instead naively accept the definition above, and the reader can look up all the formal details in any serious probability text (such as [1]).
Clearly the fair casino we constructed for the ABRACADABRA exercise is an example of a martingale. Another example is the simple symmetric random walk on the number line: we start at 0, toss a coin in each step, and move one step in the positive or negative direction based on the outcome of our coin toss.
## The Optional Stopping Theorem
Remember that we closed our casino as soon as the word ABRACADABRA appeared and we claimed that our casino was also fair at that time. In mathematical language, the closed casino is called a stopped martingale. The stopped martingale is constructed as follows: we wait until our martingale X exhibits a certain behaviour (e.g. the word ABRACADABRA is typed by the monkey), and we define a new martingale X’ as follows: let $X'_n = X_n$ if $n < T$ and $X'_n = X_T$ if $n \ge T$ where $T$ denotes the stopping time, i.e. the time at which the desired event occurs. Notice that $T$ itself is a random variable.
We require our stopping time $T$ to depend only on the past, i.e. that at any time we should be able to decide whether the event that we are waiting for has already happened or not (without looking into the future). This is a very reasonable requirement. If we could look into the future, we could obviously cheat by closing our casino just before some gambler would win a huge prize.
We said that the expected wealth of the casino at the stopping time is the same as the initial wealth. This is guaranteed by Doob’s optional stopping theorem, which states that under certain conditions, the expected value of a martingale at the stopping time is equal to its expected initial value.
Theorem: (Doob’s optional stopping theorem) Let $X_n$ be a martingale stopped at step $T$, and suppose one of the following three conditions hold:
1. The stopping time $T$ is almost surely bounded by some constant;
2. The stopping time $T$ is almost surely finite and every step of the stopped martingale $X_n$ is almost surely bounded by some constant; or
3. The expected stopping time $E(T)$ is finite and the absolute value of the martingale increments $|X_n-X_{n-1}|$ are almost surely bounded by a constant.
Then $E(X_T) = E(X_0).$
We omit the proof because it requires measure theory, but the interested reader can see it in these notes.
For applications, (1) and (2) are the trivial cases. In the ABRACADABRA problem, the third condition holds: the expected stopping time is finite (in fact, we showed using the geometric distribution that it is less than $26^{12}$) and the absolute value of a martingale increment is either 1 or a net payoff which is bounded by $26^{11}+26^4+26$. This shows that our solution is indeed correct.
## Gambler’s Ruin
Another famous application of martingales is the gambler’s ruin problem. This problem models the following game: there are two players, the first player has $a$ dollars, the second player has $b$ dollars. In each round they toss a coin and the loser gives one dollar to the winner. The game ends when one of the players runs out of money. There are two obvious questions: (1) what is the probability that the first player wins and (2) how long will the game take in expectation?
Let $X_n$ denote the change in the second player’s fortune, and set $X_0 = 0$. Let $T_k$ denote the first time $s$ when $X_s = k$. Then our first question can be formalized as trying to determine $\Pr(T_{-b} < T_a)$. Let $t = \min \{ T_{-b}, T_a\}$. Clearly $t$ is a stopping time. By the optional stopping theorem we have that
$\displaystyle 0=E(X_0)=E(X_t)=-b\Pr(T_{-b} < T_a)+a(1-\Pr(T_{-b} < T_a))$
thus $\Pr(T_{-b} < T_a)=\frac{a}{a+b}$.
I would like to ask the reader to try to answer the second question. It is a little bit trickier than the first one, though, so here is a hint: $X_n^2-n$ is also a martingale (prove it), and applying the optional stopping theorem to it leads to the answer.
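As a quick check of the first answer (a simulation sketch of my own, not from the original argument), a Monte Carlo estimate of $\Pr(T_{-b} < T_a)$ agrees with $\frac{a}{a+b}$:
import random

def ruin_probability(a, b, trials=100000):
    # estimates Pr(T_{-b} < T_a): the walk hits -b before it hits a
    hits = 0
    for _ in range(trials):
        x = 0
        while -b < x < a:
            x += random.choice((-1, 1))
        hits += (x == -b)
    return hits / trials

print(ruin_probability(3, 7))   # close to 3 / (3 + 7) = 0.3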
## A Randomized Algorithm for 2-SAT
The reader is probably familiar with 3-SAT, the first problem shown to be NP-complete. Recall that 3-SAT is the following problem: given a boolean formula in conjunctive normal form with at most three literals in each clause, decide whether there is a satisfying truth assignment. It is natural to ask if or why 3 is special, i.e. why don’t we work with $k$-SAT for some $k \ne 3$ instead? Clearly the hardness of the problem is monotone increasing in $k$ since $k$-SAT is a special case of $(k+1)$-SAT. On the other hand, SAT (without any bound on the number of literals per clause) is clearly in NP, thus 3-SAT is just as hard as $k$-SAT for any $k>3$. So the only question is: what can we say about 2-SAT?
It turns out that 2-SAT is easier than satisfiability in general: 2-SAT is in P. There are many algorithms for solving 2-SAT. Here is one deterministic algorithm: associate a graph to the 2-SAT instance such that there is one vertex for each variable and each negated variable and the literals $x$ and $y$ are connected by a directed edge if there is a clause $(\bar x \lor y)$. Recall that $\bar x \lor y$ is equivalent to $x \implies y$, so the edges show the implications between the variables. Clearly the 2-SAT instance is not satisfiable if there is a variable $x$ such that there are directed paths $x \to \bar x$ and $\bar x \to x$ (since $x \Leftrightarrow \bar x$ is always false). It can be shown that this is not only a sufficient but also a necessary condition for unsatisfiability, hence the 2-SAT instance is satisfiable if and only if there is no such path. If there are directed paths from one vertex of a graph to another and vice versa then they are said to belong to the same strongly connected component. There are several graph algorithms for finding strongly connected components of directed graphs; the most well-known algorithms are all based on depth-first search.
Now we give a very simple randomized algorithm for 2-SAT (due to Christos Papadimitriou in a ’91 paper): start with an arbitrary truth assignment and while there are unsatisfied clauses, pick one and flip the truth value of a random literal in it. Stop after $O(n^2)$ rounds where $n$ denotes the number of variables. Clearly if the formula is not satisfiable then nothing can go wrong, we will never find a satisfying truth assignment. If the formula is satisfiable, we want to argue that with high probability we will find a satisfying truth assignment in $O(n^2)$ steps.
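Here is a short sketch of that random walk in Python. The clause encoding (literal $+i$ for variable $i$, $-i$ for its negation) and the specific step budget are my own conventions for illustration, not part of the original description.
import random

def random_walk_2sat(clauses, n, max_steps=None):
    # clauses: list of 2-tuples of literals; variables are numbered 1..n
    if max_steps is None:
        max_steps = 100 * n * n
    assignment = [random.choice([False, True]) for _ in range(n + 1)]   # index 0 unused

    def satisfied(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]

    for _ in range(max_steps):
        unsatisfied = [c for c in clauses if not (satisfied(c[0]) or satisfied(c[1]))]
        if not unsatisfied:
            return assignment[1:]                        # satisfying assignment found
        lit = random.choice(random.choice(unsatisfied))  # pick an unsatisfied clause, then a literal in it
        assignment[abs(lit)] = not assignment[abs(lit)]  # flip that variable's truth value
    return None                                          # give up: probably unsatisfiable

# (x1 or x2) and (not x1 or x2) and (x1 or not x2) is satisfied by x1 = x2 = True
print(random_walk_2sat([(1, 2), (-1, 2), (1, -2)], n=2))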
The idea of the proof is the following: fix an arbitrary satisfying truth assignment and consider the Hamming distance of our current assignment from it. The Hamming distance of two truth assignments (or in general, of two binary vectors) is the number of coordinates in which they differ. Since we flip one bit in every step, this Hamming distance changes by $\pm 1$ in every round. It also easy to see that in every step the distance is at least as likely to be decreased as to be increased (since we pick an unsatisfied clause, which means at least one of the two literals in the clause differs in value from the satisfying assignment).
Thus this is an unfair “gambler’s ruin” problem where the gambler’s fortune is the Hamming distance from the solution, and it decreases with probability at least $\frac{1}{2}$. Such a stochastic process is called a supermartingale — and this is arguably a better model for real-life casinos. (If we flip the inequality, the stochastic process we get is called a submartingale.) Also, in this case the gambler’s fortune (the Hamming distance) cannot increase beyond $n$. We can also think of this process as a random walk on the set of integers: we start at some number and in each round we make one step to the left or to the right with some probability. If we use random walk terminology, 0 is called an absorbing barrier since we stop the process when we reach 0. The number $n$, on the other hand, is called a reflecting barrier: we cannot reach $n+1$, and whenever we get close we always bounce back.
There is an equivalent version of the optimal stopping theorem for supermartingales and submartingales, where the conditions are the same but the consequence holds with an inequality instead of equality. It follows from the optional stopping theorem that the gambler will be ruined (i.e. a satisfying truth assignment will be found) in $O(n^2)$ steps with high probability.
[1] For a reference on stochastic processes and martingales, see the text of Durrett .
# Optimism in the Face of Uncertainty: the UCB1 Algorithm
The software world is always atwitter with predictions on the next big piece of technology. And a lot of chatter focuses on what venture capitalists express interest in. As an investor, how do you pick a good company to invest in? Do you notice quirky names like “Kaggle” and “Meebo,” require deep technical abilities, or value a charismatic sales pitch?
This author personally believes we’re not thinking as big as we should be when it comes to innovation in software engineering and computer science, and that as a society we should value big pushes forward much more than we do. But making safe investments is almost always at odds with innovation. And so every venture capitalist faces the following question. When do you focus investment in those companies that have proven to succeed, and when do you explore new options for growth? A successful venture capitalist must strike a fine balance between this kind of exploration and exploitation. Explore too much and you won’t make enough profit to sustain yourself. Narrow your view too much and you will miss out on opportunities whose return surpasses any of your current prospects.
In life and in business there is no correct answer on what to do, partly because we just don’t have a good understanding of how the world works (or markets, or people, or the weather). In mathematics, however, we can meticulously craft settings that have solid answers. In this post we’ll describe one such scenario, the so-called multi-armed bandit problem, and a simple algorithm called UCB1 which performs close to optimally. Then, in a future post, we’ll analyze the algorithm on some real world data.
As usual, all of the code used in the making of this post are available for download on this blog’s Github page.
## Multi-Armed Bandits
The multi-armed bandit scenario is simple to describe, and it boils the exploration-exploitation tradeoff down to its purest form.
Suppose you have a set of $K$ actions labeled by the integers $\left \{ 1, 2, \dots, K \right \}$. We call these actions in the abstract, but in our minds they’re slot machines. We can then play a game where, in each round, we choose an action (a slot machine to play), and we observe the resulting payout. Over many rounds, we might explore the machines by trying some at random. Assuming the machines are not identical, we naturally play machines that seem to pay off well more frequently to try to maximize our total winnings.
This is the most general description of the game we could possibly give, and every bandit learning problem has these two components: actions and rewards. But in order to get to a concrete problem that we can reason about, we need to specify more details. Bandit learning is a large tree of variations and this is the point at which the field ramifies. We presently care about two of the main branches.
How are the rewards produced? There are many ways that the rewards could work. One nice option is to have the rewards for action $i$ be drawn from a fixed distribution $D_i$ (a different reward distribution for each action), and have the draws be independent across rounds and across actions. This is called the stochastic setting and it’s what we’ll use in this post. Just to pique the reader’s interest, here’s the alternative: instead of having the rewards be chosen randomly, have them be adversarial. That is, imagine a casino owner knows your algorithm and your internal beliefs about which machines are best at any given time. He then fixes the payoffs of the slot machines in advance of each round to screw you up! This sounds dismal, because the casino owner could just make all the machines pay nothing every round. But actually we can design good algorithms for this case, but “good” will mean something different than absolute winnings. And so we must ask:
How do we measure success? In both the stochastic and the adversarial setting, we’re going to have a hard time coming up with any theorems about the performance of an algorithm if we care about how much absolute reward is produced. There’s nothing to stop the distributions from having terrible expected payouts, and nothing to stop the casino owner from intentionally giving us no payout. Indeed, the problem lies in our measurement of success. A better measurement, which we can apply to both the stochastic and adversarial settings, is the notion of regret. We’ll give the definition for the stochastic case, and investigate the adversarial case in a future post.
Definition: Given a player algorithm $A$ and a set of actions $\left \{1, 2, \dots, K \right \}$, the cumulative regret of $A$ in rounds $1, \dots, T$ is the difference between the expected reward of the best action (the action with the highest expected payout) and the expected reward of $A$ for the first $T$ rounds.
We’ll add some more notation shortly to rephrase this definition in symbols, but the idea is clear: we’re competing against the best action. Had we known it ahead of time, we would have just played it every single round. Our notion of success is not in how well we do absolutely, but in how well we do relative to what is feasible.
## Notation
Let’s go ahead and draw up some notation. As before the actions are labeled by integers $\left \{ 1, \dots, K \right \}$. The reward of action $i$ is a $[0,1]$-valued random variable $X_i$ distributed according to an unknown distribution and possessing an unknown expected value $\mu_i$. The game progresses in rounds $t = 1, 2, \dots$ so that in each round we have different random variables $X_{i,t}$ for the reward of action $i$ in round $t$ (in particular, $X_{i,t}$ and $X_{i,s}$ are identically distributed). The $X_{i,t}$ are independent as both $t$ and $i$ vary, although when $i$ varies the distribution changes.
So if we were to play action 2 over and over for $T$ rounds, then the total payoff would be the random variable $G_2(T) = \sum_{t=1}^T X_{2,t}$. But by independence across rounds and the linearity of expectation, the expected payoff is just $\mu_2 T$. So we can describe the best action as the action with the highest expected payoff. Define
$\displaystyle \mu^* = \max_{1 \leq i \leq K} \mu_i$
We call the action which achieves the maximum $i^*$.
A policy is a randomized algorithm $A$ which picks an action in each round based on the history of chosen actions and observed rewards so far. Define $I_t$ to be the action played by $A$ in round $t$ and $P_i(n)$ to be the number of times we’ve played action $i$ in rounds $1 \leq t \leq n$. These are both random variables. Then the cumulative payoff for the algorithm $A$ over the first $T$ rounds, denoted $G_A(T)$, is just
$\displaystyle G_A(T) = \sum_{t=1}^T X_{I_t, t}$
and its expected value is simply
$\displaystyle \mathbb{E}(G_A(T)) = \mu_1 \mathbb{E}(P_1(T)) + \dots + \mu_K \mathbb{E}(P_K(T))$.
Here the expectation is taken over all random choices made by the policy and over the distributions of rewards, and indeed both of these can affect how many times a machine is played.
Now the cumulative regret of a policy $A$ after the first $T$ steps, denoted $R_A(T)$ can be written as
$\displaystyle R_A(T) = G_{i^*}(T) - G_A(T)$
And the goal of the policy designer for this bandit problem is to minimize the expected cumulative regret, which by linearity of expectation is
$\mathbb{E}(R_A(T)) = \mu^*T - \mathbb{E}(G_A(T))$.
Before we continue, we should note that there are theorems concerning lower bounds for expected cumulative regret. Specifically, for this problem it is known that no algorithm can guarantee an expected cumulative regret better than $\Omega(\sqrt{KT})$. It is also known that there are algorithms that guarantee no worse than $O(\sqrt{KT})$ expected regret. The algorithm we’ll see in the next section, however, only guarantees $O(\sqrt{KT \log T})$. We present it on this blog because of its simplicity and ubiquity in the field.
## The UCB1 Algorithm
The policy we examine is called UCB1, and it can be summed up by the principle of optimism in the face of uncertainty. That is, despite our lack of knowledge in what actions are best we will construct an optimistic guess as to how good the expected payoff of each action is, and pick the action with the highest guess. If our guess is wrong, then our optimistic guess will quickly decrease and we’ll be compelled to switch to a different action. But if we pick well, we’ll be able to exploit that action and incur little regret. In this way we balance exploration and exploitation.
The formalism is a bit more detailed than this, because we’ll need to ensure that we don’t rule out good actions that fare poorly early on. Our “optimism” comes in the form of an upper confidence bound (hence the acronym UCB). Specifically, we want to know with high probability that the true expected payoff of an action $\mu_i$ is less than our prescribed upper bound. One general (distribution independent) way to do that is to use the Chernoff-Hoeffding inequality.
As a reminder, suppose $Y_1, \dots, Y_n$ are independent random variables whose values lie in $[0,1]$ and whose expected values are $\mu_i$. Call $Y = \frac{1}{n}\sum_{i}Y_i$ and $\mu = \mathbb{E}(Y) = \frac{1}{n} \sum_{i} \mu_i$. Then the Chernoff-Hoeffding inequality gives an exponential upper bound on the probability that the value of $Y$ deviates from its mean. Specifically,
$\displaystyle \textup{P}(Y + a < \mu) \leq e^{-2na^2}$
For us, the $Y_i$ will be the payoff variables for a single action $j$ in the rounds for which we choose action $j$. Then the variable $Y$ is just the empirical average payoff for action $j$ over all the times we’ve tried it. Moreover, $a$ is our one-sided upper bound (and as a lower bound, sometimes). We can then solve this equation for $a$ to find an upper bound big enough to be confident that we’re within $a$ of the true mean.
Indeed, if we call $n_j$ the number of times we played action $j$ thus far, then $n = n_j$ in the equation above, and using $a = a(j,T) = \sqrt{2 \log(T) / n_j}$ we get that $\textup{P}(Y > \mu + a) \leq T^{-4}$, which converges to zero very quickly as the number of rounds played grows. We’ll see this pop up again in the algorithm’s analysis below. But before that note two things. First, assuming we don’t play an action $j$, its upper bound $a$ grows in the number of rounds. This means that we never permanently rule out an action no matter how poorly it performs. If we get extremely unlucky with the optimal action, we will eventually be convinced to try it again. Second, the probability that our upper bound is wrong decreases in the number of rounds independently of how many times we’ve played the action. That is because our upper bound $a(j, T)$ is getting bigger for actions we haven’t played; any round in which we play an action $j$, it must be that $a(j, T+1) = a(j,T)$, although the empirical mean will likely change.
With these two facts in mind, we can formally state the algorithm and intuitively understand why it should work.
UCB1:
Play each of the $K$ actions once, giving initial values for empirical mean payoffs $\overline{x}_i$ of each action $i$.
For each round $t = K, K+1, \dots$:
Let $n_j$ represent the number of times action $j$ was played so far.
Play the action $j$ maximizing $\overline{x}_j + \sqrt{2 \log t / n_j}$.
Observe the reward $X_{j,t}$ and update the empirical mean for the chosen action.
And that’s it. Note that we’re being super stateful here: the empirical means $\overline{x}_j$ change over time, and we’ll leave this update implicit throughout the rest of our discussion (sorry, functional programmers, but the notation is horrendous otherwise).
Before we implement and test this algorithm, let’s go ahead and prove that it achieves nearly optimal regret. The reader uninterested in mathematical details should skip the proof, but the discussion of the theorem itself is important. If one wants to use this algorithm in real life, one needs to understand the guarantees it provides in order to adequately quantify the risk involved in using it.
Theorem: Suppose that UCB1 is run on the bandit game with $K$ actions, each of whose reward distribution $X_{i,t}$ has values in [0,1]. Then its expected cumulative regret after $T$ rounds is at most $O(\sqrt{KT \log T})$.
Actually, we’ll prove a more specific theorem. Let $\Delta_i$ be the difference $\mu^* - \mu_i$, where $\mu^*$ is the expected payoff of the best action, and let $\Delta$ be the minimal nonzero $\Delta_i$. That is, $\Delta_i$ represents how suboptimal an action is and $\Delta$ is the suboptimality of the second best action. These constants are called problem-dependent constants. The theorem we’ll actually prove is:
Theorem: Suppose UCB1 is run as above. Then its expected cumulative regret $\mathbb{E}(R_{\textup{UCB1}}(T))$ is at most
$\displaystyle 8 \sum_{i : \mu_i < \mu^*} \frac{\log T}{\Delta_i} + \left ( 1 + \frac{\pi^2}{3} \right ) \left ( \sum_{j=1}^K \Delta_j \right )$
Okay, this looks like one nasty puppy, but it’s actually not that bad. The first term of the sum signifies that we expect to play any suboptimal machine about a logarithmic number of times, roughly scaled by how hard it is to distinguish from the optimal machine. That is, if $\Delta_i$ is small we will require more tries to know that action $i$ is suboptimal, and hence we will incur more regret. The second term represents a small constant number (the $1 + \pi^2 / 3$ part) that caps the number of times we’ll play suboptimal machines in excess of the first term due to unlikely events occurring. So the first term is like our expected losses, and the second is our risk.
But note that this is a worst-case bound on the regret. We’re not saying we will achieve this much regret, or anywhere near it, but that UCB1 simply cannot do worse than this. Our hope is that in practice UCB1 performs much better.
Before we prove the theorem, let’s see how to derive the $O(\sqrt{KT \log T})$ bound mentioned above. This will require familiarity with multivariable calculus, but such things must be endured like ripping off a band-aid. First consider the regret as a function $R(\Delta_1, \dots, \Delta_K)$ (excluding of course $\Delta^*$), and let’s look at the worst case bound by maximizing it. In particular, we’re just finding the problem with the parameters which screw our bound as badly as possible. The gradient of the regret function is given by
$\displaystyle \frac{\partial R}{\partial \Delta_i} = - \frac{8 \log T}{\Delta_i^2} + 1 + \frac{\pi^2}{3}$
and it’s zero if and only if for each $i$, $\Delta_i = \sqrt{\frac{8 \log T}{1 + \pi^2/3}} = O(\sqrt{\log T})$. However this is a minimum of the regret bound (the Hessian is diagonal and all its eigenvalues are positive). Plugging in the $\Delta_i = O(\sqrt{\log T})$ (which are all the same) gives a total bound of $O(K \sqrt{\log T})$. If we look at the only possible endpoint (the $\Delta_i = 1$), then we get a local maximum of $O(K \sqrt{\log T})$. But this isn’t the $O(\sqrt{KT \log T})$ we promised, what gives? Well, this upper bound grows arbitrarily large as the $\Delta_i$ go to zero. But at the same time, if all the $\Delta_i$ are small, then we shouldn’t be incurring much regret because we’ll be picking actions that are close to optimal!
Indeed, if we assume for simplicity that all the $\Delta_i = \Delta$ are the same, then another trivial regret bound is $\Delta T$ (why?). The true regret is hence the minimum of this regret bound and the UCB1 regret bound: as the UCB1 bound degrades we will eventually switch to the simpler bound. That will be a non-differentiable switch (and hence a critical point) and it occurs at $\Delta = O(\sqrt{(K \log T) / T})$. Hence the regret bound at the switch is $\Delta T = O(\sqrt{KT \log T})$, as desired.
## Proving the Worst-Case Regret Bound
Proof. The proof works by finding a bound on $P_i(T)$, the expected number of times UCB chooses an action up to round $T$. Using the $\Delta$ notation, the regret is then just $\sum_i \Delta_i \mathbb{E}(P_i(T))$, and bounding the $P_i$‘s will bound the regret.
Recall the notation for our upper bound $a(j, T) = \sqrt{2 \log T / P_j(T)}$ and let’s loosen it a bit to $a(y, T) = \sqrt{2 \log T / y}$ so that we’re allowed to “pretend” a action has been played $y$ times. Recall further that the random variable $I_t$ has as its value the index of the machine chosen. We denote by $\chi(E)$ the indicator random variable for the event $E$. And remember that we use an asterisk to denote a quantity associated with the optimal action (e.g., $\overline{x}^*$ is the empirical mean of the optimal action).
Indeed for any action $i$, the only way we know how to write down $P_i(T)$ is as
$\displaystyle P_i(T) = 1 + \sum_{t=K}^T \chi(I_t = i)$
The 1 is from the initialization where we play each action once, and the sum is the trivial thing where we just count the number of rounds in which we pick action $i$. Now we’re just going to pull some number $m-1$ of plays out of that summation, keep it variable, and try to optimize over it. Since we might play the action fewer than $m$ times overall, this requires an inequality.
$P_i(T) \leq m + \sum_{t=K}^T \chi(I_t = i \textup{ and } P_i(t-1) \geq m)$
These indicator functions should be read as sentences: we’re just saying that we’re picking action $i$ in round $t$ and we’ve already played $i$ at least $m$ times. Now we’re going to focus on the inside of the summation, and come up with an event that happens at least as frequently as this one to get an upper bound. Specifically, saying that we’ve picked action $i$ in round $t$ means that the upper bound for action $i$ exceeds the upper bound for every other action. In particular, this means its upper bound exceeds the upper bound of the best action (and $i$ might coincide with the best action, but that’s fine). In notation this event is
$\displaystyle \overline{x}_i + a(P_i(t), t-1) \geq \overline{x}^* + a(P^*(T), t-1)$
Denote the upper bound $\overline{x}_i + a(i,t)$ for action $i$ in round $t$ by $U_i(t)$. Since this event must occur every time we pick action $i$ (though not necessarily vice versa), we have
$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \chi(U_i(t-1) \geq U^*(t-1) \textup{ and } P_i(t-1) \geq m)$
We’ll do this process again but with a slightly more complicated event. If the upper bound of action $i$ exceeds that of the optimal machine, it is also the case that the maximum upper bound for action $i$ we’ve seen after the first $m$ trials exceeds the minimum upper bound we’ve seen on the optimal machine (ever). But on round $t$ we don’t know how many times we’ve played the optimal machine, nor do we even know how many times we’ve played machine $i$ (except that it’s more than $m$). So we try all possibilities and look at minima and maxima. This is a pretty crude approximation, but it will allow us to write things in a nicer form.
Denote by $\overline{x}_{i,s}$ the random variable for the empirical mean after playing action $i$ a total of $s$ times, and $\overline{x}^*_s$ the corresponding quantity for the optimal machine. Realizing everything in notation, the above argument proves that
$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \chi \left ( \max_{m \leq s < t} \overline{x}_{i,s} + a(s, t-1) \geq \min_{0 < s' < t} \overline{x}^*_{s'} + a(s', t-1) \right )$
Indeed, at each $t$ for which the max is greater than the min, there will be at least one pair $s,s'$ for which the values of the quantities inside the max/min will satisfy the inequality. And so, even worse, we can just count the number of pairs $s, s'$ for which it happens. That is, we can expand the event above into the double sum which is at least as large:
$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \sum_{s = m}^{t-1} \sum_{s' = 1}^{t-1} \chi \left ( \overline{x}_{i,s} + a(s, t-1) \geq \overline{x}^*_{s'} + a(s', t-1) \right )$
We can make one other odd inequality by increasing the sum to go from $t=1$ to $\infty$. This will become clear later, but it means we can replace $t-1$ with $t$ and thus have
$\displaystyle P_i(T) \leq m + \sum_{t=1}^\infty \sum_{s = m}^{t-1} \sum_{s' = 1}^{t-1} \chi \left ( \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) \right )$
Now that we’ve slogged through this mess of inequalities, we can actually get to the heart of the argument. Suppose that this event actually happens, that $\overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t)$. Then what can we say? Well, consider the following three events:
(1) $\displaystyle \overline{x}^*_{s'} \leq \mu^* - a(s', t)$
(2) $\displaystyle \overline{x}_{i,s} \geq \mu_i + a(s, t)$
(3) $\displaystyle \mu^* < \mu_i + 2a(s, t)$
In words, (1) is the event that the empirical mean of the optimal action is less than the lower confidence bound. By our Chernoff bound argument earlier, this happens with probability $t^{-4}$. Likewise, (2) is the event that the empirical mean payoff of action $i$ is larger than the upper confidence bound, which also occurs with probability $t^{-4}$. We will see momentarily that (3) is impossible for a well-chosen $m$ (which is why we left it variable), but in any case the claim is that one of these three events must occur. For if they are all false, we have
$\displaystyle \begin{matrix} \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) & > & \mu^* - a(s',t) + a(s',t) = \mu^* \\ \textup{assumed} & (1) \textup{ is false} & \\ \end{matrix}$
and
$\begin{matrix} \mu_i + 2a(s,t) & > & \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) \\ & (2) \textup{ is false} & \textup{assumed} \\ \end{matrix}$
But putting these two inequalities together gives us precisely that (3) is true:
$\mu^* < \mu_i + 2a(s,t)$
This proves the claim.
By the union bound, the probability that at least one of these events happens is $2t^{-4}$ plus whatever the probability of (3) being true is. But as we said, we’ll pick $m$ to make (3) always false. Indeed $m$ depends on which action $i$ is being played, and if $s \geq m > 8 \log T / \Delta_i^2$ then $2a(s,t) \leq \Delta_i$, and by the definition of $\Delta_i$ we have
$\mu^* - \mu_i - 2a(s,t) \geq \mu^* - \mu_i - \Delta_i = 0$.
Now we can finally piece everything together. The expected value of an indicator random variable is just the probability that its event occurs, and so
$\displaystyle \begin{aligned} \mathbb{E}(P_i(T)) & \leq m + \sum_{t=1}^\infty \sum_{s=m}^t \sum_{s' = 1}^t \textup{P}(\overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t)) \\ & \leq \left \lceil \frac{8 \log T}{\Delta_i^2} \right \rceil + \sum_{t=1}^\infty \sum_{s=m}^t \sum_{s' = 1}^t 2t^{-4} \\ & \leq \frac{8 \log T}{\Delta_i^2} + 1 + \sum_{t=1}^\infty \sum_{s=1}^t \sum_{s' = 1}^t 2t^{-4} \\ & = \frac{8 \log T}{\Delta_i^2} + 1 + 2 \sum_{t=1}^\infty t^{-2} \\ & = \frac{8 \log T}{\Delta_i^2} + 1 + \frac{\pi^2}{3} \\ \end{aligned}$
The second line uses the Chernoff bound we argued above, the third and fourth lines are straightforward algebraic manipulations, and the last equality uses the classic solution to the Basel problem. Plugging this upper bound into the regret formula we gave in the first paragraph of the proof establishes the bound and proves the theorem.
$\square$
## Implementation and an Experiment
The algorithm is about as simple to write in code as it is in pseudocode. The confidence bound is trivial to implement (though note we index from zero):
```python
import math

def upperBound(step, numPlays):
    # this is a(numPlays, step) = sqrt(2 log(step) / numPlays); the +1 accounts for zero-indexed steps
    return math.sqrt(2 * math.log(step + 1) / numPlays)
```
And the full algorithm is quite short as well. We define a function ucb1, which accepts as input the number of actions and a function reward which accepts as input the index of the action and the time step, and draws from the appropriate reward distribution. Then implementing ucb1 is simply a matter of keeping track of empirical averages and an argmax. We implement the function as a Python generator, so one can observe the steps of the algorithm and keep track of the confidence bounds and the cumulative regret.
```python
def ucb1(numActions, reward):
    payoffSums = [0] * numActions
    numPlays = [1] * numActions
    ucbs = [0] * numActions

    # initialize empirical sums
    for t in range(numActions):
        payoffSums[t] = reward(t, t)
        yield t, payoffSums[t], ucbs

    t = numActions

    while True:
        ucbs = [payoffSums[i] / numPlays[i] + upperBound(t, numPlays[i]) for i in range(numActions)]
        action = max(range(numActions), key=lambda i: ucbs[i])
        theReward = reward(action, t)
        numPlays[action] += 1
        payoffSums[action] += theReward

        yield action, theReward, ucbs
        t = t + 1
```
The heart of the algorithm is the second part, where we compute the upper confidence bounds and pick the action maximizing its bound.
We tested this algorithm on synthetic data. There were ten actions and a million rounds, and the reward distributions for each action were uniform from $[0,1]$, biased by $1/k$ for some $5 \leq k \leq 15$. The regret and theoretical regret bound are given in the graph below.
The regret of ucb1 run on a simple example. The blue curve is the cumulative regret of the algorithm after a given number of steps. The green curve is the theoretical upper bound on the regret.
Note that both curves are logarithmic, and that the actual regret is quite a lot smaller than the theoretical regret. The code used to produce the example and image are available on this blog’s Github page.
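For concreteness, here’s a rough sketch of how one might drive the generator on synthetic data (the real script, with plotting, is the one on Github). The bias scheme below, rewards uniform on $[0, 1 - 1/k]$ for $k = 5, \dots, 14$, is just an illustrative guess and may not match the one used for the plot, and the sketch assumes the upperBound and ucb1 definitions above are in scope.

```python
import random

numActions = 10
biases = [1.0 / k for k in range(5, 5 + numActions)]   # k = 5, ..., 14
means = [(1.0 - b) / 2 for b in biases]
bestMean = max(means)

def reward(choice, t):
    # uniform reward on [0, 1 - 1/k] for the chosen action
    return random.random() * (1.0 - biases[choice])

cumulativeReward = 0.0
bestCumulativeReward = 0.0
numRounds = 100000  # the post's experiment used a million rounds

generator = ucb1(numActions, reward)
for t, (choice, observedReward, ucbs) in enumerate(generator, start=1):
    cumulativeReward += observedReward
    bestCumulativeReward += bestMean
    if t >= numRounds:
        break

regret = bestCumulativeReward - cumulativeReward
print("empirical regret after %d rounds: %.2f" % (numRounds, regret))
```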
## Next Time
One interesting assumption that UCB1 makes in order to do its magic is that the payoffs are stochastic and independent across rounds. Next time we’ll look at an algorithm that assumes the payoffs are instead adversarial, as we described earlier. Surprisingly, in the adversarial case we can do about as well as the stochastic case. Then, we’ll experiment with the two algorithms on a real-world application.
Until then!
# Probabilistic Bounds — A Primer
Probabilistic arguments are a key tool for the analysis of algorithms in machine learning theory and probability theory. They also assume a prominent role in the analysis of randomized and streaming algorithms, where one imposes a restriction on the amount of storage space an algorithm is allowed to use for its computations (usually sublinear in the size of the input).
While a whole host of probabilistic arguments are used, one theorem in particular (or family of theorems) is ubiquitous: the Chernoff bound. In its simplest form, the Chernoff bound gives an exponential bound on the deviation of sums of random variables from their expected value.
This is perhaps most important to algorithm analysis in the following mindset. Say we have a program whose output is a random variable $X$. Moreover suppose that the expected value of $X$ is the correct output of the algorithm. Then we can run the algorithm multiple times and take a median (or some sort of average) across all runs. The probability that the algorithm gives a wildly incorrect answer is the probability that more than half of the runs give values which are wildly far from their expected value. Chernoff’s bound ensures this will happen with small probability.
So this post is dedicated to presenting the main versions of the Chernoff bound that are used in learning theory and randomized algorithms. Unfortunately the proof of the Chernoff bound in its full glory is beyond the scope of this blog. However, we will give short proofs of weaker, simpler bounds as a straightforward application of this blog’s previous work laying down the theory.
If the reader has not yet intuited it, this post will rely heavily on the mathematical formalisms of probability theory. We will assume our reader is familiar with the material from our first probability theory primer, and it certainly wouldn’t hurt to have read our conditional probability theory primer, though we won’t use conditional probability directly. We will refrain from using measure-theoretic probability theory entirely (some day my colleagues in analysis will like me, but not today).
## Two Easy Bounds of Markov and Chebyshev
The first bound we’ll investigate is almost trivial in nature, but comes in handy. Suppose we have a random variable $X$ which is non-negative (as a function). Markov’s inequality is the statement that, for any constant $a > 0$,
$\displaystyle \textup{P}(X \geq a) \leq \frac{\textup{E}(X)}{a}$
In words, the probability that $X$ grows larger than some fixed constant is bounded by a quantity that is inversely proportional to the constant.
The proof is quite simple. Let $\chi_a$ be the indicator random variable for the event that $X \geq a$ ($\chi_a = 1$ when $X \geq a$ and zero otherwise). As with all indicator random variables, the expected value of $\chi_a$ is the probability that the event happens (if this is mysterious, use the definition of expected value). So $\textup{E}(\chi_a) = \textup{P}(X \geq a)$, and linearity of expectation allows us to include a factor of $a$:
$\textup{E}(a \chi_a) = a \textup{P}(X \geq a)$
The rest of the proof is simply the observation that $\textup{E}(a \chi_a) \leq \textup{E}(X)$. Indeed, as random variables we have the inequality $a \chi_a \leq X$. Whenever $X < a$, the former is equal to zero while the latter is nonnegative. And whenever $X \geq a$, the former is precisely $a$ while the latter is by assumption at least $a$. It follows that $\textup{E}(a \chi_a) \leq \textup{E}(X)$.
This last point is a simple property of expectation we omitted from our first primer. It usually goes by monotonicity of expectation, and we prove it here. First, if $X \geq 0$ then $\textup{E}(X) \geq 0$ (this is trivial). Second, if $0 \leq X \leq Y$, then define a new random variable $Z = Y-X$. Since $Z \geq 0$ and using linearity of expectation, it must be that $\textup{E}(Z) = \textup{E}(Y) - \textup{E}(X) \geq 0$. Hence $\textup{E}(X) \leq \textup{E}(Y)$. Note that we do require that $X$ has a finite expected value for this argument to work, but if this is not the case then Markov’s inequality is nonsensical anyway.
Markov’s inequality by itself is not particularly impressive or useful. For example, if $X$ is the number of heads in a hundred coin flips, Markov’s inequality ensures us that the probability of getting at least 99 heads is at most 50/99, which is about 1/2. Shocking. We know that the true probability is much closer to $2^{-100}$, so Markov’s inequality is a bust.
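To see just how far off it is, we can compare the bound with the exact binomial tail; a quick sketch:

```python
from math import comb

# Exact P(X >= 99) for X = number of heads in 100 fair coin flips,
# compared with the Markov bound E(X)/99 = 50/99.
n = 100
exact = sum(comb(n, k) for k in range(99, n + 1)) / 2**n
markov = 50 / 99

print("Markov bound:      ", markov)   # about 0.505
print("exact probability: ", exact)    # about 8e-29
```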
However, it does give us a more useful bound as a corollary. This bound is known as Chebyshev’s inequality, and its use is sometimes referred to as the second moment method because it gives a bound based on the variance of a random variable (instead of the expected value, the “first moment”).
The statement is as follows.
Chebyshev’s Inequality: Let $X$ be a random variable with finite expected value and positive variance. Then we can bound the probability that $X$ deviates from its expected value by a quantity that is proportional to the variance of $X$. In particular, for any $\lambda > 0$,
$\displaystyle \textup{P}(|X - \textup{E}(X)| \geq \lambda) \leq \frac{\textup{Var}(X)}{\lambda^2}$
And without any additional assumptions on $X$, this bound is sharp.
Proof. The proof is a simple application of Markov’s inequality. Let $Y = (X - \textup{E}(X))^2$, so that $\textup{E}(Y) = \textup{Var}(X)$. Then by Markov’s inequality
$\textup{P}(Y \geq \lambda^2) \leq \frac{\textup{E}(Y)}{\lambda^2}$
Since $Y$ is nonnegative, $|X - \textup{E}(X)| = \sqrt{Y}$, and $\textup{P}(Y \geq \lambda^2) = \textup{P}(|X - \textup{E}(X)| \geq \lambda)$. The theorem is proved. $\square$
Chebyshev’s inequality shows up in so many different places (and usually in rather dry, technical bits), that it’s difficult to give a good example application. Here is one that shows up somewhat often.
Say $X$ is a nonnegative integer-valued random variable, and we want to argue about when $X = 0$ versus when $X > 0$, given that we know $\textup{E}(X)$. No matter how large $\textup{E}(X)$ is, it can still be possible that $\textup{P}(X = 0)$ is arbitrarily close to 1. As a colorful example, let $X$ be the number of alien lifeforms discovered in the next ten years. One might argue that $\textup{E}(X)$ can be arbitrarily large: if some unexpected scientific and technological breakthroughs occur tomorrow, we could discover an unbounded number of lifeforms. On the other hand, we are very likely not to discover any, and probability theory allows for such a random variable to exist.
If we know everything about $\textup{Var}(X)$, however, we can get more informed bounds.
Theorem: If $\textup{E}(X) \neq 0$, then $\displaystyle \textup{P}(X = 0) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$.
Proof. Simply choose $\lambda = \textup{E}(X)$ and apply Chebyshev’s inequality.
$\displaystyle \textup{P}(X = 0) \leq \textup{P}(|X - \textup{E}(X)| \geq \textup{E}(X)) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$
The first inequality follows from the fact that the only time $X$ can ever be zero is when $|X - \textup{E}(X)| = \textup{E}(X)$, and $X=0$ only accounts for one such possibility. $\square$
This theorem says more. If we know that $\textup{Var}(X)$ is significantly smaller than $\textup{E}(X)^2$, then $X > 0$ is more certain to occur. More precisely, and more computationally minded, suppose we have a sequence of random variables $X_n$ so that $\textup{E}(X_n) \to \infty$ as $n \to \infty$. Then the theorem says that if $\textup{Var}(X_n) = o(\textup{E}(X_n)^2)$, then $\textup{P}(X_n > 0) \to 1$. Remembering one of our very early primers on asymptotic notation, $f = o(g)$ means that $f$ grows asymptotically slower than $g$, and in terms of this fraction $\textup{Var}(X) / \textup{E}(X)^2$, this means that the denominator dominates the fraction so that the whole thing tends to zero.
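As a toy illustration of the second moment method, let $X_n$ be the number of heads in $n$ fair coin flips. Then $\textup{E}(X_n) = n/2$ and $\textup{Var}(X_n) = n/4$, so the theorem gives $\textup{P}(X_n = 0) \leq \textup{Var}(X_n)/\textup{E}(X_n)^2 = 1/n$, and hence $\textup{P}(X_n > 0) \to 1$. Of course here we know the exact answer $\textup{P}(X_n = 0) = 2^{-n}$, so the bound of $1/n$ is far from tight; the point is only to show how the pieces fit together, and in genuinely interesting applications the variance may be all we can estimate.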
## The Chernoff Bound
The Chernoff bound takes advantage of an additional hypothesis: our random variable is a sum of independent coin flips. We can use this to get exponential bounds on the deviation of the sum. More rigorously,
Theorem: Let $X_1 , \dots, X_n$ be independent random $\left \{ 0,1 \right \}$-valued variables, and let $X = \sum X_i$. Suppose that $\mu = \textup{E}(X)$. Then for any $\lambda > 0$, the probability that $X$ exceeds its mean by a multiplicative factor of $(1+\lambda)$ is bounded from above:
$\displaystyle \textup{P}(X > (1+\lambda)\mu) \leq \frac{e^{\lambda \mu}}{(1+\lambda)^{(1+\lambda)\mu}}$
The proof is beyond the scope of this post, but we point the interested reader to these lecture notes.
We can apply the Chernoff bound in an easy example. Say all $X_i$ are fair coin flips, and we’re interested in the probability of getting more than 3/4 of the coins heads. Here $\mu = n/2$ and $\lambda = 1/2$, so the probability is bounded from above by
$\displaystyle \left ( \frac{e}{(3/2)^3} \right )^{n/4} \approx (0.95)^n$
So as the number of coin flips grows, the probability of seeing such an occurrence decays exponentially to zero. This is important because if we want to test whether, say, the coins are biased toward flipping heads, we can simply run an experiment with $n$ sufficiently large. If we observe that more than 3/4 of the flips give heads, then we proclaim the coins are biased and we can be assured we are correct with high probability. Of course, after seeing 3/4 or more heads we’d be really confident that the coin is biased. A more realistic approach is to define some $\varepsilon$ that is small enough so as to say, “if some event occurs whose probability is smaller than $\varepsilon$, then I call shenanigans.” Then decide how many coins and what bound one would need to make the bad event have probability approximately $\varepsilon$. Finding this balance is one of the more difficult aspects of probabilistic algorithms, and as we’ll see later all of these quantities are left as variables and the correct values are discovered in the course of the proof.
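A quick simulation makes the comparison concrete; this is just an illustrative sketch, and already for $n = 40$ the bound sits far above the empirical frequency, which is the price of such a general inequality:

```python
import math
import random

# Compare the Chernoff bound above with an empirical estimate of
# P(more than 3/4 of n fair coin flips come up heads).
n = 40
trials = 200000

count = sum(
    1 for _ in range(trials)
    if sum(random.random() < 0.5 for _ in range(n)) > 0.75 * n
)

bound = (math.e / (3 / 2) ** 3) ** (n / 4)
print("Chernoff bound:      ", bound)           # about 0.11 for n = 40
print("empirical frequency: ", count / trials)  # much smaller, on the order of 1e-4
```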
## Chernoff-Hoeffding Inequality
The Hoeffding inequality (named after the Finnish-born statistician Wassily Hoeffding) is a variant of the Chernoff bound, but often the bounds are collectively known as Chernoff-Hoeffding inequalities. The form that Hoeffding is known for can be thought of as a simplification and a slight generalization of Chernoff’s bound above.
Theorem: Let $X_1, \dots, X_n$ be independent random variables whose values are within some range $[a,b]$. Call $\mu_i = \textup{E}(X_i)$, $X = \sum_i X_i$, and $\mu = \textup{E}(X) = \sum_i \mu_i$. Then for all $t > 0$,
$\displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2t^2 / n(b-a)^2}$
For example, if we are interested in the sum of $n$ rolls of a fair six-sided die, then the probability that we deviate from $(7/2)n$ by more than $5 \sqrt{n \log n}$ is bounded by $2e^{(-2 \log n)} = 2/n^2$. Supposing we want to know how many rolls we need to guarantee with probability 0.01 that we don’t deviate too much, we just do the algebra:
$2n^{-2} < 0.01$
$n^2 > 200$
$n > \sqrt{200} \approx 14$
So with 15 rolls we can be confident that the sum of the rolls will lie between 20 and 85. It’s not the best possible bound we could come up with, because we’re completely ignoring the known structure on dice rolls (that they follow a uniform distribution!). The benefit is that it’s a quick and easy bound that uses only the range of the variables and their expected values, not the actual distribution.
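The arithmetic is easy to double-check in code; here’s a small sketch that recomputes the bound and searches for the smallest sufficient $n$:

```python
import math

# n rolls of a fair die, each in [1, 6], deviation t = 5 * sqrt(n * log n).
# The Hoeffding bound is 2 * exp(-2 t^2 / (n * (6 - 1)^2)), which simplifies to 2 / n^2.
def hoeffding_bound(n):
    t = 5 * math.sqrt(n * math.log(n))
    return 2 * math.exp(-2 * t**2 / (n * (6 - 1) ** 2))

for n in [10, 15, 100]:
    print(n, hoeffding_bound(n))   # equals 2 / n^2

# smallest n with bound below 0.01
n = 2
while hoeffding_bound(n) >= 0.01:
    n += 1
print("need n =", n)   # 15, matching the text
```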
Another version of this theorem concerns the average of the $X_i$, and is only a minor modification of the above.
Theorem: If $X_1, \dots, X_n$ are as above, and $X = \frac{1}{n} \sum_i X_i$, with $\mu = \frac{1}{n}(\sum_i \mu_i)$, then for all $t > 0$, we get the following bound
$\displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2nt^2/(b-a)^2}$
The only difference here is the extra factor of $n$ in the exponent. So the probability of a deviation decays exponentially both in the square of the deviation ($t^2$) and in the number of trials.
This theorem comes up very often in learning theory, in particular to prove Boosting works. Mathematicians will joke about how all theorems in learning theory are just applications of Chernoff-Hoeffding-type bounds. We’ll of course be seeing it again as we investigate boosting and the PAC-learning model in future posts, so we’ll see the theorems applied to their fullest extent then.
Until next time!
# Probability Theory — A Primer
It is a wonder that we have yet to officially write about probability theory on this blog. Probability theory underlies a huge portion of artificial intelligence, machine learning, and statistics, and a number of our future posts will rely on the ideas and terminology we lay out in this post. Our first formal theory of machine learning will be deeply ingrained in probability theory, we will derive and analyze probabilistic learning algorithms, and our entire treatment of mathematical finance will be framed in terms of random variables.
And so it’s about time we got to the bottom of probability theory. In this post, we will begin with a naive version of probability theory. That is, everything will be finite and framed in terms of naive set theory without the aid of measure theory. This has the benefit of making the analysis and definitions simple. The downside is that we are restricted in what kinds of probability we are allowed to speak of. For instance, we aren’t allowed to work with probabilities defined on all real numbers. But for the majority of our purposes on this blog, this treatment will be enough. Indeed, most programming applications restrict infinite problems to finite subproblems or approximations (although in their analysis we often appeal to the infinite).
We should make a quick disclaimer before we get into the thick of things: this primer is not meant to connect probability theory to the real world. Indeed, to do so would be decidedly unmathematical. We are primarily concerned with the mathematical formalisms involved in the theory of probability, and we will leave the philosophical concerns and applications to future posts. The point of this primer is simply to lay down the terminology and basic results needed to discuss such topics to begin with.
So let us begin with probability spaces and random variables.
## Finite Probability Spaces
We begin by defining probability as a set with an associated function. The intuitive idea is that the set consists of the outcomes of some experiment, and the function gives the probability of each event happening. For example, a set $\left \{ 0,1 \right \}$ might represent heads and tails outcomes of a coin flip, while the function assigns a probability of one half (or some other numbers) to the outcomes. As usual, this is just intuition and not rigorous mathematics. And so the following definition will lay out the necessary condition for this probability to make sense.
Definition: A finite set $\Omega$ equipped with a function $f: \Omega \to [0,1]$ is a probability space if the function $f$ satisfies the property
$\displaystyle \sum_{\omega \in \Omega} f(\omega) = 1$
That is, the sum of all the values of $f$ must be 1.
Sometimes the set $\Omega$ is called the sample space, and the act of choosing an element of $\Omega$ according to the probabilities given by $f$ is called drawing an example. The function $f$ is usually called the probability mass function. Despite being part of our first definition, the probability mass function is relatively useless except to build what follows. Because we don’t really care about the probability of a single outcome as much as we do the probability of an event.
Definition: An event $E \subset \Omega$ is a subset of a sample space.
For instance, suppose our probability space is $\Omega = \left \{ 1, 2, 3, 4, 5, 6 \right \}$ and $f$ is defined by setting $f(x) = 1/6$ for all $x \in \Omega$ (here the “experiment” is rolling a single die). Then we are likely interested in more exquisite kinds of outcomes; instead of asking the probability that the outcome is 4, we might ask what is the probability that the outcome is even? This event would be the subset $\left \{ 2, 4, 6 \right \}$, and if any of these are the outcome of the experiment, the event is said to occur. In this case we would expect the probability of the die roll being even to be 1/2 (but we have not yet formalized why this is the case).
As a quick exercise, the reader should formulate a two-dice experiment in terms of sets. What would the probability space consist of as a set? What would the probability mass function look like? What are some interesting events one might consider (if playing a game of craps)?
Of course, we want to extend the probability mass function $f$ (which is only defined on single outcomes) to all possible events of our probability space. That is, we want to define a probability measure $\textup{P}: 2^\Omega \to \mathbb{R}$, where $2^\Omega$ denotes the set of all subsets of $\Omega$. The example of a die roll guides our intuition: the probability of any event should be the sum of the probabilities of the outcomes contained in it. i.e. we define
$\displaystyle \textup{P}(E) = \sum_{e \in E} f(e)$
where by convention the empty sum has value zero. Note that the function $\textup{P}$ is often denoted $\textup{Pr}$.
So for example, the coin flip experiment can’t have zero probability for both of the two outcomes 0 and 1; the sum of the probabilities of all outcomes must sum to 1. More coherently: $\textup{P}(\Omega) = \sum_{\omega \in \Omega} f(\omega) = 1$ by the defining property of a probability space. And so if there are only two outcomes of the experiment, then they must have probabilities $p$ and $1-p$ for some $p$. Such a probability space is often called a Bernoulli trial.
Now that the function $\textup{P}$ is defined on all events, we can simplify our notation considerably. Because the probability mass function $f$ uniquely determines $\textup{P}$ and because $\textup{P}$ contains all information about $f$ in it ($\textup{P}(\left \{ \omega \right \}) = f(\omega)$), we may speak of $\textup{P}$ as the probability measure of $\Omega$, and leave $f$ out of the picture. Of course, when we define a probability measure, we will allow ourselves to just define the probability mass function and the definition of $\textup{P}$ is understood as above.
There are some other quick properties we can state or prove about probability measures: $\textup{P}(\left \{ \right \}) = 0$ by convention, if $E, F$ are disjoint then $\textup{P}(E \cup F) = \textup{P}(E) + \textup{P}(F)$, and if $E \subset F \subset \Omega$ then $\textup{P}(E) \leq \textup{P}(F)$. The proofs of these facts are trivial, but a good exercise for the uncomfortable reader to work out.
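To make the definitions so far concrete, here’s a minimal sketch of the die-roll space in Python: the sample space, the mass function, the induced measure, and the properties just listed.

```python
from fractions import Fraction

# The single-die probability space: outcomes 1..6, each with mass 1/6,
# and the induced measure P(E) = sum of f over the event E.
omega = [1, 2, 3, 4, 5, 6]
f = {w: Fraction(1, 6) for w in omega}

def P(event):
    return sum(f[w] for w in event)

evens = {w for w in omega if w % 2 == 0}
odds = {w for w in omega if w % 2 == 1}

print(P(evens))        # 1/2
print(P(set(omega)))   # 1, the defining property of a probability space
print(P(set()))        # 0, the empty event
print(P(evens | odds) == P(evens) + P(odds))   # True: additivity on disjoint events
```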
## Random Variables
The next definition is crucial to the entire theory. In general, we want to investigate many different kinds of random quantities on the same probability space. For instance, suppose we have the experiment of rolling two dice. The probability space would be
$\displaystyle \Omega = \left \{ (1,1), (1,2), (1,3), \dots, (6,4), (6,5), (6,6) \right \}$
Where the probability measure is defined uniformly by setting all single outcomes to have probability 1/36. Now this probability space is very general, but rarely are we interested only in its events. If this probability space were interpreted as part of a game of craps, we would likely be more interested in the sum of the two dice than the actual numbers on the dice. In fact, we are really more interested in the payoff determined by our roll.
Sums of numbers on dice are certainly predictable, but a payoff can conceivably be any function of the outcomes. In particular, it should be a function of $\Omega$ because all of the randomness inherent in the game comes from the generation of an output in $\Omega$ (otherwise we would define a different probability space to begin with).
And of course, we can compare these two different quantities (the amount of money and the sum of the two dice) within the framework of the same probability space. This “quantity” we speak of goes by the name of a random variable.
Definition: A random variable $X$ is a real-valued function on the sample space, $X: \Omega \to \mathbb{R}$.
So for example the random variable for the sum of the two dice would be $X(a,b) = a+b$. We will slowly phase out the function notation as we go, reverting to it when we need to avoid ambiguity.
We can further define the set of all random variables $\textup{RV}(\Omega)$. It is important to note that this forms a vector space. For those readers unfamiliar with linear algebra, the salient fact is that we can add two random variables together and multiply them by arbitrary constants, and the result is another random variable. That is, if $X, Y$ are two random variables, so is $aX + bY$ for real numbers $a, b$. This function operates linearly, in the sense that its value is $(aX + bY)(\omega) = aX(\omega) + bY(\omega)$. We will use this property quite heavily, because in most applications the analysis of a random variable begins by decomposing it into a combination of simpler random variables.
Of course, there are plenty of other things one can do to functions. For example, $XY$ is the product of two random variables (defined by $XY(\omega) = X(\omega)Y(\omega)$) and one can imagine such awkward constructions as $X/Y$ or $X^Y$. We will see in a bit why these last two aren’t often used (it is difficult to say anything about them).
The simplest possible kind of random variable is one which identifies events as either occurring or not. That is, for an event $E$, we can define a random variable which is 0 or 1 depending on whether the input is a member of $E$. That is,
Definition: An indicator random variable $1_E$ is defined by setting $1_E(\omega) = 1$ when $\omega \in E$ and 0 otherwise. A common abuse of notation for singleton sets is to denote $1_{\left \{ \omega \right \} }$ by $1_\omega$.
This is what we intuitively do when we compute probabilities: to get a ten when rolling two dice, one can either get a six, a five, or a four on the first die, and then the second die must match it to add to ten.
The most important thing about breaking up random variables into simpler random variables will make itself clear when we see that expected value is a linear functional. That is, probabilistic computations of linear combinations of random variables can be computed by finding the values of the simpler pieces. We can’t yet make that rigorous though, because we don’t yet know what it means to speak of the probability of a random variable’s outcome.
Definition: Denote by $\left \{ X = k \right \}$ the set of outcomes $\omega \in \Omega$ for which $X(\omega) = k$. With the function notation, $\left \{ X = k \right \} = X^{-1}(k)$.
This definition extends to constructing ranges of outcomes of a random variable. i.e., we can define $\left \{ X < 5 \right \}$ or $\left \{ X \textup{ is even} \right \}$ just as we would naively construct sets. It works in general for any subset of $S \subset \mathbb{R}$. The notation is $\left \{ X \in S \right \} = X^{-1}(S)$, and we will also call these sets events. The notation becomes useful and elegant when we combine it with the probability measure $\textup{P}$. That is, we want to write things like $\textup{P}(X \textup{ is even})$ and read it in our head “the probability that $X$ is even”.
This is made rigorous by simply setting
$\displaystyle \textup{P}(X \in S) = \sum_{\omega \in X^{-1}(S)} \textup{P}(\omega)$
In words, it is just the sum of the probabilities that individual outcomes will have a value under $X$ that lands in $S$. We will also use for $\textup{P}(\left \{ X \in S \right \} \cap \left \{ Y \in T \right \})$ the shorthand notation $\textup{P}(X \in S, Y \in T)$ or $\textup{P}(X \in S \textup{ and } Y \in T)$.
Often times $\left \{ X \in S \right \}$ will be smaller than $\Omega$ itself, even if $S$ is large. For instance, let the probability space be the set of possible lottery numbers for one week’s draw of the lottery (with uniform probabilities), let $X$ be the profit function. Then $\textup{P}(X > 0)$ is very small indeed.
We should also note that because our probability spaces are finite, the image of the random variable $\textup{im}(X)$ is a finite subset of real numbers. In other words, the set of all events of the form $\left \{ X = x_i \right \}$ where $x_i \in \textup{im}(X)$ form a partition of $\Omega$. As such, we get the following immediate identity:
$\displaystyle 1 = \sum_{x_i \in \textup{im} (X)} P(X = x_i)$
The set of such events is called the probability distribution of the random variable $X$.
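As a quick sketch of how one might compute such a distribution by brute force, here is the distribution of the sum of two dice on the space described above:

```python
from collections import defaultdict
from fractions import Fraction

# The two-dice space with the uniform measure, the random variable X(a, b) = a + b,
# and its probability distribution P(X = k).
omega = [(a, b) for a in range(1, 7) for b in range(1, 7)]
f = {w: Fraction(1, 36) for w in omega}

def X(outcome):
    a, b = outcome
    return a + b

distribution = defaultdict(Fraction)
for w in omega:
    distribution[X(w)] += f[w]

print(distribution[7])               # 1/6, the most likely sum
print(sum(distribution.values()))    # 1, as the identity above requires
```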
The final definition we will give in this section is that of independence. There are two separate but nearly identical notions of independence here. The first is that of two events. We say that two events $E,F \subset \Omega$ are independent if the probability of both $E, F$ occurring is the product of the probabilities of each event occurring. That is, $\textup{P}(E \cap F) = \textup{P}(E)\textup{P}(F)$. There are multiple ways to realize this formally, but without the aid of conditional probability (more on that next time) this is the easiest way. One should note that this is distinct from $E,F$ being disjoint as sets, because there may be a zero-probability outcome in both sets.
The second notion of independence is that of random variables. The definition is the same idea, but implemented using events of random variables instead of regular events. In particular, $X,Y$ are independent random variables if
$\displaystyle \textup{P}(X = x, Y = y) = \textup{P}(X=x)\textup{P}(Y=y)$
for all $x,y \in \mathbb{R}$.
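For instance, on the two-dice space the values of the individual dice are independent random variables, and the defining identity can be checked exhaustively; a rough sketch:

```python
from fractions import Fraction

# Check P(X = x, Y = y) = P(X = x) P(Y = y) for the values of the two dice.
omega = [(a, b) for a in range(1, 7) for b in range(1, 7)]
f = {w: Fraction(1, 36) for w in omega}

def P(event):
    return sum(f[w] for w in event)

def X(w): return w[0]   # value of the first die
def Y(w): return w[1]   # value of the second die

independent = all(
    P({w for w in omega if X(w) == x and Y(w) == y})
    == P({w for w in omega if X(w) == x}) * P({w for w in omega if Y(w) == y})
    for x in range(1, 7) for y in range(1, 7)
)
print(independent)   # True
```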
## Expectation
We now turn to notions of expected value and variation, which form the cornerstone of the applications of probability theory.
Definition: Let $X$ be a random variable on a finite probability space $\Omega$. The expected value of $X$, denoted $\textup{E}(X)$, is the quantity
$\displaystyle \textup{E}(X) = \sum_{\omega \in \Omega} X(\omega) \textup{P}(\omega)$
Note that if we label the image of $X$ by $x_1, \dots, x_n$ then this is equivalent to
$\displaystyle \textup{E}(X) = \sum_{i=1}^n x_i \textup{P}(X = x_i)$
The most important fact about expectation is that it is a linear functional on random variables. That is,
Theorem: If $X,Y$ are random variables on a finite probability space and $a,b \in \mathbb{R}$, then
$\displaystyle \textup{E}(aX + bY) = a\textup{E}(X) + b\textup{E}(Y)$
Proof. The only real step in the proof is to note that for each possible pair of values $x, y$ in the images of $X,Y$ resp., the events $E_{x,y} = \left \{ X = x, Y=y \right \}$ form a partition of the sample space $\Omega$. That is, because $aX + bY$ has a constant value on $E_{x,y}$, the second definition of expected value gives
$\displaystyle \textup{E}(aX + bY) = \sum_{x \in \textup{im} (X)} \sum_{y \in \textup{im} (Y)} (ax + by) \textup{P}(X = x, Y = y)$
and a little bit of algebraic elbow grease reduces this expression to $a\textup{E}(X) + b\textup{E}(Y)$. We leave this as an exercise to the reader, with the additional note that the sum $\sum_{y \in \textup{im}(Y)} \textup{P}(X = x, Y = y)$ is identical to $\textup{P}(X = x)$. $\square$
If we additionally know that $X,Y$ are independent random variables, then the same technique used above allows one to say something about the expectation of the product $\textup{E}(XY)$ (again by definition, $XY(\omega) = X(\omega)Y(\omega)$). In this case $\textup{E}(XY) = \textup{E}(X)\textup{E}(Y)$. We leave the proof as an exercise to the reader.
Now intuitively the expected value of a random variable is the “center” of the values assumed by the random variable. It is important, however, to note that the expected value need not be a value assumed by the random variable itself; that is, it might not be true that $\textup{E}(X) \in \textup{im}(X)$. For instance, in an experiment where we pick a number uniformly at random between 1 and 4 (the random variable is the identity function), the expected value would be:
$\displaystyle 1 \cdot \frac{1}{4} + 2 \cdot \frac{1}{4} + 3 \cdot \frac{1}{4} + 4 \cdot \frac{1}{4} = \frac{5}{2}$
But the random variable never achieves this value. Nevertheless, it would not make intuitive sense to call either 2 or 3 the “center” of the random variable (for both 2 and 3, there are two outcomes on one side and one on the other).
Let’s see a nice application of the linearity of expectation to a purely mathematical problem. The power of this example lies in the method: after a shrewd decomposition of a random variable $X$ into simpler (usually indicator) random variables, the computation of $\textup{E}(X)$ becomes trivial.
A tournament $T$ is a directed graph in which every pair of distinct vertices has exactly one edge between them (going one direction or the other). We can ask whether such a graph has a Hamiltonian path, that is, a path through the graph which visits each vertex exactly once. The datum of such a path is a list of numbers $(v_1, \dots, v_n)$, where we visit vertex $v_i$ at stage $i$ of the traversal. The condition for this to be a valid Hamiltonian path is that $(v_i, v_{i+1})$ is an edge in $T$ for all $i$.
Now if we construct a tournament on $n$ vertices by choosing the direction of each edges independently with equal probability 1/2, then we have a very nice probability space and we can ask what is the expected number of Hamiltonian paths. That is, $X$ is the random variable giving the number of Hamiltonian paths in such a randomly generated tournament, and we are interested in $\textup{E}(X)$.
To compute this, simply note that we can break $X = \sum_p X_p$, where $p$ ranges over all possible lists of the vertices and $X_p$ is the indicator random variable for the event that $p$ forms a Hamiltonian path. Then $\textup{E}(X) = \sum_p \textup{E}(X_p)$, and it suffices to compute the number of possible lists and the expected value of any given $X_p$. It isn’t hard to see the number of lists is $n!$, as this is the number of orderings of $n$ items. Because each edge direction is chosen with probability 1/2 and they are all chosen independently of one another, the probability that any given list forms a Hamiltonian path depends on whether each of its edges was chosen with the correct orientation. That’s just
$\textup{P}(\textup{first edge and second edge and } \dots \textup{ and last edge})$
which by independence is
$\displaystyle \prod_{i = 1}^{n-1} \textup{P}(i^\textup{th} \textup{ edge is chosen correctly}) = \frac{1}{2^{n-1}}$
That is, the expected number of Hamiltonian paths is $n!2^{-(n-1)}$.
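For small $n$ this is easy to check by simulation. Here’s a rough Monte Carlo sketch; it brute-forces all $n!$ orderings of the vertices, so it’s only feasible for very small $n$:

```python
import random
from itertools import permutations
from math import factorial

# Monte Carlo check of E(X) = n! / 2^(n-1): generate random tournaments and
# count Hamiltonian paths by checking every ordering of the vertices.
def count_hamiltonian_paths(n):
    # orient each edge {i, j} (i < j) independently with probability 1/2
    edges = {(i, j): random.random() < 0.5 for i in range(n) for j in range(i + 1, n)}

    def has_edge(u, v):
        # True if the edge between u and v is oriented from u to v
        return edges[(u, v)] if u < v else not edges[(v, u)]

    return sum(
        all(has_edge(p[i], p[i + 1]) for i in range(n - 1))
        for p in permutations(range(n))
    )

n, trials = 5, 2000
average = sum(count_hamiltonian_paths(n) for _ in range(trials)) / trials
print("empirical average: ", average)
print("n! / 2^(n-1):      ", factorial(n) / 2 ** (n - 1))   # 7.5 for n = 5
```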
## Variance and Covariance
Just as expectation is a measure of center, variance is a measure of spread. That is, variance measures how thinly distributed the values of a random variable $X$ are throughout the real line.
Definition: The variance of a random variable $X$ is the quantity $\textup{E}((X - \textup{E}(X))^2)$.
That is, $\textup{E}(X)$ is a number, and so $X - \textup{E}(X)$ is the random variable defined by $(X - \textup{E}(X))(\omega) = X(\omega) - \textup{E}(X)$. It is the expectation of the square of the deviation of $X$ from its expected value.
One often denotes the variance by $\textup{Var}(X)$ or $\sigma^2$. The square is for silly reasons: the standard deviation, denoted $\sigma$ and equivalent to $\sqrt{\textup{Var}(X)}$ has the same “units” as the outcomes of the experiment and so it’s preferred as the “base” frame of reference by some. We won’t bother with such physical nonsense here, but we will have to deal with the notation.
The variance operator has a few properties that make it quite different from expectation, but they nonetheless fall out directly from the definition. We encourage the reader to prove a few (a quick numerical sanity check follows the list):
• $\textup{Var}(X) = \textup{E}(X^2) - \textup{E}(X)^2$.
• $\textup{Var}(aX) = a^2\textup{Var}(X)$.
• When $X,Y$ are independent then variance is additive: $\textup{Var}(X+Y) = \textup{Var}(X) + \textup{Var}(Y)$.
• Variance is invariant under constant additives: $\textup{Var}(X+c) = \textup{Var}(X)$.
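Here is the promised numerical sanity check; it estimates everything from random samples, so the identities only hold approximately:

```python
import random

# Sanity-check the variance identities above on simulated samples.
N = 200000
xs = [random.random() for _ in range(N)]      # X uniform on [0, 1]
ys = [random.gauss(0, 1) for _ in range(N)]   # Y standard normal, independent of X

def mean(values):
    return sum(values) / len(values)

def var(values):
    m = mean(values)
    return sum((v - m) ** 2 for v in values) / len(values)

print(var(xs), mean([x * x for x in xs]) - mean(xs) ** 2)        # Var(X) = E(X^2) - E(X)^2
print(var([3 * x for x in xs]), 9 * var(xs))                     # Var(aX) = a^2 Var(X)
print(var([x + y for x, y in zip(xs, ys)]), var(xs) + var(ys))   # additivity for independent X, Y
print(var([x + 5 for x in xs]), var(xs))                         # invariance under constant additives
```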
In addition, the quantity $\textup{Var}(aX + bY)$ is more complicated than one might first expect. In fact, to fully understand this quantity one must create a notion of correlation between two random variables. The formal name for this is covariance.
Definition: Let $X,Y$ be random variables. The covariance of $X$ and $Y$, denoted $\textup{Cov}(X,Y)$, is the quantity $\textup{E}((X - \textup{E}(X))(Y - \textup{E}(Y)))$.
Note the similarities between the variance definition and this one: if $X=Y$ then the two quantities coincide. That is, $\textup{Cov}(X,X) = \textup{Var}(X)$.
There is a nice interpretation to covariance that should accompany every treatment of probability: it measures the extent to which one random variable “follows” another. To make this rigorous, we need to derive a special property of the covariance.
Theorem: Let $X,Y$ be random variables with variances $\sigma_X^2, \sigma_Y^2$. Then their covariance is at most the product of the standard deviations in magnitude:
$|\textup{Cov}(X,Y)| \leq \sigma_X \sigma_Y$
Proof. Take any two non-constant random variables $X$ and $Y$ (we will replace these later with $X - \textup{E}(X), Y - \textup{E}(Y)$). Construct a new random variable $(tX + Y)^2$ where $t$ is a real variable and inspect its expected value. Because the function is squared, its values are all nonnegative, and hence its expected value is nonnegative. That is, $\textup{E}((tX + Y)^2) \geq 0$. Expanding this and using linearity gives
$\displaystyle f(t) = t^2 \textup{E}(X^2) + 2t \textup{E}(XY) + \textup{E}(Y^2) \geq 0$
This is a quadratic function of a single variable $t$ which is nonnegative. From elementary algebra this means the discriminant is at most zero. i.e.
$\displaystyle 4 \textup{E}(XY)^2 - 4 \textup{E}(X^2) \textup{E}(Y^2) \leq 0$
and so dividing by 4 and replacing $X,Y$ with $X - \textup{E}(X), Y - \textup{E}(Y)$, resp., gives
$\textup{Cov}(X,Y)^2 \leq \sigma_X^2 \sigma_Y^2$
and the result follows. $\square$
Note that equality holds in the discriminant formula precisely when $Y = -tX$ (the discriminant is zero), and after the replacement this translates to $Y - \textup{E}(Y) = -t(X - \textup{E}(X))$ for some fixed value of $t$. In other words, for some real numbers $a,b$ we have $Y = aX + b$.
This has important consequences even in English: the covariance is maximized when $Y$ is a linear function of $X$, and otherwise is bounded from above and below. By dividing both sides of the inequality by $\sigma_X \sigma_Y$ we get the following definition:
Definition: The Pearson correlation coefficient of two random variables $X,Y$ is defined by
$\displaystyle r= \frac{\textup{Cov}(X,Y)}{\sigma_X \sigma_Y}$
If $r$ is close to 1, we call $X$ and $Y$ positively correlated. If $r$ is close to -1 we call them negatively correlated, and if $r$ is close to zero we call them uncorrelated.
The idea is that if two random variables are positively correlated, then a higher value for one variable (with respect to its expected value) corresponds to a higher value for the other. Likewise, negatively correlated variables have an inverse correspondence: a higher value for one correlates to a lower value for the other. The picture is as follows:
The horizontal axis plots a sample of values of the random variable $X$ and the vertical plots a sample of $Y$. The linear correspondence is clear. Of course, all of this must be taken with a grain of salt: this correlation coefficient is only appropriate for analyzing random variables which have a linear correlation. There are plenty of interesting examples of random variables with non-linear correlation, and the Pearson correlation coefficient fails miserably at detecting them.
Here are some more examples of Pearson correlation coefficients applied to samples drawn from the sample spaces of various (continuous, but the issue still applies to the finite case) probability distributions:
Various examples of the Pearson correlation coefficient, credit Wikipedia.
Though we will not discuss it here, there is still a nice precedent for using the Pearson correlation coefficient. In one sense, the closer that the correlation coefficient is to 1, the better a linear predictor will perform in “guessing” values of $Y$ given values of $X$ (same goes for -1, but the predictor has negative slope).
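As a quick illustration, here’s a sketch that estimates the coefficient from samples; with $Y$ a noisy linear function of $X$ the estimate should be near 1, and for independent samples near 0:

```python
import math
import random

# Estimate the Pearson correlation coefficient from paired samples.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

xs = [random.gauss(0, 1) for _ in range(10000)]
noise = [random.gauss(0, 0.3) for _ in range(10000)]
linear = [2 * x + 1 + e for x, e in zip(xs, noise)]

print(pearson(xs, linear))   # close to 1: Y is (nearly) a linear function of X
print(pearson(xs, noise))    # close to 0: independent samples are uncorrelated
```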
But this strays a bit far from our original point: we still want to find a formula for $\textup{Var}(aX + bY)$. Expanding the definition, it is not hard to see that this amounts to the following proposition:
Proposition: The variance operator satisfies
$\displaystyle \textup{Var}(aX+bY) = a^2\textup{Var}(X) + b^2\textup{Var}(Y) + 2ab \textup{Cov}(X,Y)$
And using induction we get a general formula:
$\displaystyle \textup{Var} \left ( \sum_{i=1}^n a_i X_i \right ) = \sum_{i=1}^n \sum_{j = 1}^n a_i a_j \textup{Cov}(X_i,X_j)$
Note that in the general sum, we get a bunch of terms $\textup{Cov}(X_i,X_i) = \textup{Var}(X_i)$.
Another way to look at the linear relationships between a collection of random variables is via a covariance matrix.
Definition: The covariance matrix of a collection of random variables $X_1, \dots, X_n$ is the matrix whose $(i,j)$ entry is $\textup{Cov}(X_i,X_j)$.
As we have already seen on this blog in our post on eigenfaces, one can manipulate this matrix in interesting ways. In particular (and we may be busting out an unhealthy dose of new terminology here), the covariance matrix is symmetric and positive semidefinite, and so by the spectral theorem it has an orthonormal basis of eigenvectors, which allows us to diagonalize it. In more direct words: we can form a new collection of random variables $Y_j$ (which are linear combinations of the original variables $X_i$) such that the covariances of distinct pairs $Y_j, Y_k$ are all zero. In one sense, this is the “best perspective” with which to analyze the random variables. We gave a general algorithm to do this in our program gallery, and the technique is called principal component analysis.
## Next Up
So far in this primer we’ve seen a good chunk of the kinds of theorems one can prove in probability theory. Fortunately, much of what we’ve said for finite probability spaces holds for infinite (discrete) probability spaces and has natural analogues for continuous probability spaces.
Next time, we’ll investigate how things change for discrete probability spaces, and should we need it, we’ll follow that up with a primer on continuous probability. This will get our toes wet with some basic measure theory, but as every mathematician knows: analysis builds character.
Until then!
Workgroup Functional Analysis
# Research Seminar (Continuing Class)
Talks in the winter term 2019/2020
Unless otherwise stated the talks take place in room 2.066 in the "Kollegiengebäude Mathematik" (20.30) from 14:00 to 15:30.
• 15.10.2019, Nick Lindemulder (Karlsruhe): An Intersection Representation for a Class of Anisotropic Vector-valued Function Spaces. Abstract: In this talk we discuss an intersection representation for a class of anisotropic vector-valued function spaces in an axiomatic setting à la Hedberg & Netrusov, which includes weighted anisotropic mixed-norm Besov and Triebel-Lizorkin spaces. In the special case of the classical Triebel-Lizorkin spaces, the intersection representation gives an improvement of the well-known Fubini property. The motivation comes from the weighted maximal regularity problem for parabolic boundary value problems, where weighted anisotropic mixed-norm Triebel-Lizorkin spaces occur as spaces of boundary data.
• 22.10.2019, Bas Nieraeth (Karlsruhe): Weighted theory and extrapolation for multilinear operators.
• 19.11.2019, Andreas Geyer-Schulz (Karlsruhe): On global well-posedness of the Maxwell–Schrödinger system.
• 02.12.2019, Wenqi Zhang (Canberra): Localisation of eigenfunctions via an effective potential for Schrödinger operators. Abstract: For Schrödinger operators with potentials (possibly random) we introduce the Landscape function as an effective potential. Due to the nicer properties of this Landscape function we are able to recover localisation estimates for continuous potentials, and specialise these estimates to obtain an approximate diagonalisation. We give a brief sketch of these arguments. This talk takes place in seminar room 2.066 at 10.30 am.
• 03.12.2019, Yonas Mesfun (Darmstadt): On the stability of a chemotaxis system with logistic growth. Abstract: In this talk we are concerned with the asymptotic behavior of the solution to a certain Neumann initial-boundary value problem which is a variant of the so-called Keller-Segel model describing chemotaxis. Chemotaxis is the directed movement of cells in response to an external chemical signal and plays an important role in various biochemical processes such as e.g. cancer growth. We show a result due to Winkler which says that under specific conditions, there exists a unique classical solution to this Neumann problem which converges to the equilibrium solution. For this purpose we study the Neumann Laplacian, in particular some decay properties of its semigroup and embedding properties of the domain of its fractional powers, and then use those properties to prove Winkler's result.
• 10.12.2019, Emiel Lorist (Delft): Singular stochastic integral operators: The vector-valued and the mixed-norm approach. Abstract: Singular integral operators play a prominent role in harmonic analysis. By replacing integration with respect to some measure by integration with respect to Brownian motion, one obtains stochastic singular integral operators, which arise naturally in questions related to stochastic PDEs. In this talk I will introduce Calderón-Zygmund theory for these singular stochastic integral operators from both a vector-valued and a mixed-norm viewpoint.
• 14.01.2020, Alex Amenta (Bonn): Vector-valued time-frequency analysis and the bilinear Hilbert transform. Abstract: The bilinear Hilbert transform is a bilinear singular integral operator (or Fourier multiplier) which is invariant not only under translations and dilations, but also under modulations. This additional symmetry turns out to make proving bounds especially difficult. I will give an overview of how time-frequency analysis is used in proving these bounds, with focus on the recently understood setting of functions valued in UMD Banach spaces.
• 21.01.2020, Willem van Zuijlen (Berlin): Spectral asymptotics of the Anderson Hamiltonian. Abstract: In this talk I will discuss the asymptotics of the eigenvalues of the Anderson Hamiltonian with white-noise potential, considered on a box with Dirichlet boundary conditions. I will discuss the result in joint work with Khalil Chouk: almost surely the eigenvalues divided by the logarithm of the size of the box converge to the same limit. I will also discuss the application of this to obtain the large-time asymptotics of the total mass of the parabolic Anderson model.
• 18.02.2020: TULKKA in Konstanz.
The workshop is taking place in room A 704 (University of Konstanz).
• 11:45-12:15, Adrian Spener (Ulm): Curvature-dimension inequalities for nonlocal operators
• 12:30-13:45, Lunch break
• 13:45-14:30, Sophia Rau (Konstanz): Stability results for thermoelastic plate-membrane systems
• 14:45-15:30, Andreas Geyer-Schulz (Karlsruhe): On global well-posedness of the Maxwell-Schrödinger system
• 15:30-16:15, Coffee break
• 16:15-17:00, Delio Mugnolo (Hagen): Linear hyperbolic systems
You find previous talks in the archive of the research seminar.