url | text | metadata
---|---|---|
https://www.cuemath.com/ncert-solutions/q-4-exercise-13-3-surface-areas-and-volumes-class-9-maths/
|
# Ex.13.3 Q4 Surface Areas and Volumes Solution - NCERT Maths Class 9
## Question
A conical tent is $$10\,\rm m$$ high and the radius of its base is $$24\,\rm m.$$ Find
i. Slant height of the tent.
ii. Cost of the canvas required to make the tent, if the cost of $$1 \, \rm {m^2}$$ canvas is $$\rm Rs. \,70.$$
## Text Solution
Reasoning:
Curved surface area of a cone with base radius $$r$$ and slant height $$l$$ is $$\pi rl$$, where $$l = \sqrt {{r^2} + {h^2}}$$ by the Pythagoras theorem. The cost of the canvas required is the product of this area and the cost per square metre of canvas.
What is known?
Height of the cone and its base radius.
What is unknown?
i. Slant height of the tent.
ii. Cost of the canvas required to make the tent.
Steps:
Slant height $$l = \sqrt {{r^2} + {h^2}}$$
Radius $$(r) = 24\rm\, m$$
Height $$(h) = 10\rm \, m$$
\begin{align}l &= \sqrt {{r^2} + {h^2}} \\ &= \sqrt {{{(24)}^2} + {{(10)}^2}} \\ & = \sqrt {576 + 100} \\ &= \sqrt {676} \\ &= 26 \end{align}
Slant height of the conical tent $$= 26\, \rm m$$
ii. Cost of the canvas required to make the tent, if $$1 \,\rm {m^2}$$ of canvas costs $$\rm Rs\, 70.$$
Canvas required to make the tent is equal to the curved surface area of the cone.
Curved surface area of the cone $$= \pi rl$$
Radius $$(r) = 24\rm\, m$$
Slant height \begin{align}(l) = 26\,\, \rm m \end{align}
\begin{align}CSA = \frac{{22}}{7} \times 24 \times 26\,\, \rm {m^2} \end{align}
\begin{align}\therefore \text{Cost of } \frac{{22}}{7} \times 24 \times 26\,\rm {m^2} \text{ canvas} = \frac{{22}}{7} \times 24 \times 26 \times 70 = \rm Rs\,\,137280 \end{align}
Cost of the canvas required to make the tent $$= \rm Rs. 137280$$
(i) Slant height of the tent is $$26\,\rm m.$$
(ii) The cost of the canvas is $$\rm Rs. 137280$$
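A quick numerical check of both results (an illustrative aside, not part of the original solution; Python is used only for verification):
import math

r, h = 24.0, 10.0            # base radius and height in metres
l = math.sqrt(r**2 + h**2)   # slant height by the Pythagoras theorem -> 26.0

csa = (22/7) * r * l         # curved surface area, using pi ~ 22/7 as in the text
cost = csa * 70              # canvas costs Rs 70 per square metre
print(l, round(cost))        # 26.0 137280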
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 16, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999947547912598, "perplexity": 4302.817975326804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991252.15/warc/CC-MAIN-20210512035557-20210512065557-00443.warc.gz"}
|
http://scitation.aip.org/content/aip/journal/jmp/47/4/10.1063/1.2188209
|
A Morse-theoretical analysis of gravitational lensing by a Kerr-Newman black hole
10.1063/1.2188209
Affiliations:
1 TU Berlin, Sekr. PN 7-1, 10623 Berlin, Germany and Wilhelm Foerster Observatory, Munsterdamm 90, 12169 Berlin, Germany
2 TU Berlin, Sekr. PN 7-1, 10623 Berlin, Germany
a) Electronic mail: [email protected]
b) Electronic mail: [email protected]
J. Math. Phys. 47, 042503 (2006)
## Figures
FIG. 1.
The surfaces (top) and (bottom) are drawn here for the case and . The picture shows the (half-)plane , with on the horizontal and on the vertical axis. The spheres of radius and are indicated by dashed lines; they meet the equatorial plane in the photon circles. The boundary of the ergosphere coincides with the surface and is indicated in the bottom figure by a thick line; it meets the equatorial plane at .
FIG. 2.
The regions , , and defined in Proposition 1 are shown here for the case and . Again, as in Fig. 1, we plot on the horizontal and on the vertical axis. Some of the spherical lightlike geodesics that fill the photon region are indicated. meets the equatorial plane in the photon circles at and and the axis at radius given by . This picture can also be found as Fig. 21 in the online article (Ref. 29).
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8072673082351685, "perplexity": 2009.2963900956768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://mathematica.stackexchange.com/questions/80241/analogue-for-maples-dchange-change-of-variables-in-differential-expressions?noredirect=1
|
# Analogue for Maple's dchange - change of variables in differential expressions
Maple has an interesting function called dchange which can change the variables of differential equations, but there seems to be no such function in Mathematica.
Has anyone ever tried to write something similar? I found this, this and this post related, but none of them attracted a general enough answer.
"So, what have you tried?" - Well, nothing. I decided to ask this question first, to see if someone has already implemented the functionality and is waiting for a chance to make it public. If this question finally elicits no answer, I'll have a try.
The imaginary syntax for the function is
dChange[DE, relation, var]
where DE is the differential equation(s) to be transformed, relation is the transformation relation(s) expressed as equation(s), i.e. with head Equal, and var is the variable(s) to be changed.
Here are some examples for the imaginary behaviour:
Example 1
Originated from this answer implementing stereographic projection.
dChange[1/η D[η D[f[η], η], η] + (1 - s^2/η^2) f[η] - f[η]^3 == 0,
η == Sqrt[(1 + z)/(1 - z)], η]
(1/(1 + z)) ((-(1 + s^2 (-1 + z) + z)) f[z] + (1 + z) f[z]^3 +
(-1 + z)^2 (1 + z) (2 z f'[z] + (-1 + z^2) f''[z])) == 0
Example 2
Originated from this answer for Stefan's problem.
dChange[D[u[x, t], t] == D[u[x, t], {x, 2}], x == ξ s[t], x]
Derivative[0, 1][u][ξ, t] - (ξ s'[t]
Derivative[1, 0][u][ξ, t])/s[t] == Derivative[2, 0][u][ξ, t]/s[t]^2
Example 3
Originated from this answer. This technique is also used in the reduction of d'Alembert's formula.
dChange[D[y[x, t], t] - 2 D[y[x, t], x] == Exp[-(t - 1)^2 - (x - 5)^2],
{ξ == t + x/2, η == t}, {x, t}]
Derivative[0, 1][y][ξ, η] == E^(-(-1 + η)^2 - (5 + 2 η - 2 ξ)^2)
I'll add more if I recall other representative examples.
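For comparison, the bookkeeping behind Example 3 is just the multivariable chain rule. The following sympy fragment (illustrative only, not a general dChange; the symbols v_xi and v_eta are stand-ins for the partial derivatives of the transformed function) reproduces that one step:
import sympy as sp

x, t, xi, eta = sp.symbols('x t xi eta')
v_xi, v_eta = sp.symbols('v_xi v_eta')  # stand-ins for D[v, xi], D[v, eta]

xi_def, eta_def = t + x/2, t            # the transformation from Example 3

# Chain rule: y_t = v_xi*xi_t + v_eta*eta_t,  y_x = v_xi*xi_x + v_eta*eta_x
y_t = v_xi*sp.diff(xi_def, t) + v_eta*sp.diff(eta_def, t)
y_x = v_xi*sp.diff(xi_def, x) + v_eta*sp.diff(eta_def, x)

lhs = sp.expand(y_t - 2*y_x)            # the v_xi terms cancel, leaving v_eta

# Invert the transformation (t = eta, x = 2*(xi - eta)) for the source term
rhs = sp.exp(-(t - 1)**2 - (x - 5)**2).subs({t: eta, x: 2*(xi - eta)})
print(sp.Eq(lhs, rhs))                  # matches the result shown in Example 3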
• possible duplicate of Change variables in differential expressions – m0nhawk Apr 18 '15 at 8:32
• @m0nhawk Well, as I mentioned above, that's just one of the related questions that are not general enough. – xzczd Apr 18 '15 at 8:33
• For quite a long time of using Mathematica, Replace and ReplaceAll have been more than enough and, actually, I found them much more powerful than Maple's dchange. – m0nhawk Apr 18 '15 at 8:37
• The link has a few examples, and (re: @m0nhawk) I'm not sure that simply ReplaceAll will provide the same functionality. – Sjoerd C. de Vries Apr 18 '15 at 9:12
• I don't know much about Maple, but it seems from your examples that it's less "careful" when simplifying expressions: Mathematica leaves expressions unevaluated if it can't get a result that's valid generically or consistent with the given assumptions. So probably one would have to allow an additional Assumptions option in the dChange emulation to tell Mathematica which variables are positive, or complex, etc... so it has a better chance of inverting and simplifying the required relations. Anyway, I like the idea... – Jens Apr 18 '15 at 16:32
I've put this code on GitHub, but I don't know what features are needed or what problems it may give, as I'm not using it myself.
But I will incorporate incoming suggestions as soon as I have time.
Feedback in the form of tests and suggestions is very appreciated!
(If[DirectoryQ[#], DeleteDirectory[#, DeleteContents -> True]];
CreateDirectory[#];
URLSave[
"https://raw.githubusercontent.com/" <>
"kubaPod/MoreCalculus/master/MoreCalculus/MoreCalculus.m"
,
FileNameJoin[{#, "MoreCalculus.m"}]
]
) & @ FileNameJoin[{$UserBaseDirectory, "Applications", "MoreCalculus"}]
https://github.com/kubaPod/MoreCalculus
So this is a package MoreCalculus` with the function DChange inside.
## What's new:
DChange automatically takes into account range assumptions for built-in transformations: (not heavily tested)
DChange[
D[f[x, y], x, x] + D[f[x, y], y, y] == 0,
"Cartesian" -> "Polar", {x, y}, {r, θ}, f[x, y]
]
## Usage:
DChange[expression, {transformations}, {oldVars}, {newVars}, {functions}]
DChange[expression, "Coordinates1"->"Coordinates2", ...]
DChange[expression, {functionsSubstitutions}]
You can also skip {} if a list has only one element.
## Examples:
### Change of coordinates
• rules accepted by CoordinateTransform are now incorporated, as well as coordinates ranges assumptions associated with them
DChange[
D[f[x, y], x, x] + D[f[x, y], y, y] == 0,
"Cartesian" -> "Polar", {x, y}, {r, θ}, f[x, y]
]
The transformation is returned too, to check if the canonical (in MMA) order of variables was used.
• wave equation in retarded/advanced coordinates
DChange[
D[u[x, t], {t, 2}] == c^2 D[u[x, t], {x, 2}]
,
{a == x + c t, r == x - c t}, {x, t}, {a, r}, {u[x, t]} ]
c Derivative[1, 1][u][a, r] == 0
• stereographic projection
DChange[
D[η*D[f[η], η], η]/η + (1 - s^2/η^2)*f[η] - f[η]^3 == 0
,
η == Sqrt[(1+z)/(1-z)], η, z, f[η] ]
((z-1)^2 (z+1)((z^2-1) f''[z]+2 z f'[z])-f[z] (s^2 (z-1)+z+1)+(z+1) f[z]^3)/(z+1)==0
Example from @Takoda
$$\begin{pmatrix}\dot{x}\\ \dot{y} \end{pmatrix}=\begin{pmatrix}-y\sqrt{x^{2}+y^{2}}\\ x\sqrt{x^{2}+y^{2}} \end{pmatrix}$$
out = DChange[
Dt[{x, y}, t] == {-y r^2, x r^2}, "Cartesian" -> "Polar",
{x, y}, {r, θ}, {}
]
Solve[out[[1]], {Dt[r, t], Dt[θ, t]}]
{{Dt[r, t] -> 0, Dt[θ, t] -> r^2}}
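As an aside (illustrative only, not part of the package), the polar result above can be cross-checked by hand in sympy, using r^2 = x^2 + y^2 and theta = atan2(y, x):
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)
r2 = x**2 + y**2

xdot = -y*r2   # right-hand sides of the system above
ydot = x*r2

# r = sqrt(x^2 + y^2):    dr/dt     = (x*xdot + y*ydot)/r
# theta = atan2(y, x):    dtheta/dt = (x*ydot - y*xdot)/r^2
drdt = sp.simplify((x*xdot + y*ydot) / sp.sqrt(r2))
dthetadt = sp.simplify((x*ydot - y*xdot) / r2)
print(drdt, dthetadt)  # 0 and x(t)**2 + y(t)**2, i.e. Dt[r,t] = 0, Dt[theta,t] = r^2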
### Functions replacement
• example on special case separation of Fokker-Planck equation
DChange[
-D[u[x, t], {x, 2}] + D[u[x, t], {t}] - D[x u[x, t], {x}]
,
u[x, t] == Exp[-1/2 x^2] f[x] T[t]
] // Simplify
% / Exp[-x^2/2] / f[x] / T[t] // Expand
ClearAll[DChange];
DChange[expr_, transformations_List, oldVars_List, newVars_List, functions_List] :=
Module[ {pos, functionsReplacements, variablesReplacements, arguments,
heads, newVarsSolved}
,
pos = Flatten[
Outer[Position, functions, oldVars],
{{1}, {2}, {3, 4}}
];
arguments = List @@@ functions;
heads = Head /@ functions;
newVarsSolved = newVars /. Solve[transformations, newVars][[1]];
functionsReplacements = Map[
Function[i,
heads[[i]] -> (Function[#, #2] &[
arguments[[i]],
ReplacePart[functions[[i]], Thread[pos[[i]] -> newVarsSolved]]
] )
]
,
Range @ Length @ functions
];
variablesReplacements = Solve[transformations, oldVars][[1]];
expr /. functionsReplacements /. variablesReplacements // Simplify // Normal
];
DChange[expr_, functions : {(_[___] == _) ..}] := expr /. Replace[
functions, (f_[vars__] == body_) :> (f -> Function[{vars}, body]), {1}]
DChange[expr_, x___] := DChange[expr, ##] & @@ Replace[{x},
var : Except[_List] :> {var}, {1}];
DChange[expr_, coordinates:Verbatim[Rule][__String], oldVars_List,
newVars_List, functions_ ]:=Module[{mapping, transformation},
mapping = Check[
CoordinateTransformData[coordinates, "Mapping", oldVars],
Abort[]
];
transformation = Thread[newVars == mapping ];
{
DChange[expr, transformation, oldVars, newVars, functions],
transformation
}
];
## TODO:
• add some user friendly DownValues for simple cases
• heavy testing needed, feedback appreciated
• exceptions/errors handling. It is only as powerful as Solve, so it may break for more convoluted implicit relations
• it is not designed as a scoping construct
• Great work ;); +1 – Sektor Apr 18 '15 at 19:49
• Your design for the syntax is undoubtedly more Mathematica-like and more reasonable. (I admit that when writing the question I haven't deliberated on the syntax design. ) – xzczd Apr 20 '15 at 7:36
• Does this excellent code exist in package form? Thanks. – bbgodfrey Jan 10 '16 at 2:19
• I would strongly suggest to look at very old package by Dr. Boris Rubinstein (can find in Mathsource) lt.tabiste.eu/w/library.wolfram.com+6493+05KTT9M3++jikAvB7Z/… With just adding two semicolons it worked with current versions almost perfectly in most cases. – user18792 Mar 7 '16 at 13:10
• @user18792, could you give some details on how to change this packages? Or examples – tanghe2014 Dec 3 '16 at 4:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2701510488986969, "perplexity": 7041.541119122761}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574058.75/warc/CC-MAIN-20190920175834-20190920201834-00538.warc.gz"}
|
https://marcofrasca.wordpress.com/2013/07/13/waiting-for-eps-hep-2013-some-thoughts/
|
## Waiting for EPS HEP 2013: Some thoughts
On 18th July the first summer HEP conference will start in Stockholm. We do not expect great announcements from CMS and ATLAS, as most of the main results from the 2011-2012 data have just been unraveled. The conclusion is that the particle announced on 4th July last year is a Higgs boson. It decays in all the modes foreseen by the Standard Model, and important hints favor spin 0. No other resonance behaving this way is seen at higher energies; so far it appears to be alone. There are a lot of reasons to be happy: we have likely seen the culprit behind the breaking of the symmetry in the Standard Model and, absolutely for the first time, we have a fundamental particle behaving like a scalar. Both of these properties were sought for a long time, and now this search is finally ended. On the bad side, no hint of new physics is seen anywhere, and we will probably have to wait for the restart of the LHC in 2015. The long-sought SUSY is still at large.
Notwithstanding this hopeless situation for theoretical physics, my personal view is that there is something here that gives important clues to great novelties, which may possibly become concrete at the restart. It is important to note that there seem to be some differences between CMS and ATLAS, and this small disagreement can hide interesting news for the future. I cannot say whether, due to the different designs of these two detectors, something different should be seen, but the discrepancy is there. Anyway, they should agree at the end of the story, and possibly this will happen in the near future.
The first essential point, often overlooked due to the overall figure, is the decay of the Higgs particle into a couple of W or Z bosons. The WW decay has a significantly large number of events, and what CMS claims is indeed worth some deepening. This number is significantly below one. There is a strange situation here, because CMS gives $0.76\pm 0.21$ but in the overall picture just writes $0.68\pm 0.20$, so I cannot say which is the right one. But they are consistent with each other, so there is no real problem here. Similarly, the ZZ decay yields $0.91^{+0.30}_{-0.24}$. ATLAS, on the other side, yields $0.99^{+0.31}_{-0.28}$ for the WW decay and $1.43^{+0.40}_{-0.35}$ for the ZZ decay. Error bars are still large, and fluctuations can change these values. The interesting point here (though it has only the value of a clue, as these data agree with the Standard Model at $2\sigma$) is that the lower values for the WW decay can be an indication that this Higgs particle could be a conformal one. This would mean room for new physics. For the ZZ decay, ATLAS apparently has a lower number of events, as this figure is somewhat larger and so is the error bar. Anyway, a steady decrease has been seen in the WW decay as a larger dataset was considered. This decrease, if confirmed at the restart, would be a major finding after the discovery of the Higgs particle. It should be said that ATLAS has already published updated results with the full dataset (see here). I would like to emphasize that a conformal Standard Model can imply SUSY.
The second point is a bump found by CMS in the $\gamma\gamma$ channel (see here), but ATLAS sees nothing there, and this is possibly a fluke. Anyway, it is about $3\sigma$, and so CMS reported on it in a publication.
Finally, it is also possible that heavier Higgs particles have depressed production rates and so are very rare. This, too, would be consistent with a conformal Standard Model. My personal view is that all hopes to see new physics at the LHC are essentially untouched, and maybe this delay in unveiling it is just due to the unlucky start of the LHC in 2008. Meanwhile, we have to use the main virtue of a theoretical physicist: keeping calm and being patient.
Update: Here is the press release from CERN.
ATLAS Collaboration (2013). Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC. arXiv:1307.1427v1
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9277880191802979, "perplexity": 709.3151315783256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099755.63/warc/CC-MAIN-20150627031819-00247-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/244947/standing-wave-problem-in-deep-water-limit-h-to-infty-show-that-omega2-gk
|
# Standing Wave problem-In deep water limit $h\to\infty$ show that $\omega^2=gk$
The equations I have are
$\phi=(-ag/\omega)\cos(kx)\sin(\omega t)e^{kz}$
and
$\eta=a\cos(kx)\cos(\omega t)$
I know that $\partial\phi/\partial z=\partial\eta/\partial t$
but when I partially differentiate and rearrange I get
$gk e^{kz}= \omega^2$ and I don't know how to get rid of the exponential function.
How does $h$ enter your equations? Do the parameters depend on $h$? – Johan Nov 26 '12 at 13:23
The original equation is $\phi=(-ag/\omega)\cos(kx)\sin(\omega t)\,\cosh(k[z+h])/\cosh(kh)$, but as $h$ tends to infinity $\phi$ simplifies to the equation in the question. – Adam Nov 26 '12 at 13:37
Do you know why the original equation simplifies as $h$ tends to infinity? How do I show that the original equation simplifies as $h$ tends to infinity? – user52290 Dec 8 '12 at 23:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626029133796692, "perplexity": 505.8887050400482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011082123/warc/CC-MAIN-20140305091802-00065-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://stats.stackexchange.com/users/2669/rolando2
|
# rolando2
reputation: 1330
bio: website integrativestatistics.com; location Boston, MA; member for 3 years, 9 months; seen 12 hours ago; profile views 788
Top answers:
- 15 What skills are required to perform large scale statistical analyses?
- 14 What are common statistical sins?
- 10 Survey Method on Personal Isues
- 10 Significant variable with no effect in logistic regression
- 9 Are all values within a 95% confidence interval equally likely?
# 6,220 Reputation
- +10 Assumptions of factor analysis
- +115 Survey Method on Personal Isues
- +10 Confidence Interval for $\eta^2$
- +10 How robust is ANOVA to violations of normality?
# 14 Questions
- 25 Accommodating entrenched views of p-values
- 14 How to model this odd-shaped distribution (almost a reverse-J)
- 14 Seeking certain type of ARIMA explanation
- 7 Should factor loadings be dominated by items' ranges of answer options?
- 6 OLS vs. logistic regression for exploratory analysis with a binary outcome
# 198 Tags
- 104 regression × 60
- 36 r × 18
- 65 logistic × 29
- 33 multiple-regression × 12
- 46 spss × 30
- 31 multivariate-analysis × 10
- 44 correlation × 26
- 30 survey × 11
- 39 factor-analysis × 19
- 29 statistical-significance × 15
# 4 Accounts
- Cross Validated 6,220 rep 1330
- Stack Overflow 149 rep 8
- Science Fiction & Fantasy 101 rep 1
- English Language Learners 101 rep 1
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2227991670370102, "perplexity": 9417.201197571243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663167.8/warc/CC-MAIN-20140930004103-00323-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://math.libretexts.org/Courses/Long_Beach_City_College/Book%3A_Intermediate_Algebra/Text/08%3A_Rational_Expressions_and_Equations/8.5%3A_Simplify_Complex_Rational_Expressions
|
# 8.5: Simplify Complex Rational Expressions
##### Learning Objectives
By the end of this section, you will be able to:
• Simplify a complex rational expression by writing it as division
• Simplify a complex rational expression by using the LCD
##### Note
Before you get started, take this readiness quiz.
If you miss a problem, go back to the section listed and review the material.
1. Simplify: $$\frac{\frac{3}{5}}{\frac{9}{10}}$$.
If you missed this problem, review Exercise 1.6.25.
2. Simplify: $$\frac{1−\frac{1}{3}}{4^2+4·5}$$.
If you missed this problem, review Exercise 1.6.31.
Complex fractions are fractions in which the numerator or denominator contains a fraction. In Chapter 1 we simplified complex fractions like these:
$\begin{array}{cc} \frac{\frac{3}{4}}{\frac{5}{8}} & \frac{\frac{x}{2}}{\frac{xy}{6}} \end{array}$
In this section we will simplify complex rational expressions, which are rational expressions with rational expressions in the numerator or denominator.
##### Definition: COMPLEX RATIONAL EXPRESSION
A complex rational expression is a rational expression in which the numerator or denominator contains a rational expression.
Here are a few complex rational expressions:
$$\frac{\frac{4}{y−3}}{\frac{8}{y^2−9}}$$
$$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{x}{y}−\frac{y}{x}}$$
$$\frac{\frac{2}{x+6}}{\frac{4}{x−6}−\frac{4}{x^2−36}}$$
Remember, we always exclude values that would make any denominator zero.
We will use two methods to simplify complex rational expressions.
## Simplify a Complex Rational Expression by Writing it as Division
We have already seen this complex rational expression earlier in this chapter.
$$\frac{\frac{6x^2−7x+2}{4x−8}}{\frac{2x^2−8x+3}{x^2−5x+6}}$$
We noted that fraction bars tell us to divide, so we rewrote it as the division problem
$$(\frac{6x^2−7x+2}{4x−8})÷(\frac{2x^2−8x+3}{x^2−5x+6})$$
Then we multiplied the first rational expression by the reciprocal of the second, just like we do when we divide two fractions.
This is one method of simplifying complex rational expressions: we write the expression as if we were dividing two fractions.
##### Example $$\PageIndex{1}$$
Simplify: $$\frac{\frac{4}{y−3}}{\frac{8}{y^2−9}}$$.
$$\frac{\frac{4}{y−3}}{\frac{8}{y^2−9}}$$
Rewrite the complex fraction as division: $$\frac{4}{y−3}÷\frac{8}{y^2−9}$$
Rewrite as the product of the first times the reciprocal of the second: $$\frac{4}{y−3}·\frac{y^2−9}{8}$$
Multiply: $$\frac{4(y^2−9)}{8(y−3)}$$
Factor to look for common factors: $$\frac{4(y−3)(y+3)}{8(y−3)}$$
Simplify: $$\frac{y+3}{2}$$
Are there any values of y that should not be allowed? The simplified rational expression has just a constant in the denominator, but the original complex rational expression had denominators of $$y−3$$ and $$y^2−9$$. This expression would be undefined if $$y=3$$ or $$y=−3$$.
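As a quick cross-check of Example 1 (an illustrative aside, not part of the original lesson), a computer algebra system reproduces the same simplification:
import sympy as sp

y = sp.symbols('y')
expr = (4/(y - 3)) / (8/(y**2 - 9))  # the complex rational expression from Example 1

print(sp.cancel(expr))  # (y + 3)/2; the original expression excludes y = 3 and y = -3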
##### Example $$\PageIndex{2}$$
Simplify: $$\frac{\frac{2}{x^2−1}}{\frac{3}{x+1}}$$.
$$\frac{2}{3(x−1)}$$
##### Example $$\PageIndex{3}$$
Simplify: $$\frac{\frac{1}{x^2−7x+12}}{\frac{2}{x−4}}$$.
$$\frac{1}{2(x−3)}$$
##### Example $$\PageIndex{4}$$
Simplify: $$\frac{\frac{1}{3}+\frac{1}{6}}{\frac{1}{2}−\frac{1}{3}}$$.
Find the LCD and add the fractions in the numerator: $$\frac{1}{3}+\frac{1}{6}=\frac{1}{2}$$
Find the LCD and subtract the fractions in the denominator: $$\frac{1}{2}−\frac{1}{3}=\frac{1}{6}$$
Rewrite the complex rational expression as a division problem: $$\frac{1}{2}÷\frac{1}{6}$$
Multiply the first by the reciprocal of the second: $$\frac{1}{2}·\frac{6}{1}$$
Simplify: $$3$$
##### Example $$\PageIndex{5}$$
Simplify: $$\frac{\frac{1}{2}+\frac{2}{3}}{\frac{5}{6}+\frac{1}{12}}$$.
$$\frac{14}{11}$$
##### Example $$\PageIndex{6}$$
Simplify: $$\frac{\frac{3}{4}−\frac{1}{3}}{\frac{1}{8}+\frac{5}{6}}$$.
$$\frac{10}{23}$$
How to Simplify a Complex Rational Expression by Writing it as Division
##### Example $$\PageIndex{7}$$
Simplify: $$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{x}{y}−\frac{y}{x}}$$.
##### Example $$\PageIndex{8}$$
Simplify: $$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{1}{x}−\frac{1}{y}}$$.
$$\frac{y+x}{y−x}$$
##### Example $$\PageIndex{9}$$
Simplify: $$\frac{\frac{1}{a}+\frac{1}{b}}{\frac{1}{a^2}−\frac{1}{b^2}}$$.
$$\frac{ab}{b−a}$$
##### Definition: SIMPLIFY A COMPLEX RATIONAL EXPRESSION BY WRITING IT AS DIVISION.
1. Simplify the numerator and denominator.
2. Rewrite the complex rational expression as a division problem.
3. Divide the expressions.
##### Example $$\PageIndex{10}$$
Simplify: $$\frac{n−\frac{4n}{n+5}}{\frac{1}{n+5}+\frac{1}{n−5}}$$.
Find the LCD and subtract the fractions in the numerator: $$n−\frac{4n}{n+5}=\frac{n(n+5)−4n}{n+5}=\frac{n^2+n}{n+5}$$
Find the LCD and add the fractions in the denominator: $$\frac{1}{n+5}+\frac{1}{n−5}=\frac{(n−5)+(n+5)}{(n+5)(n−5)}=\frac{2n}{(n+5)(n−5)}$$
Rewrite as fraction division: $$\frac{n^2+n}{n+5}÷\frac{2n}{(n+5)(n−5)}$$
Multiply the first times the reciprocal of the second: $$\frac{n^2+n}{n+5}·\frac{(n+5)(n−5)}{2n}$$
Factor and remove common factors: $$\frac{n(n+1)(n+5)(n−5)}{2n(n+5)}$$
Simplify: $$\frac{(n+1)(n−5)}{2}$$
##### Example $$\PageIndex{11}$$
Simplify: $$\frac{b−\frac{3b}{b+5}}{\frac{2}{b+5}+\frac{1}{b−5}}$$.
$$\frac{b(b+2)(b−5)}{3b−5}$$
##### Example $$\PageIndex{12}$$
Simplify: $$\frac{1−\frac{3}{c+4}}{\frac{1}{c+4}+\frac{c}{3}}$$.
$$\frac{3}{c+3}$$
## Simplify a Complex Rational Expression by Using the LCD
We “cleared” the fractions by multiplying by the LCD when we solved equations with fractions. We can use that strategy here to simplify complex rational expressions. We will multiply the numerator and denominator by the LCD of all the rational expressions.
Let’s look at the complex rational expression we simplified one way in Example 4. We will simplify it here by multiplying the numerator and denominator by the LCD. When we multiply by $$\frac{LCD}{LCD}$$ we are multiplying by 1, so the value stays the same.
##### Example $$\PageIndex{13}$$
Simplify: $$\frac{\frac{1}{3}+\frac{1}{6}}{\frac{1}{2}−\frac{1}{3}}$$.
The LCD of all the fractions in the whole expression is 6. Clear the fractions by multiplying the numerator and denominator by that LCD: $$\frac{6·(\frac{1}{3}+\frac{1}{6})}{6·(\frac{1}{2}−\frac{1}{3})}$$
Distribute: $$\frac{2+1}{3−2}$$
Simplify: $$3$$
##### Example $$\PageIndex{14}$$
Simplify: $$\frac{\frac{1}{2}+\frac{1}{5}}{\frac{1}{10}+\frac{1}{5}}$$.
$$\frac{7}{3}$$
##### Example $$\PageIndex{15}$$
Simplify: $$\frac{\frac{1}{4}+\frac{3}{8}}{\frac{1}{2}−\frac{5}{16}}$$.
$$\frac{10}{3}$$
How to Simplify a Complex Rational Expression by Using the LCD
##### Example $$\PageIndex{16}$$
Simplify: $$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{x}{y}−\frac{y}{x}}$$.
##### Example $$\PageIndex{17}$$
Simplify: $$\frac{\frac{1}{a}+\frac{1}{b}}{\frac{a}{b}−\frac{b}{a}}$$.
$$\frac{b+a}{a^2−b^2}$$
##### Example $$\PageIndex{18}$$
Simplify: $$\frac{\frac{1}{x^2}−\frac{1}{y^2}}{\frac{1}{x}−\frac{1}{y}}$$.
$$\frac{y+x}{xy}$$
##### Definition: SIMPLIFY A COMPLEX RATIONAL EXPRESSION BY USING THE LCD.
1. Find the LCD of all fractions in the complex rational expression.
2. Multiply the numerator and denominator by the LCD.
3. Simplify the expression.
Be sure to start by factoring all the denominators so you can find the LCD.
##### Example $$\PageIndex{19}$$
Simplify: $$\frac{\frac{2}{x+6}}{\frac{4}{x−6}−\frac{4}{x^2−36}}$$.
Find the LCD of all fractions in the complex rational expression. The LCD is $$(x+6)(x−6)$$.
Multiply the numerator and denominator by the LCD: $$\frac{(x+6)(x−6)·\frac{2}{x+6}}{(x+6)(x−6)·\left(\frac{4}{x−6}−\frac{4}{x^2−36}\right)}$$
Simplify, distributing in the denominator: $$\frac{2(x−6)}{4(x+6)−4}$$
Combine like terms in the denominator: $$\frac{2(x−6)}{4x+20}$$
Remove common factors and simplify: $$\frac{x−6}{2(x+5)}$$
Notice that there are no more factors common to the numerator and denominator.
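The LCD steps of Example 19 can also be checked mechanically (an illustrative aside, not part of the original lesson):
import sympy as sp

x = sp.symbols('x')
num = 2/(x + 6)
den = 4/(x - 6) - 4/(x**2 - 36)

lcd = (x + 6)*(x - 6)              # the LCD found above
num_new = sp.cancel(num*lcd)       # 2*x - 12
den_new = sp.cancel(den*lcd)       # 4*x + 20
print(sp.factor(num_new/den_new))  # (x - 6)/(2*(x + 5))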
##### Example $$\PageIndex{20}$$
Simplify: $$\frac{\frac{3}{x+2}}{\frac{5}{x−2}−\frac{3}{x^2−4}}$$.
$$\frac{3x−6}{5x+7}$$
##### Example $$\PageIndex{21}$$
Simplify: $$\frac{\frac{2}{x−7}−\frac{1}{x+7}}{\frac{6}{x+7}−\frac{1}{x^2−49}}$$.
$$\frac{x+21}{6x−43}$$
##### Example $$\PageIndex{22}$$
Simplify: $$\frac{\frac{4}{m^2−7m+12}}{\frac{3}{m−3}−\frac{2}{m−4}}$$.
Find the LCD of all fractions in the complex rational expression. The LCD is $$(m−3)(m−4)$$.
Multiply the numerator and denominator by the LCD: $$\frac{(m−3)(m−4)·\frac{4}{(m−3)(m−4)}}{(m−3)(m−4)·\left(\frac{3}{m−3}−\frac{2}{m−4}\right)}$$
Simplify: $$\frac{4}{3(m−4)−2(m−3)}$$
Distribute and combine like terms: $$\frac{4}{m−6}$$
##### Example $$\PageIndex{23}$$
Simplify: $$\frac{\frac{3}{x^2+7x+10}}{\frac{4}{x+2}+\frac{1}{x+5}}$$.
$$\frac{3}{5x+22}$$
##### Example $$\PageIndex{24}$$
Simplify: $$\frac{\frac{4}{y+5}+\frac{2}{y+6}}{\frac{3y}{y^2+11y+30}}$$.
$$\frac{6y+34}{3y}$$
##### Example $$\PageIndex{25}$$
Simplify: $$\frac{\frac{y}{y+1}}{1+\frac{1}{y−1}}$$.
Find the LCD of all fractions in the complex rational expression. The LCD is $$(y+1)(y−1)$$.
Multiply the numerator and denominator by the LCD: $$\frac{(y+1)(y−1)·\frac{y}{y+1}}{(y+1)(y−1)·\left(1+\frac{1}{y−1}\right)}$$
Distribute in the denominator and simplify: $$\frac{y(y−1)}{(y+1)(y−1)+(y+1)}$$
Simplify the denominator, and leave the numerator factored: $$\frac{y(y−1)}{y^2+y}$$
Factor the denominator, and remove factors common with the numerator: $$\frac{y(y−1)}{y(y+1)}$$
Simplify: $$\frac{y−1}{y+1}$$
##### Example $$\PageIndex{26}$$
Simplify: $$\frac{\frac{x}{x+3}}{1+\frac{1}{x+3}}$$.
$$\frac{x}{x+4}$$
##### Example $$\PageIndex{27}$$
Simplify: $$\frac{1+\frac{1}{x−1}}{\frac{3}{x+1}}$$.
$$\frac{x(x+1)}{3(x−1)}$$
## Key Concepts
• To Simplify a Rational Expression by Writing it as Division
1. Simplify the numerator and denominator.
2. Rewrite the complex rational expression as a division problem.
3. Divide the expressions.
• To Simplify a Complex Rational Expression by Using the LCD
1. Find the LCD of all fractions in the complex rational expression.
2. Multiply the numerator and denominator by the LCD.
3. Simplify the expression.
## Glossary
complex rational expression
A complex rational expression is a rational expression in which the numerator or denominator contains a rational expression.
8.5: Simplify Complex Rational Expressions is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by OpenStax.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940531849861145, "perplexity": 1111.1353821652613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00562.warc.gz"}
|
https://www.udacity.com/wiki/david-a.-hale/shortwikiguide
|
--------------------------------------------------------------------------------
EDITING & CREATING PAGES, LINKS, IMAGES, and DRAWINGS
To edit a page, click "Edit (GUI)" or "Edit" at the top of the page. Use the Preview button often.
"Preview" also saves a backup copy of your work. When you are done, click "Save Changes".
Append ?action=raw to the URL of a page to read the bare code without opening the editor.
If you don't want it to be a link, prefix it with an exclamation point: !CamelCase.
Two backticks can be used to add characters without creating a new page: Wiki``Names
For page names with spaces, or without camel-case, use two brackets: [[Some New Page]]
To create a sub-page, start the name with a slash: [[/Some Sub-Page]]
For links to other sites, you can just type the URL: http://example.net
If you want different text, use brackets, and put the text after a pipe character.
This is also necessary if the URL includes spaces: [[http://example.net|clickable text]]
To create an anchor (a linkable point on a page), use the Anchor macro: <<Anchor(anchorname)>>
To link to an anchor on the same page: [[#anchorname]] or [[#anchorname|clickable text]]
To add images (or other file types): {{attachment:image.png|alt text|width=100 height=150}}
Some files will be displayed with special formatting: {{attachment:myfile.py}}
To provide a link to an attached file: [[attachment:a file with blanks in its name.txt]]
To provide a link to an attached file on another page: [[attachment:OtherPage/filename.ext]]
To use an image as a link: {{attachment:imagefile.png|alt text|width=100}}
DRAWINGS: {{drawing:mypic}} will start a Java applet for editing vector diagrams.
The applet will store three attachments: mypic.draw, mypic.png, mypic.map
After you have saved the drawing, it will be displayed where you type {{drawing:mypic}}.
The map file is used to activate parts of the image as links.
To edit a drawing after the first save, click on AttachFile and use the
link that is displayed instead of [view] for the .draw attachment. You can also
click on the invisible 5 pixel border around the picture to reach the edit mode.
--------------------------------------------------------------------------------
BASIC FORMATTING
## Double-hash creates a single-line comment.
Blank lines create new paragraphs. <<BR>> inserts a linebreak.
plain text {{{plain text}}} (To make an entire page plain text, use #format plain.)
''italic'' '''bold''' __underline__ ~-smaller-~ ~+larger+~
super^script^ sub,,script,, --(strike through)--
MATH FORMULAS:
INLINE (...): r=\sqrt{{{x}^{2}}+{{y}^{2}}}
BLOCK (...): x=\frac{-b\pm \sqrt{{{b}^{2}}-4ac}}{2a}
--------------------------------------------------------------------------------
Horizontal rules (<hr> from HTML) are created by four to ten hyphens at the beginning of a line.
#pragma section-numbers off (No numbers for section headings.)
#pragma section-numbers on (Number all section headings.)
#pragma section-numbers 2 (Number only section headings level 2 or 1.)
== Level 2 ==
=== Level 3 ===
==== Level 4 ====
===== Level 5 =====
====== Level 6 ======
--------------------------------------------------------------------------------
LISTS
To create a list, you simply indent by one or more spaces.
Each additional space creates a new sub-list.
Various kinds of lists are created by following the space with a specific character:
. Bare list item (no bullet or number).
* Bulleted list item.
1. Numbered item (You type 1. for every item: incrementing is automatic.)
1.#5 Numbered item (with 5 used as the starting number).
A. Uppercase letters. a. Lowercase letters.
I. Uppercase Roman numerals. i. Lowercase Roman numerals.
To display a new line of text at the same level:
* List item <<BR>> More text, horizontally aligned with "List item".
This will also work:
* List item <<BR>>
More text, horizontally aligned with "List item".
--------------------------------------------------------------------------------
TABLES
|| top left cell || top right cell ||
|| bottom left cell || bottom right cell ||
Table, row, and cell attributes can be changed with CSS
||<tablestyle="..."> use given style for table html
||<rowstyle="..."> use given style for row (tr) html
||<style="..."> use given style for cell (td) html
They can be included together in one tag:
||<tablestyle="..." rowstyle="...">
A few of the most useful CSS properties and values:
border: none;
text-align: left; left|center|right
vertical-align: top; top|bottom
background-color: #DDDDDD #000000 to #FFFFFF
color: #FF0000;
font-weight: bold;
||||This cell will be two columns wide.||
||<rowspan=2>This cell will be two rows tall.||
||This cell will<<BR>>have two lines.||
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2609734833240509, "perplexity": 13572.625771570463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00419-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://mirror.rcg.sfu.ca/mirror/CRAN/web/packages/bdots/vignettes/refitCoef.html
|
# Refit with Saved Parameters
library(bdots)
## Overview
This vignette walks through using a text file of previously fit model parameters with the bdotsRefit function. This is convenient if you have already gone through the refitting process and would like to save/load the refitted parameters in a new session.
To demonstrate this process, we start with fitting a set of curves to our data
fit <- bdotsFit(data = cohort_unrelated,
subject = "Subject",
time = "Time",
y = "Fixations",
group = c("Group", "LookType"),
curveType = doubleGauss(concave = TRUE),
cor = TRUE,
numRefits = 2,
cores = 2,
verbose = FALSE)
refit <- bdotsRefit(fit, quickRefit = TRUE, fitCode = 5)
#> All observations fitCode greater than 5. Nothing to refit :)
From this, we can create an appropriate data.table that can be used in a later session
parDT <- coefWriteout(refit)
#> Subject Group LookType mu ht sig1 sig2 base1
#> 1: 1 50 Cohort 429.7595 0.1985978 159.8869 314.6389 0.009709831
#> 2: 1 65 Cohort 634.9292 0.2635044 303.8080 215.3845 -0.020636088
#> 3: 2 50 Cohort 647.0655 0.2543769 518.9633 255.9870 -0.213087542
#> 4: 2 65 Cohort 723.0547 0.2582110 392.9495 252.9384 -0.054826156
#> 5: 3 50 Cohort 501.4822 0.2247729 500.8480 158.4180 -0.331679043
#> 6: 3 65 Cohort 460.7152 0.3067659 382.7321 166.0833 -0.243308563
#> base2
#> 1: 0.03376106
#> 2: 0.02892360
#> 3: 0.01368196
#> 4: 0.03197291
#> 5: 0.02522681
#> 6: 0.03992168
It’s important that the included columns match the unique identifying columns in our bdotsObj, and that the parameters match the coefficients used by bdotsFit
## Subject, Group, and LookType
#> Subject Group LookType fit R2 AR1 fitCode
#> 1: 1 50 Cohort <gnls[18]> 0.9697202 TRUE 0
#> 2: 1 65 Cohort <gnls[18]> 0.9804901 TRUE 0
#> 3: 2 50 Cohort <gnls[18]> 0.9811708 TRUE 0
#> 4: 2 65 Cohort <gnls[18]> 0.9697466 TRUE 0
#> 5: 3 50 Cohort <gnls[18]> 0.9761906 TRUE 0
#> 6: 3 65 Cohort <gnls[18]> 0.9534922 FALSE 3
## doubleGauss pars
colnames(coef(refit))
#> [1] "mu" "ht" "sig1" "sig2" "base1" "base2"
We can save our parameter data.table for later use, or read in any other appropriately formatted data.frame
## Save this for later using data.table::fwrite
fwrite(parDT, file = "mypars.csv")
parDT <- fread("mypars.csv")
Once we have this, we can pass it as an argument to the bdotsRefit function. Doing so will ignore the remaining arguments
new_refit <- bdotsRefit(refit, paramDT = parDT)
We end up with a bdotsObj that matches what we had previously. As seeds have not yet been implemented, the resulting parameters may not be exact. It will, however, save us from having to go through the entire refitting process again manually (although there is always the option to save the entire object with save(refit, file = "refit.RData")).
head(new_refit)
#> Subject Group LookType fit R2 AR1 fitCode
#> 1: 1 50 Cohort <gnls[18]> 0.9697202 TRUE 0
#> 2: 1 50 Unrelated_Cohort <gnls[18]> 0.9789994 TRUE 0
#> 3: 1 65 Cohort <gnls[18]> 0.9804901 TRUE 0
#> 4: 1 65 Unrelated_Cohort <gnls[18]> 0.8716404 TRUE 1
#> 5: 10 50 Cohort <gnls[18]> 0.8723338 TRUE 1
#> 6: 10 50 Unrelated_Cohort <gnls[18]> 0.9345526 TRUE 1
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16672928631305695, "perplexity": 14932.369565514125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00509.warc.gz"}
|
http://planetmath.org/loopandquasigroup
|
# loop and quasigroup
A quasigroup is a groupoid $G$ with the property that for every $x,y\in G$, there are unique elements $w,z\in G$ such that $xw=y$ and $zx=y$.
A loop is a quasigroup which has an identity element.
What distinguishes a loop from a group is that the former need not satisfy the associative law. For example, the integers under subtraction form a quasigroup that is not a loop: $x-w=y$ and $z-x=y$ always have unique solutions, but there is no two-sided identity element.
Title: loop and quasigroup. Canonical name: LoopAndQuasigroup. Last modified: 2013-03-22 13:02:08. Owner: mclase (549). Type: Definition. MSC: 20N05. Related: Groupoid, LoopOfAGraph, AlternativeDefinitionOfGroup. Defines: loop, quasigroup.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 5, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.580833375453949, "perplexity": 2298.4067833414697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649961.11/warc/CC-MAIN-20180324073738-20180324093738-00631.warc.gz"}
|
https://www.askmattrab.com/notes/389-lens-maker-formula
|
# Lens Maker Formula
The lens maker formula is an expression that shows the relation between the refractive index, the focal length, and the radii of curvature of a lens.
1/f = (n − 1)(1/R1 + 1/R2)
where,
f = focal length of the lens
n = refractive index of the lens material
R1, R2 = radii of curvature of the two lens surfaces
Assumptions:
1. The lens is thin.
2. The deviation produced by the thin lens is similar to that of a small-angle prism.
3. The angles made by the incident and refracted rays with the principal axis are small.
From the figure, in ΔXOF:
tan δ = OX/OF = h/f
For a small angle, tan δ ≈ δ, so
δ = h/f ............ (1)
For a small-angle prism of refracting angle A, we know
δ = A(n − 1) ............ (2)
From equations (1) and (2):
h/f = A(n − 1) ............ (3)
From geometry, angle MXG = A, and
A = a + a′ = h/R1 + h/R2
A = h(1/R1 + 1/R2) ............ (4)
Substituting equation (4) into equation (3):
h/f = h(n − 1)(1/R1 + 1/R2)
1/f = (n − 1)(1/R1 + 1/R2)
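A quick numeric illustration of the final formula (the lens values below are assumed for illustration and are not from this note):
# Illustrative values: a thin biconvex lens with n = 1.5 and R1 = R2 = 20 cm
n = 1.5
R1, R2 = 20.0, 20.0   # radii of curvature in cm

f = 1 / ((n - 1) * (1/R1 + 1/R2))
print(f)              # ~20.0 cm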
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336740732192993, "perplexity": 2953.963896339264}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00532.warc.gz"}
|
http://orbi.ulg.ac.be/browse?type=author&value=Gillon%2C+Micha%C3%ABl+p074577
|
References of "Gillon, Michaël" in Complete repository. Showing results 1 to 20 of 227.

The small binary asteroid (939) Isberga
Carry, B.; Matter, A.; Scheirich, P. et al., in Icarus (2015), 248
In understanding the composition and internal structure of asteroids, their density is perhaps the most diagnostic quantity. We aim here at characterizing the surface composition, mutual orbit, size, mass, and density of the small main-belt binary asteroid (939) Isberga. For that, we conduct a suite of multi-technique observations, including optical lightcurves over many epochs, near-infrared spectroscopy, and interferometry in the thermal infrared. We develop a simple geometric model of binary systems to analyze the interferometric data in combination with the results of the lightcurve modeling. From spectroscopy, we classify Isberga as an Sq-type asteroid, consistent with the albedo of 0.14^{+0.09}_{-0.06} (all uncertainties are reported as 3-σ ranges) we determine (the average albedo of S-types is 0.197 ± 0.153; see Pravec et al. [2012]. Icarus 221, 365-387). Lightcurve analysis reveals that the mutual orbit has a period of 26.6304 ± 0.0001 h, is close to circular (eccentricity lower than 0.1), and has pole coordinates within 7° of (225°, +86°) in Ecliptic J2000, implying a low obliquity of 1.5^{+6.0}_{-1.5} deg. The combined analysis of lightcurves and interferometric data allows us to determine the dimensions of the system, and we find volume-equivalent diameters of 12.4^{+2.5}_{-1.2} km and 3.6^{+0.7}_{-0.3} km for Isberga and its satellite, circling each other on a 33 km wide orbit. Their density is assumed equal and found to be 2.91^{+1.72}_{-2.01} g cm^-3, lower than that of the associated ordinary chondrite meteorites, suggesting the presence of some macroporosity, but typical of S-types of the same size range (Carry [2012]. Planet. Space Sci. 73, 98-118). The present study is the first direct measurement of the size of a small main-belt binary. Although the interferometric observations of Isberga are at the edge of MIDI capabilities, the method described here is applicable to other suites of instruments (e.g., LBT, ALMA).

The binary near-Earth Asteroid (175706) 1996 FG3 - An observational constraint on its orbital evolution
Scheirich, P.; Pravec, P.; Jacobson, S. A. et al., in Icarus (2015), 245
Using our photometric observations taken between April 1996 and January 2013 and other published data, we derived properties of the binary near-Earth Asteroid (175706) 1996 FG3, including new measurements constraining the evolution of the mutual orbit, with potential consequences for the entire binary asteroid population. We also refined previously determined values of parameters of both components, making 1996 FG3 one of the most well understood binary asteroid systems. With our 17-year long dataset, we determined the orbital vector with a substantially greater accuracy than before and we also placed constraints on the stability of the orbit. Specifically, the ecliptic longitude and latitude of the orbital pole are 266° and −83°, respectively, with the mean radius of the uncertainty area of 4°, and the orbital period is 16.1508 ± 0.0002 h (all quoted uncertainties correspond to 3σ). We looked for a quadratic drift of the mean anomaly of the satellite and obtained a value of 0.04 ± 0.20 deg/yr^2, i.e., consistent with zero. The drift is substantially lower than predicted by the pure binary YORP (BYORP) theory of McMahon and Scheeres (McMahon, J., Scheeres, D. [2010]. Icarus 209, 494-509) and it is consistent with the rigidity and quality factor of μQ = 1.3 × 10^7 Pa using the theory that assumes an elastic response of the asteroid material to the tidal forces. This very low value indicates that the primary of 1996 FG3 is a 'rubble pile', and it also calls for a re-thinking of the tidal energy dissipation in close asteroid binary systems.

Dust from Comet 209P/LINEAR during its 2014 Return: Parent Body of a New Meteor Shower, the May Camelopardalids
Ishiguro, Masateru; Kuroda, Daisuke; Hanayama, Hidekazu et al., in The Astrophysical Journal Letters (2015), 798
We report a new observation of the Jupiter-family comet 209P/LINEAR during its 2014 return. The comet is recognized as a dust source of a new meteor shower, the May Camelopardalids. 209P/LINEAR was apparently inactive at a heliocentric distance r_h = 1.6 AU and showed weak activity at r_h <= 1.4 AU. We found an active region of <0.001% of the entire nuclear surface during the comet's dormant phase. An edge-on image suggests that particles up to 1 cm in size (with an uncertainty of a factor of 3-5) were ejected following a differential power-law size distribution with index q = −3.25 ± 0.10. We derived a mass-loss rate of 2-10 kg s^-1 during the active phase and a total mass of ≈5 × 10^7 kg during the 2014 return. The ejection terminal velocity of millimeter- to centimeter-sized particles was 1-4 m s^-1, which is comparable to the escape velocity from the nucleus (1.4 m s^-1). These results imply that such large meteoric particles marginally escaped from the highly dormant comet nucleus via the gas drag force only within a few months of the perihelion passage.

WASP-94 A and B planets: hot-Jupiter cousins in a twin-star system
Neveu-VanMalle, M.; Queloz, D.; Anderson, D. R. et al., in Astronomy and Astrophysics (2014), 572
We report the discovery of two hot-Jupiter planets, each orbiting one of the stars of a wide binary system. WASP-94A (2MASS 20550794-3408079) is an F8-type star hosting a transiting planet with a radius of 1.72 ± 0.06 R_Jup, a mass of 0.452 ± 0.034 M_Jup, and an orbital period of 3.95 days. The Rossiter-McLaughlin effect is clearly detected, and the measured projected spin-orbit angle indicates that the planet occupies a retrograde orbit. WASP-94B (2MASS 20550915-3408078) is an F9 stellar companion at an angular separation of 15'' (projected separation 2700 au), hosting a gas giant with a minimum mass of 0.618 ± 0.028 M_Jup with a period of 2.008 days, detected by Doppler measurements. The orbital planes of the two planets are inclined relative to each other, indicating that at least one of them is inclined relative to the plane of the stellar binary. These hot Jupiters in a binary system bring new insights into the formation of close-in giant planets and the role of stellar multiplicity. The radial-velocity and photometric data used for this work are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/572/A49

Three sub-Jupiter-mass planets: WASP-69b & WASP-84b transit active K dwarfs and WASP-70Ab transits the evolved primary of a G4+K3 binary
Anderson, D. R.; Collier Cameron, A.; Delrez, Laetitia et al., in Monthly Notices of the Royal Astronomical Society (2014), 445(2)
We report the discovery of the transiting exoplanets WASP-69b, WASP-70Ab and WASP-84b, each of which orbits a bright star (V ~ 10). WASP-69b is a bloated Saturn-mass planet (0.26 M_Jup, 1.06 R_Jup) in a 3.868-d period around an active, ~1-Gyr, mid-K dwarf. ROSAT detected X-rays 60 ± 27 arcsec from WASP-69. If the star is the source, then the planet could be undergoing mass-loss at a rate of ~10^12 g s^-1. This is one to two orders of magnitude higher than the evaporation rate estimated for HD 209458b and HD 189733b, both of which have exhibited anomalously large Lyman alpha absorption during transit. WASP-70Ab is a sub-Jupiter-mass planet (0.59 M_Jup, 1.16 R_Jup) in a 3.713-d orbit around the primary of a spatially resolved, 9-10-Gyr, G4+K3 binary, with a separation of 3.3 arcsec (>=800 au). WASP-84b is a sub-Jupiter-mass planet (0.69 M_Jup, 0.94 R_Jup) in an 8.523-d orbit around an active, ~1-Gyr, early-K dwarf. Of the transiting planets discovered from the ground to date, WASP-84b has the third-longest period. For the active stars WASP-69 and WASP-84, we pre-whitened the radial velocities using a low-order harmonic series. We found that this reduced the residual scatter more than did the oft-used method of pre-whitening with a fit between residual radial velocity and bisector span. The system parameters were essentially unaffected by pre-whitening.

A global analysis of Spitzer and new HARPS data confirms the loneliness and metal-richness of GJ 436 b
Lanotte, Audrey; Gillon, Michaël; Demory, B.-O. et al., in Astronomy and Astrophysics (2014), 572
Context. GJ 436b is one of the few transiting warm Neptunes for which a detailed characterisation of the atmosphere is possible, whereas its non-negligible orbital eccentricity calls for further investigation. Independent analyses of several individual datasets obtained with Spitzer have led to contradicting results attributed to the different techniques used to treat the instrumental effects. Aims. We aim at investigating these previous controversial results and developing our knowledge of the system based on the full Spitzer photometry dataset combined with new Doppler measurements obtained with the HARPS spectrograph. We also want to search for additional planets. Methods.
We optimise aperture photometry techniques and the photometric deconvolution algorithm DECPHOT to improve the data reduction of the Spitzer photometry spanning wavelengths from 3-24 {\mu}m. Adding the high precision HARPS radial velocity data, we undertake a Bayesian global analysis of the system considering both instrumental and stellar effects on the flux variation. Results. We present a refined radius estimate of RP=4.10 +/- 0.16 R_Earth, mass MP=25.4 +/- 2.1 M_Earth and eccentricity e= 0.162 +/- 0.004 for GJ 436b. Our measured transit depths remain constant in time and wavelength, in disagreement with the results of previous studies. In addition, we find that the post-occultation flare-like structure at 3.6 {\mu}m that led to divergent results on the occultation depth measurement is spurious. We obtain occultation depths at 3.6, 5.8, and 8.0 {\mu}m that are shallower than in previous works, in particular at 3.6 {\mu}m. However, these depths still appear consistent with a metal-rich atmosphere depleted in methane and enhanced in CO/CO2, although perhaps less than previously thought. We find no evidence for a potential planetary companion, stellar activity, nor for a stellar spin-orbit misalignment. [ABRIDGED] [less ▲]Detailed reference viewed: 16 (1 ULg) The TRAPPIST comet survey in 2014Jehin, Emmanuel ; Opitom, Cyrielle ; Manfroid, Jean et alin Bulletin of the American Astronomical Society (2014, November 01), 46TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope) is a 60-cm robotic telescope that has been installed in June 2010 at the ESO La Silla Observatory [1]. Operated from Liège (Belgium) it is ... [more ▼]TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope) is a 60-cm robotic telescope that has been installed in June 2010 at the ESO La Silla Observatory [1]. Operated from Liège (Belgium) it is devoted to the detection and characterisation of exoplanets and to the study of comets and other small bodies in the Solar System. A set of narrowband cometary filters designed by the NASA for the Hale-Bopp Observing Campaign [2] is permanently mounted on the telescope along with classic Johnson-Cousins filters. We describe here the hardware and the goals of the project. For relatively bright comets (V < 12) we measure several times a week the gaseous production rates (using a Haser model) and the spatial distribution of several species among which OH, NH, CN, C2 and C3 as well as ions like CO+. The dust production rates (Afrho) and color of the dust aredetermined through four dust continuum bands from the UV to the red (UC, BC, GC, RC filters). We will present the dust and gas production rates of the brightest comets observed in 2014: C/2012 K1 (PANSTARRS), C/2014 E2 (Jacques), C/2013 A1 (Siding Springs) and C/2013 V5 (Oukaimeden). Each of these comets have been observed at least once a week for several weeks to several months. Light curves with respect to the heliocentric distance will be presented and discussed. [1] Jehin et al., The Messenger, 145, 2-6, 2011.[2] Farnham et al., Icarus, 147, 180-204, 2000. [less ▲]Detailed reference viewed: 17 (1 ULg) TRAPPIST monitoring of comet C/2013 A1 (Siding Spring)Opitom, Cyrielle ; Jehin, Emmanuel ; Manfroid, Jean et alin Bulletin of the American Astronomical Society (2014, November 01), 46C/2013 A1 (Siding Spring) is a long period comet discovered by Robert H McNaught at Siding Spring Observatory in Australia on January 3, 2013 at 7.2 au from the Sun. This comet will make a close encounter ... 
[more ▼]C/2013 A1 (Siding Spring) is a long period comet discovered by Robert H McNaught at Siding Spring Observatory in Australia on January 3, 2013 at 7.2 au from the Sun. This comet will make a close encounter with Mars on October 19, 2014. At this occasion the comet will be extensively observed both from Earth and from several orbiters around Mars.On September 20, 2013 when the comet was around 5 au from the Sun, we started a monitoring with the TRAPPIST robotic telescope installed at La Silla observatory [1]. A set of narrowband cometary filters designed by the NASA for the Hale-Bopp Observing Campaign [2] is permanently mounted on the telescope along with classic Johnson-Cousins B, V, Rc, and Ic filters.We observed the comet continuously at least once a week from September 20, 2013 to April 6, 2014 with broad band filters. We then recovered the comet on May 20. At this time we could detect the gas and started the observations with narrow band filters until early November, covering the close approach to Mars and the perihelion passage.We present here our first results about comet Siding Springs. From the images in the broad band filters and in the dust continuum filters we derived A(θ)fρ values [3] and studied the evolution of the comet activity with the heliocentric distance from September 20, 2013 to early November 2014. We could also detect gas since May 20, 2014. We thus derived gas production rates using a Haser model [4]. We present the evolution of gas production rates and gas production rates ratios with the heliocentric distance.Finally, we discuss the dust and gas coma morphology. [less ▲]Detailed reference viewed: 18 (3 ULg) Planets and Stellar Activity: Hide and Seek in the CoRoT-7 systemHaywood, R. D.; Cameron, A. C.; Queloz, D. et alin Monthly Notices of the Royal Astronomical Society (2014), 443(3), 2517-2531Since the discovery of the transiting Super-Earth CoRoT-7b, several investigations have been made of the number and precise masses of planets present in the system, but they all yield different results ... [more ▼]Since the discovery of the transiting Super-Earth CoRoT-7b, several investigations have been made of the number and precise masses of planets present in the system, but they all yield different results, owing to the star's high level of activity. Radial velocity (RV) variations induced by stellar activity therefore need to be modelled and removed to allow a reliable detection of all planets in the system. We re-observed CoRoT-7 in January 2012 with both HARPS and the CoRoT satellite, so that we now have the benefit of simultaneous RV and photometric data. We fitted the off-transit variations in the CoRoT lightcurve using a harmonic decomposition similar to that implemented in Queloz et al. (2009). This fit was then used to model the stellar RV contribution, according to the methods described by Aigrain et al. (2011). This model was incorporated into a Monte Carlo Markov Chain in order to make a precise determination of the orbits of CoRoT-7b and CoRoT-7c. We also assess the evidence for the presence of one or two additional planetary companions. [less ▲]Detailed reference viewed: 10 (0 ULg) WASP-117b: a 10-day-period Saturn in an eccentric and misaligned orbitLendl, Monika ; Triaud, A. H. M. J.; Anderson, D. R. et alin Astronomy and Astrophysics (2014), 568We report the discovery of WASP-117b, the first planet with a period beyond 10 days found by the WASP survey. 
The planet has a mass of M_p = 0.2755 (+/-0.0090) M_jup, a radius of R_p = 1.021 (-0.065 +0 ... [more ▼]We report the discovery of WASP-117b, the first planet with a period beyond 10 days found by the WASP survey. The planet has a mass of M_p = 0.2755 (+/-0.0090) M_jup, a radius of R_p = 1.021 (-0.065 +0.076) R_jup and is in an eccentric (e = 0.302 +/-0.023), 10.02165 +/- 0.00055 d orbit around a main-sequence F9 star. The host star's brightness (V=10.15 mag) makes WASP-117 a good target for follow-up observations, and with a planetary equilibrium temperature of T_eq = 1024 (-26 +30) K and a low planetary density (rho_p = 0.259 (-0.048 +0.054) rho_jup) it is one of the best targets for transmission spectroscopy among planets with periods around 10 days. From a measurement of the Rossiter-McLaughlin effect, we infer a projected angle between the planetary orbit and stellar spin axes of beta = -44 (+/-11) deg, and we further derive an orbital obliquity of psi = 69.5 (+3.6 -3.1) deg. Owing to the large orbital separation, tidal forces causing orbital circularization and realignment of the planetary orbit with the stellar plane are weak, having had little impact on the planetary orbit over the system lifetime. WASP-117b joins a small sample of transiting giant planets with well characterized orbits at periods above ~8 days. [less ▲]Detailed reference viewed: 7 (1 ULg) WASP-104b and WASP-106b: two transiting hot Jupiters in 1.75-day and 9.3-day orbitsSmith, A. M. S.; Anderson, D. R.; Armstrong, D. J. et alE-print/Working paper (2014)We report the discovery from the WASP survey of two exoplanetary systems, each consisting of a Jupiter-sized planet transiting an 11th magnitude (V) main-sequence star. WASP-104b orbits its star in 1.75 d ... [more ▼]We report the discovery from the WASP survey of two exoplanetary systems, each consisting of a Jupiter-sized planet transiting an 11th magnitude (V) main-sequence star. WASP-104b orbits its star in 1.75 d, whereas WASP-106b has the fourth-longest orbital period of any planet discovered by means of transits observed from the ground, orbiting every 9.29 d. Each planet is more massive than Jupiter (WASP-104b has a mass of 1.27±0.05 MJup, while WASP-106b has a mass of 1.93±0.08 MJup). Both planets are just slightly larger than Jupiter, with radii of 1.14±0.04 and 1.09±0.04 RJup for WASP-104 and WASP-106 respectively. No significant orbital eccentricity is detected in either system, and while this is not surprising in the case of the short-period WASP-104b, it is interesting in the case of WASP-106b, because many otherwise similar planets are known to have eccentric orbits. [less ▲]Detailed reference viewed: 9 (1 ULg) Colour-magnitude diagrams of transiting Exoplanets - II. A larger sample from photometric distancesTriaud, Amaury H. M. J.; Lanotte, Audrey ; Smalley, Barry et alin Monthly Notices of the Royal Astronomical Society (2014), 444(1), 711-728CColour-magnitude diagrams form a traditional way of presenting luminous objects in the Universe and compare them to each other. Here, we estimate the photometric distance of 44 transiting exoplanetary ... [more ▼]CColour-magnitude diagrams form a traditional way of presenting luminous objects in the Universe and compare them to each other. Here, we estimate the photometric distance of 44 transiting exoplanetary systems. Parallaxes for seven systems confirm our methodology. 
Combining those measurements with fluxes obtained while planets were occulted by their host stars, we compose colour-magnitude diagrams in the near and mid-infrared. When possible, planets are plotted alongside very low mass stars and field brown dwarfs, who often share similar sizes and equilibrium temperatures. They offer a natural, empirical, comparison sample. We also include directly imaged exoplanets and the expected loci of pure blackbodies. Irradiated planets do not match blackbodies; their emission spectra are not featureless. For a given luminosity, hot Jupiters' daysides show a larger variety in colour than brown dwarfs do and display an increasing diversity in colour with decreasing intrinsic luminosity. The presence of an extra absorbent within the 4.5 μm band would reconcile outlying hot Jupiters with ultra-cool dwarfs' atmospheres. Measuring the emission of gas giants cooler than 1000 K would disentangle whether planets' atmospheres behave more similarly to brown dwarfs' atmospheres than to blackbodies, whether they are akin to the young directly imaged planets, or if irradiated gas giants form their own sequence. [less ▲]Detailed reference viewed: 17 (0 ULg) Transiting exoplanets from the CoRoT space mission. XXVI. CoRoT-24: a transiting multiplanet systemAlonso, R.; Moutou, C.; Endl, M. et alin Astronomy and Astrophysics (2014), 567We present the discovery of a candidate multiply transiting system, the first one found in the CoRoT mission. Two transit-like features with periods of 5.11 and 11.76 d are detected in the CoRoT light ... [more ▼]We present the discovery of a candidate multiply transiting system, the first one found in the CoRoT mission. Two transit-like features with periods of 5.11 and 11.76 d are detected in the CoRoT light curve around a main sequence K1V star of r = 15.1. If the features are due to transiting planets around the same star, these would correspond to objects of 3.7 ± 0.4 and 5.0 ± 0.5 R[SUB]⊕[/SUB] , respectively. Several radial velocities serve to provide an upper limit of 5.7 M[SUB]⊕[/SUB] for the 5.11 d signal and to tentatively measure a mass of 28[SUP]+11[/SUP][SUB]-11[/SUB] M[SUB]⊕[/SUB] for the object transiting with a 11.76 d period. These measurements imply low density objects, with a significant gaseous envelope. The detailed analysis of the photometric and spectroscopic data serves to estimate the probability that the observations are caused by transiting Neptune-sized planets as much as over 26 times higher than a blend scenario involving only one transiting planet and as much as over 900 times higher than a scenario involving two blends and no planets. The radial velocities show a long-term modulation that might be attributed to a 1.5 M[SUB]Jup[/SUB] planet orbiting at 1.8 AU from the host, but more data are required to determine the precise orbital parameters of this companion. The CoRoT space mission, launched on 27 December 2006, has been developed and is operated by the CNES, with the contribution of Austria, Belgium, Brazil, ESA (RSSD and Science Program), Germany, and Spain. Some of the observations were made with the HARPS spectrograph at ESO La Silla Observatory (184.C-0639) and with the HIRES spectrograph at the Keck Telescope (N035Hr, N143Hr 260 and N095Hr). Partly based on observations obtained at ESO Paranal Observatory, Chile (086.C-0235(A) and B).Tables 2-4 and Fig. 12 are available in electronic form at http://www.aanda.org [less ▲]Detailed reference viewed: 7 (0 ULg) HD 97658 and its super-Earth. 
Spitzer & MOST transit analysis and modeling of the host starVan Grootel, Valérie ; Gillon, Michaël ; Valencia, D. et alConference (2014, July)Super-Earths transiting nearby bright stars are key objects that simultaneously allow for accurate measurements of both their mass and radius, providing essential constraints on their internal composition ... [more ▼]Super-Earths transiting nearby bright stars are key objects that simultaneously allow for accurate measurements of both their mass and radius, providing essential constraints on their internal composition. We present here the confirmation, based on Spitzer transit observations, that the super-Earth HD 97658 b transits its host star. HD 97658 is a low-mass ($M_*=0.77\pm0.05\,M_{\odot}$) K1 dwarf, as determined from the Hipparcos parallax and stellar evolution modeling. To constrain the planet parameters, we carry out Bayesian global analyses of Keck-HIRES radial velocities, and MOST and Spitzer photometry. HD 97658 b is a massive ($M_P=7.55^{+0.83}_{-0.79} M_{\oplus}$) and large ($R_{P} = 2.247^{+0.098}_{-0.095} R_{\oplus}$ at 4.5 $\mu$m) super-Earth. We investigate the possible internal compositions for HD 97658 b. Our results indicate a large rocky component, by at least 60% by mass, and very little H-He components, at most 2% by mass. We also discuss how future asteroseismic observations can improve the knowledge of the HD 97658 system, in particular by constraining its age. Orbiting a bright host star, HD 97658 b will be a key target for coming space missions TESS, CHEOPS, PLATO, and also JWST, to characterize thoroughly its structure and atmosphere. [less ▲]Detailed reference viewed: 8 (1 ULg) Ground-based transmission spectrum of WASP-80 b, a gas giant transiting an M-dwarfDelrez, Laetitia ; Gillon, Michaël ; Lendl, Monika et alPoster (2014, June 09)We present here some results from our ground-based multi-object spectroscopy program aiming to measure the transmission spectrum of the transiting hot Jupiter WASP-80b using the VLT/FORS2 instrument ... [more ▼]We present here some results from our ground-based multi-object spectroscopy program aiming to measure the transmission spectrum of the transiting hot Jupiter WASP-80b using the VLT/FORS2 instrument. WASP-80b is a unique object as it is the only known specimen of gas giant orbiting an M-dwarf that is bright enough for high SNR follow-up measurements. Due to the nature of its host star, this hot Jupiter is actually more warm' than hot', with an estimated equilibrium temperature of only 800K. It is thus a prime target to improve our understanding of giant exoplanet atmospheres in this temperature range. [less ▲]Detailed reference viewed: 5 (0 ULg) Extremely Organic-rich Coma of Comet C/2010 G2 (Hill) during its Outburst in 2012Kawakita, Hideyo; Dello Russo, Neil; Vervack, Ron et alin Astrophysical Journal (2014), 788We performed high-dispersion near-infrared spectroscopic observations of comet C/2010 G2 (Hill) at 2.5 AU from the Sun using NIRSPEC (R ≈ 25,000) at the Keck II Telescope on UT 2012 January 9 and 10 ... [more ▼]We performed high-dispersion near-infrared spectroscopic observations of comet C/2010 G2 (Hill) at 2.5 AU from the Sun using NIRSPEC (R ≈ 25,000) at the Keck II Telescope on UT 2012 January 9 and 10, about a week after an outburst had occurred. Over the two nights of our observations, prominent emission lines of CH[SUB]4[/SUB] and C[SUB]2[/SUB]H[SUB]6[/SUB], along with weaker emission lines of H[SUB]2[/SUB]O, HCN, CH[SUB]3[/SUB]OH, and CO were detected. 
The gas production rate of CO was comparable to that of H[SUB]2[/SUB]O during the outburst. The mixing ratios of CO, HCN, CH[SUB]4[/SUB], C[SUB]2[/SUB]H[SUB]6[/SUB], and CH[SUB]3[/SUB]OH with respect to H[SUB]2[/SUB]O were higher than those for normal comets by a factor of five or more. The enrichment of CO and CH[SUB]4[/SUB] in comet Hill suggests that the sublimation of these hypervolatiles sustained the outburst of the comet. Some fraction of water in the inner coma might exist as icy grains that were likely ejected from nucleus by the sublimation of hypervolatiles. Mixing ratios of volatiles in comet Hill are indicative of the interstellar heritage without significant alteration in the solar nebula. [less ▲]Detailed reference viewed: 8 (0 ULg) The binary near-Earth asteroid (175706) 1996 FG3 - An observational constraint on its orbital stabilityScheirich, P.; Pravec, P.; Jacobson, S. A. et alE-print/Working paper (2014)Using our photometric observations taken between April 1996 and January 2013 and other published data, we derive properties of the binary near-Earth asteroid (175706) 1996 FG3 including new measurements ... [more ▼]Using our photometric observations taken between April 1996 and January 2013 and other published data, we derive properties of the binary near-Earth asteroid (175706) 1996 FG3 including new measurements constraining evolution of the mutual orbit with potential consequences for the entire binary asteroid population. We also refined previously determined values of parameters of both components, making 1996 FG3 one of the most well understood binary asteroid systems. We determined the orbital vector with a substantially greater accuracy than before and we also placed constraints on a stability of the orbit. Specifically, the ecliptic longitude and latitude of the orbital pole are 266{\deg} and -83{\deg}, respectively, with the mean radius of the uncertainty area of 4{\deg}, and the orbital period is 16.1508 +\- 0.0002 h (all uncertainties correspond to 3sigma). We looked for a quadratic drift of the mean anomaly of the satellite and obtained a value of 0.04 +\- 0.20 deg/yr^2, i.e., consistent with zero. The drift is substantially lower than predicted by the pure binary YORP (BYORP) theory of McMahon and Scheeres (McMahon, J., Scheeres, D. [2010]. Icarus 209, 494-509) and it is consistent with the theory of an equilibrium between BYORP and tidal torques for synchronous binary asteroids as proposed by Jacobson and Scheeres (Jacobson, S.A., Scheeres, D. [2011]. ApJ Letters, 736, L19). Based on the assumption of equilibrium, we derived a ratio of the quality factor and tidal Love number of Q/k = 2.4 x 10^5 uncertain by a factor of five. We also derived a product of the rigidity and quality factor of mu Q = 1.3 x 10^7 Pa using the theory that assumes an elastic response of the asteroid material to the tidal forces. This very low value indicates that the primary of 1996 FG3 is a 'rubble pile', and it also calls for a re-thinking of the tidal energy dissipation in close asteroid binary systems. [less ▲]Detailed reference viewed: 9 (2 ULg) A window on exoplanet dynamical histories: Rossiter-McLaughlin observations of WASP-13b and WASP-32bBrothwell, R.D.; Watson, C.A.; Hébrard, G. et alin Monthly Notices of the Royal Astronomical Society (2014), 440(4), 3392-3401We present Rossiter-McLaughlin observations of WASP-13b and WASP-32b and determine the sky-projected angle between the normal of the planetary orbit and the stellar rotation axis (lambda). WASP-13b and ... 
[more ▼]We present Rossiter-McLaughlin observations of WASP-13b and WASP-32b and determine the sky-projected angle between the normal of the planetary orbit and the stellar rotation axis (lambda). WASP-13b and WASP-32b both have prograde orbits and are consistent with alignment with measured sky-projected angles of lambda =8°^{+13}_{-12} and lambda =-2°^{+17}_{-19}, respectively. Both WASP-13 and WASP-32 have Teff < 6250 K, and therefore, these systems support the general trend that aligned planetary systems are preferentially found orbiting cool host stars. A Lomb-Scargle periodogram analysis was carried out on archival SuperWASP data for both systems. A statistically significant stellar rotation period detection (above 99.9 per cent confidence) was identified for the WASP-32 system with Prot = 11.6 ± 1.0 days. This rotation period is in agreement with the predicted stellar rotation period calculated from the stellar radius, R*, and vsin i if a stellar inclination of i* = 90° is assumed. With the determined rotation period, the true 3D angle between the stellar rotation axis and the planetary orbit, psi, was found to be psi = 11° ± 14°. We conclude with a discussion on the alignment of systems around cool host stars with Teff < 6150 K by calculating the tidal dissipation time-scale. We find that systems with short tidal dissipation time-scales are preferentially aligned and systems with long tidal dissipation time-scales have a broad range of obliquities. [less ▲]Detailed reference viewed: 9 (1 ULg) TRAPPIST monitoring of comets C/2012 S1 (Ison) and C/2013 R1 (Lovejoy)Opitom, Cyrielle ; Jehin, Emmanuel ; Manfroid, Jean et alConference (2014, June)Detailed reference viewed: 18 (4 ULg) Transiting hot Jupiters from WASP-South, Euler and TRAPPIST: WASP-95b to WASP-101bHellier, Coel; Anderson, D. R.; Collier Cameron, A. et alin Monthly Notices of the Royal Astronomical Society (2014)We report the discovery of the transiting exoplanets WASP-95b, WASP-96b, WASP-97b, WASP-98b, WASP-99b, WASP-100b and WASP-101b. All are hot Jupiters with orbital periods in the range 2.1-5.7 d, masses of ... [more ▼]We report the discovery of the transiting exoplanets WASP-95b, WASP-96b, WASP-97b, WASP-98b, WASP-99b, WASP-100b and WASP-101b. All are hot Jupiters with orbital periods in the range 2.1-5.7 d, masses of 0.5-2.8 MJup and radii of 1.1-1.4 RJup. The orbits of all the planets are compatible with zero eccentricity. WASP-99b produces the shallowest transit yet found by WASP-South, at 0.4 per cent. The host stars are of spectral type F2-G8. Five have metallicities of [Fe/H] from -0.03 to +0.23, while WASP-98 has a metallicity of -0.60, exceptionally low for a star with a transiting exoplanet. Five of the host stars are brighter than V = 10.8, which significantly extends the number of bright transiting systems available for follow-up studies. WASP-95 shows a possible rotational modulation at a period of 20.7 d. We discuss the completeness of WASP survey techniques by comparing to the HATnet project. [less ▲]Detailed reference viewed: 10 (0 ULg)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7947299480438232, "perplexity": 6007.59461005729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115858727.26/warc/CC-MAIN-20150124161058-00218-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/392113/two-equations-with-same-label/392116
|
# Two equations with same label
I have to translate a dissertation for a researcher. There is a problem I couldn't solve. Namely, there is an introduction chapter where the author has labelled an equation as (A). Then in chapter one there is again an equation labelled (A). Those equations are not the same, in the sense that the first one has parameter z while the other has parameter e^z. So how can I give two different equations the same label? For example
$e^{ix}+1=0$ (A)
and
$e^{iy}=-1$ (A)
• Welcome to TeX SX! Virtually they're the same. You might add the chapter number in front of the second equation, for instance. – Bernard Sep 19 '17 at 9:16
If you are using the amsmath package, you can specify the displayed label using the \tag macro:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{equation}
\tag{A} e^{ix}+1=0
\end{equation}
\begin{equation}
\tag{A} e^{iy}+1=0
\end{equation}
\end{document}
Let's look at a possible scenario.
The text has an unnumbered first chapter, where equations are identified by letters, while the body of the book has equations numbered like (chapter.equation).
\documentclass[oneside]{book}
\usepackage{amsmath}
% this and 'oneside' is just for making small pictures
\usepackage[a6paper]{geometry}
\numberwithin{equation}{chapter}
\begin{document}
\frontmatter
\renewcommand\theequation{\Alph{equation}}
\chapter{Introduction}
Some text
\begin{equation}
\label{eq:Euler} e^{ix}+1=0
\end{equation}
some text
\mainmatter
\renewcommand\theequation{\thechapter.\arabic{equation}}
\chapter{Title}
Some text followed by an equation
\begin{equation}
\label{eq:easy} 1+1=2
\end{equation}
and here we use an equivalent formulation of an equation in the introduction
\begin{equation}
\tag{\ref{eq:Euler}} e^{iy}=-1
\end{equation}
Some other text
\end{document}
Using \ref in the recalled equation makes this independent of the actual number used in the introduction.
The author might not have used \renewcommand\theequation, assigning “A” manually with \tag instead; the result would be the same:
% in the introduction
\begin{equation}
\label{eq:Euler}\tag{A} e^{ix}+1=0
\end{equation}
% in the body
\begin{equation}
\tag{\ref{eq:Euler}} e^{iy}=-1
\end{equation}
If you want to reset the equation counter, you could use:
\setcounter{equation}{0}
right before the second equation (A).
If you want to use the equation number A just for that equation and otherwise keep the normal numbering, you could use:
\begin{equation}
\tag{A} e^{iy}=-1
\end{equation}
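As an aside (a minimal sketch of my own, not from the thread): the counter reset combines naturally with the letter-style numbering from the second answer, e.g.

\documentclass{article}
\usepackage{amsmath}
\renewcommand\theequation{\Alph{equation}}
\begin{document}
\begin{equation}
e^{ix}+1=0 % prints (A)
\end{equation}
\setcounter{equation}{0}% reset, so the next equation is again (A)
\begin{equation}
e^{iy}=-1 % prints (A)
\end{equation}
\end{document}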
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 8, "x-ck12": 0, "texerror": 0, "math_score": 0.8966493606567383, "perplexity": 1174.8566013533311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573284.48/warc/CC-MAIN-20190918110932-20190918132932-00298.warc.gz"}
|
http://math.stackexchange.com/questions/335995/how-can-i-prove-big-oh-relation-between-log-2-log-2-n-and-sqrt-log-2-n
|
How can I prove big-oh relation between $\log_2(\log_2 n)$ and $\sqrt{\log_2 n}$
How can I prove big-O relation between $f=\log_2(\log_2 n)$ and $g=\sqrt{\log_2 n}\,$?
I want to find the constants $c, N$ such that $g(x) \leq c\,f(x)$ for all $x>N$.
-
First you can find when $\log_2\log_2 N=\sqrt{\log_2 N}$. What happens if you take $x>N$ after that? – Ian Coley Mar 20 '13 at 16:36
A useful result, if $\lim_{n\to \infty} \frac{g(n)}{f(n)}=a$, then $g=O(f)$. – Mhenni Benghorbal Mar 20 '13 at 16:49
As an additional comment, you need only check the relation between $\log_2x$ and $\sqrt x$. You may solve for $c,N'$ in this case and let $N=2^{N'}$. – Ian Coley Mar 20 '13 at 16:53
You can't. Did you mean $f(x) \le c g(x)$? – Aryabhata Apr 3 '13 at 9:06
@FrankMcGovern I fail to see the point of solving $\log_2x=\sqrt{x}$. – Did Apr 3 '13 at 9:26
The derivative of the natural logarithm is less than $1$ on $(1,+\infty)$, hence $\ln x\leqslant x-1$ for $x\geqslant1$. Since $\log_2 x=\ln x/\ln 2$ and $1/\ln 2\leqslant 2$, this implies $\log_2x\leqslant2x$ for $x\geqslant1$. Since $\log_2x=2\log_2\sqrt{x}$, applying the previous bound to $\sqrt{x}$ yields $\log_2x\leqslant4\sqrt{x}$ for $x\geqslant1$.
Applying this to $x=\log_2n$, one sees that $f(n)\leqslant4g(n)$ for every $n\geqslant2$.
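In the notation of the question (with the inequality in the direction $f(x)\leqslant c\,g(x)$, as Aryabhata's comment suggests), this exhibits explicit constants: $\log_2\log_2 n\leqslant 4\sqrt{\log_2 n}$ for all $n\geqslant 2$ (so that $\log_2 n\geqslant 1$), hence $c=4$ and $N=2$ witness $f=O(g)$.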
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911825060844421, "perplexity": 288.3935738929651}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097512.42/warc/CC-MAIN-20150627031817-00107-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://undergroundmathematics.org/hyperbolic-functions/square-wheels/solution
|
Under construction — resources under construction may not yet have been fully reviewed.
Food for thought
## Solution
(This resource is still in draft form.)
This applet shows a square with side length $2$ rolling over an upside-down catenary with equation $y=-\cosh x$. When the square is horizontal, the centre of its base touches the vertex of the catenary.
Move the slider to roll the square.
Brief solutions (require more detail):
• What is the locus of the centre of the square?
The locus of the centre of the square is a straight line along the $x$-axis.
• How far can the square roll with the same side still touching the catenary?
The square can rotate until the vertex of the square touches the catenary. At this point, the arc length of the catenary from the vertex of the catenary to this point equals half of the square’s side length, which is $1$. The arc length from $(0,-1)$ to $(x, -\cosh x)$ is $\sinh x$ (a short derivation is given below), so this occurs when $\sinh x=1$, or $x=\arsinh 1$. Using the formula for $\arsinh$ or solving $\sinh x=1$ directly gives an alternative expression for this: $x=\ln(1+\sqrt{2})$.
Furthermore, at this point, the gradient of the catenary is $y'=-\sinh x=-1$. This means that the square has rotated by $45^\circ$, and thus the centre of the square is also at $x=\arsinh 1$.
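Derivation of the arc length used above (the only fact needed is $\cosh^2 t-\sinh^2 t=1$): with $y=-\cosh t$ we have $y'=-\sinh t$, so the arc length from $(0,-1)$ to $(x,-\cosh x)$ is \[\int_0^x \sqrt{1+y'(t)^2}\,\mathrm{d}t=\int_0^x \sqrt{1+\sinh^2 t}\,\mathrm{d}t=\int_0^x \cosh t\,\mathrm{d}t=\sinh x.\]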
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8633239269256592, "perplexity": 322.437170070248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00482.warc.gz"}
|
https://csgillespie.wordpress.com/tag/inverse-cdf/
|
# Why?
## November 28, 2010
### Random variable generation (Pt 1 of 3)
Filed under: AMCMC, R — csgillespie @ 7:35 pm
As I mentioned in a recent post, I’ve just received a copy of Advanced Markov Chain Monte Carlo Methods. Chapter 1.4 in the book (very quickly) covers random variable generation.
## Inverse CDF Method
A standard algorithm for generating random numbers is the inverse cdf method. The continuous version of the algorithm is as follows:
1. Generate a uniform random variable $U$
2. Compute and return $X = F^{-1}(U)$
where $F^{-1}(\cdot)$ is the inverse of the CDF. Well known examples of this method are the exponential distribution and the Box-Muller transform.
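For example, the exponential distribution with rate $\lambda$ has CDF $F(x)=1-e^{-\lambda x}$, whose inverse is $F^{-1}(u)=-\log(1-u)/\lambda$. A minimal R sketch (the function name myRExp is mine, not from the original post):

myRExp = function(lambda){
  u = runif(1)                 # U ~ Uniform(0,1)
  return(-log(1 - u)/lambda)   # F^{-1}(U)
}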
## Example: Logistic distribution
I teach this algorithm in one of my classes and I’m always on the look-out for new examples. Something that escaped my notice is that it is easy to generate RN’s using this technique from the Logistic distribution. This distribution has CDF
$\displaystyle F(x; \mu, s) = \frac{1}{1 + \exp(-(x-\mu)/s)}$
and so we can generate a random number from the logistic distribution using the following formula:
$\displaystyle X = \mu + s \log\left(\frac{U}{1-U}\right)$
Which is easily converted to R code:
myRLogistic = function(mu, s){
  u = runif(1)                      # U ~ Uniform(0,1)
  return(mu + s*log(u/(1 - u)))    # F^{-1}(U) for the logistic distribution
}
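As a quick sanity check (my addition; rlogis is R's built-in logistic sampler):

set.seed(42)
x = replicate(10000, myRLogistic(0, 1))     # inverse-CDF samples
y = rlogis(10000, location = 0, scale = 1)  # reference samples
qqplot(x, y)  # points should lie close to the diagonal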
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.843347430229187, "perplexity": 985.8097709324949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246652631.96/warc/CC-MAIN-20150417045732-00206-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://membran.at/Projekte_2014?q=node/125
|
# Nanofiltration as key technology for the separation of LA and AA
• Posted on: 12 June 2018
• By: mmiltner
Title: Nanofiltration as key technology for the separation of LA and AA
Publication Type: Journal Article
Year of Publication: 2012
Authors: Ecker J, Raab T., Harasek M
Journal: Journal of Membrane Science
Volume: 389
Pagination: 389-398
Keywords: Amino acids, Green Biorefinery, Lactic acid, Nanofiltration
Abstract: Nanofiltration as state-of-the-art technology was used for the separation of lactic acid (LA) and amino acids (AA) in a ‘Green Biorefinery’ pilot plant. For this process, the performances of six different nanofiltration membranes were compared by experiments in lab scale. In this work the focus was on the separation of the two products, LA and AA. Enhanced differences in the retentions were required to produce two purified process streams, LA enriched permeate and amino acid enriched retentate. In the reference experiment, performed with original solution from the ‘Green Biorefinery’ pilot plant, the retention values were about 60% for LA, and about 88% for AA; this hindered good performance in the separation of the main components. Process optimization with pH value variations and different diafiltration modes was investigated; one experiment was done with original solution, two tests dealt with varying pH values, two with different diafiltration rates. A pH variation from 3.9 (reference solution) down to 2.5 transferred the chemical structure of LA, which reduced the retention of the LA significantly from 67% to 42% for the membrane DL (Osmonics). Beside the separation, further attention was given to the flux behaviour. All screening scenarios were compared with a reference experiment done with original solution and standard process parameters as used in the plant itself to evaluate the efficiency trends shown in the tests. It was shown that a nanofiltration unit allowed separation of sufficient degree for further treatment technologies between AA and LA; a membrane screening for the optimization of this process ensured best performance in practice.
URL: https://www.sciencedirect.com/science/article/pii/S0376738811008118
DOI: 10.1016/j.memsci.2011.11.004
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.833128809928894, "perplexity": 4604.501386597358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484689.3/warc/CC-MAIN-20190218053920-20190218075920-00533.warc.gz"}
|
https://forum.dopdf.com/troubleshooting-f4/spacing-problem-with-tamil-unicode-character-%E0%AE%B5-t3097.html
|
## Spacing problem with Tamil UNICODE character வீ
Post here if you have problems installing or using doPDF.
umapathy
Posts: 3
Joined: Thu Mar 01, 2012 7:52 pm
I have come across a peculiar problem with spacing w.r.t. the Tamil Unicode character வீ (pronounced like "whee") when used in words like வீதி (pronounced like "wheethi", meaning road). When I use Office 2010 and its built-in feature I have not come across the problem, but if I use doPDF it causes inconvenience. I created a document in Tamil Unicode and wanted to convert it to PDF, which is where I came across this issue. I shall be thankful if this can be sorted out. (Since doPDF is the easy way to convert a 2-A4-sheet document to a single A4 page, which at least as of now cannot be done in Microsoft Office products.)
Claudiu (Softland)
Posts: 1506
Joined: Thu May 23, 2013 7:19 am
Hello,
Please send us the Tamil font you are using to our support team at [email protected] so we can install it and better troubleshoot the issue. We have managed to reproduce it locally using the word you mentioned, but we need the exact font installation to know what to include in the application as embedded.
Thank you.
umapathy
Posts: 3
Joined: Thu Mar 01, 2012 7:52 pm
Dear Softland,
I have sent you the files. I also got a response back from you. In any case, if you need any other information let me know. I shall be thankful if this could be fixed.
Claudiu (Softland)
Posts: 1506
Joined: Thu May 23, 2013 7:19 am
Hello,
This has been fixed in our latest doPDF build. You can download the application from the homepage.
Thank you.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597137928009033, "perplexity": 2567.681285924197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902745.75/warc/CC-MAIN-20201029040021-20201029070021-00313.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/nature-roots-roots-quadratic-equation_5052
|
Solution - Nature of Roots
Concept: Nature of Roots
Question
Find the value of p for which the quadratic equation (2p + 1)x² − (7p + 2)x + (7p − 3) = 0 has equal roots. Also find these roots.
Solution
You need to be logged in to view the solution on the site.
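For completeness (my own sketch of the standard approach, not the site's gated solution): equal roots require a zero discriminant, so

(7p + 2)² − 4(2p + 1)(7p − 3) = 0
49p² + 28p + 4 − (56p² + 4p − 12) = 0
7p² − 24p − 16 = 0, i.e. (p − 4)(7p + 4) = 0,

so p = 4 or p = −4/7. The equal root is x = (7p + 2)/(2(2p + 1)): x = 5/3 for p = 4, and x = 7 for p = −4/7.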
Is there an error in this question or solution?
Similar questions
Find the nature of the roots of the following quadratic equations. If the real roots exist, find them;
3x² − 4√3x + 4 = 0
Solve for x :
2x² + 6√3x − 60 = 0
Is it possible to design a rectangular park of perimeter 80 m and area 400 m²? If so, find its length and breadth.
If x = 1/2 is a solution of the quadratic equation 3x² + 2kx − 3 = 0, find the value of k.
Without solving, examine the nature of roots of the equation 2x² + 2x + 3 = 0.
Reference Material
Solution for concept: Nature of Roots. For the course 8th-10th CBSE
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5271955132484436, "perplexity": 1696.486141311872}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823153.58/warc/CC-MAIN-20171018214541-20171018234541-00339.warc.gz"}
|
https://testbook.com/blog/verbal-reasoning-quiz-1-for-ssc-and-railways-exams/
|
# Verbal Reasoning Quiz 1 for SSC & Railways Exams
If you are preparing for Government Recruitment or Entrance exams, you will likely need to solve a section on Reasoning. Verbal Reasoning Quiz 1 for SSC and Railways Exams will help you learn concepts on important topics in Logical Reasoning – Verbal Reasoning. This Verbal Reasoning Quiz 1 is important for exams such as SSC CGL, CHSL, Stenographer, Railways RRB NTPC.
## Verbal Reasoning Quiz 1 for SSC and Railways Exams –
Que. 1
In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: This video is intended to guide the layman to learn C programming in the absence of a teacher.
Assumptions:
I. A teacher of C-programming may not be available to everyone.
II. C-programming can be learnt with the help of videos.
1.
Only assumption I is implicit.
2.
Only assumption II is implicit.
3.
Neither assumption I nor II is implicit.
4.
Both assumptions I and II are implicit.
Que. 2
In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: More than the quality, good advertisements boost the sale of a product.
Assumptions:
1.
Only assumption I is implicit.
2.
Only assumption II is implicit.
3.
Neither assumption I nor II is implicit.
4.
Both assumptions I and II are implicit.
Que. 3
Directions: In the following question, one statement is given followed by two assumptions, I and II. You have to consider the statements to be true, even if they seem to be at variance from commonly known facts. You are to decide which of the given assumptions can definitely be drawn from the given statements. Indicate your answer.
Statement: Regular reading of newspaper enhances one’s general knowledge.
Assumptions:
I. Newspaper contains a lot of general knowledge.
II. Enhancement of general knowledge enables success in life.
1.
Only I is implicit
2.
Only II is implicit
3.
Both I and II are implicit
4.
Neither I nor II is implicit
Que. 4
In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: The odd-even traffic system to fight increased air pollution has received mixed response from people.
Assumptions:
I. Air pollution has decreased due to odd-even system.
II. Every citizen has welcomed the odd-even system.
1.
Only assumption I is implicit.
2.
Only assumption II is implicit.
3.
Neither assumption I nor II is implicit.
4.
Both assumptions I and II are implicit
Que. 5
In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: Clean India campaign has evoked good response from all parts of the country.
Assumptions:
I. People are interested in the campaign.
II. India is a very clean country.
1.
Only assumption I is implicit.
2.
Only assumption II is implicit.
3.
Neither assumption I nor II is implicit.
4.
Both assumptions I and II are implicit
Que. 6
In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: The older generation of people prefer basic phones instead of touch screen phones.
Assumptions:
I. Basic phones are easy to operate.
II. Touch-screen phones are much available these days.
1.
Only assumption I is implicit.
2.
Only assumption II is implicit.
3.
Neither assumption I nor II is implicit.
4.
Both assumptions I and II are implicit
Que. 7
In the question given below is given a statement followed by two assumptions numbered I and II. An assumption is something taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: Many global companies started to invest in India after the economic liberalization undertaken by the Congress government in 1994.
Assumptions:
1. The economic liberalization allowed the investment of foreign companies which was restricted earlier
2. The Indian markets were extremely beneficial for these companies
1.
Only assumption I is implicit
2.
Only assumption II is implicit
3.
Neither assumption I nor II is implicit
4.
Both assumptions I and II are implicit
Que. 8
In the question given below is given a statement followed by two assumptions numbered I and II. An assumption is something taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: The applicant was told that he was appointed as a programmer with a probation period of one year and that his performance would be reviewed at the end of the period for confirmation.
Assumptions:
1. The performance of an individual was generally not known at the time of appointment offer.
2. Generally an individual tries to prove his worth during the probation period.
1.
Only assumption I is implicit
2.
Only assumption II is implicit
3.
Neither assumption I nor II is implicit
4.
Both assumptions I and II are implicit
Que. 9
In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: More commuters now travel by this route, but there is no public demand for more buses.
Assumptions:
I. The number of buses depends upon the number of passengers.
II. Usually people do not tolerate inconvenience.
1.
Only assumption I is implicit.
2.
Only assumption II is implicit.
3.
Neither assumption I nor II is implicit.
4.
Both assumptions I and II are implicit.
Que. 10
In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.
Statement: Detergents should be used to clean clothes.
Assumptions:
I. Detergents form more lather.
II. Detergents help to dislodge grease and dirt.
1.
Only assumption I is implicit.
2.
Only assumption II is implicit.
3.
Neither assumption I nor II is implicit.
4.
Both assumptions I and II are implicit.
Did you like this Verbal Reasoning Quiz 1 for SSC and Railways Exams? Let us know!
|
http://mathhelpforum.com/pre-calculus/147590-find-equation-parallel-perpendicular-lines.html
|
# Math Help - Find Equation of Parallel and Perpendicular Lines
1. ## Find Equation of Parallel and Perpendicular Lines
Find an equation of the following parallel and perpendicular lines.
A.) The line parallel to $x+3=0$ and passing through $(-6, -7)$
B.) The line perpendicular to $y-4=0$ passing through $(-1, 6)$
Is there a specific formula I can use to solve these?
2. Originally Posted by larry21
Find an equation of the following parallel and perpendicular lines.
A.) The line parallel to $x+3=0$ and passing through $(-6, -7)$
B.) The line perpendicular to $y-4=0$ passing through $(-1, 6)$
Is there a specific formula I can use to solve these?
The equation of a line passing through a point $(x_1, y_1)$ is given by:
$y-y_1 = m(x-x_1)$
where m is the slope.
Now, two lines are parallel if their slopes are equal.
and two lines are perpendicular if the product of their slopes is -1.
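For instance (a made-up example, not this thread's problem): the line through $(1,3)$ parallel to $y = 2x + 5$ keeps the slope $m = 2$, giving $y - 3 = 2(x - 1)$; the perpendicular line through the same point has slope $m = -\frac{1}{2}$ (since $2 \cdot (-\frac{1}{2}) = -1$), giving $y - 3 = -\frac{1}{2}(x - 1)$.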
3. Originally Posted by harish21
$y-y_1 = m(x-x_1)$
For reference, this equation is known as the point-slope equation.
4. 1) The required line is parallel to x+3=0 and passes through (-6,-7).
The equation of such a line will be of the form x=c. Since it passes through (-6,-7),
c=-6,
so x=-6, i.e.
x+6=0
[NOTE: the line x+k=0 is perpendicular to the x-axis, so any line parallel to it must also be perpendicular to the x-axis and of the form x+k=0]
2) A line perpendicular to y-4=0 will be of the form x=c.
Since it passes through (-1,6),
we have c=-1,
so x=-1, i.e.
x+1=0
|
https://www.physicsoverflow.org/35942/simplify-k-matrix
|
# Simplify K-matrix
+ 3 like - 0 dislike
337 views
2+1D Abelian topologically ordered states are believed to be described by multicomponent $U(1)$ Chern-Simons theories, with Lagrangian
$$\mathcal{L}=\frac{K_{IJ}}{4\pi}\epsilon^{\mu\nu\lambda}a_\mu^I\partial_\nu a_\lambda^J-\frac{1}{2\pi}t_I\epsilon^{\mu\nu\lambda}A_\mu\partial_\nu a_\lambda^I$$
where $K$ is a invertible symmetric matrix with integer entries, and $t$ is a vector with integer entries.
Different $K$-matrices differing by a matrix in $GL(N,\mathbb{Z})$ are equivalent, namely, if $K_1$ and $K_2$ satisfy
$$K_1=W^TK_2W$$
where $W$ is a matrix with integer entries and unit determinant, then $K_1$ and $K_2$ describe the same physics.
Sometimes it is useful to simplify a $K$-matrix by using an appropriate $W$ matrix, i.e. $K\rightarrow W^TKW$, so that the resulting new $K$ is block diagonal. I do not know any general procedure of doing this, and I appreciate if anyone can help.
If a general procedure is too hard, it is also helpful just to show how to find the appropriate $W$ that simplifies the following specific $K$-matrix:
$$K= \left( \begin{array}{cccc} 2&-1&0&0\\ -1&0&2&1\\ 0&2&0&0\\ 0&1&0&1 \end{array} \right)$$
edited Apr 26, 2016
Every matrix can be transformed by an equivalence transformation of the kind you state to bring it into diagonal form, e.g., by a change of basis to an orthogonal eigensystem of $K$. A corresponding change of the annihilation operators then simplifies the action without changing the physics.
So is there any general procedure of doing it?
Always if you don't insist that the new $K$ is integral, too. (I hadn't noticed the integrality constraint when I wrote my first comment.) Integer congruence transformation to diagonal or block diagonal form do not always exist, only if the lattice defined by $K$ splits into a direct sum of smaller lattices.
Could you please give a reference where the action appears, so that I can understand the origin of the integrality condition. Is $K$ known to be positive definite, or known to be indefinite?
The origin of the integrality is charge quantization, or more formally, the compactness of the gauge field. Being compact, we require the partition function of the Chern-Simons theory to be invariant under $a\rightarrow a+2\pi$, and it turns out that only if the matrix $K$ is integral will this be satisfied. There is no positivity condition on $K$ though. For references, Xiao-Gang Wen's book Quantum Field Theory of Many-Body Systems may be good. Thank you.
In the positive definite case, the problem of simplifying $K$ is the problem of finding a normal form for the Gram matrix of an integral lattice. This is a well-studied (though in high dimensions very difficult) problem in number theory. I suggest that you look into the book Sphere Packings, Lattices and Groups by Conway and Sloane. I'll write a proper answer after having looked more at the context of your question - this may take a while.
Many thanks!
+ 4 like - 0 dislike
Let $K$ be a symmetric $n\times n$ matrix with integer coefficients. The additive abelian group of integer vectors of size $n$ gets a lattice structure by defining the (not necessarily definite) integral inner product $(x,y):=x^TKy$. It is conventional to call the integer $(x,x)$ the norm of $x$ (rather than its square root, as in Hilbert spaces). A standard reference for lattices is the book Sphere Packings, Lattices and Groups by Conway and Sloane. (It covers the definite case only. For the indefinite case see, e.g., the book An introduction to the theory of numbers by Cassels.)
If $K$ is positive semidefinite, one has a Euclidean lattice in which all vectors have nonnegative integral norm $(x,x)$. The vectors of zero norm are just the integral null vectors of $K$; they form a subgroup that can be factored out, leaving a definite lattice of smaller dimensions where all nonzero points have positive norm.
In a definite lattice, there are only finitely many lattice points of a given norm, coming in antipodal pairs, which can be found by a complete enumeration procedure (typically an application of the LLL algorithm followed by Schnorr-Euchner search, or more advanced variations). The collection of all vectors of small norm defines a graph whose edges are labelled by the nonzero inner products. Lattice isomorphism (corresponding to equivalence of the $K$ under $GL(n,Z)$) can be tested efficiently by testing these labelled graphs for isomorphism, e.g., using the graph isomorphism package nauty. (Of course one first checks whether the determinant of $K$, which is an invariant, is the same.) This makes checking for decomposability a finite procedure (recursive in the dimension). It is not very practical in higher dimensions unless the lattice decomposes into a large number of lattices generated by vectors of norm 1 and 2. However, if some of these graphs are disconnected they suggest decompositions that can be used in a heuristic fashion.
In an indefinite lattice (i.e., when $K$ is indefinite) there are always vectors of norm zero that are not null vectors of $K$. The norm zero vectors no longer form a subgroup. Classification and isomorphism testing is instead done by working modulo various primes, giving genera. Again one has a finite procedure.
To solve the decomposition problem posed for given $K$, one shouldn't need to do a full classification of all lattices of dimension $n$. But I don't know a simpler systematic procedure that is guaranteed to work in general. In higher dimensions most lattices are indecomposable, though there are no simple criteria for indecomposability. The key to decomposition is to transform the basis in such a way that a subset of the basis vectors becomes orthogonal to the remaining ones, and to repeat this recursively as long as feasible.
This gives rise to the following heuristics that works well for the specific matrix given. One transforms the basis so that it contains points $x$ of absolutely small nonzero norm (reflected by corresponding diagonal entries) and subtracts integral multiples of $x$ from the other basis vectors in order to make the absolute values of the inner products as small as possible. [Finding these short vectors is often trivial by inspection, but if the original diagonal entries are large lattice reduction methods (of which LLL is the simplest) must be used to find them or to show their nonexistence.] This is repeated as long as the sum of absolute values of the off-diagonal entries decreases. If a diagonal entry is $\pm1$ one can make in this way all off-diagonal entries zero and obtains a 1-dimensional sublattice that decomposes the given lattice. (For absolutely larger diagonal entries there is no such guarantee, but the case of norm $\pm2$ is usually tractable, too, since one can use the structure theory of root lattices to handle the sublattice generated by norm 2 vectors.)
In the specific (indefinite) case given, the fourth unit vector $e^4$ has norm 1, and transforming the off-diagonals in the 4th column to zero produces the reduced matrix [2,-1,0; -1,-1,2; 0,2,0]. Now the second unit vector has norm -1, and doing the same with column 2 gives the reduced matrix [3,-2; -2,4]. This matrix is definite, and one can enumerate all vectors of norm $\le 4$ to check that it is indecomposable. One can still improve the basis a little bit, replacing [3,-2; -2,4] by [3,1; 1,3].
Collecting the transformations done, one finds a unimodular matrix that transforms $K$ into the direct sum of [3,-2; -2,4], [-1], and [1]; or [3,1; 1,3], [-1], and [1].
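For concreteness, one explicit unimodular basis change implementing the two reduction steps just described (one possible choice; the answer does not spell it out) is $W_2$ with columns $e^1-e^2+e^4$, $2e^2+e^3-2e^4$, $e^2-e^4$, $e^4$. A quick check in R:
K <- matrix(c(2,-1,0,0, -1,0,2,1, 0,2,0,0, 0,1,0,1), 4, 4, byrow=TRUE)
W2 <- matrix(c(1,0,0,0, -1,2,1,0, 0,1,0,0, 1,-2,-1,1), 4, 4, byrow=TRUE)  # columns are the new basis vectors
t(W2) %*% K %*% W2  # direct sum of [3,-2; -2,4], [-1] and [1]
det(W2)             # -1, hence unimodular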
answered Apr 26, 2016 by (12,790 points)
edited Apr 27, 2016
Thank you for your detailed answer. But it seems the simplified matrix you gave has a different determinant compared to the original one... Can you write down the $W$-matrix explicitly?
@Mr.Gentleman: Sorry, silly mistake. I corrected my calculation. The determinant is now always $-8$. An explicit $W$ is given in the answer by Meng if you apply a column permutation interchanging coordinates 2 and 3.
+ 3 like - 0 dislike
There is no way to do this in general. Here is some background http://www.maths.ed.ac.uk/~aar/papers/conslo.pdf
Interestingly, there is a classification of *indefinite* bilinear forms over Z.
answered Apr 23, 2016 by (1,875 points)
''No way'' is too much said. Every single dimension is decidable (with a finite number of equivalence classes). It just gets harder with the dimension.
Is that a theorem? Maybe eventually it gets undecidable.
Yes, it is a theorem. The absolute value of the determinant is an invariant. Using LLL reduction one can always find (for any fixed dimension and determinant) an explicit basis of bounded length. Then the number of possible Gram matrices in such a reduced basis is finite. Since one can decide lattice isomorphism, it follows that one can figure out the precise number of equivalence classes for each dimension and determinant.
Thanks!
Thank you for the comments. Because of my lack of background, I think I will have to learn these. However, at this moment I have a particular problem with $K=(2,-1,0,0;-1,0,2,1;0,2,0,0;0,1,0,1)$ (the comma separates elements in the same row and the semicolon separates different rows), can anyone help me block diagonalize it?
@Arnold Neumaier: updated, thank you!
+ 3 like - 0 dislike
Let $$W=\left( \begin{array}{cccc} 1 & 0 & 1 & 0 \\ -1 & 0 & 1 & 1 \\ 1 & -1 & 0 & -1 \\ -1 & 1 & 1 & 1 \\ \end{array} \right),$$
then
$$W^T K W=\left(\begin{array}{cccc} 3 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 3 & 0 \\ 0 & 0 & 0 & -1 \\ \end{array}\right)$$
W is found by trial and error.
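One can check this $W$ directly, e.g., in R:
K <- matrix(c(2,-1,0,0, -1,0,2,1, 0,2,0,0, 0,1,0,1), 4, 4, byrow=TRUE)
W <- matrix(c(1,0,1,0, -1,0,1,1, 1,-1,0,-1, -1,1,1,1), 4, 4, byrow=TRUE)
t(W) %*% K %*% W  # reproduces the block-diagonal form above
det(W)            # -1, so W is indeed in GL(4,Z)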
answered Apr 27, 2016 by (550 points)
How to try and err? There are infinitely many possibilities for $W$ and only a few work. So what was your heuristic to find $W$?
|
https://stops.r-forge.r-project.org/
|
This is the homepage of the Structure Optimized Proximity Scaling (STOPS) project. On this page you can find links to papers, talks, data and software related to STOPS. You can also find a tutorial for COPS and STOPS and for the MDS functions below.
### Talks:
| Title | Event | Date | Place |
| --- | --- | --- | --- |
| Structure Optimized Proximity Scaling (STOPS): A Framework for Hyperparameter Selection in Multidimensional Scaling | Psychoco 2017 | 09.02.2017-10.02.2017 | WU Vienna, Austria |
| COPS and STOPS: Cluster and/or Structure Optimized Proximity Scaling | Brown Bag Seminar, Institute for Statistics and Mathematics | 07.12.2016 | WU Vienna, Austria |
| The OPTICS Cordillera: Nonparametric Assessment of Clusteredness | Brown Bag Seminar, Institute for Statistics and Mathematics | 23.10.2016 | WU Vienna, Austria |
| COPS: Cluster Optimized Proximity Scaling | Psychoco 2015 | 12.02.2015-13.02.2015 | Amsterdam, The Netherlands |
| Scaling for Clusters with COPS: Cluster Optimized Proximity Scaling | CFE-ERCIM 2014 | 06.12.2014-08.12.2014 | Pisa, Italy |
### Software:
You can find the project summary page here.
The most recent build is available for Windows and Linux here: STOPS Package
### People:
• Thomas Rusch
• Patrick Mair
• Kurt Hornik
• Jan de Leeuw
• A tutorial on Structure Optimized Proximity Scaling (STOPS)
In this document we introduce the functionality available in stops for fitting multidimensional scaling (MDS; Borg & Groenen 2005) or proximity scaling (PS) models, either with a STOPS or COPS idea or without. We start with a short introduction to PS and the models that we have available. We then explain fitting of these models with the stops package. Next, we introduce the reader to COPS (Rusch et al. 2015a) and STOPS (Rusch et al. 2015b) models and show how to fit those. For illustration we use the smacof::kinshipdelta data set (Rosenberg, S. & Kim, M. P., 1975), which lists percentages of how often 15 kinship terms were not grouped together by college students.
library(stops)
## Loading required package: smacof
##
## Attaching package: 'stops'
##
## The following object is masked from 'package:stats':
##
## cmdscale
## Proximity Scaling
For proximity scaling (PS) or multidimensional scaling (MDS) the input is typically an $$N\times N$$ matrix $$\Delta^*=f(\Delta)$$, a matrix of proximities with elements $$\delta^*_{ij}$$, that is a function of a matrix of observed non-negative dissimilarities $$\Delta$$ with elements $$\delta_{ij}$$. $$\Delta^*$$ is usually symmetric (but does not need to be). The main diagonal of $$\Delta$$ is 0. We call $$f: \delta_{ij} \mapsto \delta^*_{ij}$$ a proximity transformation function. In the MDS literature these $$\delta_{ij}^*$$ are often called dhats or disparities. The problem that proximity scaling solves is to locate an $$N \times M$$ matrix $$X$$ (the configuration) with row vectors $$x_i, i=1,\ldots,N$$ in low-dimensional space $$(\mathbb{R}^M, M \leq N)$$ in such a way that transformations $$g(d_{ij}(X))$$ of the fitted distances $$d_{ij}(X)=d(x_i,x_j)$$, i.e., the distances between different $$x_i, x_j$$, approximate the $$\delta^*_{ij}$$ as closely as possible. We call $$g: d_{ij}(X) \mapsto d_{ij}^*(X)$$ a distance transformation function. In other words, proximity scaling means finding $$X$$ so that $$d^*_{ij}(X)=g(d_{ij}(X))\approx\delta^*_{ij}=f(\delta_{ij})$$.
This approximation $$D^*(X)$$ to the matrix $$\Delta^*$$ is found by defining a fit criterion (the loss function), $$\sigma_{MDS}(X)=L(\Delta^*,D^*(X))$$, that is used to measure how closely $$D^*(X)$$ approximates $$\Delta^*$$. Usually, they are closely related to the quadratic loss function. A general formulation of a loss function based on a quadratic loss is $$\label{eq:stress} \sigma_{MDS}(X)=\sum^N_{i=1}\sum^N_{j=1} z_{ij} w_{ij}\left[d^*_{ij}(X)-\delta^*_{ij}\right]^2=\sum^N_{i=1}\sum^N_{j=1} z_{ij}w_{ij}\left[g\left(d_{ij}(X)\right)-f(\delta_{ij})\right]^2$$
Here, the $$w_{ij}$$ and $$z_{ij}$$ are finite weights, with $$z_{ij}=0$$ if the entry is missing and $$z_{ij}=1$$ otherwise.
The loss function used is then minimized to find the vectors $$x_1,\dots,x_N$$, i.e., $$\label{eq:optim} \arg \min_{X}\ \sigma_{MDS}(X).$$
There are a number of optimization techniques one can use to solve this optimization problem.
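To make this loss concrete, here is a minimal R sketch (an illustration with unit weights and power transformations $$f(\delta)=\delta^\lambda$$ and $$g(d)=d^\kappa$$, not the package's internal code):
stress_power <- function(X, delta, kappa=1, lambda=1) {
  D <- as.matrix(dist(X))          # fitted Euclidean distances d_ij(X)
  sum((D^kappa - delta^lambda)^2)  # raw squared-error stress
}
dis <- as.matrix(smacof::kinshipdelta)
X0 <- stats::cmdscale(dis, k=2)    # a starting configuration (stats::, since stops overloads cmdscale)
stress_power(X0, dis)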
### Stress Models
The first popular type of PS supported in stops is based on the loss function type stress (Kruskal 1964). This uses some type of Minkowski distance ($$p > 0$$) as the distance fitted to the points in the configuration, $$\label{eq:dist} d_{ij}(X) = ||x_{i}-x_{j}||_p=\left( \sum_{m=1}^M |x_{im}-x_{jm}|^p \right)^{1/p} \ i,j = 1, \dots, N.$$
Typically, the norm used is the Euclidean norm, so $$p=2$$. In standard MDS $$g(\cdot)=f(\cdot)=I(\cdot)$$, the identity function.
This formulation enables one to express a large number of PS methods, many of which are implemented in stops. In stops we allow specific choices for $$f(\cdot)$$ and $$g(\cdot)$$ from the family of power transformations, so one can fit the following stress models:
• Explicitly normalized stress: $$w_{ij}=(\sum_{ij}\delta^{*2}_{ij})^{-1}$$, $$\delta_{ij}^*=\delta_{ij}$$, $$d_{ij}(X)^*=d_{ij}(X)$$
• Stress-1: $$w_{ij}=(\sum_{ij} d^{*2}_{ij}(X))^{-1}$$, $$\delta_{ij}^*=\delta_{ij}$$, $$d_{ij}(X)^*=d_{ij}(X)$$
• Sammon stress (Sammon 1969): $$w_{ij}=\delta^{*-1}_{ij}$$ , $$\delta_{ij}^*=\delta_{ij}$$, $$d_{ij}(X)^*=d_{ij}(X)$$
• Elastic scaling stress (McGee 1966): $$w_{ij}=\delta^{*-2}_{ij}$$, $$\delta_{ij}^*=\delta_{ij}$$, $$d_{ij}(X)^*=d_{ij}(X)$$
• S-stress (Takane et al. 1977): $$\delta^*_{ij}=\delta_{ij}^2$$ and $$d^*_{ij}(X)=d^2_{ij}(X)$$, $$w_{ij}=1$$
• R-stress (de Leeuw, 2014): $$\delta^*_{ij}=\delta_{ij}$$ and $$d^*_{ij}=d^{2r}_{ij}$$, $$w_{ij}=1$$
• Power MDS (Buja et al. 2008, Rusch et al. 2015a): $$\delta^*_{ij}=\delta_{ij}^\lambda$$ and $$d^*_{ij}=d^\kappa_{ij}$$, $$w_{ij}=1$$
• Power elastic scaling (Rusch et al. 2015a): $$w_{ij}=\delta^{*-2}_{ij}$$, $$\delta^*_{ij}=\delta_{ij}^\lambda$$ and $$d^*_{ij}=d^\kappa_{ij}$$
• Power Sammon mapping (Rusch et al. 2015a): $$w_{ij}=\delta^{*-1}_{ij}$$, $$\delta^*_{ij}=\delta_{ij}^\lambda$$ and $$d^*_{ij}=d^\kappa_{ij}$$
• Powerstress (encompassing all previous models; Buja et al. 2008, Rusch et al. 2015a): $$\delta^*_{ij}=\delta_{ij}^\lambda$$, $$d^*_{ij}=d^\kappa_{ij}$$ and $$w_{ij}=w_{ij}^\nu$$ for arbitrary $$w_{ij}$$ (e.g., a function of the $$\delta_{ij}$$)
For all of these models one can use the function powerStressMin, which uses majorization to find the solution (de Leeuw, 2014). The function allows one to specify kappa, lambda and nu arguments as well as a weightmat (the $$w_{ij}$$).
The object returned from powerStressMin is of class smacofP, which extends the smacof classes (de Leeuw & Mair, 2009) to allow for the power transformations. Apart from that, the objects are built for maximum compatibility with methods from smacof. Accordingly, the following S3 methods are available:
| Method | Description |
| --- | --- |
| print | Prints the object |
| summary | A summary of the object |
| plot | 2D plots of the object |
| plot3d | Dynamic 3D configuration plot |
| plot3dstatic | Static 3D configuration plot |
| residuals | Residuals |
| coef | Model coefficients |
Let us illustrate the usage
dis<-as.matrix(smacof::kinshipdelta)
• A standard MDS (stress)
res1<-powerStressMin(dis,kappa=1,lambda=1)
res1
##
## Call: powerStressMin(delta = dis, kappa = 1, lambda = 1)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 0.2667
## Number of iterations: 5219
• A sammon mapping
res2<-powerStressMin(dis,kappa=1,lambda=1,nu=-1,weightmat=dis)
res2
##
## Call: powerStressMin(delta = dis, kappa = 1, lambda = 1, nu = -1, weightmat = dis)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 7.023
## Number of iterations: 74860
Alternatively, one can use the faster sammon function from MASS (Venables & Ripley, 2002) for which we provide a wrapper that adds class attributes and methods (and overloads the function).
res2a<-sammon(dis)
## Initial stress : 0.17053
## stress after 3 iters: 0.10649
res2a
##
## Call: sammon(d = dis)
##
## Model: Sammon Scaling
## Number of objects: 15
## Stress: 0.1065
• An elastic scaling
res3<-powerStressMin(dis,kappa=1,lambda=1,nu=-2,weightmat=dis)
res3
##
## Call: powerStressMin(delta = dis, kappa = 1, lambda = 1, nu = -2, weightmat = dis)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 59.87
## Number of iterations: 1e+05
• An sstress model
res4<-powerStressMin(dis,kappa=2,lambda=2)
res4
##
## Call: powerStressMin(delta = dis, kappa = 2, lambda = 2)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 0.3516
## Number of iterations: 39427
• An rstress model (with $$r=1$$ as $$r=\kappa/2$$)
res5<-powerStressMin(dis,kappa=2,lambda=1)
res5
##
## Call: powerStressMin(delta = dis, kappa = 2, lambda = 1)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 0.4133
## Number of iterations: 17686
• A powermds model
res6<-powerStressMin(dis,kappa=2,lambda=1.5)
res6
##
## Call: powerStressMin(delta = dis, kappa = 2, lambda = 1.5)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 0.3733
## Number of iterations: 39377
• A powersammon model
res7<-powerStressMin(dis,kappa=2,lambda=1.5,nu=-1,weightmat=dis)
res7
##
## Call: powerStressMin(delta = dis, kappa = 2, lambda = 1.5, nu = -1,
## weightmat = dis)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 7.271
## Number of iterations: 1e+05
• A powerelastic scaling
res8<-powerStressMin(dis,kappa=2,lambda=1.5,nu=-2,weightmat=dis)
res8
##
## Call: powerStressMin(delta = dis, kappa = 2, lambda = 1.5, nu = -2,
## weightmat = dis)
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 62.61
## Number of iterations: 1e+05
• A powerstress model
res9<-powerStressMin(dis,kappa=2,lambda=1.5,nu=-1.5,weightmat=2*1-diag(nrow(dis)))
res9
##
## Call: powerStressMin(delta = dis, kappa = 2, lambda = 1.5, nu = -1.5,
## weightmat = 2 * 1 - diag(nrow(dis)))
##
## Model: Power Stress SMACOF
## Number of objects: 15
## Stress-1 value: 0.8362
## Number of iterations: 44052
summary(res9)
##
## Configurations:
## D1 D2
## Aunt -0.1228 0.2497
## Brother 0.1961 -0.1405
## Cousin 0.0526 0.3101
## Daughter -0.2048 -0.1259
## Father 0.1637 -0.1824
## Granddaughter -0.2358 -0.0525
## Grandfather 0.2149 -0.1329
## Grandmother -0.2362 -0.0861
## Grandson 0.2148 -0.1073
## Mother -0.2097 -0.1366
## Nephew 0.1709 0.2108
## Niece -0.1234 0.2441
## Sister -0.2216 -0.0973
## Son 0.1700 -0.1710
## Uncle 0.1713 0.2179
##
##
## Stress per point:
## SPP SPP(%)
## Niece 0.0022 4.688
## Nephew 0.0022 4.712
## Aunt 0.0023 4.937
## Uncle 0.0023 5.028
## Daughter 0.0028 6.084
## Son 0.0029 6.275
## Father 0.0030 6.452
## Mother 0.0031 6.686
## Cousin 0.0032 6.798
## Sister 0.0034 7.272
## Brother 0.0034 7.283
## Grandson 0.0037 7.979
## Granddaughter 0.0038 8.102
## Grandmother 0.0041 8.819
## Grandfather 0.0041 8.885
plot(res9)
plot(res9,"transplot")
plot(res9,"Shepard")
plot(res9,"resplot")
plot(res9,"bubbleplot")
### Strain Models
The second popular type of PS supported in stops is based on the loss function type strain. Here the $$\Delta^*$$ are a transformation of the $$\Delta$$, $$\Delta^*= f (\Delta)$$ so that $$f(\cdot)=-(h\circ l)(\cdot)$$ where $$l$$ is any function and $$h(\cdot)$$ is a double centering operation, $$h(\Delta)=\Delta-\Delta_{i.}-\Delta_{.j}+\Delta_{..}$$ where $$\Delta_{i.}, \Delta_{.j}, \Delta_{..}$$ are matrices consisting of the row, column and grand marginal means respectively. These then get approximated by (functions of) the inner product matrices of $$X$$ $$\label{eq:dist2} d_{ij}(X) = \langle x_{i},x_{j} \rangle$$
We can thus express classical scaling as a special case of the general PS loss with $$d_{ij}(X)$$ as an inner product, $$g(\cdot) = I(\cdot)$$ and $$f(\cdot)=-(h \circ I)(\cdot)$$.
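To make the double centering concrete, here is a minimal R sketch of the textbook Torgerson construction (double centering the squared dissimilarities; an illustration, not the package's internal code):
n <- nrow(dis)                    # dis as defined above
J <- diag(n) - matrix(1/n, n, n)  # centering matrix
B <- -0.5 * J %*% dis^2 %*% J     # double-centered squared dissimilarities
ev <- eigen(B)
Xc <- ev$vectors[, 1:2] %*% diag(sqrt(ev$values[1:2]))  # 2D configuration, assuming the two leading eigenvalues are positive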
If we again allow power transformations for $$g(\cdot)$$ and $$f(\cdot)$$ one can fit the following strain models with stops
• Classical scaling (Torgerson, 1958): $$\delta^*_{ij}=-h(\delta_{ij})$$ and $$d^*_{ij}=d_{ij}$$
• Powerstrain (Buja et al. 2008, Rusch et al. 2015a): $$\delta^*_{ij}=-h(\delta_{ij}^\lambda)$$, $$d^*_{ij}=d_{ij}$$ and $$w_{ij}=w_{ij}^\nu$$ for arbitrary $$w_{ij}$$
In stops we have a wrapper to cmdscale (overloading the base function) which extends its functionality by offering an object that matches smacofP objects with corresponding methods.
A powerstrain model is rather easy to fit by simply subjecting the dissimilarity matrix to some power. Here we use $$\lambda=3$$.
resc<-cmdscale(kinshipdelta^3)
resc
##
## Call: cmdscale(d = kinshipdelta^3)
##
## Model: Torgerson-Gower Scaling
## Number of objects: 15
## GOF: 0.4258 0.6282
summary(resc)
##
## Configurations:
## D1 D2
## Aunt 178193 204987
## Brother -174369 -94357
## Cousin -48355 265057
## Daughter 169149 -109936
## Father -145389 -168604
## Granddaughter 187039 -44851
## Grandfather -180116 -103668
## Grandmother 199145 -83039
## Grandson -169768 -72359
## Mother 185798 -138677
## Nephew -195964 173124
## Niece 168278 208870
## Sister 182224 -70764
## Son -149876 -135883
## Uncle -205989 170101
summary(resc)
plot(resc)
## Augmenting MDS with structure considerations: STOPS and COPS
The main contribution of the stops package is not solely in fitting the powerstress or powerstrain models and their variants from above, but in allowing one to choose the right transformation to achieve a "structured" MDS result automatically. This can be useful in a variety of contexts: to explore or generate structures, to restrict the target space, to avoid artefacts, to preserve certain types of structures and so forth.
For this, an MDS loss function is subjected to nonlinear transformations and is augmented to include penalties for the type of structures one is aiming for. This combination of an MDS loss with a structuredness penalty is what we call "structure optimized loss" (stoploss) and the resulting MDS is coined "Structure Optimized Proximity Scaling" (or STOPS). The prime example of a STOPS model is "Cluster Optimized Proximity Scaling" (COPS) which selects optimal parameters so that the clustered appearance of the configuration is improved (see below).
### STOPS
Following Rusch et al. (2015b), the general idea is that from given observations $$\Delta$$ we look for a configuration $$X$$ by minimizing some loss function $$\sigma_{MDS}(X^*;\Delta^*)$$, where the $$\Delta^*, X^*$$ are functions of the $$\Delta$$ and $$X$$. The $$X$$ has properties with regard to its structural appearance, which we call c-structuredness (for configuration-structuredness). There are different types of c-structuredness one might be interested in (say, how clustered the result is, whether dimensions are orthogonal, or whether there is some submanifold that the data live on). We developed indices for these types of c-structuredness that capture that essence in the configuration.
We have as part of a STOPS model a proximity scaling loss function $$\sigma_{MDS}(\cdot)$$, a $$\Delta$$ and an $$X$$ and some transformation $$f_{ij}(\delta_{ij};\theta)$$ and $$g_{ij}(d_{ij};\theta)$$ that is parametrized (with $$\theta$$ either finite or infinite dimensional, e.g., a transformation applied to all observations like a power transformation or even an individual transformation per object). These transformations achieve a sort of push towards more structure, so different values for $$\theta$$ will in general lead to different c-structuredness.
We further have $$K$$ different indices $$I_k(X)$$ that measure different types of c-structuredness. We can then define STOPS models as methods of the form (additive STOPS) $$\text{aSTOPS}(X, \theta, v_0, \dots, v_k; \Delta) = v_0 \cdot \sigma_{MDS}(X^*(\theta)) + \sum^K_{k=1} v_k I_k(X(\theta))$$ or (multiplicative STOPS) $$\text{mSTOPS}(X, \theta, v_0, \dots, v_k; \Delta) = \sigma_{MDS}(X^*(\theta))^{v_0} \cdot \prod^K_{k=1} I_k(X(\theta))^{v_k}$$
(which can be expressed as aSTOPS by logarithms). Here the $$v_0,...,v_k$$ are weights that determine how the individual parts (mds loss and c-structuredness indices) are aggregated.
The job is then to find $$\arg\min_{\vartheta}\ \text{aSTOPS}(X, \theta, v_0, \dots, v_k; \Delta)\ \text{or} \ \arg\min_{\vartheta}\ \text{mSTOPS}(X, \theta, v_0, \dots, v_k; \Delta)$$
where $$\vartheta \subseteq \{X,\theta, v_0, \dots, v_k\}$$. Typically $$\vartheta$$ will be a subset of all possible parameters here (e.g., the weights might be given a priori). Currently, the transformations that can be used in stops are limited to power transformations.
Minimizing stoploss can be difficult. In stops we use a nested algorithm that internally first solves for $$X$$ given $$\theta$$, $$\arg\min_X \sigma_{MDS}\left(X,\theta\right)$$, and then optimizes over $$\theta$$ with a metaheuristic. Implemented are simulated annealing (optimmethod="SANN"), particle swarm optimization (optimmethod="pso") and a variant of the Luus-Jaakola procedure (optimmethod="ALJ"). We suggest using the latter. A Bayesian optimization approach is currently under way.
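A stylized version of this nested scheme, with a crude grid search standing in for the metaheuristic and the absolute correlation between the two dimensions standing in for a c-structuredness index (a sketch for intuition only; stops does all of this internally and more carefully):
stoploss_toy <- function(lambda, dis, v0=1, v1=0.5) {
  fit <- smacof::smacofSym(dis^lambda)          # inner step: solve for X given theta
  clin <- abs(cor(fit$conf[,1], fit$conf[,2]))  # crude stand-in for a c-structuredness index
  v0 * fit$stress + v1 * clin                   # additive stoploss
}
lambdas <- seq(0.5, 4, by=0.5)
vals <- sapply(lambdas, stoploss_toy, dis=as.matrix(smacof::kinshipdelta))
lambdas[which.min(vals)]                        # outer step: best theta on the grid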
Currently the following c-structuredness types are supported:
• c-clusteredness (cclusteredness): A clustered appearance of the configuration ($$I_k$$ is the normed OPTICS cordillera (COPS; Rusch et al. 2015a); 0 means no c-clusteredness, 1 means perfect c-clusteredness)
• c-linearity (clinearity): Projections lie close to a linear subspace of the configuration ($$I_k$$ is maximal multiple correlation; 0 means orthogonal, 1 means perfectly linear)
• c-manifoldness (cmanifoldness): Projections lie on a sub manifold of the configuration ($$I_k$$ is maximal correlation (Sarmanov, 1958); only available for two dimensions; 1 means perfectly smooth function)
• c-dependence (cdependence): Random vectors of projections onto the axes are stochastically dependent ($$I_k$$ is distance correlation (Szekely et al., 2007); only available for two dimensions; 0 means they are stochastically independent)
• c-association (cassociation): Pairwise nonlinear association between dimensions ($$I_k$$ is the pairwise maximal maximum information coefficient (Reshef et al. 2011), 1 means perfect functional association)
• c-nonmonotonicity (cnonmonotonicity): Deviation from monotonicity ($$I_k$$ is the pairwise maximal maximum asymmetry score (Reshef et al. 2011), the higher the less monotone)
• c-functionality (cfunctionality): Pairwise functional, smooth, noise-free relationship between dimensions ($$I_k$$ is the mean pairwise maximum edge value (Reshef et al. 2011), 1 means perfect functional association)
• c-complexity (ccomplexity): Measures the degree of complexity of the functional relationship between any two dimensions ($$I_k$$ is the pairwise maximal minimum cell number (Reshef et al. 2011), the higher the more complex)
• c-faithfulness (cfaithfulness): How accurately the neighbourhood of $$\Delta$$ is preserved in $$D$$ ($$I_k$$ is the adjusted Md index of Chen & Buja, 2013; note that this index deviates from the others by being a function of both $$X^*$$ and $$\Delta^*$$ rather than $$X^*$$ alone)
• c-randomness: How close to a random pattern (under some model) is the configuration ($$I_k$$ is not clear yet; not yet implemented)
If we have a single $$I(X)=OC(X)$$, the OPTICS cordillera (Rusch et al. 2015a), the transformations applied are power transformations, and the weight for the $$I(X)$$ is negative, we essentially have COPS (see below).
For the MDS loss (argument loss in functions stops and cops), the functions currently support all losses derived from powerstress and powerstrain, which can in principle be fitted with powerStressMin alone. However, for many models we offer dedicated functions that either use workhorses more optimized for the problem at hand and/or restrict the parameter space for the distance/proximity transformations and thus can be faster. They are:
• stress, smacofSym: Kruskal's stress; Workhorse: smacofSym, Optimization over $$\lambda$$
• smacofSphere: Kruskal's stress for projection onto a sphere; Workhorse smacofSphere, Optimizes over $$\lambda$$
• strain, powerstrain: Classical scaling; Workhorse: cmdscale, Optimization over $$\lambda$$
• sammon, sammon2: Sammon scaling; Workhorse: sammon or smacofSym, Optimization over $$\lambda$$
• elastic: Elastic scaling; Workhorse: smacofSym, Optimization over $$\lambda$$
• sstress: S-stress; Workhorse: powerStressMin, Optimization over $$\lambda$$
• rstress: R-stress; Workhorse: powerStressMin, Optimization over $$\kappa$$
• powermds: MDS with powers; Workhorse: powerStressMin, Optimization over $$\kappa$$, $$\lambda$$
• powersammon: Sammon scaling with powers; Workhorse: powerStressMin, Optimization over $$\kappa$$, $$\lambda$$
• powerelastic: Elastic scaling with powers; Workhorse: powerStressMin, Optimization over $$\kappa$$, $$\lambda$$
• powerstress: Power stress model; Workhorse: powerStressMin, Optimization over $$\kappa$$, $$\lambda$$, $$\nu$$
#### Usage
The syntax for fitting a stops model is rather straightforward. One has to supply the argument dis, a dissimilarity matrix, and structures, a character vector listing the c-structuredness types that should be used to augment the PS loss (see above for the types of structures and losses). The parameters for the structuredness indices should be given with strucpars, a list whose elements are lists corresponding to each structuredness index and listing its parameters (if the defaults should be used, the list element should be set to NULL). The PS loss can be chosen with the argument loss. The type of aggregation for the multi-objective optimization is specified in type and can be one of additive or multiplicative. One can pass additional parameters to the fitting workhorses with ....
stops(dis, structures = c("cclusteredness","clinearity"), loss="stress", ...)
One then has all the S3 methods of smacofP at one’s disposal.
For example, let us fit an mSTOPS model that looks for a transformation of the $$\delta_{ij}$$ so that a) the result has maximal c-clusteredness (which is 1 in the best case, so we set a negative weight for this structure), b) the projections onto the principal axes are nearly orthogonal (c-linearity close to 0, so we set a positive weight for this structure), c) the projections onto the principal axes are nevertheless stochastically dependent (negative weight on c-dependence) and d) the fit of the MDS is also factored in (so we set a positive weight on the MDS loss). Since we use mSTOPS, a c-linearity/c-dependence close to 0 will overall dominate the stoploss function, with the other two criteria being more of an afterthought - in aSTOPS that would be different.
!!: This is generally the approach to be chosen: We minimize the stoploss, so a c-structuredness index that should be (numerically) large needs a negative weight and a c-structuredness index that should be (numerically) small needs a positive weight.
We first set up the parameters for the structuredness indices. For the OPTICS cordillera we use a $$d_{max}$$ of 1.3 (via rang=c(0,1.3)), epsilon=10 and minpts=2; for c-linearity we have no parameters (so using NULL will work) and for c-dependence we have a single parameter, index, which we set to 2.
strucpars<-list(list(epsilon=10,minpts=2,rang=c(0,1.3)), #cordillera
NULL, # c-linearity (has no parameters)
list(index=2) #c-dependence
)
ressm<-stops(kinshipdelta,loss="stress",stressweight=1,structures=c("cclusteredness","clinearity","cdependence"),strucweight=c(-0.33,0.33,-0.33),verbose=0,strucpars=strucpars,type="multiplicative")
ressm
##
## Call: stops(dis = kinshipdelta, loss = "stress", structures = c("cclusteredness",
## "clinearity", "cdependence"), stressweight = 1, strucweight = c(-0.33,
## 0.33, -0.33), strucpars = strucpars, verbose = 0, type = "multiplicative")
##
## Model: multiplicative STOPS with stress loss function and theta parameters= 1 3.095 1
##
## Number of objects: 15
## MDS loss value: 1
## C-Structuredness Indices: cclusteredness 0.53027 clinearity 0.01802 cdependence 0.01802
## Structure optimized loss (stoploss): 1.233
## MDS loss weight: 1 c-structuredness weights: -0.33 0.33 -0.33
## Number of iterations of ALJ optimization: 117
plot(ressm)
Let us compare this with the corresponding aSTOPS
ressa<-stops(kinshipdelta,loss="stress",stressweight=1,structures=c("cclusteredness","clinearity","cdependence"),strucweight=c(-0.33,0.33,-0.33),verbose=0,strucpars=strucpars,type="additive")
ressa
##
## Call: stops(dis = kinshipdelta, loss = "stress", structures = c("cclusteredness",
## "clinearity", "cdependence"), stressweight = 1, strucweight = c(-0.33,
## 0.33, -0.33), strucpars = strucpars, verbose = 0, type = "additive")
##
## Model: additive STOPS with stress loss function and theta parameters= 1 3.114 1
##
## Number of objects: 15
## MDS loss value: 1
## C-Structuredness Indices: cclusteredness 0.53021 clinearity 0.01785 cdependence 0.01785
## Structure optimized loss (stoploss): 0.825
## MDS loss weight: 1 c-structuredness weights: -0.33 0.33 -0.33
## Number of iterations of ALJ optimization: 81
plot(ressa)
We see that the c-clusteredness is higher here than in the mSTOPS result - we have a number of distinct object clusters (with at least minpts=2 objects each) that are more spread out and distributed more evenly. The dimensions, on the other hand, are now farther from being orthogonal, but the stochastic dependence is higher (which is obviously a non-linear one).
When choosing a c-structuredness index, one needs to be clear about what structure one is interested in and how it interacts with the chosen PS loss. Consider the following example: We fit a powermds model to the kinship data and want to maximize c-association (i.e., any non-linear relationship) and c-manifoldness but minimize c-linearity. In other words, we try to find power transformations of $$\Delta$$ and $$D$$ such that the objects are positioned in the configuration so that the projections onto the principal axes are as close as possible to being related by a smooth but non-linear function.
resa<-stops(kinshipdelta,structures=c("cassociation","cmanifoldness","clinearity"),loss="powermds",verbose=0,strucpars=list(NULL,NULL,NULL),type="additive",strucweight=c(-0.5,-0.5,0.5))
resa
##
## Call: stops(dis = kinshipdelta, loss = "powermds", theta = c(2.9429394,
## 1.67850653, 1.57140404), structures = c("cassociation", "cmanifoldness",
## "clinearity"), strucweight = c(-0.5, -0.5, 0.5), strucpars = list(NULL,
## NULL, NULL), verbose = 0, type = "additive", itmax = 1)
##
## Model: additive STOPS with powermds loss function and theta parameters= 2.943 1.679 1
##
## Number of objects: 15
## MDS loss value: 0.9995
## C-Structuredness Indices: cassociation 1.0000000 cmanifoldness 0.9918892 clinearity 0.0001654
## Structure optimized loss (stoploss): 0.003602
## MDS loss weight: 1 c-structuredness weights: -0.5 -0.5 0.5
## Number of iterations of ALJ optimization: 1
We see in this model (resa) that indeed the c-association is 1, which says we have a near perfect non-linear relationship. What does this relationship look like?
plot(resa)
It is a parabolic shape: the projections are such that the points on D2 are a near parabolic function of the points on D1 (projecting onto some structure resembling a conic section often happens for r-stress, which is essentially what we have here; setting a positive weight on c-association can combat that if it is an artefact). What we can also see is that there are three clear clusters, so c-clusteredness should be high. When looking at the OPTICS cordillera, however, we find that it is lower than for the result from above that used stress and lambda≈3.11 (the ressa model).
c1<-cordillera(resa$fit$conf,minpts=2,epsilon=10,rang=c(0,1.3))
c2<-cordillera(ressa$fit$conf,minpts=2,epsilon=10,rang=c(0,1.3))
c1
## raw normed
## 7.9441 0.4365
c2
## raw normed
## 9.6499 0.5302
This discrepancy comes from the definition of c-clusteredness (Rusch et al., 2015a), where more clusters, more spread-out clusters, more evenly distributed clusters and denser clusters all increase c-clusteredness. In the example with maximizing c-association we have two very dense clusters of 5 points and 1 relatively non-dense cluster of five other points. In the model maximizing c-clusteredness (and others) we get 6 moderately dense clusters with 2 or 3 points each, which is also the minimum number of points we wanted to be grouped together. Most importantly, they are projected onto a much larger range of the target space, as the $$X$$ obtained from the stress loss is different from the one obtained from the powermds loss, so the $$d_{max}$$ is very different. Since we use the normed OPTICS cordillera there, we look at c-clusteredness relative to the most clustered appearance with two points per cluster. Thus, the second result has more c-clusteredness. If we defined a cluster as having at most 5 points, then the c-clusteredness of the result with high c-association would also be large, because then the clusters found match the definition of high c-clusteredness.
c3<-cordillera(resa$fit$conf,minpts=5,epsilon=10,rang=c(0,1.3))
c3
## raw normed
## 3.8650 0.5946
Note that it may just as well be possible to have a high c-association and no c-clusteredness at all (e.g., points lying equidistant on a smooth non-linear curve). Note also that the models are not necessarily comparable due to different stress functions - the transformation in powermds that is optimal with respect to c-clusteredness would be different.
Indeed, one can optimize for c-clusteredness alone; using it as a "goodness-of-clusteredness" index (i.e., the $$d_{max}$$ is not constant over configurations but varies conditionally on the configuration), we get a projection with a c-clusteredness of 0.67.
resa2<-stops(kinshipdelta,structures=c("cclusteredness"),loss="powermds",verbose=0,strucpars=list(list(epsilon=10,rang=NULL,minpts=2)),type="additive",strucweight=-1,stressweight=0)
For convenience it is also possible to use the stops function for finding the loss-optimal transformation in the non-augmented models specified in loss, by setting strucweight, the weight of any c-structuredness, to 0. Then the function optimizes the MDS loss function only.
ressa<-stops(kinshipdelta,structures=c("clinearity"),strucweight=0,loss="stress",verbose=0)
### COPS
A special STOPS model is COPS (Rusch et al. 2015a) for "Cluster Optimized Proximity Scaling". This is also one of the main use cases for STOPS models. Let us write $$X(\theta)=\arg\min_X \sigma_{MDS}(X,\theta)$$ for the optimal configuration for a given transformation parameter $$\theta$$. Following the outline of STOPS, the overall objective function, which we call coploss, is a weighted combination of the $$\theta-$$parametrized loss function, $$\sigma_{MDS}\left(X(\theta),\theta\right)$$, and a c-clusteredness measure, the OPTICS cordillera or $$OC(X(\theta);\epsilon,k,q)$$, to be optimized as a function of $$\theta$$ or $$\label{eq:spstress} \text{coploss}(\theta) = v_1 \cdot \sigma_{MDS}\left(X(\theta),\theta \right) - v_2 \cdot \text{OC}\left(X(\theta);\epsilon,k,q\right)$$ with $$v_1,v_2 \in \mathbb{R}$$ controlling how much weight should be given to the scaling fit measure and the c-clusteredness. In general $$v_1,v_2$$ are either values that make sense for the application or may be used to trade off fit and c-clusteredness in a way that makes them commensurable. In the latter case we suggest taking the fit function value as it is ($$v_1=1$$) and fixing the scale such that $$\text{coploss}=0$$ for the scaling result with no transformations ($$\theta=\theta_0$$), i.e., $$\label{eq:spconstant0} v^{0}_{1}=1, \quad v^{0}_2=\frac{\sigma_{MDS}\left(X(\theta_0),\theta_0\right)}{\text{OC}\left(X(\theta_0);\epsilon,k,q\right)},$$
with $$\theta_0=(1,1)^\top$$ in case of loss functions with power transformations. Thus an increase of 1 in the MDS loss measure can be compensated by an increase of $$v^0_1/v^0_2$$ in c-clusteredness. Selecting $$v_1=1,v_2=v^{0}_2$$ this way is in line with the idea of pushing the configurations towards a more clustered appearance relative to the initial solution.
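A short sketch of this normalization (assuming the fitted object exposes its configuration and stress value as conf and stress, as smacof objects do, and that cordillera() returns its normed value in an element named normed):
fit0 <- powerStressMin(dis, kappa=1, lambda=1)     # theta_0 = (1,1): no transformation
oc0 <- cordillera(fit0$conf, minpts=2, epsilon=10)
v2_0 <- fit0$stress / oc0$normed                   # with v1 = 1, coploss is then 0 at theta_0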
Another possibility is to choose them in such a way that $$\text{coploss}=0$$ in the optimum value, i.e., choosing $$v^{opt}_{1}, v^{opt}_2$$ so that $$v^{opt}_1 \cdot \sigma_{MDS}\left(X(\theta^*),\theta^*\right)-v^{opt}_2 \cdot \text{OC}\left(X(\theta^*);\epsilon,k,q \right) = 0$$
with $$\theta^*:=\arg\min_\theta \text{coploss}(\theta)$$. This is in line with having $$\text{coploss}(\theta)>0$$ for $$\theta \neq \theta^*$$ and allows one to optimize over $$v_1,v_2$$.
The optimization problem in COPS is then to find
$$\label{eq:soemdsopt2} \arg\min_{\theta} \text{coploss}(\theta)$$ by doing $$\label{eq:soemdsopt} v_1 \cdot \sigma_{MDS}\left(X(\theta),\theta\right) - v_2 \cdot \text{OC}\left(X(\theta);\epsilon,k,q\right) \rightarrow \min_\theta!$$
For a given $$\theta$$, if $$v_2=0$$ then the result of optimizing the above is the same as solving the respective original PS problem. Letting $$\theta$$ vary, $$v_2=0$$ will minimize the loss over configurations obtained from using different $$\theta$$.
The c-clusteredness index we use is the OPTICS cordillera, which measures how clustered a configuration appears. It is based on the OPTICS algorithm, which outputs an ordering together with a distance. The OPTICS cordillera is simply an aggregation of that information. Since we know what constitutes a maximally clustered result, we can derive an upper bound and normalize the index to lie between 0 and 1. It takes the value 1 if the configuration is maximally clustered, and 0 if all points are equidistant to their nearest neighbours (a matchstick embedding). See Rusch et al. (2015a) for details.
#### Usage
Even though one can fit a COPS model with stops, there is a dedicated function cops. Its syntax works much like that of stops, except that there is no structures argument.
cops(dis,loss,...)
For the example we use a COPS model for a classical scaling (strain loss)
resc<-cops(kinshipdelta,loss="strain")
resc
##
## Call: cops(dis = kinshipdelta, loss = "strain")
##
## Model: COPS with strain loss function and parameters kappa= 1 lambda= 1.606 nu= 1
##
## Number of objects: 15
## MDS loss value: 0.3237
## OPTICS cordillera: Raw 10.87 Normed 0.3087
## Cluster optimized loss (coploss): -0.03458
## MDS loss weight: 1 OPTICS cordillera weight: 1.16
## Number of iterations of ALJ optimization: 90
summary(resc)
##
## Configurations:
## D1 D2
## Aunt -345.9 492.49
## Brother 365.5 -253.89
## Cousin 107.3 560.94
## Daughter -394.8 -262.97
## Father 320.3 -371.50
## Granddaughter -411.9 -99.39
## Grandfather 370.3 -214.95
## Grandmother -412.2 -148.94
## Grandson 370.4 -181.08
## Mother -411.9 -289.88
## Nephew 424.9 398.15
## Niece -340.0 485.92
## Sister -402.9 -183.98
## Son 328.5 -338.10
## Uncle 432.4 407.18
A number of plots are available:
plot(resc,"confplot")
plot(resc,"Shepard")
plot(resc,"transplot")
plot(resc,"reachplot")
For convenience it is also possible to use the cops function for finding the loss-optimal transformation in the non-augmented models specified in loss, by setting cordweight, the weight of the OPTICS cordillera, to 0. Then the function optimizes the MDS loss function only.
resca<-cops(kinshipdelta,cordweight=0,loss="strain")
resca
##
## Call: cops(dis = kinshipdelta, loss = "strain", cordweight = 0)
##
## Model: COPS with strain loss function and parameters kappa= 1 lambda= 1.586 nu= 1
##
## Number of objects: 15
## MDS loss value: 0.3237
## OPTICS cordillera: Raw 10.87 Normed 0.3087
## Cluster optimized loss (coploss): 0.3237
## MDS loss weight: 1 OPTICS cordillera weight: 0
## Number of iterations of ALJ optimization: 48
Here the result matches the one obtained with the default cordweight. We can give more weight to the c-clusteredness though:
rescb<-cops(kinshipdelta,cordweight=20,loss="strain")
rescb
##
## Call: cops(dis = kinshipdelta, loss = "strain", cordweight = 20)
##
## Model: COPS with strain loss function and parameters kappa= 1 lambda= 2.06 nu= 1
##
## Number of objects: 15
## MDS loss value: 0.3395
## OPTICS cordillera: Raw 11.01 Normed 0.3128
## Cluster optimized loss (coploss): -5.916
## MDS loss weight: 1 OPTICS cordillera weight: 20
## Number of iterations of ALJ optimization: 94
plot(resca,main="with cordweight=0")
plot(rescb,main="with cordweight=20")
This result has more c-clusteredness but less fit. The higher c-clusteredness is discernible in the Grandfather/Brother and Grandmother/Sister clusters (we used a minimum number of 2 observations to make up a cluster, minpts=2).
## Other Functions
The package also provides functions that are used by the cops, stops and powerStressMin functions but may be of interest to an end user beyond that.
### OPTICS and OPTICS cordillera
For calculating a COPS solution, we need the OPTICS algorithm and the OPTICS cordillera. In the package we also provide a rudimentary interface to the OPTICS implementation in ELKI.
data(iris)
res<-optics(iris[,1:4],minpts=2,epsilon=1000)
print(res)
## observation reachability
## 1 ID=1 reachability=∞
## 2 ID=18 reachability=0.1
## 3 ID=41 reachability=0.14142136
## 4 ID=5 reachability=0.14142136
## 5 ID=38 reachability=0.14142136
## 6 ID=40 reachability=0.14142136
## 7 ID=8 reachability=0.1
## 8 ID=50 reachability=0.14142136
## 9 ID=29 reachability=0.14142136
## 10 ID=28 reachability=0.14142136
## 11 ID=36 reachability=0.2236068
## 12 ID=49 reachability=0.2236068
## 13 ID=11 reachability=0.1
## 14 ID=27 reachability=0.2236068
## 15 ID=24 reachability=0.2
## 16 ID=44 reachability=0.2236068
## 17 ID=12 reachability=0.2236068
## 18 ID=30 reachability=0.2236068
## 19 ID=31 reachability=0.14142136
## 20 ID=35 reachability=0.14142136
## 21 ID=10 reachability=0.1
## 22 ID=2 reachability=0.14142136
## 23 ID=46 reachability=0.14142136
## 24 ID=13 reachability=0.14142136
## 25 ID=26 reachability=0.17320508
## 26 ID=4 reachability=0.17320508
## 27 ID=48 reachability=0.14142136
## 28 ID=3 reachability=0.14142136
## 29 ID=43 reachability=0.2236068
## 30 ID=39 reachability=0.2
## 31 ID=9 reachability=0.14142136
## 32 ID=7 reachability=0.2236068
## 33 ID=20 reachability=0.24494897
## 34 ID=22 reachability=0.14142136
## 35 ID=47 reachability=0.14142136
## 36 ID=14 reachability=0.24494897
## 37 ID=25 reachability=0.3
## 38 ID=37 reachability=0.3
## 39 ID=21 reachability=0.3
## 40 ID=32 reachability=0.28284271
## 41 ID=17 reachability=0.34641016
## 42 ID=6 reachability=0.34641016
## 43 ID=19 reachability=0.33166248
## 44 ID=33 reachability=0.34641016
## 45 ID=34 reachability=0.34641016
## 46 ID=45 reachability=0.36055513
## 47 ID=16 reachability=0.36055513
## 48 ID=15 reachability=0.41231056
## 49 ID=23 reachability=0.45825757
## 50 ID=42 reachability=0.6244998
## 51 ID=99 reachability=1.64012195
## 52 ID=58 reachability=0.38729833
## 53 ID=94 reachability=0.14142136
## 54 ID=61 reachability=0.36055513
## 55 ID=82 reachability=0.64807407
## 56 ID=81 reachability=0.14142136
## 57 ID=70 reachability=0.17320508
## 58 ID=90 reachability=0.24494897
## 59 ID=54 reachability=0.2
## 60 ID=93 reachability=0.26457513
## 61 ID=83 reachability=0.14142136
## 62 ID=68 reachability=0.24494897
## 63 ID=100 reachability=0.26457513
## 64 ID=97 reachability=0.14142136
## 65 ID=96 reachability=0.14142136
## 66 ID=95 reachability=0.17320508
## 67 ID=89 reachability=0.17320508
## 68 ID=91 reachability=0.26457513
## 69 ID=62 reachability=0.3
## 70 ID=56 reachability=0.31622777
## 71 ID=67 reachability=0.3
## 72 ID=85 reachability=0.2
## 73 ID=79 reachability=0.33166248
## 74 ID=92 reachability=0.2
## 75 ID=64 reachability=0.14142136
## 76 ID=74 reachability=0.2236068
## 77 ID=72 reachability=0.34641016
## 78 ID=98 reachability=0.33166248
## 79 ID=75 reachability=0.2
## 80 ID=76 reachability=0.26457513
## 81 ID=66 reachability=0.14142136
## 82 ID=59 reachability=0.24494897
## 83 ID=55 reachability=0.24494897
## 84 ID=52 reachability=0.31622777
## 85 ID=57 reachability=0.26457513
## 86 ID=87 reachability=0.31622777
## 87 ID=53 reachability=0.28284271
## 88 ID=51 reachability=0.26457513
## 89 ID=78 reachability=0.31622777
## 90 ID=77 reachability=0.31622777
## 91 ID=80 reachability=0.34641016
## 92 ID=86 reachability=0.37416574
## 93 ID=60 reachability=0.38729833
## 94 ID=148 reachability=0.41231056
## 95 ID=111 reachability=0.2236068
## 96 ID=112 reachability=0.34641016
## 97 ID=117 reachability=0.36055513
## 98 ID=138 reachability=0.14142136
## 99 ID=104 reachability=0.24494897
## 100 ID=129 reachability=0.33166248
## 101 ID=133 reachability=0.1
## 102 ID=105 reachability=0.3
## 103 ID=146 reachability=0.36055513
## 104 ID=142 reachability=0.24494897
## 105 ID=141 reachability=0.36055513
## 106 ID=145 reachability=0.24494897
## 107 ID=121 reachability=0.26457513
## 108 ID=144 reachability=0.2236068
## 109 ID=125 reachability=0.3
## 110 ID=113 reachability=0.34641016
## 111 ID=140 reachability=0.17320508
## 112 ID=116 reachability=0.37416574
## 113 ID=149 reachability=0.3
## 114 ID=137 reachability=0.24494897
## 115 ID=147 reachability=0.37416574
## 116 ID=124 reachability=0.24494897
## 117 ID=127 reachability=0.17320508
## 118 ID=128 reachability=0.24494897
## 119 ID=139 reachability=0.14142136
## 120 ID=71 reachability=0.2236068
## 121 ID=150 reachability=0.28284271
## 122 ID=143 reachability=0.33166248
## 123 ID=102 reachability=0
## 124 ID=114 reachability=0.26457513
## 125 ID=122 reachability=0.31622777
## 126 ID=84 reachability=0.36055513
## 127 ID=134 reachability=0.33166248
## 128 ID=73 reachability=0.36055513
## 129 ID=103 reachability=0.4
## 130 ID=126 reachability=0.38729833
## 131 ID=130 reachability=0.34641016
## 132 ID=65 reachability=0.42426407
## 133 ID=101 reachability=0.42426407
## 134 ID=120 reachability=0.43588989
## 135 ID=108 reachability=0.43588989
## 136 ID=131 reachability=0.26457513
## 137 ID=115 reachability=0.48989795
## 138 ID=63 reachability=0.48989795
## 139 ID=69 reachability=0.50990195
## 140 ID=88 reachability=0.26457513
## 141 ID=106 reachability=0.52915026
## 142 ID=123 reachability=0.26457513
## 143 ID=119 reachability=0.41231056
## 144 ID=136 reachability=0.53851648
## 145 ID=135 reachability=0.53851648
## 146 ID=109 reachability=0.55677644
## 147 ID=110 reachability=0.63245553
## 148 ID=107 reachability=0.73484692
## 149 ID=118 reachability=0.81853528
## 150 ID=132 reachability=0.41231056
summary(res)
##
## An OPTICS results with minpts= 2 and epsilon= 1000
##
## Five Point Summary of the Minimum Reachabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
## 0.000 0.173 0.265 0.292 0.346 1.640 1
##
## Stem and Leaf Display of the Minimum Reachabilities:
##
## The decimal point is 1 digit(s) to the left of the |
##
## 0 | 0
## 1 | 00000444444444444444444444444447777777
## 2 | 00000022222222222244444444444466666666666888
## 3 | 0000000022222233333355555555566666666777999
## 4 | 011112244699
## 5 | 13446
## 6 | 235
## 7 | 3
## 8 | 2
## 9 |
## 10 |
## 11 |
## 12 |
## 13 |
## 14 |
## 15 |
## 16 | 4
plot(res,withlabels=TRUE)
There is also a function for calculating and displaying the OPTICS cordillera.
cres<-cordillera(iris[,1:4],minpts=2,epsilon=1000,scale=FALSE)
cres
## raw normed
## 14.96797 0.06125
summary(cres)
##
## OPTICS cordillera values with minpts= 2 and epsilon= 1000
##
## Raw OC: 14.97
## Normalization: 244.4
## Normed OC: 0.06125
plot(cres)
### Optimization
Since the inner optimization problem in STOPS models is hard and takes long, Rusch et al. (2015a) developed a metaheuristic for the outer optimization problem that typically needs fewer calls to the inner minimization than pso or SANN, albeit without guarantees of convergence to a global minimum for non-smooth functions. It is an adaptation of the Luus-Jaakola random search (Luus & Jaakola 1973). It can be used with the function ljoptim, whose output is modeled after optim. It takes as arguments x, a starting value; fun, a function to optimize; and lower and upper box constraints for the search region. With the argument adaptive=TRUE or FALSE one can switch between our adaptive version and the original LJ algorithm. Accuracy of the optimization can be controlled with the arguments maxit (maximum number of iterations), accd (terminate once the length of the search space falls below this value) and acc (terminate if the difference of two subsequent function values is below this value).
We optimize a “Wild Function” with the non-adaptive LJ version (and numerical accuracies of at least 1e-16 for accd and acc).
set.seed(210485)
fwild <- function (x) 10*sin(0.3*x)*sin(1.3*x^2) + 0.00001*x^4 + 0.2*x+80
## (the call producing res2 was lost; reconstructed -- the starting value and
## bounds are assumptions based on the surrounding text)
res2 <- ljoptim(50, fwild, lower = -50, upper = 50, adaptive = FALSE,
                accd = 1e-16, acc = 1e-16)
res2
## $par
## [1] -15.82
##
## $value
## [1] 67.47
##
## $counts
## function gradient
##      463       NA
##
## $convergence
## [1] 0
##
## $message
## NULL
plot(fwild, -50, 50, n = 1000, main = "ljoptim() minimising 'wild function'")
points(res2$par, res2$value, col = "red", pch = 19)
We also provide a Procrustes adjustment to make two configurations visually comparable. The function is conf_adjust and takes two configurations: conf1, the reference configuration, and conf2, another configuration. It returns the adjusted versions.
conf_adjust(conf1,conf2)
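A minimal sketch of calling it (the construction of the two configurations here is purely illustrative; any two n x 2 coordinate matrices of the same objects will do):
## two 2D configurations of the same 10 objects; conf2 is a rotated,
## translated copy of conf1
conf1 <- cmdscale(dist(iris[1:10, 1:4]), k = 2)
rot <- matrix(c(0, -1, 1, 0), 2)   # 90 degree rotation
conf2 <- conf1 %*% rot + 0.1       # rotated and shifted copy
adj <- conf_adjust(conf1, conf2)   # the adjusted versions of both configurations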
## References
• Borg I, Groenen PJ (2005). Modern multidimensional scaling: Theory and applications. 2nd edition. Springer, New York
• Buja A, Swayne DF, Littman ML, Dean N, Hofmann H, Chen L (2008). Data visualization with multidimensional scaling. Journal of Computational and Graphical Statistics, 17 (2), 444-472.
• Chen L, Buja A (2013). Stress functions for nonlinear dimension reduction, proximity analysis, and graph drawing. Journal of Machine Learning Research, 14, 1145-1173.
• de Leeuw J (2014). Minimizing r-stress using nested majorization. Technical Report, UCLA, Statistics Preprint Series.
• de Leeuw J, Mair P (2009). Multidimensional Scaling Using Majorization: SMACOF in R. Journal of Statistical Software, 31 (3), 1-30.
• Kruskal JB (1964). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29 (1), 1-27.
• Luus R, Jaakola T (1973). Optimization by direct search and systematic reduction of the size of search region. American Institute of Chemical Engineers Journal (AIChE), 19 (4), 760-766.
• McGee VE (1966). The multidimensional analysis of ‘elastic’ distances. British Journal of Mathematical and Statistical Psychology, 19 (2), 181-196.
• Reshef D, Reshef Y, Finucane H, Grossman S, McVean G, Turnbaugh P, Lander E, Mitzenmacher M, Sabeti P (2011). Detecting novel associations in large datasets. Science, 334 (6062), 1518-1524.
• Rosenberg, S. & Kim, M. P. (1975). The method of sorting as a data gathering procedure in multivariate research. Multivariate Behavioral Research, 10, 489-502.
• Rusch, T., Mair, P. and Hornik, K. (2015a) COPS: Cluster Optimized Proximity Scaling. Discussion Paper Series / Center for Empirical Research Methods, 2015/1. WU Vienna University of Economics and Business, Vienna.
• Rusch, T., Mair, P. and Hornik, K. (2015b). Structuredness Indices and Augmented Nonlinear Dimension Reduction. In preparation.
• Sammon JW (1969). A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, 18 (5), 401-409
• Sarmanov OV (1958). The maximum correlation coefficient (symmetric case). Dokl. Akad. Nauk SSSR, 120 (4), 715-718.
• Székely, G. J. Rizzo, M. L. and Bakirov, N. K. (2007). Measuring and testing independence by correlation of distances, The Annals of Statistics, 35:6, 2769–2794.
• Takane Y, Young F, de Leeuw J (1977). Nonmetric individual differences multidimensional scaling: an alternating least squares method with optimal scaling features. Psychometrika, 42 (1), 7-67.
• Torgerson WS (1958). Theory and methods of scaling. Wiley.
• Venables WN, Ripley BD (2002). Modern Applied Statistics with S. Fourth edition. Springer, New York.
|
http://mathhelpforum.com/algebra/116905-defined.html
|
1. ## defined by
what do these mean?
...is defined by....
...can be defined by...
...is defined as...
thanks a lot
2. "The definition is" or "the definition can be given by", or something similar. The phrases are used when listing the properties, rules, or other hallmarks of some given thing.
Lacking a specific context, however, little more can be said.
|
https://proofwiki.org/wiki/Definition:Subspace_Topology
|
# Definition:Topological Subspace
## Definition
Let $T = \struct {S, \tau}$ be a topological space.
Let $H \subseteq S$ be a non-empty subset of $S$.
Define:
$\tau_H := \set {U \cap H: U \in \tau} \subseteq \powerset H$
where $\powerset H$ denotes the power set of $H$.
Then the topological space $T_H = \struct {H, \tau_H}$ is called a (topological) subspace of $T$.
The set $\tau_H$ is referred to as the subspace topology on $H$ (induced by $\tau$).
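For example (an illustrative instance, not part of the original entry): let $T = \struct {\R, \tau}$ be the real line under the usual (Euclidean) topology and let $H = [0, 1]$. Then $[0, \tfrac 1 2) = (-1, \tfrac 1 2) \cap H$, so $[0, \tfrac 1 2)$ is open in the subspace topology $\tau_H$ although it is not open in $\R$.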
## Also known as
The subspace topology $\tau_H$ induced by $\tau$ can be referred to as just the induced topology (on $H$) if there is no ambiguity.
The term relative topology can also be found.
## Also see
• Results about topological subspaces can be found here.
|
http://tex.stackexchange.com/questions/146671/how-to-plot-a-curve-in-a-polar-form-r-f%ce%98/162901
|
# How to plot a curve in a polar form r = f(Θ)?
Can we use LaTeX to make the graph of $\rho = \sec(\theta)$?
Have a look at this question: tex.stackexchange.com/a/65447/15925 . – Andrew Swann Nov 24 '13 at 11:36
You have a list of your questions here. Some of them have already had answers. If the answers satisfied you, please kindly accept them by clicking the check mark button below the score labels. Optionally you should vote them up by clicking the upward arrow button. You can also vote them down but only for the extreme cases! – kiss my armpit Nov 24 '13 at 15:45
You can use pgfplots to achieve this:
\documentclass[tikz]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.9}
\usepgfplotslibrary{polar}
\begin{document}
\begin{tikzpicture}
\begin{polaraxis}
\def\FREQUENCY{3}
% the \addplot line was lost in extraction; reconstructed under the assumption
% that the original plotted a rose curve driven by \FREQUENCY
\addplot+[mesh, domain=0:360, samples=600] {sin(\FREQUENCY*x)};
\end{polaraxis}
\end{tikzpicture}
\end{document}
The source code is self-explanatory for anyone who has minimal experience in LaTeX.
Note: pgfplots by default uses degrees; to work in radians you need to convert to degrees via the deg() function.
And the \rho=\sec\theta (that's a pretty ugly function to plot on the polar axis):
\documentclass[tikz]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.9}
\usepgfplotslibrary{polar}
\begin{document}
\begin{tikzpicture}
\begin{polaraxis}
% the \addplot line was lost in extraction; reconstructed (an assumption):
% r = sec(theta), with the domain kept away from the poles at 90 and 270 degrees
\addplot+[mesh, domain=-60:60, samples=200] {sec(x)};
\end{polaraxis}
\end{tikzpicture}
\end{document}
It is not \rho=\sec \theta. – kiss my armpit Nov 24 '13 at 11:50
@DonutE.Knot: I've added the \rho=\sec\theta. It's a straight line indeed. – m0nhawk Nov 24 '13 at 11:56
The polar with the corresponding cartesian plot:
\documentclass[border=12pt,pstricks]{standalone}
\usepackage{pst-plot}
\begin{document}
\begin{pspicture}(-3.5,-3.5)(3.5,3.5)
\psaxes[axesstyle=polar,subticklinestyle=dashed,subticks=2,labelFontSize=\scriptstyle](3,0)
\psplot[polarplot,algebraic,linecolor=red,linewidth=2pt,plotpoints=500,
yMaxValue=3.5]{Pi neg}{Pi}{1/(cos(x))}
\psplot[algebraic,linecolor=blue,plotpoints=5000,yMaxValue=3.5]{Pi neg}{Pi}{1/(cos(x))}
\end{pspicture}
\end{document}
If you want to see the calculated points use showpoints:
\documentclass[border=12pt,pstricks]{standalone}
\usepackage{pst-plot}
\begin{document}
\begin{pspicture}(-3.5,-3.5)(3.5,3.5)
\psaxes[axesstyle=polar,subticklinestyle=dashed,subticks=2,labelFontSize=\scriptstyle](3,0)
\psplot[polarplot,algebraic,linecolor=red,linewidth=1.5pt,plotpoints=25,showpoints,
yMaxValue=3.5]{Pi neg}{Pi}{1/(cos(x))}
\end{pspicture}
\end{document}
A recommended solution with PSTricks. \rho=\sec\theta represents the line x=1, so the correct curve should look like the one below.
\documentclass[border=12pt,pstricks]{standalone}
\usepackage{pst-plot}
\psset{runit=1.2cm,unit=\psrunit}
\pstVerb{/const 2 def}
\begin{document}
\begin{pspicture}(-3.5,-3.5)(3.5,3.5)
\psaxes[axesstyle=polar,subticklinestyle=dashed,subticks=2,labelFontSize=\scriptstyle](3,3)
\psplot[polarplot,algebraic=true,linecolor=red,linewidth=2pt,plotpoints=1000,yMaxValue=3.5,yMinValue=-3.5]{0}{TwoPi}{const/(cos(x))}
\end{pspicture}
\end{document}
My package xpicture plots polar curves (or more general, parametric curves).
For example
% Cardioide: r = 1+cos t
\SUMfunction{\ONEfunction}{\COSfunction}{\ffunction} % Define f(t)=1 + cos t
\POLARfunction{\ffunction}{\cardioide} % Declare \cardioide as r=f(\phi)
% \degreespolarlabels % Uncomment to label angles in degrees
\begin{center}
\def\runitdivisions{2} % 2 divisions of unity in the r-axis
\setlength{\unitlength}{1.5cm}
\begin{Picture}(-2.5,-2.5)(2.5,2.5)
\polargrid{2}{16} % Draw a polar grid for 0<r<2 and 16 divisions of circle
\pictcolor{blue}\linethickness{1pt}
\PlotParametricFunction[20]{\cardioide}{0}{\numberTWOPI}
% Draw \cardioide for 0<\phi<2\pi
\end{Picture}
$\rho=1+\cos\phi$
\end{center}
|
https://selene.flatironinstitute.org/overview/cli.html
|
# Selene CLI operations and outputs¶
Selene provides a command-line interface (CLI) that takes in a user-specified configuration file containing the operations the user wants to run and the parameters required for these operations. (See Operations for more detail.)
The sections that follow describe in detail how the various components that make up the configuration file are specified. For operation-specific sections (e.g. training, evaluation), we also explain what the expected outputs are.
We strongly recommend you read through the first 4 sections (Overview, Operations, General configurations, and Model architecture) and then pick other sections based on your use case.
## Overview¶
Selene’s CLI accepts configuration files in the YAML format that are composed of 4 main (high-level) groups:
1. list of operations
2. general configuration parameters
3. model configuration
4. operation-specific configurations
“Operation-specific configurations” require you to specify the input parameters for different classes and methods that we have implemented in Selene. These configurations are parsed using code adapted from the Pylearn2 library and will instantiate the appropriate Python object or function based on your inputs. You may use Selene’s API documentation to determine what parameters are accepted by the constructors/methods implemented in Selene. For your convenience, we have created this document to specifically describe the parameters necessary to build configuration files for the Selene CLI.
We recommend you start off by using one of the example configuration files provided in the repository as a template for your own configuration file.
There are also various configuration files associated with the Jupyter notebook tutorials and manuscript case studies that you may use as a starting point.
## Operations¶
Every file should start with the operations that you want to run.
ops: [train, evaluate, analyze]
The ops key expects one or more of [train, evaluate, analyze] to be specified as a list. In addition to the general and model architecture configurations described in the next 2 sections, each of these operations requires an additional set of configurations attached to the following keys: train_model for train, evaluate_model for evaluate, and analyze_sequences together with one of prediction, variant_effect_prediction, or in_silico_mutagenesis for analyze.
Note: You should be able to use multiple operations (i.e. specify the necessary configuration keys for those operations in a single file). However, if [train, evaluate] are both specified, we expect that they will both rely on the same sampler. If you need to train and evaluate using different samplers, please create 2 separate YAML files.
## General configurations¶
In addition to the ops key, you can specify the following parameters:
random_seed: 1337
output_dir: /absolute/path/to/output/dir
create_subdirectory: True
lr: 0.01
Note that there should not be any commas at the end of these lines.
• random_seed: Set a random seed for torch and torch.cuda (if using CUDA-enabled GPUs) for reproducibility.
• output_dir: The output directory to use for all operations. If no output_dir is specified, Selene assumes that the output_dir is specified in all relevant function-type values for operations in Selene. (More information on what function-type values are in later sections.) We recommend using this parameter for train and evaluate operations.
• create_subdirectory: If True, creates a directory within output_dir with the name formatted as %Y-%m-%d-%H-%M-%S—the date/time when Selene was run. (This is only applicable if output_dir has been specified.)
• lr: The learning rate. If you use our CLI script, you can pass this in as a command-line argument rather than having it specified in the configuration file.
• load_test_set: This is only applicable if you have specified ops: [train, evaluate]. You can set this parameter to True (by default it is False and the test set is only loaded when training ends) if you would like to load the test set into memory before training begins—and therefore save the test data generated by a sampler to a .bed file. You would find this useful if you want to save a test dataset (see Samplers used for training) and you do not know if your model will finish training and evaluation within the allotted time that your job is run. You should also be running Selene on a machine that can support such an increase in memory usage (on the order of GBs, depending on how many classes your model predicts, how large the test dataset is, etc.).
## Model architecture¶
For all operations, Selene requires that you specify the model architecture, loss, and optimizer as inputs.
### Expected input class and methods¶
There are two possible formats you can use to do this:
• single Python file: We expect that most people will start using Selene with model architectures in this format. In this case, you implement your architecture as a class and include 2 static methods, criterion and get_optimizer, in the same file (a minimal sketch follows this list). See our DeepSEA model file as an example.
• The criterion method should not take any input arguments and must return a loss function object of type torch.nn._Loss.
• The get_optimizer method should accept a single input lr, the learning rate. (Note that this method is not used for the evaluate and analyze operations in Selene.) It returns a tuple, where tuple[0] is the optimizer class torch.optim.Optimizer and tuple[1] is a dictionary of any optional arguments with which Selene can then instantiate the class. Selene will first instantiate the model and then pass the required model.parameters() argument as input to the torch.optim.Optimizer class constructor.
• Python module: For more complicated architectures, you may want to write custom PyTorch modules and use them in your final architecture. In this case, it is likely your model architecture imports other custom classes. We ask that you then specify your architecture within a Python module. That is, the directory containing your architecture, loss, and optimizer must have a __init__.py that imports the architecture class, criterion, and get_optimizer.
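To make the single-file format concrete, here is a minimal sketch. The architecture, its hyperparameters, and the choice of loss and optimizer are illustrative assumptions, not Selene's DeepSEA file:
# my_model.py -- illustrative single-file architecture (a sketch)
import torch
import torch.nn as nn

class ModelArchitectureClassName(nn.Module):
    def __init__(self, sequence_length, n_genomic_features):
        super(ModelArchitectureClassName, self).__init__()
        # input is expected as one-hot encoded sequence, shape (batch, 4, length)
        self.conv = nn.Conv1d(4, 16, kernel_size=8)
        n_flat = 16 * (sequence_length - 8 + 1)
        self.classifier = nn.Sequential(
            nn.Linear(n_flat, n_genomic_features),
            nn.Sigmoid())

    def forward(self, x):
        out = self.conv(x)
        return self.classifier(out.view(out.size(0), -1))

def criterion():
    # no input arguments; returns a torch.nn loss object
    return nn.BCELoss()

def get_optimizer(lr):
    # returns (optimizer class, optional kwargs); Selene supplies model.parameters()
    return (torch.optim.SGD, {"lr": lr, "momentum": 0.9})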
### Model architecture configuration¶
model: {
path: /absolute/path/to/file_or_model,
class: ModelArchitectureClassName,
class_args: {
arg1: val1,
arg2: val2,
...
},
non_strand_specific: mean
}
• path: This can be the path to a Python file or a Python module (directory). See the previous section for details.
• class: The model architecture class name.
• class_args: The arguments needed to instantiate the class. In the case of DeepSEA, the class_args keys would be sequence_length and n_genomic_features.
• non_strand_specific: Optional, possible values are mean or max if you want to use this parameter. (Otherwise, do not use this key in your model configuration.) If your model does not need to train on strand-specific input sequences, we have implemented a class that will pass both the forward and reverse sequence to the model and either take the mean or the max value across the two sets of predictions for a sample.
## A note for the following sections¶
For training, evaluation, and analysis [of sequences using trained models], Selene requires that specific keys in the YAML file correspond to function-type values. The function-type value is used to construct an object that is a class in selene_sdk. Our documentation website is an important resource for debugging configuration-related errors when you run Selene via the CLI.
We have covered the most common configurations in this document.
## Train¶
An example configuration for training:
train_model: !obj:selene_sdk.TrainModel {
batch_size: 64,
max_steps: 960000,
report_stats_every_n_steps: 32000,
save_checkpoint_every_n_steps: 1000,
save_new_checkpoints_after_n_steps: 640000,
n_validation_samples: 64000,
n_test_samples: 960000,
use_cuda: True,
data_parallel: True,
logging_verbosity: 2,
metrics: {
roc_auc: !import:sklearn.metrics.roc_auc_score,
average_precision: !import:sklearn.metrics.average_precision_score
},
checkpoint_resume: False
}
### Required parameters¶
• batch_size: Number of samples in one forward/backward pass (a single step).
• max_steps: Total number of steps for which to train the model.
• report_stats_every_n_steps: The frequency with which to report summary statistics. You can set this value to be equivalent to a training epoch (n_steps * batch_size) being the total number of samples seen by the model so far. Selene evaluates the model on the validation dataset every report_stats_every_n_steps and, if the model obtains the best performance so far (based on the user-specified loss function), Selene saves the model state to a file called best_model.pth.tar in output_dir.
### Optional parameters¶
• save_checkpoint_every_n_steps: Default is 1000. The number of steps before Selene saves a new checkpoint model weights file. If this parameter is set to None, we will set it to the same value as report_stats_every_n_steps.
• save_new_checkpoints_after_n_steps: Default is None. The number of steps after which Selene will continually save new checkpoint model weights files (checkpoint-<TIMESTAMP>.pth.tar) every save_checkpoint_every_n_steps. Before this, the file checkpoint.pth.tar is overwritten every save_checkpoint_every_n_steps to limit the memory requirements.
• n_validation_samples: Default is None. Specify the number of validation samples in the validation set. If None
• and the data sampler you use is of type selene_sdk.samplers.OnlineSampler, we will by default retrieve 32000 validation samples.
• and you are using a selene_sdk.samplers.MultiFileSampler, we will use all the validation samples available in the appropriate data file.
• n_test_samples: Default is None. Specify the number of test samples in the test set. If None and
• the sampler you specified has no test partition, you should not specify evaluate as one of the operations in the ops list. That is, Selene will not automatically evaluate your trained model on a test dataset, because the sampler you are using does not have any test data.
• the sampler you use is of type selene_sdk.samplers.OnlineSampler (and the test partition exists), we will retrieve 640000 test samples.
• the sampler you use is of type selene_sdk.samplers.MultiFileSampler (and the test partition exists), we will use all the test samples available in the appropriate data file.
• cpu_n_threads: Default is 1. The number of OpenMP threads used for parallelizing CPU operations in PyTorch.
• use_cuda: Default is False. Specify whether CUDA-enabled GPUs are available for torch to use during training.
• data_parallel: Default is False. Specify whether multiple GPUs are available for torch to use during training.
• logging_verbosity: Default is 2. Possible values are {0, 1, 2} . Sets the logging verbosity level:
• 0: only warnings are logged
• 1: information and warnings are logged
• 2: debug messages, information, and warnings are all logged
• metrics: Default is a dictionary with "roc_auc" mapped to sklearn.metrics.roc_auc_score and "average_precision" mapped to sklearn.metrics.average_precision_score. metrics is a dictionary that maps metric names (str) to metric functions. In addition to the loss function you specified with your model architecture, these are the metrics that you would like to monitor during the training/evaluation process (they all get reported every report_stats_every_n_steps). See the Regression Models in Selene tutorial for a different input to the metrics parameter. You can !import metrics from scipy, scikit-learn, statsmodels. Each metric function should require, in order, the true values and predicted values as input arguments. For example, sklearn.metrics.average_precision_score takes y_true and y_score as input. A sketch of a conforming custom metric is shown after this list.
• checkpoint_resume: Default is None. If not None, you should pass in the path to a model weights file generated by torch.save (and can now be read by torch.load) to resume training.
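As referenced in the metrics bullet above, a sketch of a custom metric with the required argument order (the function itself, a mean column-wise Pearson correlation, is an illustrative assumption):
import numpy as np

def mean_pearson_r(y_true, y_pred):
    # arguments are ordered (true values, predicted values), as Selene requires;
    # assumes 2D arrays of shape (n_samples, n_classes)
    yt = np.asarray(y_true, dtype=float)
    yp = np.asarray(y_pred, dtype=float)
    rs = [np.corrcoef(yt[:, j], yp[:, j])[0, 1] for j in range(yt.shape[1])]
    return float(np.nanmean(rs))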
Attentive readers might have noticed that in the documentation for the TrainModel class there are more input arguments than are required to instantiate the class through the CLI configuration file. This is because they are assumed to be carried through/retrieved from other configuration keys for consistency. Specifically:
• output_dir can be specified as a top-level key in the configuration. You can specify it within each function-type constructor (e.g. !obj:selene_sdk.TrainModel) if you prefer. If output_dir exists as a top-level key, Selene does use the top-level output_dir and ignores all other output_dir keys. The output_dir is omitted in many of the configurations for this reason.
• model, loss_criterion, optimizer_class, optimizer_kwargs are all retrieved from the path in the model configuration.
• data_sampler has its own separate configuration that you will need to specify in the same YAML file. Please see Sampler configurations for more information.
### Expected outputs for training¶
These outputs will be written to output_dir (a top-level parameter, can also be specified within the function-type constructor, see above).
• best_model.pth.tar: the best performing model so far. IMPORTANT: for all *.pth.tar files output by Selene right now, we save additional information beyond the model’s state dictionary so that users may continue training these models through Selene if they wish. If you would like to save only the state dictionary, you can run out = torch.load(<*.pth.tar>) and then save only the state_dict key with torch.save(out["state_dict"], <state_dict_only.pth.tar>) (a runnable sketch of this follows this list).
• checkpoint.pth.tar: model saved every save_checkpoint_every_n_steps steps
• selene_sdk.train_model.log: a detailed log file containing information about how much time it takes for batches to sampled and propagated through the model, how the model is performing, etc.
• selene_sdk.train_model.train.txt: model training loss is printed to this file every report_stats_every_n_steps.
• Visualize using matplotlib (plt.plot)
• selene_sdk.train_model.validation.txt: model validation loss and other metrics you have specified (defaults would be ROC AUC and AUPRC) are printed to this file (tab-separated) every report_stats_every_n_steps.
• Visualize one of these columns using matplotlib (plt.plot)
• saved sampled datasets (if applicable), e.g. test_data.bed: if the save_datasets value is not an empty list, Selene periodically saves all the data sampled so far in these .bed files. The columns of these files are [chr, start, end, strand, semicolon_separated_class_indices]. In the future, we will adjust this file to support non-binary labels (i.e. since we are only storing class indices in these output .bed files, we can only label sequences with 1/0, presence/absence, of a given class).
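A runnable version of the state-dictionary extraction described in the best_model.pth.tar bullet above (filenames are illustrative):
import torch

# Selene checkpoints contain the state dict plus training bookkeeping
out = torch.load("best_model.pth.tar", map_location="cpu")
# keep only the model weights
torch.save(out["state_dict"], "state_dict_only.pth.tar")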
## Evaluate¶
An example configuration for evaluation:
evaluate_model: !obj:selene_sdk.EvaluateModel {
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/features_list.txt
},
trained_model_path: /path/to/trained/model.pth.tar,
batch_size: 64,
n_test_samples: 640000,
report_gt_feature_n_positives: 50,
use_cuda: True
}
### Required parameters¶
• features: The list of distinct features the model predicts. (input_path to the function-type value that loads the features as a list.)
• trained_model_path: Path to the trained model weights file, which should have been generated/saved using torch.save. (i.e. you can pass in the saved model file generated by Selene’s TrainModel class.)
### Optional parameters¶
• batch_size: Default is 64. Specify the batch size to process examples. Should be a power of 2.
• n_test_samples: Default is None. Use n_test_samples if you want to limit the number of samples on which you evaluate your model. If you are using a sampler of type selene_sdk.samplers.OnlineSampler (you must specify a test partition in this case), it will default to 640000 test samples if n_test_samples = None. If you are using a file sampler (multiple-file sampler or BED/matrix file samplers), it will use all samples available in the file.
• report_gt_feature_n_positives: Default is 10. In total, each class/feature must have more than report_gt_feature_n_positives positive examples in the test set to be considered in the performance computation. The output file that reports each class’s performance will report ‘NA’ for classes that do not have enough positive samples.
• use_cuda: Default is False. Specify whether CUDA-enabled GPUs are available for torch to use.
• data_parallel: Default is False. Specify whether multiple GPUs are available for torch to use.
Similar to the train_model configuration, any arguments that you find in the documentation that are not present in the function-type value’s arguments are automatically instantiated and passed in by Selene.
If you use a sampler with multiple data partitions with the evaluate_model configuration, please make sure that your sampler configuration’s mode parameter is set to test.
### Expected outputs for evaluation¶
These outputs will be written to output_dir (a top-level parameter, can also be specified within the function-type constructor).
• test_performance.txt: columns are class and whatever other metrics you specified (defaults: roc_auc and average_precision). The breakdown of performance metrics by each class that the model predicts.
• test_predictions.npz: The model predictions for each sample in the test set. Useful if you want to make your own visualizations/figures.
• test_targets.npz: The actual classes for each sample in the test set. Useful if you want to make your own visualizations/figures.
• precision_recall_curves.svg: If using AUPRC as a metric, this is an AUPRC figure that we generate for you. Each curve corresponds to one of the classes the model predicts.
• roc_curves.svg: If using ROC AUC as a metric, this is an ROC AUC figure that we generate for you. Each curve corresponds to one of the classes the model predicts.
• selene_sdk.evaluate_model.log: Note that if evaluate is run through train_model (that is, no evaluate_model configuration was specified, but you used ops: [train, evaluate]) you will only see selene_sdk.train_model.log. selene_sdk.evaluate_model.log is created when evaluate_model is used and will output some logging information related to the selene_sdk.EvaluateModel class (some debug statements and performance metrics).
## Analyze sequences¶
The analyze operation allows you to apply a trained model to new sequences of interest. Currently, we support 3 “sub-operations” for analyze:
1. Prediction on sequences: Output the model predictions for a list of sequences.
2. Variant effect prediction: Output the model predictions for sequences centered on specific variants (will output reference and alternate predictions as separate files).
3. In silico mutagenesis: In silico mutagenesis (ISM) involves computationally “mutating” every position in the sequence to every other possible base (DNA and RNA) or amino acid (protein sequences) and examining the consequences of these “mutations”. For ISM, Selene outputs the model predictions for the reference (original) sequence along with each of the mutated sequences.
For variant effect prediction and in silico mutagenesis, a number of scores can be computed using the predictions from the reference and alternate alleles. You may select 1 or more of the following as outputs:
• predictions (output the predictions for each variant, as described above)
• diffs (difference scores): The difference between alt and ref predictions.
• abs_diffs (absolute difference scores): The absolute difference between alt and ref predictions.
• logits (log-fold change scores): The difference between logit(alt) and logit(ref) predictions.
You’ll find examples of how these scores are specified in the variant effect prediction and in silico mutagenesis sections.
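The scores themselves are simple functions of the two prediction arrays; the following numpy sketch (an illustration, not Selene's internal code) shows how diffs, abs_diffs, and logits relate to the ref and alt predictions:
import numpy as np

def variant_scores(ref_preds, alt_preds, eps=1e-12):
    # ref_preds, alt_preds: arrays of class probabilities for the two alleles
    ref = np.asarray(ref_preds, dtype=float)
    alt = np.asarray(alt_preds, dtype=float)
    diffs = alt - ref                 # difference scores
    abs_diffs = np.abs(diffs)         # absolute difference scores
    logit = lambda p: np.log(p + eps) - np.log(1.0 - p + eps)
    logits = logit(alt) - logit(ref)  # log-fold change scores
    return diffs, abs_diffs, logits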
In all analyze-related operations, we ask that you specify 2 configuration keys. One will always be the analyze_sequences key and the other one is dependent on which of the 3 sub-operations you use: prediction, variant_effect_prediction or in_silico_mutagenesis.
analyze_sequences: !obj:selene_sdk.predict.AnalyzeSequences {
trained_model_path: /path/to/trained/model.pth.tar,
sequence_length: 1000,
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/features_list.txt
},
batch_size: 64,
use_cuda: False,
reference_sequence: !obj:selene_sdk.sequences.Genome {
input_path: /path/to/reference_sequence.fa
},
write_mem_limit: 5000
}
### Required parameters¶
• trained_model_path: Path to the trained model weights file, which should have been generated/saved using torch.save. (i.e. You can pass in the saved model file generated by Selene’s TrainModel class.)
• sequence_length: The sequence length the model is expecting for each input.
• features: The list of distinct features the model predicts. (input_path to the function-type value that loads the features as a list.)
### Optional parameters¶
• batch_size: Default is 64. The size of the mini-batches to use.
• use_cuda: Default is False. Specify whether CUDA-enabled GPUs are available for torch to use.
• reference_sequence: Default is the class selene_sdk.sequences.Genome. The type of sequence on which this analysis will be performed (must be type selene.sequences.Sequence).
• IMPORTANT: For variant effect prediction, the reference sequence version should correspond to the version used to specify the chromosome and position of each variant, NOT necessarily the one on which your model was trained.
• For prediction on sequences and in silico mutagenesis, the only thing that matters is the sequence type—that is, Selene uses the static variables in the class for information about the sequence alphabet and encoding. One problem with our current configuration file parsing is that it asks you to pass in a valid input FASTA file even though you do not need the reference sequence for these 2 sub-operations. We will see if this issue can be resolved in the future.
• write_mem_limit: Default is 5000. Specify, in MB, the amount of memory you want to allocate to storing model predictions/scores. When running one of the sub-operations in analyze, prediction/score handlers will accumulate data in memory and write this data to files periodically. By default, Selene will write to files when the total amount of data (that is, across all handlers) takes up 5000MB of space. Please keep in mind that Selene will not monitor the amount of memory needed to actually carry out a sub-operation (or load the model beforehand), so write_mem_limit must always be less than the total amount of CPU memory you have available on your machine. It is hard to recommend a specific proportion of memory you would allocate for write_mem_limit because it is dependent on your input file size (we may change this soon, but Selene currently loads all variants/sequences in a file into memory before running the sub-operation), the model size, and whether the model will run on CPU or GPU.
### Prediction on sequences¶
For prediction on sequences, we require that a user specifies the path to a FASTA file.
An example configuration for prediction on sequences:
prediction: {
input_path: /path/to/sequences.fa,
output_dir: /path/to/output/dir,
output_format: tsv
}
#### Parameters¶
• input_path: Input path to the FASTA file.
• output_dir: Output directory to write the model predictions. The resulting file will have the same filename prefix (e.g. example.fasta will output example_predictions.tsv).
• output_format: Default is ‘tsv’. You may specify either ‘tsv’ or ‘hdf5’. ‘tsv’ is suitable if you do not have many sequences (<1000) or your model does not predict very many classes (<1000) and you want to be able to view the full set of predictions quickly and easily (via a text editor or Excel). ‘hdf5’ is suitable for downstream analysis. You can access the data in the HDF5 file using the Python package h5py. Once the file is loaded, the full matrix is accessible under the key/name "data". Saving to TSV is much slower (more than 2x slower) than saving to HDF5. An additional .txt file with the row labels (descriptions for each sequence in the FASTA) will be output for the HDF5 format as well. It should be ordered in the same way as your input file. The matrix rows will correspond to each sequence and the columns the classes the model predicts.
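A sketch of reading the HDF5 output back in Python (the filename is illustrative; per the description above, the matrix is stored under the key "data"):
import h5py

# rows correspond to the input sequences, columns to the model's classes
with h5py.File("example_predictions.h5", "r") as fh:
    preds = fh["data"][()]
# the accompanying .txt file (name assumed) holds the row labels, in input order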
### Variant effect prediction¶
Currently, we expect that all sequences passed as input to a model must be the same length N.
• For SNPs, Selene outputs the model predictions for the ref and alt sequences centered at the (chr, pos) specified.
• For indels, sequences are centered at pos + (N_bases / 2), for the reference sequence of length N_bases. Selene queries for start = pos + (N_bases / 2) - (N / 2) and end = pos + (N_bases / 2) + (N / 2) to get the sequence of length N.
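For example (arithmetic only): with N = 1000 and an indel whose reference allele has N_bases = 4, the queried window is start = pos + 2 - 500 and end = pos + 2 + 500, i.e. a 1000-base sequence centered at pos + 2.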
An example configuration for variant effect prediction:
variant_effect_prediction: {
vcf_files: [
/path/to/file1.vcf,
/path/to/file2.vcf,
...
],
save_data: [abs_diffs],
output_dir: /path/to/output/predictions/dir,
output_format: hdf5,
strand_index: 7
}
#### Parameters¶
• vcf_files: A list of paths to VCF files. Each file must contain the columns [#CHROM, POS, ID, REF, ALT], in order. The column header does not need to be present. (All other columns in the files will be ignored.)
• save_data: A list of the data files to output. Must input 1 or more of the following options: [abs_diffs, diffs, logits, predictions]. (Note that the raw prediction values will not be outputted by default—you must specify predictions in the list if you want them.)
• output_dir: Output directory to write the model predictions. The resulting file will have the same filename prefix.
• output_format: Default is ‘tsv’. You may specify either ‘tsv’ or ‘hdf5’. ‘tsv’ is suitable if you do not have many variants (on the order of 10^4 or less) or your model does not predict very many classes (<1000) and you want to be able to view the full set of predictions quickly and easily (via a text editor or Excel). ‘hdf5’ is suitable for downstream analysis. You can access the data in the HDF5 file using the Python package h5py. Once the file is loaded, the full matrix is accessible under the key/name "data". Saving to TSV is much slower (more than 2x slower) than saving to HDF5. When the output is in HDF5 format, an additional .txt file of row labels (corresponding to the columns (chrom, pos, id, ref, alt)) will be output so that you can match up the data matrix rows with the particular variant. Columns of the matrix correspond to the classes the model predicts.
• strand_index: Default is None. If applicable, specify the column index (0-based) in the VCF file that contains strand information for each variant. Note that currently Selene assumes that, for multiple input VCF files, the strand column is the same for all the files.
You may find that there are more output files than you expect in output_dir at the end of variant effect prediction. The following cases may occur:
• NAs: for some variants, Selene may not be able to construct a reference sequence centered at pos of the specified sequence length. This is likely because pos is near the end or the beginning of the chromosome and the sequence length the model accepts as input is large. You will find a list of NA variants in a file that ends with the extension .NA.
• Warnings: Selene may detect that the ref base(s) in a variant do not match with the bases specified in the reference sequence FASTA at the (chrom, pos). In this case, Selene will use the ref base(s) specified in the VCF file in place of those in the reference genome and output predictions accordingly. However, the predictions will be diverted to a file prefixed with warning. so that you may review these variants and determine whether you still want to use those predictions/scores. If you find that most of the variants are showing up in the warning file, it may be that you have specified the wrong reference genome version—please check this before proceeding.
### In silico mutagenesis¶
An example configuration for in silico mutagenesis when using a single sequence as input:
in_silico_mutagenesis: {
input_sequence: ATCGATAAAATTCTGGAG...,
save_data: [predictions, diffs],
output_path_prefix: /path/to/output/dir/filename_prefix,
mutate_n_bases: 1
}
#### Parameters for a single sequence input¶
• input_sequence: A sequence you are interested in. If the sequence length is less than or greater than the model's expected input sequence length, Selene truncates or pads (with the unknown base, e.g. N) the sequence for you.
• save_data: A list of the data files to output. Must input 1 or more of the following options: [abs_diffs, diffs, logits, predictions]. (Note that the raw prediction values will not be outputted by default—you must specify predictions in the list if you want them.)
• output_path_prefix: Optional, default is “ism”. The path to which the data files are written. We have specified that it should be a filename prefix because we will append additional information depending on what files you would like to output (e.g. fileprefix_logits.tsv) If directories in the path do not yet exist, they will automatically be created.
• mutate_n_bases: Optional, default is 1. The number of bases to mutate at any time. Standard in silico mutagenesis only mutates a single base at a time, so we encourage users to start by leaving this value at 1. Double/triple mutations will be more difficult to interpret and are something we may work on in the future.
An example configuration for in silico mutagenesis when using a FASTA file as input:
in_silico_mutagenesis: {
input_path: /path/to/sequences1.fa,
save_data: [logits],
output_dir: /path/to/output/predictions/dir,
mutate_n_bases: 1,
use_sequence_name: True
}
#### Parameters for FASTA file input:¶
• input_path: Input path to the FASTA file. If you have multiple FASTA files, you can replace this key with fa_files and submit an input list, the same way it is done in variant effect prediction.
• save_data: A list of the data files to output. Must input 1 or more of the following options: [abs_diffs, diffs, logits, predictions].
• output_dir: Output directory to write the model predictions.
• mutate_n_bases: Optional, default is 1. The number of bases to mutate at any time. Standard in silico mutagenesis only mutates a single base at a time, so we encourage users to start by leaving this value at 1.
• use_sequence_name: Optional, default is True.
• If use_sequence_name, output files are prefixed by the sequence name/description corresponding to each sequence in the FASTA file. Spaces in the description are replaced with underscores ‘_’.
• If not use_sequence_name, output files are prefixed with the index i corresponding to the ith sequence in the FASTA file.
## Sampler configurations¶
Data sampling is used during model training and evaluation. You must specify the sampler in the configuration YAML file alongside the other operation-specific configurations (i.e. train_model or evaluate_model).
### Samplers used for training (and evaluation, optionally)¶
Training requires a sampler that specifies the data for training, validation, and (optionally) testing. While Selene can directly evaluate a trained model on a test dataset when training is finished, it is not a required step and so the test dataset specification is also optional. Here, we provide examples for the samplers we have implemented that can be used for training.
There are 2 kinds of samplers implemented in Selene right now: “online” samplers and file samplers. Online samplers generate data samples on-the-fly and require you to pass in a reference sequence FASTA file and a tabix-indexed BED file so that Selene can query for an input sequence and its associated biological classes using genomic coordinates. The file sampler we use supports loading different .mat or .bed files (can support more formats upon request) for the training, validation, and test sets.
For increased efficiency during the training of large models, we would recommend using the online sampler to create datasets (.bed or .mat) and then loading the generated data with a file sampler. We are actively working to incorporate PyTorch dataloaders and other improvements to data sampling into Selene to reduce the time and memory requirements of training. Feel free to contact us through our Github issues if you have comments or want to contribute to this effort!
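For the online samplers below, target_path must point to a bgzip-compressed, tabix-indexed BED file. One way to produce such a file from a coordinate-sorted BED file is via the external pysam package (an assumption for illustration; any bgzip/tabix workflow works):
import pysam

# compresses targets.bed to targets.bed.gz (bgzip) and writes targets.bed.gz.tbi
pysam.tabix_index("targets.bed", preset="bed", force=True)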
#### Random positions sampler¶
The random positions sampler will construct data samples by randomly selecting a position in the genome and then using the sequence and classes centered at that position as the input and targets for the model to predict.
An example configuration for the random positions sampler:
sampler: !obj:selene_sdk.samplers.RandomPositionsSampler {
reference_sequence: !obj:selene_sdk.sequences.Genome {
input_path: /path/to/reference_sequence.fa,
blacklist_regions: hg19
},
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/features_list.txt
},
target_path: /path/to/targets_bed.gz,
seed: 123,
validation_holdout: [chr6, chr7],
test_holdout: [chr8, chr9],
sequence_length: 1000,
center_bin_to_predict: 200,
feature_thresholds: 0.5,
mode: train,
save_datasets: [train, validate, test]
}
##### Required parameters¶
• reference_sequence: Path to a reference sequence FASTA file we can query to create our data samples.
• blacklist_regions is an optional argument for selene_sdk.sequences.Genome that allows you to specify the blacklist regions for the hg19 or hg38 reference sequence. The lists of blacklisted intervals are provided by Anshul Kundaje for ENCODE and support for more organisms can be included upon request.
• target_path: Path to a tabix-indexed, compressed BED file (.bed.gz) of genomic coordinates corresponding to the measurements for genomic features/classes the model should predict.
• features: The list of distinct features the model predicts. (input_path to the function-type value that loads the file of features as a list.)
##### Optional parameters¶
• seed: Default is 436.
• validation_holdout: Default is [chr6, chr7]. Holdout can be regional (i.e. chromosomal) or proportional.
• If regional, expects a list where the regions must match those specified in the first column of the tabix-indexed BED file target_path (which must also match the FASTA descriptions for every record in reference_sequence).
• If proportional, specify a proportion between 0.0 and 1.0. Typically 0.10 or 0.20.
• test_holdout: Default is [chr8, chr9]. Holdout can be regional (i.e. chromosomal) or proportional. See description of validation_holdout.
• sequence_length: Default is 1000. Model is trained on sequences of sequence_length.
• center_bin_to_predict: Default is 200. Query the tabix-indexed file for a region of length center_bin_to_predict, centered in the input sequence of sequence_length.
• feature_thresholds: Default is 0.5. The threshold to pass to the selene_sdk.targets.Targets object. Because we have only implemented support for genomic features right now, we reproduce the threshold inputs for that here:
• A genomic region is determined to be a positive sample if at least one genomic feature interval takes up some proportion of the region greater than or equal to the corresponding threshold.
• float: A single threshold applied to all the features in your dataset.
• dict: A dictionary mapping feature names (str) to thresholds (float). This is used if you want to assign different thresholds for different features. If a feature’s threshold is not specified in the dictionary, you must have the key default with a default threshold value we can use for that feature.
• mode: Default is ‘train’. Must be one of {train, validate, test}. The starting mode in which to run this sampler.
• save_datasets: Default is [test]. The list of modes for which we should save the sampled data to file. Should be one or more of {train, validate, test}.
#### Intervals sampler¶
The intervals sampler will construct data samples by randomly selecting positions only in the regions specified by an intervals .bed file and then using the sequence and classes centered at that position as the input and targets for the model to predict.
An example configuration for the intervals sampler:
sampler: !obj:selene_sdk.samplers.IntervalsSampler {
reference_sequence: !obj:selene_sdk.sequences.Genome {
input_path: /path/to/reference_sequence.fa,
blacklist_regions: hg38
},
target_path: /path/to/targets.bed.gz,
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/features_list.txt
},
intervals_path: /path/to/intervals.bed,
sample_negative: False,
seed: 436,
validation_holdout: 0.10,
test_holdout: 0.10,
sequence_length: 1000,
center_bin_to_predict: 100,
feature_thresholds: {"feature1": 0.5, "default": 0.1},
mode: test,
save_datasets: [test]
}
##### Parameters¶
With the exception of intervals_path and sample_negative, all other parameters match those for the random positions sampler. Please see the previous section for more details on the other parameters.
• intervals_path: The path to the intervals file. Must have the columns [chr, start, end], where values in chr should match the descriptions in the FASTA file. We constrain the regions from which we sample to the regions in this file instead of using the whole genome.
• sample_negative: Optional, default is False. Specify whether negative examples (i.e. samples with no positive labels) should be drawn. When False, the sampler will check if the center_bin_to_predict in the input sequence contains at least 1 of the features/classes the model wants to predict. When True, no such check is made.
#### Multiple-file sampler¶
The multi-file sampler loads in the training, validation, and optionally, the testing dataset. The configuration for this therefore asks that you fill in some keys with the function-type constructors of type selene_sdk.samplers.file_samplers.FileSampler. Please consult the following sections for information about these file samplers.
An example configuration for the multiple-file sampler:
sampler: !obj:selene_sdk.samplers.MultiFileSampler {
train_sampler: !obj:selene_sdk.samplers.file_samplers.MatFileSampler {
...
},
validate_sampler: !obj:selene_sdk.samplers.file_samplers.MatFileSampler {
...
},
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/features_list.txt
},
test_sampler: !obj:selene_sdk.samplers.file_samplers.BedFileSampler {
...
},
mode: train
}
##### Parameters¶
• train_sampler: Load your training data from either a .bed file (selene_sdk.samplers.file_samplers.BedFileSampler) or .mat file (selene_sdk.samplers.file_samplers.MatFileSampler).
• validate_sampler: Same as train_sampler.
• test_sampler: Optional, default is None. Same as train_sampler.
• features: The list of distinct features the model predicts. (input_path to the function-type value that loads the file of features as a list.)
• mode: Default is ‘train’. Must be one of {train, validate, test}. The starting mode in which to run this sampler.
### Important note¶
If you use any of these samplers (that is, samplers with multiple data partitions) with the evaluate_model configuration, please make sure that your mode is set to test.
### Samplers used for evaluation¶
You can use all the samplers specified for training for evaluation as well (see note above). Additionally, you can use single-file samplers, which we describe below.
#### BED file sampler¶
The BED file sampler loads a dataset from a .bed file. This can be generated by one of the online samplers in Selene with the save_datasets parameter.
An example configuration for a BED file sampler:
sampler: !obj:selene_sdk.samplers.file_samplers.BedFileSampler {
filepath: /path/to/data.bed,
reference_sequence: !obj:selene_sdk.sequences.Genome {
input_path: /path/to/reference_sequence.fa
},
n_samples: 640000,
sequence_length: 1000,
targets_avail: True,
n_features: 919,
}
##### Parameters¶
• filepath: Path to the BED file.
• reference_sequence: Path to a reference sequence FASTA file we can query to create our data samples.
• n_samples: Number of lines in the file. (wc -l <filepath>)
• sequence_length: Optional, default is None. If the coordinates of each sample in the BED file already account for the full sequence (that is, end - start = sequence_length), there is no need to specify this parameter. If sequence_length is not None, the length of each sample will be checked to determine whether the sample coordinates need to be adjusted to match the sequence length expected by the model architecture.
• targets_avail: Optional, default is False. If targets_avail is True, the sampler assumes the targets are in the last column of the .bed file. That column should contain the indices, separated by semicolons, of the features (classes) found within a given sample’s coordinates (e.g. 0;1;45;60; see the example line below). This format assumes that we are only looking for the absence/presence of each feature within the interval.
• n_features: Optional, default is None. If targets_avail is True, must specify n_features, the total number of features (classes).
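As a hedged illustration of this format, a single tab-separated line of such a .bed file (with coordinates spanning the full sequence length and hypothetical feature indices in the last column) might look like:
chr1	2650	3650	0;45;60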
#### Matrix file sampler¶
The matrix file sampler loads a dataset from a matrix file.
An example configuration for a matrix file sampler:
sampler: !obj:selene_sdk.samplers.file_samplers.MatFileSampler {
filepath: /path/to/data.mat,
sequence_key: sequences,
targets_key: targets,
random_seed: 123,
shuffle: True,
sequence_batch_axis: 0,
sequence_alphabet_axis: 1,
targets_batch_axis: 0
}
##### Parameters¶
• filepath: The path to the file from which to load the data.
• sequence_key: The key for the sequences data matrix.
• targets_key: Optional, default is None. The key to the targets data matrix.
• random_seed: Optional, default is 436. Sets the random seed for sampling.
• shuffle: Optional, default is True. Shuffle the order of the samples in the matrix before sampling from it.
• sequence_batch_axis: Optional, default is 0. Specify the batch axis for the sequences matrix.
• sequence_alphabet_axis: Optional, default is 1. Specify the alphabet axis.
• targets_batch_axis: Optional, default is 0. Specify the batch axis for the targets matrix.
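As a minimal sketch (not taken from the Selene documentation), a compatible .mat file with the default axis layout could be produced like this; the shapes, key names, and random data are illustrative assumptions:
import numpy as np
from scipy.io import savemat

n_samples, alphabet_size, seq_len, n_features = 1000, 4, 1000, 919

# sequences: batch axis 0, alphabet axis 1 (the defaults above)
sequences = np.random.rand(n_samples, alphabet_size, seq_len)
# targets: batch axis 0, one column per feature
targets = np.random.randint(0, 2, size=(n_samples, n_features))

savemat("data.mat", {"sequences": sequences, "targets": targets})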
## Examples of full configuration files¶
We do have a more comprehensive set of examples on our Github that you can review. We reproduce a few of these in this document to show how you can put all of the different configuration components together to create a YAML file that can be run by Selene’s CLI:
### Training (using intervals sampler)¶
---
ops: [train, evaluate]
model: {
path: /absolute/path/to/model/architecture.py,
class: ModelArchitectureClassName,
class_args: {
arg1: val1,
arg2: val2
},
non_strand_specific: mean
}
sampler: !obj:selene_sdk.samplers.IntervalsSampler {
reference_sequence: !obj:selene_sdk.sequences.Genome {
input_path: /path/to/reference_sequence.fa,
blacklist_regions: hg19
},
target_path: /path/to/tabix/indexed/targets.bed.gz,
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/distinct_features.txt
},
intervals_path: /path/to/intervals.bed,
sample_negative: True,
seed: 127,
validation_holdout: [chr6, chr7],
test_holdout: [chr8, chr9], # specifying a test partition
sequence_length: 1000,
center_bin_to_predict: 200,
feature_thresholds: 0.5,
mode: train, # starting mode for sampler
save_datasets: [test]
}
train_model: !obj:selene_sdk.TrainModel {
batch_size: 64,
max_steps: 80000,
report_stats_every_n_steps: 16000,
n_validation_samples: 32000,
n_test_samples: 640000,
use_cuda: True,
data_parallel: True,
logging_verbosity: 2,
checkpoint_resume: False
}
random_seed: 133
output_dir: /path/to/output_dir
...
#### Some notes¶
• Ordering of the keys does not matter.
• We included many of the optional keys in this configuration. You do not need to specify these if you want to use their default values.
• In this example, we specified a test partition in our intervals sampler by assigning a list of chromosomes to test_holdout. If no such holdout was specified (e.g. None or empty list), you would not be able to specify n_test_samples in TrainModel and would need to omit evaluate from the ops list.
• output_dir is specified at the top-level and used by both the sampler and the TrainModel class.
### Evaluate (using matrix file sampler)¶
---
ops: [evaluate]
model: {
path: /absolute/path/to/model/architecture.py,
class: ModelArchitectureClassName,
class_args: {
arg1: val1,
arg2: val2
},
non_strand_specific: mean
}
sampler: !obj:selene_sdk.samplers.file_samplers.MatFileSampler {
filepath: /path/to/test.mat,
sequence_key: testxdata,
targets_key: testdata,
random_seed: 456,
shuffle: False,
sequence_batch_axis: 0,
sequence_alphabet_axis: 1,
targets_batch_axis: 0
}
evaluate_model: !obj:selene_sdk.EvaluateModel {
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/features_list.txt
},
trained_model_path: /path/to/trained/model.pth.tar,
batch_size: 64,
report_gt_feature_n_positives: 50,
use_cuda: True,
data_parallel: False
}
random_seed: 123
output_dir: /path/to/output_dir
create_subdirectory: False
...
#### Some notes¶
• For the matrix file sampler, we assume that you know ahead of time the shape of the data matrix. That is, which dimension is the batch dimension? Sequence? Alphabet (should be size 4 for DNA/RNA)? You must specify the keys that end in axis unless the shape of the sequences matrix is (n_samples, n_alphabet, n_sequence_length) and the shape of the targets matrix is (n_samples, n_targets).
• In this case, since create_subdirectory is False, all outputs from evaluate are written to output_dir directly (as opposed to being written in a timestamped subdirectory). Be careful of overwriting files.
### Analyze sequences (variant effect prediction)¶
---
ops: [analyze]
model: {
path: /absolute/path/to/model/architecture.py,
class: ModelArchitectureClassName,
class_args: {
arg1: val1,
arg2: val2
},
non_strand_specific: mean
}
analyze_sequences: !obj:selene_sdk.predict.AnalyzeSequences {
trained_model_path: /path/to/trained/model.pth.tar,
sequence_length: 1000,
features: !obj:selene_sdk.utils.load_features_list {
input_path: /path/to/distinct_features.txt
},
batch_size: 64,
use_cuda: True,
reference_sequence: !obj:selene_sdk.sequences.Genome {
input_path: /path/to/reference_sequence.fa
},
write_mem_limit: 75000
}
variant_effect_prediction: {
vcf_files: [
/path/to/file1.vcf,
/path/to/file2.vcf
],
save_data: [predictions, abs_diffs],
output_dir: /path/to/output/predicts/dir,
output_format: tsv,
strand_index: 9
}
random_seed: 123
...
#### Some notes¶
• We ask that in all analyze cases, you specify the output_dir (when applicable) within the sub-operation dictionary. This is because only the sub-operation generates output, so there is no need to share this parameter across multiple configurations.
• In this variant effect prediction example, Selene will go through each VCF file and get the model predictions for each variant (ref and alt). analyze_sequences must have the parameter reference_sequence so that Selene can create sequences centered at each variant position by querying the reference sequence file.
• The output from this operation will be 6 files: 3 for each input VCF file. This is because of what is specified in save_data:
• predictions will output 2 files per input VCF: the model predictions for all refs and the model predictions for all alts.
• abs_diffs will output 1 file per input VCF: the absolute difference between the ref and alt model predictions. (Certainly, outputting the files from predictions is sufficient to compute abs_diffs yourself.)
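If you want to recompute the absolute differences yourself, a minimal pandas sketch along the following lines should work; the file names and the number of leading metadata columns are assumptions, so adjust them to match your actual output files:
import pandas as pd

# hypothetical output file names; adjust to your run
ref = pd.read_csv("file1_ref_predictions.tsv", sep="\t")
alt = pd.read_csv("file1_alt_predictions.tsv", sep="\t")

meta_cols = list(ref.columns[:5])      # assumed variant metadata columns
feature_cols = list(ref.columns[5:])   # per-feature model predictions

abs_diffs = (alt[feature_cols] - ref[feature_cols]).abs()
out = pd.concat([ref[meta_cols], abs_diffs], axis=1)
out.to_csv("file1_abs_diffs.tsv", sep="\t", index=False)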
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3863270878791809, "perplexity": 3739.7130252175075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257920.67/warc/CC-MAIN-20190525064654-20190525090654-00238.warc.gz"}
|
https://www.physicsforums.com/threads/physics-homework-problem-stuck.65957/
|
# Physics Homework Problem-Stuck?
1. Mar 4, 2005
### shawonna23
Physics Homework Problem--Stuck?
A 75 kg water skier is being pulled by a horizontal force of 495 N and has an acceleration of 2.0 m/s2. Assuming that the total resistive force exerted on the skier by the water and the wind is constant, what force is needed to pull the skier at a constant velocity?
I tried doing this to solve the problem:
F=ma
F=75kg x 2.0= 150N
Then I added 150 and 495 to get 645 N but this is not the right answer.
Can someone please tell me what I did wrong?
2. Mar 4, 2005
### dextercioby
What is the total resistive force...?
Daniel.
3. Mar 4, 2005
### Jameson
$$F_{net} = ma$$
This stands for the net force, not just a single force. Draw a force diagram and see where each one is going.
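Following that hint: while accelerating, the net force is the pull minus the constant resistive force, so
$$F_{pull} - F_{resist} = ma \Rightarrow F_{resist} = 495\,\text{N} - (75\,\text{kg})(2.0\,\text{m/s}^2) = 345\,\text{N}$$
At constant velocity the net force is zero, so the pull needed is
$$F = F_{resist} = 345\,\text{N}$$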
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691165089607239, "perplexity": 894.8941180048523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719079.39/warc/CC-MAIN-20161020183839-00300-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://openreview.net/forum?id=hGdAzemIK1X
|
## Quantum Speedups of Optimizing Approximately Convex Functions with Applications to Logarithmic Regret Stochastic Convex Bandits
Abstract: We initiate the study of quantum algorithms for optimizing approximately convex functions. Given a convex set $\mathcal{K}\subseteq\mathbb{R}^{n}$ and a function $F\colon\mathbb{R}^{n}\to\mathbb{R}$ such that there exists a convex function $f\colon\mathcal{K}\to\mathbb{R}$ satisfying $\sup_{x\in\mathcal{K}}|F(x)-f(x)|\leq \epsilon/n$, our quantum algorithm finds an $x^{*}\in\mathcal{K}$ such that $F(x^{*})-\min_{x\in\mathcal{K}} F(x)\leq\epsilon$ using $\tilde{O}(n^{3})$ quantum evaluation queries to $F$. This achieves a polynomial quantum speedup compared to the best-known classical algorithms. As an application, we give a quantum algorithm for zeroth-order stochastic convex bandits with $\tilde{O}(n^{5}\log^{2} T)$ regret, an exponential speedup in $T$ compared to the classical $\Omega(\sqrt{T})$ lower bound. Technically, we achieve quantum speedup in $n$ by exploiting a quantum framework of simulated annealing and adopting a quantum version of the hit-and-run walk. Our speedup in $T$ for zeroth-order stochastic convex bandits is due to a quadratic quantum speedup in multiplicative error of mean estimation.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081717133522034, "perplexity": 378.08228935299394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00553.warc.gz"}
|
https://ecalculatorsite.com/statistics-calculator.html
|
# Statistics Calculator
## How To Use Statistics Calculator
Statistics is the discipline that concerns the collection, organization, analysis, interpretation and presentation of data.
In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied.
Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation.
Statistics is a mathematical body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data, or as a branch of mathematics.
Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty and decision making in the face of uncertainty.
In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as "all people living in a country".
Statistical calculation:
Let the set of n terms be x1, x2, x3, ..., xn.
For example, suppose the maximum value is x9 and the minimum value is x4.
Then the range will be x9 - x4.
$$Sum = \sum_{i=1}^n x_i$$ $$Sum = x_1 + x_2 + x_3 + ... + x_n$$ $$Mean,\hspace{1cm}\mu = \frac{1}{n}\sum_{i=1}^n x_i$$ $$\mu = \frac{1}{n}(x_1 + x_2 + x_3 + ... + x_n)$$ In other words, $$\mu = \frac{Sum \hspace{0.2cm}of \hspace{0.2cm}terms}{Number\hspace{0.2cm} of\hspace{0.2cm} terms}$$
For finding the median, first sort the data in ascending or descending order. Then:
if the number of terms is odd, the median is the $$(\frac{n+1}{2})$$-th term;
if the number of terms is even, the median is $$\frac{1}{2}$$($$\frac{n}{2}$$-th term + $$(\frac{n}{2}+1)$$-th term).
The mode is the term with the most occurrences.
$$Variance,\hspace{0.1cm}\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i\hspace{0.1cm}-\hspace{0.1cm}\mu)^2$$ $$Standard\hspace{0.1cm}Deviation,\hspace{0.1cm}\sigma = \sqrt{\sigma^2}$$ $$\sigma= \sqrt{\frac{1}{n}\sum_{i=1}^n (x_i\hspace{0.1cm}-\hspace{0.1cm}\mu)^2}$$
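A short Python sketch of the same calculations using the standard library; statistics.pvariance and statistics.pstdev implement the population (1/n) formulas shown above:
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # example comma-separated input

print("Range   :", max(data) - min(data))
print("Sum     :", sum(data))
print("Mean    :", statistics.mean(data))
print("Median  :", statistics.median(data))
print("Mode    :", statistics.mode(data))
print("Variance:", statistics.pvariance(data))
print("Std dev :", statistics.pstdev(data))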
Statistical methods:
1. Descriptive statistics is a summary statistic that quantitatively describes or summarizes features of a collection of information.
2. Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution.
To perform a statistical calculation with the statistics calculator above, simply enter the comma-separated values in the input box and press the calculate button to get the result.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4971567690372467, "perplexity": 665.910344520803}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626122.27/warc/CC-MAIN-20210616220531-20210617010531-00395.warc.gz"}
|
https://brilliant.org/problems/maximise-it-2/
|
# Maximise It!
Algebra Level 4
Real numbers $$a$$, $$b$$ and $$c$$ are such that $$a+2b+c=4$$. Find the maximum value of $$ab+bc+ac$$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7465298175811768, "perplexity": 812.4098152225566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676595531.70/warc/CC-MAIN-20180723071245-20180723091245-00078.warc.gz"}
|
https://socialsci.libretexts.org/Courses/Butte_College/Exploring_Intercultural_Communication_(Grothe)/10%3A_Intercultural_Communication_Competence
|
# 10: Intercultural Communication Competence
This page titled 10: Intercultural Communication Competence is shared under a CC BY license and was authored, remixed, and/or curated by Tom Grothe.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9750217199325562, "perplexity": 2219.20786713468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00587.warc.gz"}
|
http://psteitz.blogspot.com/
|
## Sunday, July 30, 2017
### Five things I look for in Software Engineers
I have been interviewing a lot of software engineers recently, as I am leading a new team and looking to expand it. That has led me to reflect a little on what I am actually looking for. The following five qualities have been shared by all of the really good, fun-to-work-with developers who I have had the pleasure to work with.
1. Technical mastery
Really good developers fully understand what they are doing. This might sound funny, but unfortunately, it is all too common for people to get things to work by cutting and pasting examples or fumbling through a quasi-random hacking process to arrive at code that "works" without actually understanding how or why (or in fact even if) it works. There is nothing wrong with experimentation and leveraging experience - and working code - of others. When really good developers do that, though, they always find their way to full understanding of the technologies and techniques that they are using. When I interview developers, I always ask them to explain exactly how the solutions that they developed work. I can usually tell very quickly if I am talking to an individual who masters the technology that they use. I would much rather have a developer with strong mastery of a small set of technologies than someone whose resume is full of advanced technologies that they don't understand.
2. Simple mindedness
In The Humble Programmer, Edsger W. Dijkstra said "The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague." Really good developers have a wonderful tendency to keep things as simple as possible, as long as possible. Fancy constructs, excessive OO complexity, needless external dependencies and exotic algorithms never find their way into their code. If there is a simple way to do something, that is what they do. Reading the code of a simple-minded developer is like reading a mathematical paper written by a great mathematician. If the content is straightforward, the progression is 100% predictable. You can stop in the middle and scribble out what should come next and then see that is what comes next. When you get to a difficult part where you have to think, you are happy to see that the author found something so simple that you should have thought of it.
3. Organizing
Another one of my favorite Dijkstra quotes is that the art of programming is the art of organizing complexity. Great developers are organizing forces. Note that this is not the same as "being organized." It means that they help define problems in a way that they can be solved with simple solutions and they help get contracts, interface boundaries and tests in place so that the teams they are part of can be organized. The scope of what developers can organize naturally grows as they progress in their careers; but they need to have the drive and ability to be "organizers" from the beginning. Developers that have to be told how to think about problems are net drags on the teams they are part of. Good ones are key contributors to their teams arrival at nicely organized approaches to problems.
4. Fast-learning
Technology changes so fast and business problems are spread across such a large surface that developers constantly need to learn new things. And these things are not just programming languages, frameworks or constructs. Developers need to learn business domain concepts, data science and AI concepts as needed, often in ridiculously short timeframes. This means that they have to be able to learn very fast. And they have to be able to do this and immediately exercise their knowledge with a high level of independence. It's great to be able to learn together and share knowledge with others, but sometimes developers need to figure things out for themselves and good ones have the ability and determination to learn what they need to learn - however hairy it gets - to solve the problems in front of them.
5. Situational awareness
Good developers ask about - and clearly understand - the execution context of the code they work on. If something needs to be thread-safe, they write it that way. They know what the performance and scalability bottlenecks in and around their code are. They know about its security context. They see enough of the larger system that their code is running in / interacting with to ensure that it will be operable, failing fast and loudly when it needs to fail, maintaining invariants that it needs to maintain, and providing sufficient logging / events / monitoring interfaces. And of course, all of this is validated in unit tests.
I know some people will say that some of what I have above - especially in 3. and 5. - can't really be expected of "SE's." These qualities, one might argue, are qualities of architects. Developers just need to be "feature machines" and architects can worry about how to organize the code and make sure the whole system is operable. My biggest learning in 30 years of software development is that that is the wrong way to think about it. Architecture influence and scope of vision naturally increases as developers progress in their careers; but it is part of what they do, every day, from day 1 and the better they do it, the more successful the teams they are part of can be. And the senior ones - those who might have "Architect" or "Principal" or "Staff" in their titles - need to encourage, cultivate, challenge and be influenced by the design thinking of SEs at all levels.
## Saturday, September 17, 2016
### I Pledge Allegiance...
Last week, I heard a segment on NPR about patriotic rituals such as saying the Pledge of Allegiance or standing for the National Anthem. One statement by a young person was hard for me. She said she could not recite the Pledge of Allegiance because saying that our republic has "liberty and justice for all" is false. I get that. And I certainly respect anyone's right to participate or not participate in saying the Pledge. What bothers me is that "the republic, for which it stands" is an idea and that idea really does mean liberty and justice for all. We have never had - and no real nation ever will have - perfect liberty and justice. What we pledge allegiance to is the idea that such a nation can exist. I bet that 150+ years ago when Abraham Lincoln made his long-remembered remarks at Gettysburg, he knew well that this idea would never be perfectly realized. He knew that what he himself had done to preserve it was not perfect. But he really did believe in the idea. This idea is the source of everything that has ever been good about the United States and everything that will ever be good about us, our children or our children's children. We cannot abandon this idea because we have not lived up to it - even collectively. Even if we see endemic and systemic injustice and prejudice, we have to see that as not who we are. And we need our children to see that. We don't need to make them say the Pledge or even stand when others do, but we do need them to have faith that this idea really can "long endure" and that there really can be "a new birth of freedom" in the United States.
## Wednesday, March 16, 2016
### When someone who works for you recommends a book, read it!
People who work for you have a great perspective on what you need to learn. Here are some great examples:
Death by Meeting - in which I learned that my team meetings were, ...um, "suboptimal." Thanks, Bob!
The Goal - in which I learned that there was a better way to think about process optimization. Thanks, Kevin!
The Phoenix Project - in which I am learning that I did not fully understand the consequences of the previous book. Thanks, Scott!
## Wednesday, November 18, 2015
### I can see it from where I live
After seeing it recommended by Dan Pink, I started reading Studs Terkel's classic, Working. The following quote brought back old memories for me
There’s not a house in this country that I haven’t built that I don’t look at every time I go by. (Laughs.) I can set here now and actually in my mind see so many that you wouldn’t believe. If there’s one stone in there crooked, I know where it’s at and I’ll never forget it. Maybe thirty years, I’ll know a place where I should have took that stone out and redone it but I didn’t. I still notice it. The people who live there might not notice it, but I notice it. I never pass that house that I don’t think of it.
I was lucky to have as my first boss a man who really valued workmanship. His landscape construction and maintenance business was hard and I could see every day the pressure to cut corners. But he never did and he got very, very mad when he saw any of us doing shoddy work.
I remember a few years later, I was working on an interstate highway construction project. My job was to break off concrete pipes and parge cement around the gaps between them and the junction boxes where they came together. I always tried to do a nice job, leaving the box looking like it had been cast as a single piece of concrete. I remember once having a hard time with one of the pipes and struggling with my coworkers to get the finish smooth. One of them said, "I can't see it from where I live." I immediately thought of my first boss, yelling at me once for justifying a sloppy joint by saying that no one would notice it because it was going to be backfilled. He said, "but I just noticed it and you saw it yourself. When you go home, you will see it again. And if you don't see it again, you haven't learned anything from me."
## Wednesday, November 11, 2015
### A very crowded corner
OK, time for a little math walk. Imagine that Bolzano's grocery is running a special on Weirstrass' Premium Vegan Schnitzel. People start converging on the corner in front of Bolzano's from all around. Based on counts using awesome new really big data technology, the local news media makes the amazing announcement that there are infinitely many people in the city block around Bolzano's. The subject of this walk is showing that there must be at least one location in that block where you can't move even the slightest distance without bumping into someone.
To simplify things, let's smash everything down into one dimension and pretend that the city block above is the closed interval $[0, 1]$ on the real number line. Let's represent the infinite set of people as points in this interval. Now consider the subintervals $(0, .1), (.1, .2), ... (.9, 1).$ At least one of these intervals must contain infinitely many people. Suppose, for example, that the interval $(.5, .6)$ contains infinitely many people. Then split that interval into 10 segments, as shown in the picture below. At least one of these has to contain infinitely many people. Suppose, again for example, that this subinterval is $(.537, .538)$.
Now consider the number .537. We know that there are infinitely many people within .001 of .537. There is nothing stopping us from continuing this process indefinitely, finding smaller and smaller subintervals with left endpoints $.5, .53, .537...$ each containing infinitely many people. Let $r$ be the number whose infinite decimal expansion is what we end up with when we continue this process ad infinitum. To make $r$ well-defined, let's say that in each case we choose the left-most subinterval that contains infinitely many people. Depending on how the people are distributed, $r$ might be boring and rational or something exotic like the decimal expansion of $\pi$. The point is that it is a well-defined real number and it has the property that no matter how small an interval you draw around it, that interval includes infinitely many people. This is true because for each $n$, the interval starting at $r$ truncated to $n$ decimal digits and ending $1 / 10^n$ higher than that contains both $r$ and infinitely many other people by construction. In the example above, for $n = 3$, this interval starts at $.537$ and ends at $.538$.
Now let's remove the simplification, one step at a time. First, let's see how the same construction works if in place of $[0, 1]$ we use any bounded interval $[a, b]$. Consider the function $f(x) = (x - a) / (b - a)$. That function maps $[a, b]$ onto $[0, 1]$. Its graph is a straight line with slope $1/(b - a)$. If $b - a$ is larger than 1, points get closer together when you do this mapping; otherwise they get further apart. But the expansion or contraction is by a constant factor, so the picture above looks exactly the same, just with different values for the interval endpoints. So if we do the construction inside $[0, 1]$ using the mapped points, then the pre-image of the point $r$ we end up with will be an accumulation point for the set in $[a, b]$.
OK, now let's pick up our heads and get out of Flatland. Imagine that the square block around Bolzano's is the set of points in the x-y plane with both x and y coordinates between 0 and 1. Divide up the square containing those points into 100 equal-sized subsquares (10 subdivisions on each side). One of those pieces has to contain infinitely many people. Suppose it is the square with bottom-left coordinates (.5, .2). Now divide that little square into 100 subsquares. Again, one of these has to contain infinitely many people. Say it is the one with lower-left coordinates (.53, .22). The picture below shows these points and a next one, say, (.537, .226). Just like the one-dimensional case, this sequence of points converges to an accumulation point (x,y) that has infinitely many people within even the smallest distance from it.
The ideas presented above are the core of one proof of the Bolzano-Weierstrass Theorem, a beautiful and very useful result in Real Analysis. The existence of the limiting values is guaranteed by the Least Upper Bound Axiom of the real numbers.
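As an illustration, here is a minimal Python sketch of the subdivision; a large finite sample stands in for the infinite crowd, and the particular numbers are arbitrary choices. At each step we keep the leftmost tenth-subinterval holding the most points, and each step pins down one more decimal digit of the (approximate) accumulation point.
import random

random.seed(1)
# a finite stand-in for the "infinitely many people" in [0, 1]
points = [random.gauss(0.537, 0.01) for _ in range(100000)]
points = [p for p in points if 0.0 <= p <= 1.0]

lo, width = 0.0, 1.0
for _ in range(6):  # recover six decimal digits of the limit point
    width /= 10
    counts = [sum(lo + k * width <= p < lo + (k + 1) * width for p in points)
              for k in range(10)]
    # leftmost tenth-subinterval with the most points plays the role of
    # "a subinterval containing infinitely many people"
    k = max(range(10), key=lambda i: (counts[i], -i))
    lo += k * width

print("approximate accumulation point:", round(lo, 6))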
## Monday, November 25, 2013
### Fully solving problems
Bryan Pendleton's great post, "Anatomy of a bug fix" suggests some basic principles that apply to all kinds of problem resolution. The attributes that he calls out as separating "great developers" from the not-so-great apply in lots of other contexts, distinguishing the people who you really want to have on your team from those who you can't really count on. Bryan's conclusion:
I often say that one of the differences between a good software engineer and a great one is in how they handle bug fixes. A good engineer will fix a bug, but they won't go that extra mile:
• They won't narrow the reproduction script to the minimal case
• They won't invest the time to clearly and crisply state the code flaw
• They won't widen the bug, looking for other symptoms that the bug might have caused, and other code paths that might arrive at the problematic code
• They won't search the problem database, looking for bug reports with different symptoms, but the same underlying cause.
The scenario above applies whenever there is a problem to be resolved. I once led a great team responsible for resolving operational problems at a bank. The "great ones" on that team always performed analogs of all of the things Bryan mentions above. They always got to a very precise problem statement and recipe to reproduce and (sometimes painfully) a really exhaustive explication of impacts (what Bryan calls "widening the bug") as well as understanding of the relation between the current problem and any others that may have been related.
I have seen this separation of talent in lots of business domains - operations, engineering, finance, marketing - even business strategy and development. The great ones don't stop until they have a really satisfying understanding of exactly what an anomaly is telling them. The not-so-great are happy just seeing it go away.
The difference is not really "dedication" or "diligence" per se - i.e., its not just that the great ones "do all the work" while the not-so-great are lazier. The great ones are driven by the desire to understand and to avoid needless rework later. They tend to be less "interrupt-driven" and may actually appear to be less responsive or "dedicated" in some cases. They solve problems all the way because they can't stop thinking about them until they have really mastered them. I always look for this quality when I am hiring people.
## Sunday, April 14, 2013
We get the word "integrity" from the same root that gives us "integer" or "whole number." To have integrity is first and foremost to be one thing. Kant built his entire theory of knowledge on the premise that experience has to make sense - the world has to be one thing in this sense. We have to be able to say "I think..." before every perception that we have about the world.
The core of all effective leadership really comes down to this. It all has to make sense. Everyone has to be able to start with "I think...". Not "Mr. X says..." Not "policy is.." Not "I was told..." but "I think..."
For this to work, leaders have to be firmly grounded in a shared vision and they have to be committed to maintaining integrity in the sense above. Values, principles, objectives, strategies, communications, performance evaluations, policies, processes, commitments all have to be constantly integrated. Leaders who force themselves to be able to say "I think..." before a comprehensive view of all of these things can lead from the core. Just as it is painful to do the exercises to strengthen your physical core, so it can be painful to maintain core leadership strength in this sense. It is very easy to get "out of shape" by neglecting core values, objectives, strategy and execution alignment. But without a strong core, none of the most important leadership attributes - authenticity, inspiration, strategic vision, followership, transformational impact - are possible.
Leaders who "skip the abs work" can get some things done and, depending on their good fortune and / or cleverness, some achieve material success. But no one remembers them. No great change is ever led by them. No great leaders are ever developed by them. Leading durable transformational change and developing great leaders requires core strength.
So how do you develop core strength? A great mentor and an already established values-based vision and strategy can help get you started, but you always end up having to do the work to build your own core yourself. Here are some little exercises that can help. There is nothing particularly deep here and there are lots of variations on these practices. The point is to regularly and critically focus on core integrity.
Look-back sit-ups. Starting once a week and working up to once a day, look back on all of the decisions, communications and interactions that you had and explain how it is possible that one person did all of these things. I guarantee that if you are really observant and critical, you will find lots of little inconsistencies - things that in retrospect you can't say, "I think..." in front of. For each of these, you have two choices: either come up with an alternative course of action that, had you done it, would have made sense; or modify whatever aspects of your vision, strategy or values it is inconsistent with (or more precisely, resolve yourself to conceive and align the necessary changes with your team, your peers and your leadership). Done honestly, this is painful. Think of each example as a little integrity sit-up. Here are a couple of concrete examples.
1. Suppose that last week you negotiated an extension to a service contract. In exchange for a healthy rate reduction, you doubled the term length and added minimums to the contract. This will help achieve your annual opex reduction goal; but your agreed upon strategy is to ensure supplier flexibility and aggressively manage demand in the area covered by the contract. Your decision basically said near-term opex reduction was more important than flexibility or demand management. Either your strategy was wrong or your decision was wrong. To be one person, you need to either acknowledge the mistake or harmonize the decision with the strategy.
2. Last week you agreed with your leader and peers in a semi-annual performance ratings alignment meeting that one of your direct reports was not fully meeting expectations in some key areas. You agreed to deliver the "needs improvement" message in these areas in his performance appraisal and to adjust his overall rating downward. You did change the rating and some of the verbiage in the assessment; but when you delivered the review and he challenged the overall rating, you were swayed by his arguments and in the end you admitted that you had been told to adjust the rating downward. Here either you failed to consider everything when agreeing to the rating adjustment or you were overly influenced by the feedback.
In some cases, the look-back exercise can and should lead you to take some remediating actions; but that is not the point of the exercise. The point is to do a little "root cause analysis" of what caused the integrity breakdown. In the first example, it may have been extreme near-term financial pressure causing things to get out of focus, or possibly just lack of clarity in the relative importance of the different factors in the strategy. In the second example, the feedback may have pushed some "hot buttons" causing you to temporarily lose some core strength. The key is to face these integrity gaps directly and honestly by yourself. First think clearly and honestly about what went wrong and why. Then think about how to "fix things."
Virtual 360 crunchies. Again starting once a week and working up to daily, imagine you are specific person on your team, in your company or a partner (alternate among randomly chosen people from these groups) and respond to the question, "What is most important to X?" where X is you. Don't just repeat goals or big initiative names or repeat your own communications. Actually try to imagine what it would be like being the selected person and what they really think is important to you and how that relates to what they do on a day to day basis. Think about how they would say it in their own words, not yours. If you can't do it, or what naturally comes out is far from what you see as your core, you have two options. Either you have a communication problem - i.e. there is no way this person can have a clear understanding of what is important to you because you have failed to communicate it - or you don't make sense from their vantage point. In the first case, you need to work on communication and in the second, you need to patch whatever holes exist in your vision, strategy or values that make you incomprehensible to this person. Here are some examples.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4086751341819763, "perplexity": 1132.831652048944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825264.94/warc/CC-MAIN-20171022132026-20171022152026-00196.warc.gz"}
|
https://arxiv.org/abs/1405.6719
|
astro-ph.SR
# Title: Stellar Abundances in the Solar Neighborhood: The Hypatia Catalog
Abstract: We compile spectroscopic abundance data from 84 literature sources for 50 elements across 3058 stars in the solar neighborhood, within 150 pc of the Sun, to produce the Hypatia Catalog. We evaluate the variability of the spread in abundance measurements reported for the same star by different surveys. We also explore the likely association of the star within the Galactic disk, the corresponding observation and abundance determination methods for all catalogs in Hypatia, the influence of specific catalogs on the overall abundance trends, and the effect of normalizing all abundances to the same solar scale. The resulting large number of stellar abundance determinations in the Hypatia Catalog are analyzed only for thin-disk stars with observations that are consistent between literature sources. As a result of our large dataset, we find that the stars in the solar neighborhood may reveal an asymmetric abundance distribution, such that a [Fe/H]-rich group near the mid-plane is deficient in Mg, Si, S, Ca, Sc II, Cr II, and Ni as compared to stars further from the plane. The Hypatia Catalog has a wide number of applications, including exoplanet hosts, thick and thin disk stars, or stars with different kinematic properties.
Comments: 66pgs, 32 figures, 6 tables, accepted for publication in the Astronomical Journal
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
DOI: 10.1088/0004-6256/148/3/54
Cite as: arXiv:1405.6719 [astro-ph.SR] (or arXiv:1405.6719v1 [astro-ph.SR] for this version)
## Submission history
From: Natalie Hinkel [view email]
[v1] Mon, 26 May 2014 20:00:16 GMT (4942kb,D)
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8979652523994446, "perplexity": 4307.539658367916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110578.17/warc/CC-MAIN-20170822104509-20170822124509-00141.warc.gz"}
|
http://math.stackexchange.com/questions/425970/system-of-three-equations-in-three-variables
|
# System of three equations in three variables?
Fibonacci apparently found some solutions to this problem:
Find rational solutions of:
$$x+y+z+x^2=u^2$$
$$x+y+z+x^2+y^2=v^2$$
$$x+y+z+x^2+y^2+z^2=w^2$$
How would you find solutions to this using the mathematics available in Fibonacci's time? (of course by this I mostly mean without using calculus, series, and modern maths. Also please exclude modular arithmetic notation if possible.) I was able to find little bits of information by adding and subtracting equations, such as $z^2=w^2-v^2$, $y^2=v^2-u^2$, and $y^2+z^2=w^2-u^2$, but I really do not know what to do. Thanks.
Is your goal to find all solutions or some solutions ? – Ewan Delanoy Jun 21 '13 at 6:07
@EwanDelanoy I don't know if there are a finite number of solutions, but if there were an infinite number, a proof of that would be nice. – Ovi Jun 21 '13 at 6:13
A preliminary analysis: you are searching for rational solutions $(x,y,z)$ s.t. $y^2+z^2=w^2-u^2$. If $w^2-u^2<0$ there are none; if $w^2-u^2=0$ one has the solutions $(x,0,0)$ with rational $x$ s.t. $x+x^2=u^2=w^2$. Existence of rational solutions of the 2nd degree polynomial in $x$ depends on $w$. One has 2 rational solutions if $w=\frac{q^2-1}{4}$ for some real $q$, otherwise there are none. It remains to study the case $w^2-u^2>0$ – Avitus Jun 21 '13 at 6:46
@Avitus From OP's last derived equation (essentially a Pythagorean quadruple), then $w^2-u^2$ will be greater than $0$, unless $y$ and $z$ are trivially $0$. – alex.jordan Jun 21 '13 at 7:01
@alex.jordan I am not sure about this because I know nothing about $w$ and $u$, which I presume just to be fixed. If $y=z=0$, then there exists still space for non trivial rational solutions $(x,0,0)$. – Avitus Jun 21 '13 at 7:04
This is not a full answer in that not all solutions are described. But the discussion yields two infinite parametrized families of solutions. And the methods could possibly be studied longer to find more families, and possibly parametrize all solutions. As proof that this works before you invest in studying it, check that the solution it predicts at the end is valid.
There is a known trick for parametrizing rational points on quadratic surfaces, that I think extends to hypersurfaces.
Take the first equation. $(x,y,z,u)=(0,0,0,0)$ is a rational solution. Suppose $(X,Y,Z,U)$ is a different rational solution. Then the line connecting these two points in $4$-space is parametrized by $(x,y,z,u)=t(X,Y,Z,U)$. This line intersects the surface $x+y+z+x^2=u^2$ in precisely two places, since the intersection is found by solving for $t$ in $tX+tY+tZ+t^2X^2=t^2U^2$. One solution is clearly given by $t=0$, and the other is given by $t=\frac{X+Y+Z}{U^2-X^2}$. Now since the line is parametrized by rational numbers, the intersection of this line with the plane $u=1$ has all rational coordinates: $(a,b,c,1)$. We can solve for $t$ to bring the fourth coordinate to $1$, and have $t=1/U$. So \begin{align}a&=X/U\\b&=Y/U\\c&=Z/U\end{align}
This establishes a map from rational points on $x+y+z+x^2=u^2$ to rational points on $u=1$. But this map is reversible. Take any rational triple $(a,b,c,1)$ and consider the line connecting this point to $(0,0,0,0)$. This line is parametrized by $(x,y,z,u)=s(a,b,c,1)$, and intersects $x+y+z+x^2=u^2$ in two places. To find both, we substitute: $as+bs+cs+a^2s^2=s^2$, and along with $s=0$, the other solution is with $s=\frac{a+b+c}{1-a^2}$.
So rational solutions to your first equation are given by \begin{align}x&=a\frac{a+b+c}{1-a^2}\\y&=b\frac{a+b+c}{1-a^2}\\z&=c\frac{a+b+c}{1-a^2}\\u&=\frac{a+b+c}{1-a^2}\end{align} where $a,b,c$ are any triple of rationals excluding $a=\pm1$.
One infinite family of solutions to the system arises out of this if we take $b=c=0$: $(x,y,z,u,v,w)=\left(\frac{a^2}{1-a^2},0,0,\frac{a}{1-a^2},\pm\frac{a}{1-a^2},\pm\frac{a}{1-a^2}\right)$.
We can see what happens if we throw these into the next equation.
$$\frac{(a+b+c)^2}{1-a^2}+(a^2+b^2)\left(\frac{a+b+c}{1-a^2}\right)^2=v^2$$
Unfortunately this equation is degree 6:
$$(1+b^2)(a+b+c)^2=v^2(1-a^2)^2$$
So trying to proceed as before but this time in $(a,b,c,v)$-space won't work. Lines will not be guaranteed to intersect the surface at two points, which is a crucial element of what we did above.
If we are merely hunting families of solutions, and give up (for now) on finding all solutions, then it would help to have $1+b^2$ be a square. That is, to have $1+b^2=d^2$. We can do this by finding any primitive Pythagorean triple $(m^2-n^2)^2+(2mn)^2=(m^2+n^2)^2$ and dividing by one of the left terms. Say we choose the second term, so that for integers $m$ and $n$, we have \begin{align}b&=\frac{m^2-n^2}{2mn}\\d&=\frac{m^2+n^2}{2mn}\end{align} Now the earlier equation reduces to $$d(a+b+c)=v(1-a^2)$$
If we take $c=0$ (implying $z=0$) then we have another family of solutions to the system that arises out of this. Taking $m,n$ to be free nonzero integers, $a$ a free rational not equal to $1$, we have $$(x,y,z,u,v,w)=\left(a\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},\frac{m^2-n^2}{2mn}\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},0,\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},\frac{m^2+n^2}{2mn}\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},\pm\frac{m^2+n^2}{2mn}\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2}\right)$$
For example, $m=1$, $n=2$, $a=3/5$ yields $(-9/64, 45/256,0,-15/64, -75/256,75/256)$.
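As a quick check with exact rational arithmetic (a minimal sketch; the tuple is copied from the example above):
from fractions import Fraction as F

x, y, z = F(-9, 64), F(45, 256), F(0)
u, v, w = F(-15, 64), F(-75, 256), F(75, 256)

s = x + y + z
assert s + x**2 == u**2
assert s + x**2 + y**2 == v**2
assert s + x**2 + y**2 + z**2 == w**2
print("all three equations hold")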
It seems reasonable that some other family could be worked out this way that does not demand $z=0$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9462310671806335, "perplexity": 167.2228521322501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394023862507/warc/CC-MAIN-20140305125102-00037-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.repository.cam.ac.uk/browse?type=author&sort_by=1&order=ASC&rpp=20&etal=-1&value=Broeks%2C+Annegien&starts_with=A
|
Now showing items 1-20 of 24
• #### Age- and tumor subtype-specific breast cancer risk estimates for CHEK2*1100delC carriers
(2016)
PURPOSE: CHEK2*1100delC is a well-established breast cancer risk variant that is most prevalent in European populations; however, there are limited data on risk of breast cancer by age and tumor subtype, which limits its ...
• #### Annexin A1 expression in a pooled breast cancer series: association with tumor subtypes and prognosis
(2015-07-02)
Abstract Background Annexin A1 (ANXA1) is a protein related with the carcinogenesis process and metastasis formation in many tumors. However, little is known about the ...
• #### Annexin A1 expression in breast cancer: tumor subtypes and prognosis
(2015-06-25)
• #### BRCA2 Hypomorphic Missense Variants Confer Moderate Risks of Breast Cancer.
(American Association for Cancer Research, 2017-06)
Breast cancer risks conferred by many germline missense variants in the $\textit{BRCA1}$ and $\textit{BRCA2}$ genes, often referred to as variants of uncertain significance (VUS), have not been established. In this study, ...
• #### Combined effects of single nucleotide polymorphisms TP53R72P and MDM2SNP309, and p53 expression on survival of breast cancer patients
(2009-12-18)
• #### Common non-synonymous SNPs associated with breast cancer susceptibility: findings from the Breast Cancer Association Consortium
(2014-07-04)
• #### Evidence that breast cancer risk at the 2q35 locus is mediated through IGFBP5 regulation
(2014-09-23)
• #### Fine scale mapping of the 5q11.2 breast cancer locus reveals at least three independent risk variants regulating MAP3K1.
(2015-12-18)
• #### Fine-Mapping of the 1p11.2 Breast Cancer Susceptibility Locus
(2016-08-24)
• #### Fine-scale mapping of 8q24 locus identifies multiple independent risk variants for breast cancer
(2016)
Previous genome-wide association studies among women of European ancestry identified two independent breast cancer susceptibility loci represented by single nucleotide polymorphisms (SNPs) rs13281615 and rs11780156 at 8q24. ...
• #### Fine-scale mapping of the 5q11.2 breast cancer locus reveals at least three independent risk variants regulating MAP3K1
(Elsevier, 2014-12-18)
Genome-wide association studies (GWASs) have revealed SNP rs889312 on 5q11.2 to be associated with breast cancer risk in women of European ancestry. In an attempt to identify the biologically relevant variants, we analyzed ...
• #### Genetic variation in the immunosuppression pathway genes and breast cancer: a pooled analysis of 42,510 cases and 40,577 controls from the Breast
(2015-11-30)
• #### Genome-wide association analysis of more than 120,000 individuals identifies 15 new susceptibility loci for breast cancer.
(2015-03-09)
• #### High-throughput automated scoring of Ki67 in breast cancer tissue microarrays from the Breast Cancer Association Consortium
(2016-04-06)
• #### Identification and characterisation of novel associations in the CASP8/ALS2CR12 region on chromosome 2 with breast cancer risk
(2014-08-28)
• #### Identification of independent association signals and putative functional variants for breast cancer risk through fine-scale mapping of the 12p11 locus
(2016)
BACKGROUND: Multiple recent genome-wide association studies (GWAS) have identified a single nucleotide polymorphism (SNP), rs10771399, at 12p11 that is associated with breast cancer risk. METHOD: We performed a fine-scale ...
• #### Identification of Novel Genetic Markers of Breast Cancer Survival
(2015-04-18)
BACKGROUND: Survival after a diagnosis of breast cancer varies considerably between patients, and some of this variation may be because of germline genetic variation. We aimed to identify genetic markers associated with ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7952256202697754, "perplexity": 14512.063558936325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806338.36/warc/CC-MAIN-20171121094039-20171121114039-00074.warc.gz"}
|
https://math.stackexchange.com/questions/795841/fr%C3%A9chet-derivative-and-local-maximum
|
Fréchet derivative and local maximum
I'm pretty confused with the idea of local maximum in function spaces. Normally having a null Fréchet derivative is a necessary but not sufficient condition for being a local maximum.
Computing the derivative
Let $f:\mathbb{R} \to \mathbb{R}$ be a continuous function, and let's denote the space of such functions by $C_{\mathbb{R,R}}$.
$$F: C_{\mathbb{R,R}} \to C_{\mathbb{R,R}}$$ $$F: f \mapsto \sin(f)$$
Let us compute its derivative at the point $f$:
$$D_F(f)h = \lim_{t\to 0} {F(f+th) - F(f) \over t}$$
where:
1. $f \in C_{\mathbb{R,R}}$
2. $h \in C_{\mathbb{R,R}}$
3. $t \in \mathbb{R}$
then:
$$D_F(f)h = \lim_{t\to 0} {\sin(f+th) - \sin(f) \over t}$$ $$D_F(f)h = \lim_{t\to 0} {\sin(f)\cos(th)+\cos(f)\sin(th) - \sin(f) \over t}$$ $$D_F(f)h = \lim_{t\to 0} {-h\sin(f) (1 - \cos(th))+h\cos(f)\sin(th) \over th}$$
So using: $$\lim_{x\to 0} {\sin(x)\over x} = 1$$ $$\lim_{x\to 0} {1-\cos(x)\over x} = 0$$
it reduces to:
$$D_F(f)h = h\cos(f)$$
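A quick sanity check of this result: since $\sin$ is smooth, a pointwise Taylor expansion gives $$\sin(f+th) = \sin(f) + th\cos(f) + O(t^2),$$ so dividing by $t$ and letting $t \to 0$ recovers $D_F(f)h = h\cos(f)$ directly.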
Local maximum
we obviously have:
$$0 \leq ||F(f)||_\infty \leq 1$$
Hence:
if $||F(f)||_\infty = 1$, then f is a local maximum.
So any function $f$ for which the equation $f(x) \equiv \frac{\pi}{2} \pmod \pi$ has a solution is a local maximum.
Null Fréchet derivative
$$D_F(f) = 0 \iff \cos(f) = 0 \Rightarrow \exists k \in \mathbb{Z}, \forall x \in \mathbb{R}, f(x) = \frac{\pi}{2} + k\pi$$
such constant functions are indeed local maxima, but not the only ones. So instead of getting a superset containing all my local maxima, I get a strict subset of them from the nullity of my Fréchet derivative.
Question
As I'm pretty sure the mathematics I'm taught is right and I'm wrong... where am I wrong?
• What is the definition of the maximum of a mapping from $C(\mathbb R)$ to $C(\mathbb R)$? You seem to maximize $\|F(f)\|_\infty$ - the $\infty$-norm is not that differentiable. And $\cos(f)=0$ implies $\forall x\in \mathbb R$ $\exists k \in \mathbb{Z}$ such that $f(x)=\frac{\pi}{2}+k\pi$. Now find all such continuous functions... – daw May 15 '14 at 13:10
• @daw "the ∞-norm is not that differentiable", you mean there is a condition about the norm of my banach space for the fréchêt derivative to exists ? Or for a maximum do be defined ? – user2346536 May 15 '14 at 13:22
• @daw "Now find all such continuous functions" $\forall x f(x) = kπ$ seems a good shot to me. Constant functions basically. – user2346536 May 15 '14 at 13:32
• You seem to maximize $\phi(f):=\|F(f)\|_\infty$, but you only compute the derivative of $F$; you do not check differentiability of $\phi$. – daw May 15 '14 at 13:38
• @daw You mean that finding a maximum of a function $F:C_{\mathbb{R,R}} \mapsto C_{\mathbb{R,R}}$ does not really make sense as $C_{\mathbb{R,R}}$ is not ordered. And so when I chose my norm $\|\cdot\|_\infty$, I was in fact trying to maximize $\|F(f)\|_\infty$? Which is $\phi:C_{\mathbb{R,R}} \mapsto \mathbb{R}$, where $\mathbb{R}$ is ordered and $\phi(f) \leq \phi(g)$ does make sense. – user2346536 May 15 '14 at 13:44
The problem is to maximize $$\|F(f)\|_\infty.$$ However, the $\infty$-norm is not differentiable. In order to perform the analysis, the function $f\mapsto\|F(f)\|_\infty$ would need to be differentiable, which it is not.
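A finite-dimensional analogue of the obstruction: on $\mathbb{R}^2$, the map $x \mapsto \max(|x_1|,|x_2|)$ fails to be differentiable wherever $|x_1| = |x_2|$, for the same reason $x \mapsto |x|$ is not differentiable at $0$. The functional $\phi(f) = \|F(f)\|_\infty$ inherits this non-smoothness, so the vanishing of $D_F(f)$ alone cannot be expected to locate all maximizers of $\phi$.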
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330170154571533, "perplexity": 351.84352810230456}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575168.82/warc/CC-MAIN-20190922053242-20190922075242-00515.warc.gz"}
|
https://brilliant.org/problems/try-this-by-hand/
|
# Try This By Hand
Algebra Level 3
How many real roots does the polynomial
$P(x)=x^4+2x^2-x+1$
have?
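One possible by-hand argument (a sketch, not the posted solution): completing the square shows that $P$ is strictly positive, so it has no real roots,
$$P(x) = x^4 + 2x^2 - x + 1 = x^4 + x^2 + \left(x - \frac{1}{2}\right)^2 + \frac{3}{4} \geq \frac{3}{4} > 0.$$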
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39704629778862, "perplexity": 7033.148309629488}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591719.4/warc/CC-MAIN-20180720174340-20180720194340-00190.warc.gz"}
|
http://www.zazzle.co.uk/corelli+clothing
|
Showing All Results
148 results
Page 1 of 3
Related Searches: stratford upon avon, marie, card
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284785747528076, "perplexity": 4545.590794873969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163045148/warc/CC-MAIN-20131204131725-00010-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://hci.iwr.uni-heidelberg.de/node/4941
|
# When is a confidence measure good enough?
Title: When is a confidence measure good enough?
Publication Type: Journal Article
Year of Publication: 2013
Authors: Márquez-Valle, P; Gil, D; Hernàndez-Sabaté, A; Kondermann, D
Journal: submitted to CVPR 2013
Citation Key: marquez_13_when
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8171255588531494, "perplexity": 18682.67120702351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00449.warc.gz"}
|
https://motls.blogspot.com/2013/05/anthony-zee-einstein-gravity-in-nutshell.html?showComment=1370560484370
|
## Wednesday, May 22, 2013 ... //
### Anthony Zee: Einstein Gravity in a Nutshell
Škoda is not just a carmaker; it is producing happy drivers. And you may see that even the engines in the factory are having a great time.
In the same way, Anthony Zee – as Zvi Bern noticed – decided to make many readers fall in love with the physics of general relativity by having written this wonderful tome, Einstein Gravity in a Nutshell. Bern said that the goal wasn't to create new experts but Zee corrected him that he wanted to make the readers fall in love so deeply that they may dream about becoming experts, too. And the clearly enthusiastic Anthony had to enjoy the writing of the book, too.
I received this large, almost 900-page scripture on Einstein's theory yesterday. Obviously, I haven't read the whole book yet but I may have spent more time with it than most readers (more than zero) so that I can tell you why you should buy it and what philosophy, style, and content you may expect.
It's a book addressed to a wide variety of readers, including very young ones (perhaps college freshmen and bright high school students) and amateur physicists. Experienced physicists and professionals may find some gems or at least entertainment in the book, too. Because of this goal, the book starts with elementary things such as the units including $G,c,\hbar$ and Planck units, relativity even in classical physics, as well as basics of curved spaces, differential geometry, and so on.
The style is witty and somewhat dominated by words – and amusing titles. You may find lots of philosophical and historical remarks and stories from Anthony's professional life but the physics is always primary. And I mean physics, not rigorous mathematics. Tony is focusing on objects, phenomena, and their measurable and calculable quantities and the purpose of physics is to understand them and calculate them. So he spends almost no time with various picky issues – whether a function has to be smooth; whether one should use one fancy word from abstract mathematics or another. In fact, he considers the suppressed role of rigorous maths to be a part of the "shut up and calculate" paradigm that he subscribes to.
In some sense, you could say that the approach resembles the Feynman Lectures on Physics. It is very playful and the author is always careful to tell you things that are still fun and to stop elaborating on details when he could start to bore you. So the book (probably) keeps its fun status throughout (it's true for the portions I have read). But Anthony Zee manages to penetrate much more deeply into general relativity with this strategy.
Once he goes through all the basics – which allow a beginner to start with the subject almost from scratch but which seem very entertaining for a reader who doesn't really need such introductions anymore – and he answers all the FAQs on tensors and lots of other things, he offers some of the simplest derivations of Einstein's equations and is ready to apply them.
It's useful to know what concepts are considered primary starting points by the author. I would say that Zee is elevating the concept of symmetries and the action – the latter allows us to formulate most dynamical laws in classical and quantum physics really concisely (although we know perfectly consistent quantum systems that don't seem to have any nice action; and the action always assumes that we prefer a particular classical limit of a quantum theory – and the classical limit isn't necessarily unique).
Concerning the applications, some of the historically important applications that were designed to verify the theory are suppressed. But you get very close to the cutting edge, including the general-relativistic aspects of topics that are hot in the contemporary high-energy theoretical physics and the cosmological/particle-physics interface. So you may actually learn advanced topics about black holes including some Hawking radiation (including the numerical prefactors of the temperature; but the author doesn't go extremely far here; note that amusingly enough, the Hawking radiation is even discussed in an introductory chapter); large and warped extra dimensions; de Sitter and anti de Sitter space including a discussion of conformal transformations (although it doesn't seem like a full-fledged textbook on AdS/CFT); topological field theories; Kaluza-Klein theory (with extra spatial dimensions) and braneworlds; Yang-Mills theory (there's lots of electromagnetism in the earlier chapters); even twistor theory; discussions on the cosmic inflation and the cosmological constant problem; and heuristic thoughts on quantum gravity (some of them are more heuristic than the state of the art allows; but Zee's philosophy is that a textbook shouldn't be composed exclusively of the totally established stuff ready to be carved in stone).
Using lots of witticisms and clever analogies, Zee also proves some things you wouldn't expect – e.g. that Hades isn't inside the Earth. The equivalence principle is compared to the decision of all airlines, regardless of the size (and the size of their aircraft), to fly between two distant cities along the same path on the map. Witty and apt.
Anthony is convinced that most authors are explaining things in unnecessarily complicated ways – in some cases, perhaps, they want to look smart by looking incomprehensible. That's not Zee's cup of tea. He enjoys simplifying things as much as possible (but not more than that). And he loves to formulate things so that the reader is led to the conclusion that things are simple and make sense, after all. For example, there is a fun introduction to the least action principle (light isn't stupid enough not to know the best path) and we learn that "after Lagrange invented the Lagrangian, Hamilton invented the Hamiltonian". It makes sense, doesn't it?
There's a lot to find in the book. Some readers say that the book is less elementary than Hartle's book but more elementary than Carroll's. Maybe. Anthony is more playful and less formal but there are aspects in which he gets further than any other introductory textbook of GR.
The book is full of notes, a long index, and simply clever exercises. The illustrations are pretty and professional. If you are buying books to see photographs of attractive blonde women with toys, you won't be disappointed, either.
Because the book is really extensive and even the impressions it has made on your humble correspondent in the single day are numerous, I have to resist the temptation to offer you examples, excerpts etc. because that could make this blog entry really long by itself. Instead, I recommend you once again to try the book.
#### snail feedback (16) :
Sounds really good. I have his "QFT In a Nutshell" which I really enjoyed - not a textbook, but a fun romp through quantum field theory. This sounds like it's written in the same style. I'll have to get myself a copy.
QFT in a Nutshell was easier than many other books that are supposed to be introductory but still not easy.
I don't think Zee's book surpasses Gravity by MTW - Now that's an incredibly unique, creative book with its variety of diagrams, history, anecdotes etc that makes learning from this book so enjoyable. I wish the authors of today used this as a standard.
It's not available on Kindle yet either.
Prof Zee was once visiting my physics department in Paris when I was a PhD student. To my eternal shame I changed his name on the notice advertising his talk so that his name was Prof Zee Zee. If he ever reads this, can I apologize. It was late and I was a little bit drunk.
What does it mean? ;-) I would slightly understand Wee-Wee.
Zizi (zee-zee) in French is a "willy" in English (or "pinďourek" in Czech)
Lol... only for kids :-)
Interesting, that's exactly what I thought that wee-wee means in English. ;-)
Lubos, to wee is to pee... I believe "to pee" is more American since Sheldon Cooper always says it when he thinks he is the master of his own bladder...
LOL, you seem to be an expert in these gadgets - but I guess you're only a theorist, aren't you? ;-)
"...note that amusingly enough, the Hawking radiation is even discussed in an introductory chapter..."
Yep, I already knew that Anthony Zee is a bright, cool, funny as hell rascal ... :-(O)
In the QFT Nutshell, which I interpret as an INTRODUCTORY text to QFT, he already talks about brane worlds on p38 in chapter 1(!) :-))), and in the picture on p.223 he obviously could not hold back putting a world sheet beside the world line (!) :-D
For throwing in such and similar cool funny nuggets, he always obtains a happy smile from me :-)
Now I look forward to read the proof that hell is not inside the earth, ha ha ha
This GR nutshell is clearly a "must have" too!
Thanks for the nice and funny review Lumo :-)
His QFT book was very nice. I just bought this one after reading the "look inside" excerpts at amazon. At 888 pages it seems pretty substantial. Hope it does not take too long to ship from states as it's not available in Amazon europe.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3709833025932312, "perplexity": 1052.2925403144684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576355.92/warc/CC-MAIN-20190923105314-20190923131314-00348.warc.gz"}
|
https://content.ces.ncsu.edu/reducing-odor-and-dust-emissions-from-fan-ventilated-swine-barns
|
NC State Extension Publications
## Highlights
• Odor emissions from swine barns are a major challenge for the North Carolina pork industry.
• Research demonstrated that a simple pairing of a windbreak wall with a vegetative strip was a very effective system for reducing odor emissions from fan-ventilated barns.
• The system has little effect on the barn ventilation since its backpressure is very low.
• At current prices, the material cost for the system is about $1 per pig placed, and it can be readily built with farm labor and hand tools.
• The system is easily retrofittable on existing barns, highly modular, and robust.

## Introduction

Air emissions, particularly odor, from swine operations are a major challenge in North Carolina (the second largest U.S. swine producer) and throughout the U.S. pork industry. The North Carolina Department of Environmental Quality's Division of Air Quality has the authority to investigate odor complaints from swine operations, with some exceptions. Several nuisance lawsuits filed by several plaintiffs against a major swine integrator have led to $98 million in judgments (Yeomen 2020). These plaintiffs were neighbors of contract producers raising pigs for the swine integrator. Several other lawsuits are pending in North Carolina courts. Hence, there is a need for methods to reduce air emissions from swine farms. Dietary manipulation, waste amendments, indoor air quality treatment (for example, sprinkling oil), and exhaust air treatment can all be used to reduce emissions from swine farms (Maurer et al. 2016). This fact sheet provides information regarding exhaust air treatment methods compatible with existing U.S. swine barn ventilation systems, with emphasis on the engineered windbreak wall–vegetative strip system.
In addition to being affordable, an exhaust treatment system typically imposes very little backpressure on the ventilation fans, which is important because high backpressure can reduce ventilation rate through the barn and adversely impact swine welfare and performance. Two exhaust treatment systems that can be used to reduce emissions from swine barns include the vegetative environmental belts (VEBs) or shelterbelts and the electrostatic particle ionization (EPI) Air Filter Wall. A VEB consists of suitable shrubs and trees planted downwind of the exhaust fans. The VEB dilutes and disperses the pollutant plume; traps and helps settle out odorous dust; absorbs or adsorbs odorous gases; and improves the aesthetics (Tyndall and Colletti 2007). Emissions of ammonia and dust are reduced by almost 50%, while odor emissions are reduced by up to 15% by VEBs (Tyndall and Colletti 2007). However, VEBs take as long as five years to be effective, require large footprints, and require regular maintenance (Tyndall and Colletti 2007).
The EPI Air Filter Wall developed by Baumgartner Environics Inc. of Olivia, Minnesota, consists of a low-porosity geotextile fabric screen placed about 8 feet in front of the exhaust fans. The exhaust air is ionized by an electrostatic precipitator that causes dust (and attached gases) to attach to grounded surfaces or clump together to form larger particles that settle out; the “cleaned” air exits from the space between the fans and screen (Mettler 2019). Andersen (2019) reported that the EPI Air Filter Wall reduced odor and dust emissions by 30% and 50%, respectively, from a swine barn. While no backpressure data are available for the VEBs or the EPI Air Filter Wall, unlike industrial exhaust treatment systems, they are likely to impose very little backpressure.
Ajami et al. (2019) evaluated an engineered windbreak wall–vegetative filter strip system (Figure 1), henceforth referred to as “system.” The system consisted of a wood-framed structure covered by a fiberglass pool-and-patio screen covering the exhaust fans. The screen (mesh size 18×14; porosity 60%) is widely available in hardware stores. Based on computer modeling, finer screens were not considered because they increased backpressure on the fans, which would be further worsened by dust clogging. Also based on computer modeling, to prevent backpressure from increasing due to screen clogging in the absence of rain, a 12-inch opening was created at the bottom of the front screen. An 18-inch-wide strip of switchgrass was planted to cover the 12-inch opening (Ajami et al. 2019). Belt (2015) recommended the use of switchgrass in VEBs. The system reduced swine barn odor emission at 33 feet from the fans by 71% and dust emission by 28% (Ajami et al. 2019). Of significance was its effect on the house ventilation system, which was very small; backpressure on the house fans was less than 0.02 inches of water column with all three fans running (Ajami et al. 2019).
Based on June 2020 North Carolina prices, the material cost of the system (including switchgrass plugs) that treats all of the exhaust (about 80,000 cfm) from a finishing barn (880 pigs) is about $900, or $1 per pig placed, including state taxes. The system was very robust and incurred little damage during Hurricane Florence, which produced peak wind gusts of 40 mph and heavy rainfall. As we will discuss later, the system has a modest footprint, so retrofitting existing barns with such a system is not difficult. In addition, this system is very modular and can be scaled up or down readily depending on the number of fans to be covered. This system is ideal for a tunnel-ventilated barn where several fans are clustered together, but it can also be used on sidewall-ventilated barns. Since this system is used to treat emissions from the exhaust fans, it does not affect air quality inside the barns.
Figure 1. System used for treating emissions from a tunnel-ventilated swine finishing barn. It covers two 48-inch fans and one 36-inch fan (on the right). The switchgrass strip covers an unscreened 12-inch opening.
## How the System Functions
The system reduces emissions, particularly odors, by trapping some of the dust on the screen as well as on the ground. Since dust transports odorous gases, reducing dust emissions reduces odor emissions (Hammond et al. 1981). As the front screen gets clogged with dust, more of the exhaust leaves through the top and side screens, causing dilution. The plants (switchgrass) also trap some dust and absorb nitrogen in the dust through the roots. They also absorb some odorous gases directly through the leaves (Morgan and Parton 1989; Hiatt 1998) and transport a fraction of those gases into the root zone, where the gases are degraded by soil microbes (Dela Cruz et al. 2014).
The system’s performance will vary depending on the number of fans operating at any time in the system, dust accumulation on the screen, and the height and denseness of vegetation. When the plants become dormant in fall, the system is less effective. The 71% odor reduction measured in summer and fall of 2018 (Ajami et al. 2019) occurred with no vegetation in front of the minimum ventilation fan (small fan on the right shown in Figure 1), and vegetation in front of the other fans was under 2 feet tall. The system shown in Figure 1 is from summer of 2017, when the vegetation was taller than when the odor measurements were made. In summer of 2018, the switchgrass had to be replanted, as excess pruning in the spring killed the earlier planting. With taller vegetation (5 to 6 feet), odor and dust might have been further reduced.
## System Construction and Maintenance
A box-shaped structure made of pressure-treated wood and pool-and-patio screen (mesh size 18 × 14; porosity 60%) identical to the one used by Ajami et al. (2019) is proposed. Placing the front screen at least twice the fan diameter from the fan blade keeps backpressure inside the system acceptably low (Ajami et al. 2019). For example, using the equations provided in Figure 2, to treat exhaust from 4-foot (48-inch) fans, the system length (L), plus the distance from the barn wall to the fan blades, should be 9 feet (2 times 4 feet plus 1 foot). If there are different sizes of fans (for example, 48-inch and 36-inch fans), the largest fan diameter (D) should be used to calculate L. When space is not limited, L can be increased to 2.5 times D plus 1 foot. System backpressure will decrease as L is increased. The height of the system can be close to the barn eave height, and it should be wide enough to cover all the fans in the fan bank. The opening at the bottom of the front screen is to prevent excessive backpressure and should be at least 1 foot high (Figure 2).
Producers have experience in building wood structures. Here we provide broad guidelines on building the system, with the caveat that barns can widely vary in design. Southern yellow pine grade #2 or lumber species and grade of comparable load-carrying capacity should be used. All lumber sizes mentioned are nominal sizes, and only pressure-treated lumber should be used; wood screws compatible with treated lumber should be used. In addition to the wind load, the system must support the worker doing repair and maintenance on the top. Two side views of the system tested by Ajami et al. (2019) while under construction and upon completion are shown in Figure 3. Size and spacing recommendations for the various system components labeled in Figure 3 are discussed below with slight modifications to make the system safe and sturdy, especially during construction and repairs.
Member 1: This member ties the structure to the barn. Attach the 2-inch x 4-inch vertical member to the stud in the barn wall. Member 1 can be kept above the soil but must be attached to the concrete wall using suitable fasteners. These members should be spaced no more than 9 feet apart, so for fans bigger than 48 inches, install a member between each fan.
Member 2: This horizontal 2-inch x 4-inch member is required to create the structure for the top screen.
Member 3: This member anchors the structure. Bury these 4-inch x 4-inch vertical posts 3 feet in the soil. Member 3 has to be paired with member 1.
Member 4: This 2-inch x 4-inch lumber-on-edge (for greater load-carrying capacity) member connects member 2 to member 3 to support the top screen.
Member 5: This 2-inch x 4-inch on-edge member connects member 1 to member 3 to make the system sturdy. One member 5 is needed for each member 1 (or member 3).
Member 6: This 2-inch x 4-inch lumber on edge runs parallel to member 2 and ties together the 4-inch x 4-inch posts (member 3) for greater structural rigidity.
Member 7: These members are required to create a safe scaffold for the worker doing repairs on the top. Several of these braces (2-inch x 4-inch lumber on edge) are laid parallel to member 4 at spacing not exceeding 2 feet, connecting member 2 to member 6. Connect adjacent braces to one another using 2-inch x 4-inch lumber pieces. While Ajami et al. (2019) did not experience heavy snowfall, spacing the members 2 feet apart and bracing them could be adequate for eastern North Carolina, where much swine production is concentrated. In areas with higher snowfall, some design changes may be required.
Member 8: These are 2-inch x 4-inch lumber braces that support the front and side screens and are installed about mid-height.
For durability, the construction described previously is heavier than the system constructed by Ajami et al. (2019) using 2-inch x 4-inch treated dimension lumber because their system was built for a short-term research project. However, as of June 2020, the system had been operating four years without requiring any structural repairs. You should strengthen the connections with suitable hanger, brackets, ½-inch plywood gussets (Figure 3), or lumber. Staple the fiberglass screen to the lumber frame and secure it properly using furring strips (Figure 3) or strapping strips to secure tri-ply fabric sheets to lumber (in chicken houses). A screen door may be provided, as was done by Ajami et al. (2019). Based on site conditions, you may have to modify the design and construction. For example, if feedlines enter the house through the endwalls where the fans are located, you should omit the top screen. Omitting the top screen will reduce the effectiveness of the system, but it will considerably reduce cost and complexity. As is clear from the previous description, the system can be built very rapidly with farm labor using hand tools.
Plant switchgrass to screen the opening at the bottom of the front screen (Figure 2) in an 18-inch-wide strip in spring when the soil temperature is above 60°F, preferably late March through April. Ajami et al. (2019) reported that the switchgrass cultivar ‘Alamo’ grew more vigorously and taller than ‘Shenandoah’ at their research site, which was in Zone 8a of the U.S. Department of Agriculture Plant Hardiness Zone Map. Select the switchgrass cultivar based on your location and plant availability at local nurseries. Till the soil to facilitate rapid root development. Plant at least two parallel rows with plants in a row spaced 12 inches apart. Stagger the plants in the two rows with respect to one another to prevent short-circuiting.
The system requires very little maintenance since it is readily cleaned by rain. If it does not rain for more than three weeks and the screen is clogged, wash off the screen with a garden hose nozzle. Since it is porous, it can resist high wind speeds, as was observed by Ajami et al. (2019). In late winter, trim the dried switchgrass stubble no shorter than 6 inches or wait until spring when the plant is breaking dormancy.
Figure 2. Schematic of the engineered windbreak wall–vegetative system for a swine barn. D is the nominal fan diameter and L is two times D plus the distance of the fan blades from the barn wall (1 foot).
Figure 3. Framing of the system (a) under construction and (b) after completion at the swine farm (Ajami et al. 2019). Plywood gussets (1/2-inch thick) strengthen the joints, and furring strips secure the screen to the frame.
## References
Ajami, A., S.B. Shah, L. Wang-Li, P. Kolar, and M.S. Castillo. 2019. “Windbreak Wall-Vegetative Strip System to Reduce Air Emissions from Mechanically Ventilated Livestock Barns: Part 2—Swine House Evaluation.” Water, Air, & Soil Pollution 230, no. 12.
Andersen, D. 2019. Odor Control Options: Electrostatic Fence. Video.
Belt, S.V. 2015. Plants Tolerant of Poultry Farm Emissions in the Chesapeake Bay Watershed. Beltsville, MD: USDA-NRCS Norman A. Berg National Plant Materials Center.
Dela Cruz, M., J.H. Christensen, J.D. Thomsen, and R. Muller. 2014. “Can Ornamental Potted Plants Remove Volatile Organic Compounds from Indoor Air? A Review.” Environmental Science & Pollution Research 21, no. 24: 13909–13928.
Hammond, E.G., C. Fedler, and R.J. Smith. 1981. “Analysis of Particle-Borne Swine House Odors.” Agriculture and Environment 6, no. 4: 395–401.
Hiatt, M.H. 1998. “Bioconcentration Factors for Volatile Organic Compounds in Vegetation.” Analytical Chemistry 70, no. 5: 851–856.
Maurer, D.L., J.A. Koziel, J.D. Harmon, S.J. Hoff, A.M. Rieck-Hinz, and D.S. Andersen. 2016. “Summary of Performance Data for Technologies to Control Gaseous, Odor, and Particulate Emissions from Livestock Operations: Air Management Practices Assessment Tool (AMPAT).” Data in Brief 7: 1413–1429.
Mettler, D. 2019. “Shocking Smell: A Two-Part System for the Barn that Cleans the Air.” Manure Manager (November–December): 24–26.
Morgan, J.A. and W.J. Parton. 1989. “Characteristics of Ammonia Volatilization from Spring Wheat.” Crop Science 29, no. 3: 726.
Tyndall, J. and J. Colletti. 2007. “Mitigating Swine Odor with Strategically Designed Shelterbelt Systems: A Review.” Agroforestry Systems 69, no. 1: 45–65.
Yeomen, B. 2020. "'Nobody wants another Flint, Michigan,' judge tells Smithfield in hog-case appeal hearing." Food & Environment Reporting Network. Jan. 31.
# Authors
Project Manager, NC Dept. of Environmental Quality (former graduate student)
Biological & Agricultural Engineering
Extension Specialist and Professor
Biological & Agricultural Engineering
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.506411075592041, "perplexity": 5881.052988173694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153857.70/warc/CC-MAIN-20210729105515-20210729135515-00004.warc.gz"}
|
https://brilliant.org/problems/a-calculus-problem-by-samara-simha-reddy/
|
Exponential and Powers! (10)
Calculus Level 3
$\Large \displaystyle \int_0^{\infty} 3^{-4z^2} \, dz = \, ?$
• Use the approximation $$\pi = \dfrac{22}{7}$$ in your computation of the final value.
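A sketch of one evaluation route, assuming the standard Gaussian integral $$\int_0^{\infty} e^{-az^2} \, dz = \frac{1}{2}\sqrt{\frac{\pi}{a}}, \quad a > 0:$$
$$\int_0^{\infty} 3^{-4z^2} \, dz = \int_0^{\infty} e^{-(4\ln 3) z^2} \, dz = \frac{1}{2}\sqrt{\frac{\pi}{4\ln 3}} = \frac{1}{4}\sqrt{\frac{\pi}{\ln 3}},$$
which with $\pi = \frac{22}{7}$ gives approximately $0.423$.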
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9274441003799438, "perplexity": 2119.1076822715013}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891706.88/warc/CC-MAIN-20180123032443-20180123052443-00584.warc.gz"}
|
https://geo.libretexts.org/Bookshelves/Geology/Book%3A_Controversies_in_the_Earth_Sciences_(Richardson)/03%3A_Consensus_in_the_Craters/3.12%3A_Teaching_and_Learning_About_Mass_Extinctions
|
# 3.12: Teaching and Learning About Mass Extinctions
Let's take some time to reflect on what we've covered in this lesson!
## Teaching/Learning Discussion Activity
#### Directions
For this activity, I want you to reflect on what we've covered in this lesson and to consider how you might adapt these materials to your own classroom. Since this is a discussion activity, you will need to enter the discussion forum more than once in order to read and respond to others' postings. This discussion is scheduled to run during the last week of this lesson.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.572492778301239, "perplexity": 357.16845179193615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00305.warc.gz"}
|
https://crypto.stackexchange.com/help/badges/9/autobiographer
|
# Autobiographer
Complete "About Me" section of user profile.
Awarded 23159 times.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410312533378601, "perplexity": 7690.1433882365045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910204.90/warc/CC-MAIN-20201030093118-20201030123118-00087.warc.gz"}
|
https://www.lessonplanet.com/teachers/linear-function-12th-higher-ed
|
# Linear Function
In this linear functions worksheet, students solve and complete 4 different sections of a given problem. First, they determine whether the velocity of a car increases at a given speed and explain their answer. Then, students determine the time at which this happens and justify their reasoning.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678256511688232, "perplexity": 889.0520228554124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00523.warc.gz"}
|
http://www.phpbuilder.com/snippet/detail.php?type=snippet&id=704
|
# A parser class for mime.types
by: Daniel Boesswetter
|
May 27, 2002
Version: 1.3
Type: Class
Category: HTTP
Description: A small PHP-class that gives you access to the mime.types file that comes e.g. with apache. It is useful if you want to set the "Content-type" header before you output a file whose type is only known by its extension.
```
<?php /*
**
** mime_types.inc.php
**
** (c) 2002 peppermind Network Neue Medien, www.peppermind.de
**
** Daniel Boesswetter, [email protected], Thu May 23 12:40:52 CEST 2002
**
**
** <LEGAL BLURB>
** 1. this software is distributed free of charge (send me a mail if you like it :)
** 2. use at your own risk
** </LEGAL BLURB>
**
**
** $Log: mime_types.inc.php,v $
** Revision 1.3  2002/05/27 12:44:20  bos
**
** Revision 1.2  2002/05/24 09:40:50  bos
** added the default behaviour if no filename is specified: try to
** find apache's server root via the phpinfo command and
**
** Revision 1.1  2002/05/23 12:09:14  bos
** simple class for accessing apaches mime.types file
**
**
** this class provides access to the contents of the mime.types file as
** used by apache. it is based on the assumption, that a mime-type can
** have multiple associated file-extensions, but one file extension
** is only associated with one mime-type.
**
** mime.types is assumed to have one mime-type per line (with no leading
** whitespace), followed by whitespace-separated filename-extensions
** (usually without dots).
**
** todo:
** - optimization by using global variables or class variables to
**   avoid multiple parsing of the same file in a single process (page)
** - optimize the documentation :)
**
*/

if (!defined("mime_types.inc.php")):
define("mime_types.inc.php", true);

class mime_types {

    /*
       constructor:
       specify a filename, if omitted will try to find it in apache's
       server-root (by using phpinfo, see below)
    */
    function mime_types( $filename="" ) {
        if ( $filename )
            $this->_filename = $filename;
        else
            $this->_filename = $this->_get_default_filename();
        $this->_initialized = false;
    }

    /*
       return the mime-type for a given file-extension
    */
    function type_by_extension( $ext ) {
        if ( !$this->_initialized ) $this->_initialize();
        return $this->_ext2type[$ext];
    }

    /*
       return an array of file-extensions for a given type
    */
    function extensions_by_type( $type ) {
        if ( !$this->_initialized ) $this->_initialize();
        return $this->_type2ext[$type];
    }

    /*
       array of known mime-types
    */
    function known_types() {
        if ( !$this->_initialized ) $this->_initialize();
        return array_keys( $this->_type2ext );
    }

    /*
       array of known file-extensions
    */
    function known_extensions() {
        if ( !$this->_initialized ) $this->_initialize();
        return array_keys( $this->_ext2type );
    }

    /*
       returns a human-readable dump of the internal state of this object
    */
    function dump() {
        if ( !$this->_initialized ) $this->_initialize();
        ob_start();
        echo "_filename=".$this->_filename."\n";
        echo "_ext2type:\n";
        print_r( $this->_ext2type );
        echo "_type2ext:\n";
        print_r( $this->_type2ext );
        $ret = ob_get_contents();
        ob_end_clean();
        return $ret;
    }

    /*
       internal: read file and parse the contents
    */
    function _initialize() {
        $lines = file( $this->_filename );
        $this->_ext2type = array();
        $this->_type2ext = array();
        foreach ( $lines as $line ) {
            if ( preg_match( "/^\s*\#|^\s*\$/", $line ) ) continue;
            $line = chop( $line );
            $exts = preg_split( "/\s+/", $line );
            $type = array_shift( $exts );
            $this->_type2ext[$type] = $exts;
            foreach ( $exts as $ext ) {
                $this->_ext2type[$ext] = $type;
            }
        }
        $this->_initialized = true;
    }

    /*
       try to find the servers mime.types (ugly, but it works)
    */
    function _get_default_filename() {
        /*
           capture the output of phpinfo(8) and find the table entry
           called "Server Root". mime.types usually resides
           under conf/mime.types
           FIXME: this works only for apache!!!
        */
        ob_start();
        phpinfo(8);
        $text = ob_get_contents();
        ob_end_clean();
        preg_match( "/Server Root(<[^>]*>)*([^<]+)/", $text, $matches );
        return $matches[2]."/conf/mime.types";
    }
}
endif;
?>
```
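A minimal usage sketch (not part of the original listing; the mime.types path and the served extension below are just example values):
```
<?php
// assumes the class above was saved as mime_types.inc.php
require_once "mime_types.inc.php";

// point the parser at an explicit mime.types file, or pass no argument
// to let the class guess apache's server root via phpinfo()
$mt = new mime_types( "/etc/mime.types" );

// set the "Content-type" header before outputting a file whose type
// is only known by its extension
$type = $mt->type_by_extension( "pdf" );
if ( !$type )
    $type = "application/octet-stream"; // fallback for unknown extensions
header( "Content-type: ".$type );
?>
```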
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936865925788879, "perplexity": 8286.543321442276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463475.57/warc/CC-MAIN-20150226074103-00297-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://ufapro888.com/lemon-tree-ezbp/viewtopic.php?id=radius-of-convergence-infinity-4b617c
|
2/12/2020 11:03 PM
*(3x-6)^n — I know the ratio test says to take the limit as n goes to infinity of |A_n/A_(n+1)|. The radius of convergence is half the length of the interval of convergence; it is found by adding the absolute values of both endpoints together and dividing by two. Thanks.

Find the limit, as n goes to infinity, of (2n·e^((ln(n^2) + iπn)/(16n^2 + 5i)^(1/2)))/(4n^2 + 3in)^(1/2).

What is the radius of convergence of the series Σ_(n=1)^∞ z^n/n, and how do you get it? The series Σ_(n=1)^∞ z^n/n^2 defines a function called the dilogarithm, which has a branch point at z = 1.

Worked examples: for Σ_(k=1)^∞ (x−5)^k/(k·3^k) the ratio test gives a radius of convergence of 3; for Σ_(k=0)^∞ (3x)^k the radius of convergence is 1/3. When the ratio-test limit is 0 for every x, the radius of convergence is infinity and the interval of convergence is −∞ < x < ∞ (the series converges everywhere).
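As a sketch of the ratio test in code (SymPy is my choice here, not the page's), this checks the limit R = lim |a_k/a_(k+1)| for the first worked example above:

```python
# Ratio test for the radius of convergence of sum (x-5)^k / (k*3^k):
# R = lim_{k->oo} |a_k / a_{k+1}|, which should come out to 3.
import sympy as sp

k = sp.symbols('k', positive=True)
a = 1 / (k * 3**k)                                   # coefficient of (x - 5)^k
R = sp.limit(sp.Abs(a / a.subs(k, k + 1)), k, sp.oo)
print(R)  # 3  ->  the series converges for |x - 5| < 3
```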
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9788835048675537, "perplexity": 677.4835205649345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364027.59/warc/CC-MAIN-20210302160319-20210302190319-00340.warc.gz"}
|
https://www.ias.ac.in/describe/article/jess/129/0004
|
• Evolution of the hydraulic properties of deep fault zone under high water pressure
• Fulltext
https://www.ias.ac.in/article/fulltext/jess/129/0004
• Keywords
Fault zone; water injection test; splitting pressure; equivalent hydraulic gap width; safety factor
• Abstract
Repeated water injection tests with varied injection flow rates are conducted on a fault zone under the roadway floor to study the evolution of the hydraulic properties of the fault zone under high water pressure. Based on the analysis of the test results, the evolution process can be divided into three successive stages: the initial infiltration stage, the splitting stage, and the scouring infiltration stage. It is found that in the splitting stage and the scouring infiltration stage, the hydraulic conductivity of the fault zone increases rapidly under the condition of sufficient water supply, and this is likely to evolve into a large-flow-rate water inrush accident. Therefore, the safety factor e of the fault zone should be defined as the ratio of the splitting pressure of the fault zone $P_{f}$ over the aquifer pressure $P_{h}$, i.e., $e = P_{f}/P_{h}$; when e < 1, water inrush may occur in the fault. Based on the results of this study, a new method is proposed for assessing the water-inrush risk of a fault.
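To make the proposed criterion concrete, here is a minimal sketch (my own illustration with made-up pressure values; the paper reports no code) of the safety factor $e = P_{f}/P_{h}$:

```python
# Safety factor of a fault zone, e = P_f / P_h, where P_f is the splitting
# pressure of the fault zone and P_h the aquifer water pressure.
def safety_factor(p_f_mpa: float, p_h_mpa: float) -> float:
    return p_f_mpa / p_h_mpa

e = safety_factor(p_f_mpa=3.2, p_h_mpa=4.0)   # illustrative values only
print(f"e = {e:.2f}:", "water inrush may occur" if e < 1 else "no inrush predicted")
```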
• Author Affiliations
1. School of Resources and Earth Sciences, China University of Mining and Technology, Xuzhou 221 116, Jiangsu, China.
2. Yanzhou Coal Mining Co. Ltd., Yanzhou Coal Mining Company, Zoucheng 273 500, Shandong, China.
• Journal of Earth System Science
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3745500147342682, "perplexity": 4517.194830075385}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150308.48/warc/CC-MAIN-20210724191957-20210724221957-00666.warc.gz"}
|
http://mathhelpforum.com/algebra/42595-what-does-mean.html
|
# Math Help - What does this mean?
1. ## What does this mean?
What expression raised to the fourth power is -8x^6y^9z^15
2. Originally Posted by clueless82
What expression raised to the fourth power is -8x^6y^9z^15
you want to find an expression so that $(\mbox{the expression})^4 = -8x^6y^9z^{15}$ that is, $\mbox{the expression } = \pm \sqrt[4]{-8x^6y^9z^{15}}$
3. ## The problem in the book above it
says... What expression raised to the third is -8x^6y^9z^15? and the books answer in the back is y^3-y+6
I have No idea how they got that and there is no examples about it...
4. Originally Posted by clueless82
says... What expression raised to the third is -8x^6y^9z^15? and the books answer in the back is y^3-y+6
I have No idea how they got that and there is no examples about it...
is it to the third or to the fourth. secondly, there is no way that answer is right. you are looking at the wrong problem in the answer section.
5. Originally Posted by clueless82
says... What expression raised to the third is -8x^6y^9z^15? and the books answer in the back is y^3-y+6
I have No idea how they got that and there is no examples about it...
Well, the book is wrong
Let L denote our quantity
So we have that
$L^3=-8x^6y^9z^{15}\Rightarrow L=\sqrt[3]{-8x^6y^9z^{15}}=-2x^2y^3z^5$
Just think about it, how does one take the $\sqrt[n]{}$ of a monomial and get a trinomial?
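(A quick check of the cube above, added here with SymPy — not part of the original thread:)

```python
# Verify that (-2*x**2*y**3*z**5)**3 expands to -8*x**6*y**9*z**15.
import sympy as sp

x, y, z = sp.symbols('x y z')
print(sp.expand((-2 * x**2 * y**3 * z**5)**3))   # -8*x**6*y**9*z**15
```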
6. Originally Posted by clueless82
says... What expression raised to the third is -8x^6y^9z^15? and the books answer in the back is y^3-y+6
I have No idea how they got that and there is no examples about it...
Well....we want to find $\varphi$ (our expression)
If $\varphi^3=-8x^6y^9z^{15}$ then $\varphi=\sqrt [3] {-8x^6y^9z^{15}}\implies\varphi=-2x^2y^3z^5$.
This is not $y^3-y+6$...are you sure that this was the answer?
7. ## Thank You All So Very Much!!
I am clueless with algebra.. have failed every class i have taken with it... even after tutors and HOURS of studying and reading examples.. now I have 29 problems and only 3 and 1/2 hours to get them done! :+(
thanks again all! I admire YOU ALL!!
8. Originally Posted by clueless82
I am clueless with algebra.. have failed every class i have taken with it... even after tutors and HOURS of studying and reading examples.. now I have 29 problems and only 3 and 1/2 hours to get them done! :+(
thanks again all! I admire YOU ALL!!
do you understand what happened here? how did they go from the expression with the cube-root to the final answer?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7715402245521545, "perplexity": 688.393276699267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444657.46/warc/CC-MAIN-20141017005724-00130-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/214152/trace-and-identity-are-the-only-linear-matrix-invariants?answertab=active
|
# Trace and identity are the only linear matrix invariants?
This question is obviously related to that recent question of mine, but I feel it’s sufficiently different to be posted as a separate question. Let $V$ be a finite-dimensional space. Let ${\cal L}(V)$ denote the space of all endomorphisms of $V$. Say that an endomorphism $\phi$ of ${\cal L}(V)$ is invariant when it satisfies $$\phi (gfg^{-1})=g\phi(f)g^{-1}$$ for any $f,g \in {\cal L}(V)$ with $g$ invertible.
Prove or find a counterexample or provide a reference: $\phi$ is invariant iff there are two constants $a,b$ such that $\phi(f)=af+b{\sf tr}(f){\bf id}_V$ for all $f$. With the help of a PARI-GP program, I have checked that this is true when ${\sf dim}(V) \leq 5$. Intuitively, the similitude invariants of a matrix are functions of the coefficients of the characteristic polynomial, and the coefficient of the second-highest-degree term, the trace, is the only linear one.
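(The easy direction can be checked numerically. A NumPy sanity check — my addition, not the poster's PARI-GP program: maps of the form $\phi(f)=af+b{\sf tr}(f){\bf id}_V$ are invariant because the trace is unchanged under conjugation.)

```python
# Check phi(g f g^{-1}) = g phi(f) g^{-1} for phi(f) = a*f + b*tr(f)*id.
import numpy as np

rng = np.random.default_rng(1)
n, a, b = 4, 2.0, -3.0
f = rng.standard_normal((n, n))
g = rng.standard_normal((n, n))              # generically invertible
g_inv = np.linalg.inv(g)

def phi(f):
    return a * f + b * np.trace(f) * np.eye(n)

assert np.allclose(phi(g @ f @ g_inv), g @ phi(f) @ g_inv)
```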
-
Let $G=\operatorname{GL}(V)$ be the group of automorphisms of $V$. Then as a $G$-module, $\mathcal L(V)$ is isomorphic to $V\otimes V^*$, and $\hom_k(\mathcal L(V),\mathcal L(V))$ is isomorphic to $V^{\otimes 2}\otimes V^{*\otimes 2}$. Your question is, in this language:
what is the dimension of the $G$-invariant subspace of $V^{\otimes 2}\otimes V^{*\otimes 2}$?
Notice that $V^{\otimes 2}\otimes V^{*\otimes 2}$ is isomorphic as a $G$-module to $\hom_k(V^{\otimes 2},V^{\otimes 2})$, so its invariant subspace is actually the space of $G$-equivariant maps, $\hom_G(V^{\otimes 2},V^{\otimes 2})$. The fact that there are exactly two linear invariants in your sense is the case $m=2$ of the claim stated on page 31 of those notes — this is part of what's called Schur-Weyl duality.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618517160415649, "perplexity": 65.26018563419251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158609.98/warc/CC-MAIN-20160205193918-00232-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://clay6.com/qa/29157/the-iupac-name-of-the-compound-ch-2-ch-ch-ch-3-2
|
# The IUPAC name of the compound $CH_2=CH-CH(CH_3)_2$ is:
$(a)\;\text{1,1-dimethyl-2-propene} \\ (b)\;\text{3-methyl-1-butene} \\ (c)\;\text{2-vinylpropane} \\ (d)\;\text{1-isopropyl ethylene}$
The correct IUPAC name is 3-methyl-1-butene.
Hence b is the correct answer.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.431605726480484, "perplexity": 10571.546489323975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719465.22/warc/CC-MAIN-20161020183839-00053-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/233273/can-a-normalizer-be-described-by-generators-and-relations
|
# Can a normalizer be described by generators and relations?
I am trying to use generators and relations here.
Let M ≤ S_5 be the subgroup generated by two transpositions t_1= (12) and t_2= (34).
Let N = {g ∈S_5| gMg^(-1) = M} be the normalizer of M in S_5.
How should I describe N by generators and relations?
How should I show that N is a semidirect product of two Abelian groups?
How to compute |N|?
How many subgroups conjugate to M are there in S_5 ? Why?
(I think Sylow's theorems should be used here.)
-
Note that if $g$ normalizes $M$, then $g$ cannot move the point $5$ (why not?), so you are really doing this in $S_4$... – user641 Nov 9 '12 at 1:46
Hint: Show that $M=\{id,(12),(34),(12)(34)\}$. What does a conjugation $gsg^{-1}$ mean for a cycle $s\in S_5$? Find some elements of $N$, then try to describe all elements.
If you have $|N|$, for the last question, consider the orbit of $M$ under the action of $G$ by conjugation on the set of subgroups: $$\langle g, H\rangle\mapsto gHg^{-1}$$ Then $N$ is exactly the stabilizer of $M$; show that for any elements $x,y\in G$ we have $xMx^{-1} = yMy^{-1} \implies x^{-1}y\in N$, and conclude that the number of conjugates of $M$ is $|S_5|/|N|$.
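(A brute-force check of these counts, added here in plain Python — not part of the original thread. A permutation is represented as a tuple p with p[i] the image of i:)

```python
# Compute the normalizer N of M = {id, (12), (34), (12)(34)} in S_5 and the
# number of conjugates of M, by brute force over all 120 permutations.
from itertools import permutations

def compose(p, q):                      # apply q first, then p
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def conjugate(g, m):                    # g m g^{-1}
    return compose(compose(g, m), inverse(g))

e  = (0, 1, 2, 3, 4)
t1 = (1, 0, 2, 3, 4)                    # the transposition (12), on points 0..4
t2 = (0, 1, 3, 2, 4)                    # the transposition (34)
M  = {e, t1, t2, compose(t1, t2)}

S5 = list(permutations(range(5)))
N  = [g for g in S5 if {conjugate(g, m) for m in M} == M]
orbit = {frozenset(conjugate(g, m) for m in M) for g in S5}
print(len(N), len(S5) // len(N), len(orbit))   # 8 15 15
```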
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9514278173446655, "perplexity": 181.74982969600234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.electrosmash.com/forum/pedal-pi/260-how-to-launch-and-effect-just-powering-pi-zero
|
Ray (Moderator):

There are several ways to automatically launch an effect (program) right after powering Pedal-Pi (Raspberry Pi Zero). This is how I do it, but there are other ways to do the same thing; I got the info from here: https://www.dexterindustries.com/howto/run-a-program-on-your-raspberry-pi-at-startup/

So, the way I do it: I compile the effect I want to use (more info here: Create, Edit and Compile any Code). Then I edit rc.local:

    sudo vim /etc/rc.local

and insert the effect I would like to play when powering Pedal-Pi, in this case the "multi" effect:

    sudo /home/pi/Pedal-Pi-All-Effects/multi &

It is done. It will work, but you can go a bit further: as we explain in "Reduce Noise in Pedal-Pi", the pedal sounds much better if you disable the wifi. If you use a standard Pi Zero (not W), just unplug the dongle. If you use a W model, check the next step.

To disable wifi in a W Raspberry Pi Zero, the only other thing I can think of is to disable the loading of the drivers, as they explain here: https://www.raspberrypi.org/forums/viewtopic.php?t=138610 — edit /etc/modprobe.d/raspi-blacklist.conf and insert:

    #wifi
    blacklist brcmfmac
    blacklist brcmutil
    #bt
    blacklist btbcm
    blacklist hci_uart

Done!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7277091145515442, "perplexity": 3633.101543612041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672313.95/warc/CC-MAIN-20191123005913-20191123034913-00445.warc.gz"}
|
http://mathoverflow.net/questions/400/a-gentleman-never-chooses-a-basis/6458
|
# “A gentleman never chooses a basis.”
Around these parts, the aphorism "A gentleman never chooses a basis," has become popular.
Is there a gentlemanly way to prove that the natural map from V to V** is surjective if V is finite dimensional?
As in life, the exact standards for gentlemanliness are a bit vague. Some arguments seem to implicitly pick a basis. I'm hoping there's an argument which is unambiguously gentlemanly.
-
I'm having trouble coming up with a sufficiently patriarchal argument. Does "these parts" refer to the pre-suffrage era? – S. Carnahan Oct 13 '09 at 3:31
That's fair. I should have gone for something more gender neutral. Although "gentlemanly/ladylike" is a bit awkward, and something like "classy" doesn't have the same anachronistic feel. Any suggestions? – Richard Dore Oct 13 '09 at 18:50
My personal preference is to avoid any reference to gender or class (or indeed membership in any group associated to historical persecution - e.g., we don't say that bases are for Jewish or homosexual people). This may make your question seem less colorful, but I think it is worthwhile to make mathematics more welcoming to people of all kinds. If you're still looking for an obnoxious elitist tone, I suggest replacing "gentleman" with "true mathematician" and "gentlemanly" with "mathematically cultured". – S. Carnahan Oct 14 '09 at 1:55
In my mind, "gentleman" refers to politeness rather than social class, but I can see where the problem comes from. Perhaps a good alternative is "my mommy said it's not polite to choose a basis." My mom didn't tell me that, so as a kid, I chose bases left and right; now I regret it. – Anton Geraschenko Oct 14 '09 at 14:33
This doesn't constitute a proof, but: Suppose that the result of a certain proof looks obvious in notation A, but deep and mysterious in notation B. This is usually a reason to prefer notation A. In Penrose's abstract index notation, which doesn't require a choice of basis, mapping one-dimensional space V to V* takes element $x_a$ to element $x^a$. If you then continue with V* to V**, you take $x^a$ to (drumroll, plese) $x_a$. If the mapping from V to V** wasn't surjective (and, in fact, an isomorphism) then abstract index notation would be inconsistent. – Ben Crowell Sep 18 '12 at 4:12
Following up on Qiaochu's query, one way of distinguishing a finite-dimensional $V$ from an infinite one is that there exists a space $W$ together with maps $e: W \otimes V \to k$, $f: k \to V \otimes W$ making the usual triangular equations hold. The data $(W, e, f)$ is uniquely determined up to canonical isomorphism, namely $W$ is canonically isomorphic to the dual of $V$; the $e$ is of course the evaluation pairing. (While it is hard to write down an explicit formula for $f: k \to V \otimes V^*$ without referring to a basis, it is nevertheless independent of basis: it is the same map no matter which basis you pick, and thus canonical.) By swapping $V$ and $W$ using the symmetry of the tensor, there are maps $V \otimes W \to k$, $k \to W \otimes V$ which exhibit $V$ as the dual of $W$, hence $V$ is canonically isomorphic to the dual of its dual.
Just to be a tiny bit more explicit, the inverse to the double dual embedding $V \to V^{**}$ would be given by
$$V^{\ast\ast} \to V \otimes V^* \otimes V^{\ast\ast} \to V$$
where the description of the maps uses the data above.
-
OK, great! So you can define finite-dimensionality without mentioning bases (or chains of subspaces). The answer to the question is then easy. But this recasting of the definition of finite-dimensionality is, I think, much the most interesting thing. – Tom Leinster Oct 21 '09 at 22:54
Yes, there a number of ways one might think of characterizing finite-dimensionality (including being isomorphic to its double dual!), Noetherian/Artinian hypotheses, etc. But some of these characterizations don't port so well to modules over other commutative rings. The present characterization is equivalent to being finitely generated and projective, for any commutative ring. – Todd Trimble Oct 22 '09 at 4:15
When you say "isomorphic to its double dual" you presumably mean its algebraic double dual. – Loop Space Nov 8 '09 at 21:26
Presumably Andrew means that one almost never talks about unadorned infinite-dimensional vector spaces. An analyst naturally thinks of the dual of a finite-dimensional vector space as a special case of the continuous dual of a topological vector space, and in this situation spaces are rarely isomorphic to their double duals. – Qiaochu Yuan Nov 23 '09 at 15:24
I guess Andrew also means that, for example, Hilbert spaces <em>are</em> isomorphic to their continuous double duals. – Qiaochu Yuan Nov 23 '09 at 15:26
At the price of being too categorical for the question, one can follow up Todd's answer as follows. Consider any closed symmetric monoidal category $\mathcal{V}$ with product $\otimes$ and unit object $k$, such as vector spaces over a field $k$. Let $V$ be an object of $\mathcal{V}$ and let $DV = Hom(V,k)$. Just from formal properties of $\mathcal{V}$, there are canonical maps $\iota\colon k\to Hom(V,V)$ and $\nu\colon DV\otimes V\to Hom(V,V)$, which are the usual things for vector spaces. Say that $V$ is dualizable if there is a map $\eta\colon k\to V\otimes DV$ such that $\nu \circ \gamma \circ \eta = \iota$, where $\gamma$ is the commutativity isomorphism. Formal arguments show that $\nu$ is then an isomorphism and if $\epsilon\colon DV\otimes V \to k$ is the evaluation map (there formally), then, with $W=DV$, $\eta$ and $\epsilon$ satisfy the conditions Todd stated for $e$ and $f$. This is general enough that it can't have anything to do with bases. But restricting to vector spaces, we can choose a finite set of elements $f_i\in DV$ and $e_i\in V$ such that $\nu(\sum f_i\otimes e_i) = id$. Then it is formal that $\{e_i\}$ is a basis for $V$ with dual basis $\{f_i\}$. This proves that $V$ is finite dimensional, and the converse is clear as in Todd's answer. There is a result in Cartan-Eilenberg called the dual basis theorem that essentially points out that the precisely analogous characterization holds for finitely generated projective modules over a commutative ring $k$, with the same proof.
-
Yes, this is a nice argument, Peter. – Todd Trimble Sep 17 '12 at 18:52
To be pedantic, in the case of f.g. projective modules over a commutative ring, "dual basis theorem" is a slightly unfortunate name, since neither $\{e_i\}$ or $\{f_i\}$ are necessarily bases of $V$ or $DV$. – Peter Samuelson Sep 17 '12 at 22:54
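In the vector space case, though, the dual basis identity is easy to check directly; a small NumPy sketch (my addition, not from the thread): the columns of an invertible matrix $E$ are a basis $e_i$, the rows of $E^{-1}$ are the dual functionals $f_i$, and $\nu(\sum_i f_i\otimes e_i) = \sum_i e_i f_i^{\mathsf T} = E E^{-1} = id$.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((4, 4))   # columns e_i: a (generically invertible) basis of R^4
F = np.linalg.inv(E)              # rows f_i: the dual basis, f_i(e_j) = delta_ij

# nu(sum_i f_i (x) e_i) is the endomorphism w |-> sum_i f_i(w) e_i,
# i.e. the sum of the rank-one outer products e_i f_i^T.
nu = sum(np.outer(E[:, i], F[i, :]) for i in range(4))
assert np.allclose(nu, np.eye(4))  # it is the identity, as claimed
```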
Perhaps it would be most appropriate to answer your question with another question: how do you distinguish a finite-dimensional vector space from an infinite-dimensional one without talking about bases?
-
Every increasing (or decreasing) sequence of subspaces stabilizes in finitely many steps. – Richard Dore Oct 14 '09 at 4:57
Let me suggest the following strategy, then: to any chain of subspaces in V there is associated a dual chain in V*. If one can show that strict inclusions are sent to strict inclusions, then V and V* have the same dimension. – Qiaochu Yuan Oct 15 '09 at 18:22
I'm sorry if this should be a comment rather than answer. It is an addendum to my previous answer. I should have pointed out that, still in a general symmetric monoidal category, if $V$ is dualizable, then a formal argument also shows that the canonical map $V \to V^{**}$ (again defined formally) is an isomorphism. Also, in answer to Peter Samuelson, while the name "dual basis theorem" dates from long before my time, it does have some justification. When $\mathcal{V}$ is modules over a commutative ring $k$, if one takes a dualizable $V$ and constructs the free module $F$ on basis $\{d_i\}$ in 1-1 correspondence with the $e_i$ in my previous post, then $\alpha(v) = \sum f_i(v) d_i$ specifies a monomorphism $\alpha\colon V\to F$ such that $\pi\alpha = id$, where $\pi(d_i) = e_i$. This completes the proof that dualizable implies finitely generated projective, with a relevant basis in plain sight.
-
This can be added to your previous answer, if you like. – David Roberts Sep 18 '12 at 2:57
Fine with me. I'm not adept at adding things or changing things, as I'm sure you have noticed. Thanks. – Peter May Sep 18 '12 at 3:39
There is a canonical map $ev:V \to V^{**}$ defined by $ev(v)(\phi) = \phi(v)$. To check that it is an isomorphism in the finite dimensional setting, you can just check that it is injective, and this is evident from the definition.
-
How do you know it's surjective though? You have to know they're the same dimension. I don't know how to prove that without getting dirty with a basis. – Richard Dore Oct 13 '09 at 18:46
Some kind of solution proposal:
Let V be a n-dimensional vector space over a field (or a free R-module, where R is a commutative unital ring).
There is a morphism from $V \otimes V^*$ to $\operatorname{End}(V)$, which sends each $v \otimes \lambda$ to the endomorphism of $V$ that sends each $w$ to $\lambda(w)v$. It is surjective, since its image consists of all finite-rank endomorphisms, and in finite dimension every endomorphism has finite rank. It is a monomorphism, as you can check by calculation. So this is an isomorphism.
We can calculate the dimensions: $\dim(V \otimes V^*) = \dim(\operatorname{End}(V))$, where $\dim(V \otimes V^*) = \dim V \cdot \dim V^*$ and $\dim(\operatorname{End}(V)) = (\dim V)^2$. So the result is $n \cdot \dim V^* = n^2$ and we get $\dim V^* = n = \dim V$.
Now notice that every short exact sequence in our category splits. That implies for every monomorphism V to W, that W is isomorphic to a direct sum of V and W/V and therefore we have a dimension formula dimV + dim(V/W) = dim W. We get the result that every monomorphism from V to W with dimW=dimV is an isomorphism.
Look at the linear map ev : V to V**, which sends v to ev_v : (lambda mapsto lambda(v)), the evaluation-at-v-map. Now we make an induction: for dimV=0, the map ev is trivially an isomorphism. For dimV=n, the kernel of ev is a subspace, so we have V = ker(ev) + W with some complement W and either ker(ev)=V or ker(ev)=0 or the two subspaces have strictly smaller dimension. That would mean, by induction hypothesis, that their evaluation map, which is the restriction of the evaluation map of V, has no kernel and so we get ker(ev)=0. The case ker(ev)=V remains, where we get that V*=0 which contradicts n=dimV=dimV*.
Now ev is a monomorphism and dim(V** )=dim(V* )=dim(V), therefore ev is an isomorphism. One can check easily that this is "functorial", that is: we have a natural transformation from the identity functor to the bidual functor.
One could object that I have chosen an arbitrary flag, when I take the complement of the kernel in the induction step... but I guess without that you wouldn't use the "free" property of the modules in question, and for non-free modules there are counter-examples.
If I did something wrong, please tell me.
-
How do you prove that dim(End(V)) = n^2 without choosing a basis? – Qiaochu Yuan Oct 23 '09 at 5:31
Over a real or complex (or other similar) field, where we know that for a finite-dimensional vector space all reasonable vector-space topologies coincide... V is dense in V** in the weak topology, hence in all topologies, but the (unique) topology is also complete, so V = V**. (I think this works and avoids choosing a basis. Of course you would have to prove those other facts also without choosing a basis.)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285504817962646, "perplexity": 526.8570798147504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988051.33/warc/CC-MAIN-20150728002308-00330-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://blog.aurorasolar.com/the-basic-principles-that-guide-pv-system-costs/
|
The Basic Principles that Guide PV System Costs
Costs Associated with a PV System
In order to determine financial returns, it is important to have a solid understanding of the basic economics that dictate PV system costs. There are two general categories of PV system costs: capital costs and operation and management (O&M) costs.
Capital Costs
Capital costs refer to the fixed, one-time costs of designing and installing the system. Capital costs are categorized into hard costs and soft costs.
Hard costs are the costs of the equipment, including modules, inverters, and BOS components, as well as installation-related labor.
Soft costs include intangible costs such as permitting, taxes, customer acquisition costs, etc.
Figure 1. Cost breakdown of PV systems. Source: B. Friedman, et al., "Benchmarking Non-hardware BoS Costs for US PV Systems, Using a Bottoms-Up Approach and Installer Survey," National Renewable Energy Laboratory, Second Edition, December 2013.
Figure 1 illustrates the relationship between soft and hard costs, and breaks down hard costs into its components. According to SEIA, while hard costs have come down dramatically over the last decade, soft costs have remained largely constant.
Operation and Management Costs
O&M costs refer to costs that are associated with running and maintaining the system. These can include fuel, repairs, and operation personnel. PV systems generally have low O&M costs.
Incentives and Policies that Benefit Solar Energy
The high capital costs are one of the biggest factors that discourage people from going solar. To combat this, there are a number of incentives and policies in place to make PV systems financially competitive.
Cost-Based Incentives
Cost-based incentives, such as the Solar Investment Tax Credit (ITC), allow those who invest in a solar system to apply a tax credit towards their income tax. The incentive is determined by the cost of the system, and is independent of its performance.
Performance-Based Incentives
Performance based incentives (PBIs) encourage PV system owners to install and maintain efficient systems through payments that are based on the monthly energy production of the system.
Net Energy Metering
In addition to incentives, many states, such as California, implement a net energy metering (NEM) policy that allows consumers who generate excess electricity to be reimbursed at the then-prevailing rate of electricity. For instance, if a residential PV system produces an excess of 100 kWh over the course of the month, the owner will be reimbursed for 100 kWh at the market rate of electricity for that time period. The owner is then free to use that reimbursement credit towards electricity they consume from the grid when solar is not meeting their current energy load. Therefore, households with solar PV and NEM are able to significantly reduce their electricity bill.
Figure 2. Visualized relationship between PV energy production and household electricity use for an average home in New South Wales, Australia. Source: solarchoice.net.au
Figure 2 shows the relationship between PV electricity production and electricity consumption during the day. Note that while the PV system can generate more than enough electricity during the daytime, it can fail to deliver electricity during peak consumption hours.
Basic Financial Calculation for a Residential PV System
In return for a large upfront investment in a solar installation, homeowners that go solar benefit from a reduced monthly electricity bill. Thus, for NEM regimes the benefit of solar comes in the form of avoided costs.
For instance, assume that upon installing a rooftop PV system, a home electricity bill is reduced by $1,500 per year and the cost of the hypothetical PV system is $10,000 after incentives. In order to calculate the simple payback period, which is the approximate time for a PV system to pay for itself, we divide the cost of the PV system by the savings.
$$\text{Simple Payback Period} = \frac{\text{System Cost}}{\text{Annual Savings}} = \frac{\$10{,}000}{\$1{,}500/\text{year}} = 6.7\ \text{years}$$
Thus, the payback period for a system that costs $10,000 and reduces the electricity bill by $1,500 per year is 6.7 years.
However, a PV system can last much longer than the duration of its payback period. A typical rooftop PV system has a lifetime of about 25 years. This means that for the last 18 years of its life, after it has paid itself off, the hypothetical PV system described above will generate revenue in the form of additional savings. To calculate this revenue, we multiply the annual savings by the remaining lifetime of the system, after it has paid itself off.
$$\text{Net Revenue} = \text{Annual Savings} \times \text{Years left in lifetime after system is paid off}$$ $$\text{Net Revenue} = \$1{,}500/\text{year} \times 18.3\ \text{years} = \$27{,}450$$
Based on this simple analysis, the system will generate approximately $27,450 in savings over its lifetime. It is important to note that this is an approximation, and does not take into account factors such as maintenance costs, changes in electricity price and usage, as well as system degradation over time. The figure below shows another financial analysis for a hypothetical residential PV system. In both graphs, the y-axis is the dollar amount and the x-axis is the year. Figure 3. The cumulative (top) and annual (bottom) cash flows of a hypothetical PV system. Source: Aurora Solar. The top graph shows the cumulative cash flow of the project over time, and indicates that the project has a payback period of approximately four years. Additionally, the dollar amount in the 25th year, which is about $25,000, is the cumulative net revenue that the system generated. The bottom graph is the annual cash flow of the project. The first year is characterized by a large negative cash flow, due to the large upfront cost required to install the system; after that there is positive annual cash flow, with the exception of the 14th year, when the inverters are replaced.
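The arithmetic above is easy to reproduce; a minimal Python sketch with the article's hypothetical numbers (the variable names are mine):

```python
# Simple payback and lifetime-savings arithmetic from the example above.
system_cost = 10_000      # $ after incentives (hypothetical)
annual_savings = 1_500    # $ per year in avoided electricity costs
lifetime_years = 25       # typical rooftop PV lifetime

payback_years = system_cost / annual_savings                      # ~6.7 years
net_revenue = annual_savings * (lifetime_years - payback_years)   # $27,500
# (the article rounds the payback to 6.7 years, giving $27,450)

print(f"payback: {payback_years:.1f} years, net revenue: ${net_revenue:,.0f}")
```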
The Basic Principles that Guide PV System Costs is part of Solar PV Education 101, a six-article series that serves as an introductory primer on the fundamentals of solar PV for beginners.
• Capital Costs
• cost based incentives
• Hard costs
• net energy metering
• O&M costs
• performance based incentives
• PV System Costs
• simple payback period
• soft costs
• Solar Primer
• Solar PV Education 101
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3262239694595337, "perplexity": 2608.709094857193}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655919952.68/warc/CC-MAIN-20200711001811-20200711031811-00507.warc.gz"}
|
https://de.maplesoft.com/support/help/maple/view.aspx?path=type%2Ffactorial
|
type/factorial - Maple Programming Help
type/factorial
test for factorial
Calling Sequence: type(expr, '!')
Parameters
expr - any expression
Description
• This function will return true if expr is a factorial, and false otherwise. For more information about factorials, see factorial.
• An expression of the type n!, where $n$ is an integer, is not of type !, since its value is calculated before the call to the type function is executed.
• Note that the factorial function is both of type function and type !. In the function call, it is important that the exclamation mark, !, be enclosed in quotes. Missing quotes will cause a syntax error.
Examples
> type(n!, '!');
true (1)
> type(n!, function);
true (2)
> type(6!, '!');
false (3)
> type(0.5!, '!');
false (4)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420307278633118, "perplexity": 2518.5686155087265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369054.89/warc/CC-MAIN-20210304113205-20210304143205-00276.warc.gz"}
|
https://www.physicsforums.com/threads/measuring-acoustic-energy.249729/
|
# Measuring acoustic energy
1. Aug 12, 2008
### Bert Rackett
I would like to measure the relative amplitudes (energy levels) of several specific frequencies in a noise field. I thought of attaching capacitive microphones to tubes that would resonate at those frequencies. I've visited hundreds of web sites that invariably give equations for frequencies and resonant points, but say nothing about the amplitude of the response. How much larger will my response be in my tube? How large are the harmonic responses? I have several texts, but they speak qualitatively about resonances and not quantitatively. Can someone point out a text with the math?
Thank you.
Bert Rackett
2. Aug 12, 2008
### billiards
As long as you're sampling at more than twice the highest frequency you want to detect, can't you just put the signal through a fast Fourier transform? You could use something like MatLab to do this and plot the power spectrum quite easily.
It would be interesting to see how the power spectrum changes in relation to the position of your microphone. You might expect to see notches at frequencies with wavelengths that destructively interfere with reflections off of the surrounding walls.
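For what it's worth, here is a minimal sketch of that suggestion in Python/NumPy rather than MatLab (the signal below is synthetic, standing in for the recorded noise field; the sample rate and tone frequencies are my own choices):

```python
# Estimate the relative power at specific frequencies of a sampled signal.
import numpy as np

fs = 8000                                    # sample rate, Hz (> 2x highest frequency of interest)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)   # background noise

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(spectrum) ** 2 / x.size       # power spectrum (arbitrary normalization)

for f0 in (440, 1000):                       # relative energy at the target frequencies
    print(f0, "Hz:", power[np.argmin(np.abs(freqs - f0))])
```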
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8191079497337341, "perplexity": 1915.2264016038334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689779.81/warc/CC-MAIN-20170923213057-20170923233057-00074.warc.gz"}
|
https://www.shaalaa.com/textbook-solutions/c/selina-solutions-selina-icse-concise-physics-class-10-chapter-2-work-energy-power_381
|
# Selina solutions for Class 10 Physics chapter 2 - Work, Energy and Power
## Chapter 2 : Work, Energy and Power
Define work. Is work a scalar or a vector?
How is the work done by a force measured when (i) force is in direction of displacement, (ii) force is at an angle to the direction of displacement?
A force F acts on a body and displaces it by a distance S in a direction at an angle θ with the direction of force. (a) Write the expression for the work done by the force. (b) what should be the angle between the force and displacement to get the (i) zero work (ii) maximum work?
A body is acted upon by a force. State two condition when the work done is zero.
State the condition when the work done by a force is (i) positive, (ii) negative. Explain with the help of examples.
A body is moved in a direction opposite to the direction of force acting on it. State whether the work is done by the force or work is done against the force
When a body moves in a circular path, how much work is done by the body? Give reason.
A satellite revolves around the earth in a circular orbit. What is the work done by the force of gravity? Give reason.
In which of the following cases, is work being done?
(i) A man pushing a wall.
(ii) a coolie standing with a load of 12 kgf on his head.
(iii) A boy climbing up a staircase.
A coolie carrying a load on his head and moving on a frictionless horizontal platform does no work. Explain the reason
The work done by a fielder when he takes a catch in a cricket match is negative. Explain.
Give an example when work done by the force of gravity acting on a body is zero even though the body gets displaces from its initial position.
What are the S.I. and C.G.S units of work? How are they related? Establish the relationship.
State and define the S.I. unit of work.
Express joule in terms of erg.
A body of mass m falls down through a height h. Obtain an expression for the work done by the force of gravity.
A boy of mass m climbs up a staircase of vertical height h.
(a) What is the work done by the boy against the force of gravity?
(b) What would have been the work done if he uses a lift in climbing the same vertical height?
Define the term energy and state its S.I. unit.
What physical quantity does the electron volt (eV) measure? How is it related to the S.I. unit of that quantity?
Complete the following sentence:
1 J = __________ calorie
Name the physical quantity which is measured in calorie. How is it related to the S.I. unit of the quantity?
Define a kilowatt hour. How is it related to joule?
Define the term power. State its S.I. unit.
State two factors on which power spent by a source depends. Explain your answer with examples.
Differentiate between work and power.
Differentiate between energy and power.
State and define the S.I. unit of power.
What is horse power (H.P)? How is it related to the S.I. unit of power
Differentiate between watt and watt hour.
Name the quantity which is measured in kWh.
Name the quantity which is measured in kW.
Name the quantity which is measured in Wh.
Name the quantity which is measured in eV.
#### Page 0
MULTIPLE CHOICE TYPE:
One horse power is equal to:
(a) 1000 W
(b) 500 W
(c) 764 W
(d) 746 W
MULTIPLE CHOICE TYPE:
kWh is the unit of:
(a) power
(b) force
(c) energy
(d) none of these
#### Page 0
A body, when acted upon by a force of 10 kgf, gets displaced by 0.5 m. Calculate the work done by the force, when the displacement is (i) in the direction of force, (ii) at an angle of 60° with the force, and (iii) normal to the force. (g = 10 N kg-1)
A boy of mass kg runs upstairs and reaches the 8 m high floor in 5 s. Calculate:
(i) the force of gravity acting on the boy,
(ii) the work done by him against gravity,
(iii) the power spent by the boy.
(Take g = 10 m s-2)
It takes 20 s for a person A to climb up the stairs, while another person B does the same in 15 s. Compare the (i) Work done and (ii) power developed by the persons A and B.
A boy weighing 350 N runs up a flight of 30 steps, each 20 cm high, in 1 minute. Calculate:
(i) the work done and
(ii) power spent.
A man spends 6.4 kJ energy in displacing a body by 64 m in the direction in which he applies force, in 2.5 s. Calculate:
(i) the force applied and
(ii) the power Spent (in H.P) by the man.
A weight lifter lifts a load of 200 kgf to a height of 2.5 m in 5 s. Calculate: (i) the work done, and (ii) the power developed by him. Take g = 10 N kg-1
A machine raises a load of 750 N through a height of 16 m in 5 s. Calculate:
(i) energy spent by machine,
(ii) power at which the machine works.
An electric heater of power 3 KW is used for 10 h. How much energy does it consume? Express your answer in (i) kWh, (ii) joule.
A boy of mass 40 kg runs up a flight of 15 steps each 15 cm high in 10 s. Find:
(i) the work done and
(ii) the power developed by him
Take g = 10 N kg^-1
A water pump raises 50 litres of water through a height of 25 m in 5 s. Calculate the power which the pump supplies.
(Take g = 10 N kg^-1 and density of water = 1000 kg m^-3)
A man raises a box of mass 50 kg to a height of 2 m in 2 minutes, while another man raises the
same box to the same height in 5 minutes. Compare:
(i) the work done and
(ii) the power developed by them.
A pump is used to lift 500 kg of water from a depth of 80 m in 10 s. Calculate:
(a) the work done by the pump
(b) the power a which the pump works,
(c) the power rating of the pump if its efficiency is 40% (Take g = 10 m s^-2)
An ox can apply a maximum force of 1000 N. It is taking part in a cart race and is able to pull the cart at a constant speed of 30 m s^-1 while making its best effort. Calculate the power developed by the ox.
If the power of a motor is 40 kW, at what speed can it raise a load of 20,000 N?
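Two of the numericals above, worked as a minimal Python sketch (the values and g = 10 m s^-2 are taken from the questions themselves; this is a checking aid, not a substitute for the worked solutions):

```python
import math

# First numerical above: work done by a 10 kgf force over 0.5 m at several angles.
F = 10 * 10            # 10 kgf in newtons, with g = 10 N kg^-1
S = 0.5                # displacement in metres
for theta in (0, 60, 90):
    W = F * S * math.cos(math.radians(theta))   # W = F S cos(theta)
    print(f"theta = {theta:2d} deg: W = {W:.1f} J")   # 50.0 J, 25.0 J, 0.0 J

# Water-pump numerical: 50 litres raised 25 m in 5 s.
m = 50                 # 50 L of water = 50 kg at 1000 kg m^-3
P = m * 10 * 25 / 5    # P = mgh / t
print(f"pump power = {P:.0f} W")                      # 2500 W
```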
#### Page 0
What are the two forms of mechanical energy?
Name the forms of energy which a wound-up watch spring possesses.
Name the type of energy (kinetic energy K or potential energy U) possessed in the given cases:
A moving cricket ball
Name the type of energy (kinetic energy K or potential energy U) possessed in the given cases:
A compressed spring
Name the type of energy (kinetic energy K or potential energy U) possessed in the given cases:
A moving bus
Name the type of energy (kinetic energy K or potential energy U) possessed in the given cases:
The bob of a simple pendulum at its extreme position.
Name the type of energy (kinetic energy K or potential energy U) possessed in the given cases:
The bob of a simple pendulum at its mean position.
Name the type of energy (kinetic energy K or potential energy U) possessed in the following case:
A piece of stone places on the roof.
When an arrow is shot from a bow, it has kinetic energy in it. Explain briefly from where it gets its kinetic energy.
Define the term potential energy of a body.
State different forms of potential energy and give one example of each.
A ball is placed on a compressed spring. What form of energy does the spring possess? On releasing the spring, the ball flies away. Give a reason.
What is meant by the gravitational potential energy? Derive expression for it.
Write an expression for the potential energy of a body of mass m places at a height h above the earth’s surface.
Name the form of energy which a body may possess even when it is not in motion. Give an example to support your answer.
What do you understand by the kinetic energy of a body?
A body of mass m is moving with a velocity v. Write the expression for its kinetic energy.
State the work energy theorem.
A body of mass m is moving with a uniform velocity u. A force is applied on the body due to which its velocity changes from u to v. How much work is being done by the force?
A light mass and a heavy mass have equal momentum. Which will have more kinetic energy?
(Hint: Kinetic energy K = P^2/2m, where P is the momentum)
Name the three forms of kinetic energy and give on example of each.
Differentiate between the potential energy (U) and the kinetic energy (K)
Complete the following sentence:
The kinetic energy of a body is the energy by virtue of its………….
Complete the following sentence:
The potential energy of a body is the energy by virtue of its ……………….
Is it possible that no transfer of energy may take place even when a force is applied to a body?
Name the form of mechanical energy, which is put to use.
In what way does the temperature of water at the bottom of a waterfall differ from the temperature at the top? Explain the reason.
Name six different forms of energy?
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
the unwinding of a watch spring
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
a loaded truck when started and set in motion.
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
a car going uphill
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
photosynthesis in green leaves.
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
Charging of a battery.
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
respiration
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
burning of a match stick
Energy can exist in several forms and may change from one form to another. For the following, state the energy changes that occur in:
explosion of crackers.
State the energy changes in the following case while in use:
loudspeaker
State the energy changes in the following case while in use:
a steam engine
State the energy changes in the following case while in use:
microphone
State the energy changes in the following case while in use:
washing machine
State the energy changes in the following case while in use:
an electric bulb
State the energy changes in the following case while in use:
burning coal
State the energy changes in the following case while in use:
a solar cell
State the energy changes in the following case while in use:
bio-gas burner
State the energy changes in the following case while in use:
an electric cell in a circuit
State the energy changes in the following case while in use:
a petrol engine of a running car
State the energy changes in the following case while in use:
an electric toaster.
State the energy changes in the following case while in use:
a photovoltaic cell.
State the energy changes in the following case while in use:
an electromagnet.
#### Page 0
MULTIPLE CHOICE TYPE
A body at a height possesses:
(a) kinetic energy
(b) potential energy
(c) solar energy
(d) heat energy
MULTIPLE CHOICE TYPE
In an electric cell which in use, the change in energy is from:
(a) electrical to mechanical
(b) electrical to chemical
(c) chemical to mechanical
(d) chemical to electrical
#### Page 0
Two bodies of equal masses are placed at heights h and 2h. Find the ratio of their gravitational potential energies.
Find the gravitational potential energy of 1 kg mass kept at a height of 5 m above the ground if g = 10 m s-2.
A box of weight 150 kgf has gravitational potential energy stored in it equal to 14700 J. Find the height of the box above the ground. (Take g = 9.8 N kg-1)
A body of mass 5 kg falls from a height of 10 m to 4 m. Calculate: (i) the loss in potential energy of the body, (ii) the total energy possessed by the body at any instant? (Take g = 10 m s-2)
Calculate the height through which a body of mass 0.5 kg is lifted if the energy spent in doing so is 1.0 J. Take g = 10 m s-2.
A boy weighing 25 kgf climbs up from the first floor at height 3 m above the ground to the third floor at height 9m above the ground. What will be the increase in his gravitational potential energy? (Take g = 10 N kg -1)
A vessel containing 50 kg of water is placed at a height 15 m above the ground. Assuming the gravitational potential energy at ground to be zero, what will be the gravitational potential energy of water in the vessel? (g = 10 m s-2)
A man of mass 50 kg climbs up a ladder of height 10 m. Calculate: (i) the work done by the man, (ii) the increase in his potential energy. (g = 9.8 m s-2)
A block A, whose weight is 200 N, is pulled up a slope of length 5 m by means of a constant force F (= 150 N) as illustrated in Fig 2.13
(a) what is the work done by the force F in moving the block A, 5 m along the slope?
(b) By how much has the potential energy of the block A increased?
(c) Account for the difference in work done by the force and the increase in potential energy of the block.
Find the kinetic energy of a body of mass 1 kg moving with a uniform velocity of 10 m s-1.
If the speed of a car is halved, how does its kinetic energy change?
Two bodies of equal masses are moving with uniform velocities v and 2v. Find the ratio of their kinetic energies.
A car is running at a speed of 15 km h-1 while another similar car is moving at a speed of 30 km h-1. Find the ratio of their kinetic energies.
A bullet of mass 0.5 kg slows down from a speed of 5 m s-1 to that of 3 m s-1. Calculate the change in kinetic energy of the ball.
A cannon ball of mass 500 g is fired with a speed of 15 m s-1. Find: (i) its kinetic energy and (ii) its momentum.
A bullet of mass 50 g is moving with a velocity of 500 m s-1. It penetrated 10 cm into a still target and comes to rest. Calculate: (a) the kinetic energy possessed by the bullet, (b) the average retarding force offered by the target.
A body of mass 10 kg is moving with a velocity 20 m s-1. If the mass of the body is doubled and its velocity is halved, find the ratio of the initial kinetic energy to the final kinetic energy.
A truck weighing 1000 kgf changes its speed from 36 km h-1 to 72 km h-1 in 2 minutes. Calculate: (i) the work done by the engine and (ii) its power. (g = 10 m s-2)
A body of mass 60 kg has the momentum 3000 kg m s-1. Calculate: (i) the kinetic energy and (ii) the speed of the body.
How much work is needed to be done on a ball of mass 50 g to give it a momentum of 500 g cm s-1?
How much energy is gained by a box of mass 20 kg when a man
(a) carrying the box waits for 5 minutes for a bus?
(b) runs carrying the box with a speed of 3 m s-1 to catch the bus?
(c) Raises the box by 0.5 m in order to place it inside the bus? (g = 10 m s-2)
A spring is kept compressed by a small trolley of mass 0.5 kg lying on a smooth horizontal surface, as shown in the adjacent figure. When the trolley is released, it is found to move at a speed v = 2 m s-1. What potential energy did the spring possess when compressed?
#### Page 0
State two characteristic which a source of energy must have.
Name the two groups in which various sources of energy are classified. State on what basis are they classified.
What is meant by the renewable and non-renewable sources of energy? Distinguish between them, giving two examples of each.
Select the renewable and non-renewable sources of energy from the following:
Coal
Select the renewable and non-renewable sources of energy from the following:
Wood
Select the renewable and non-renewable sources of energy from the following:
Water
Select the renewable and non-renewable sources of energy from the following:
Diesel
Select the renewable and non-renewable sources of energy from the following:
Wind
Select the renewable and non-renewable sources of energy from the following:
Oil
Why is the use of wood as a fuel not advisable although wood is a renewable source of energy?
Name five renewable sources of energy.
Name three non-renewable sources of energy.
What is tidal energy? Explain in brief.
What is ocean energy? Explain in brief.
What is geothermal energy? Explain in brief.
What is the main source of energy for earth?
What is solar energy?
How is the solar energy used to generate electricity in a solar power plant?
What is a solar cell?
State whether a solar cell produces a.c. or d.c.
Give one disadvantage of using a solar cell.
State two advantages of producing electricity from solar energy.
State two disadvantages of producing electricity from solar energy.
What is wind energy?
How is wind energy used to produce electricity?
How much electric power is generated in India using the wind energy?
State two advantages of using wind energy for generating electricity.
State two disadvantages of using wind energy for generating electricity.
What is hydro energy?
Explain the principle of generating electricity from hydro energy.
How much hydroelectric power is generated in India?
State two advantage of producing hydroelectricity.
State two disadvantages of producing hydroelectricity.
What is nuclear energy?
Explain the principle of producing electricity using the nuclear energy.
Name two places in India where electricity is generated from nuclear power plants.
State two advantages of using nuclear energy for producing electricity.
State two disadvantages of using nuclear energy for producing electricity.
State the energy transformation on the following:
Electricity is obtained from solar energy.
State the energy transformation on the following:
Electricity is obtained from wind energy.
State the energy transformation on the following:
Electricity is obtained from hydro energy.
State the energy transformation on the following:
Electricity is obtained from nuclear energy.
State four ways for the judicious use of energy.
#### Page 0
MULTIPLE CHOICE TYPE:
The ultimate source of energy is:
(a) wood
(b) wind
(c) water
(d) sun
Renewable source of energy is:
Coal
fossil fuels
natural gas
sun
#### Page 0
State the law of conservation of energy.
What do you understand by the conservation of mechanical energy?
State the condition under which the mechanical energy is conserved.
Name two examples in which the mechanical energy of a system remains constant.
A body is thrown vertically upwards. Its velocity keeps on decreasing. What happens to its kinetic energy as its velocity becomes zero?
A body falls freely under gravity from rest. Name the kind of energy it will possess at the point from where it falls.
A body falls freely under gravity from rest. Name the kind of energy it will possess while falling.
A body falls freely under gravity from rest. Name the kind of energy it will possess on reaching the ground.
Show that the sum of kinetic energy and potential energy (i.e., total mechanical energy) is always conserved in the case of a freely falling body under gravity (with air resistance neglected) from a height h by finding it when (i) the body is at the top, (ii) the body has fallen a distance x, (iii) the body has reached the ground.
A pendulum is oscillating on either side of its rest position. Explain the energy changes that take place in the oscillating pendulum. How does the mechanical energy remain constant in it? Draw the necessary diagram.
A pendulum with a bob of mass m is oscillating on either side from its resting position A between the extremes B and C at a vertical height h above A. What is the kinetic energy K and potential energy U when the pendulum is at position (i) A, (ii) B and (iii) C?
What do you mean by degradation of energy?
Explain degradation of energy by taking two examples of your daily life.
#### Page 0
MULTIPLE CHOICE TYPE:
A ball of mass m is thrown vertically up with an initial velocity so as to reach a height h. The correct statement is:
(a) Potential energy of the ball at the ground is mgh.
(b) Kinetic energy imparted to the ball at the ground is zero.
(c) Kinetic energy of the ball at the highest point is mgh.
(d) potential energy of the ball at the highest point is mgh.
MULTIPLE CHOICE TYPE:
A pendulum is oscillating on either side of its rest position. The correct statement is:
(a) It has only the kinetic energy.
(b) it has the maximum kinetic energy at its extreme position.
(c) it has the maximum potential energy at its rest position.
(d) The sum of its kinetic and potential energies remains constant throughout the motion.
#### Page 0
A ball of mass 0.20 kg is thrown vertically upwards with an initial velocity of 20 m s-1. Calculate the maximum potential energy it gains as it goes up.
A stone of mass 500g is thrown vertically upwards with a velocity of 15 m s-1. Calculate: (a) the potential energy at the greatest height, (b) the kinetic energy on reaching the ground, (c) the total energy at its half-way point.
A metal ball of mass 2 kg is allowed to fall freely from rest from a height of 5m above the ground.
(Take g = 10 m s-2)
(a) Calculate the potential energy possessed by the ball when initially at rest.
(b) what is the kinetic energy of the ball just before it hits the ground?
(c) what happens to the mechanical energy after the ball hits the ground and comes to rest?
The diagram given below shows a ski jump. A skier weighing 60 kgf stands at A at the top of ski jump. He moves from A to B and takes off for his jump at B.
(a) Calculate the change in the gravitational potential energy of the skier between A and B.
(b) If 75% of the energy in part (a) becomes kinetic energy at B. Calculate the speed at which the skier arrives at B.
(Take g=10 m s-2)
A hydroelectric power station takes its water from a lake whose water level is 50 m above the turbine. Assuming an overall efficiency of 40%, calculate the mass of water which must flow through the turbine each second to produce a power output of 1 MW.
The bob of a simple pendulum is imparted a velocity 5 m s-1 when it is at its mean position. To what maximum vertical height will it rise on reaching its extreme position if 60% of its energy is lost in overcoming air friction?
## Selina solutions for Class 10 Physics chapter 2 - Work, Energy and Power
Selina solutions for Class 10 Physics chapter 2 (Work, Energy and Power) include all questions with solutions and detailed explanations. This will clear students' doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear your confusion, if any. Shaalaa.com has the CISCE Selina ICSE Concise Physics for Class 10 solutions in a manner that helps students grasp basic concepts better and faster.
Further, we at shaalaa.com are providing such solutions so that students can prepare for written exams. Selina textbook solutions can be a core help for self-study and act as perfect self-help guidance for students.
Concepts covered in Class 10 Physics chapter 2 Work, Energy and Power are Concept of Work, Energy, Power (Sum, Numericals), Different Types of Energy, Work, Energy, Power - Relation with Force, Concept of Work, Energy, Power.
Using Selina Class 10 solutions for the Work, Energy and Power exercises is an easy way for students to prepare for the exams, as they involve solutions arranged chapter-wise and also page-wise. The questions involved in Selina Solutions are important questions that can be asked in the final exam. Most students of CISCE Class 10 prefer Selina Textbook Solutions to score more in exams.
Get the free view of chapter 2 Work, Energy and Power Class 10 extra questions for Physics, and use shaalaa.com to keep it handy for your exam preparation.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9483938813209534, "perplexity": 849.8727745978515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530505.30/warc/CC-MAIN-20190421080255-20190421102255-00086.warc.gz"}
|
https://www.gamedev.net/forums/topic/134887-regular-expressions-in-visual-studio/
|
# Regular Expressions in Visual Studio
## Recommended Posts
Hi! I just found out you can write regular expressions inside Visual Studio "Find/Replace" dialog boxes. Wow, this is really useful! However, when I try to include the newline \n inside the expression, it just acts as if I typed the letter n! Does anyone know how to include the newline character? Thanks a lot! Raj
Maybe $ or ^ will work (I can't remember which matches end of line)?

Yeah, they're done on a line-by-line basis (like grep) so you need to use the $ character.
If I had my way, I'd have all of you shot!
codeka.com - Just click it.
Hmm thanks a lot! That seems to work for finding the end of the line. When I try to actually insert an end-of-line though, using the Replace dialog box, it just inserts a dollar sign. Any idea of how to get around that?
Thanks a lot!
Edit: By the way, it might help to show what I'm trying to do. Basically, I'm trying to generate empty functions from the prototypes, because this saves a lot of redundant manual editing. So, say I had this bunch of functions taken from a "Graphics" class:
Graphics();
void SetMode(int modeNumber);
void VSync();
Then, I could use regular expressions to convert this into:
Graphics::Graphics()
{
}
void Graphics::SetMode(int modeNumber)
{
}
void Graphics::VSync()
{
}
But, since I can't get newlines to work on the "Replace" dialog box, all I end up with is stuff like this:
Graphics::Graphics() {}
void Graphics::SetMode(int modeNumber) {}
void Graphics::VSync() {}
Thanks =)
Raj
[edited by - Rajansky on January 21, 2003 9:35:11 AM]
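The intended transformation is easy in a scripting language, where multiline replacement is not a problem. A minimal Python sketch (the class name Graphics and the prototypes are the ones from the post above; the regex pattern is my own illustrative one, assuming simple one-line prototypes, and is not something the VS 6 dialog supports):

```python
import re

prototypes = """Graphics();
void SetMode(int modeNumber);
void VSync();"""

# Prefix the class name to each function and expand the trailing ';' into an
# empty body. Groups: optional return type, function name, parameter list.
stub = re.sub(
    r'^(\w[\w\s\*&]*?\s)?(\w+)\(([^)]*)\);',
    lambda m: f"{m.group(1) or ''}Graphics::{m.group(2)}({m.group(3)})\n{{\n}}\n",
    prototypes,
    flags=re.MULTILINE,
)
print(stub)
```

Running this prints exactly the `Graphics::...() { }` stubs shown in the post, including the newlines the Replace dialog refuses to insert.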
You're going to have to go back and edit it anyway so you could leave it as it is. You'll need to make sure you return the correct type for non-void returns. Not hard, but a bit precarious if you ask me, you'll have code which compiles nicely enough but does nothing.
If it's backed up with tests and you're following the practice of writing code which deliberately fails first time through, just to get the tests in place, then you'll have a better chance of catching the bugs.
I guess you could cut and paste a new line from the text editor into the replace box. I do that a bit with tabs (which you can't type into there either).
If I had my way, I'd have all of you shot!
codeka.com - Just click it.
you guess wrongly
Actually, you can do tabs, with \t
Ahh oh well I give up, I don't think it can be done in VS 6. I guess maybe there's some better way to do it using Visual Studio macros.
Raj
use perl or ruby
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19741356372833252, "perplexity": 3083.3612779388595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814833.62/warc/CC-MAIN-20180223194145-20180223214145-00258.warc.gz"}
|
https://physics.stackexchange.com/questions/171256/whats-the-difference-between-nmr-and-epr
|
# What's the difference between NMR and EPR?
Both NMR and EPR describe the response of magnetic spin to an external field. When collecting data, how do you know you're looking at a nucleus spin flip or an electron spin flip? In other words, since every sample has both protons and electrons, and all have magnetic spin, how do you separate the protons' response from the electrons' response to the external perturbation?
• by the resonant frequency? Same way you separate $-CH_2-C^*H_3$ from $-C^*H_2-CH_3$ – aaaaaa Mar 19 '15 at 18:48
• @aandreev, how different is the res. freq.? – Sparkler Mar 19 '15 at 19:03
• @Sparkler, the Wikipedia links you provided state that the frequency is similar to ... (60–1000 MHz) about NMR, and the great majority of EPR measurements are made with microwaves in the 9000–10000 MHz (9–10 GHz) region. So unless I've missed something, it seems to be one to two orders of magnitude in difference? – jabirali Mar 19 '15 at 20:32
• @jabirali, thanks, I missed that part. But why the freq. is different? because of the mass? – Sparkler Mar 19 '15 at 20:36
• @Sparkler - Short answer: Electrons have more/higher spin angular momentum. That is not a very satisfying answer, but to normal quantum mechanics spin is an intrinsic property, so you would need something like extra dimensions or something entirely different to describe it better.. – nsandersen Apr 16 '16 at 11:32
The electron magnetic moment is about 660 times larger than that of the proton, and the proton's magnetic moment is the largest of all the nuclei. Although most electrons occur in pairs, unpaired electrons, as they occur in radicals, give rise to electron paramagnetic resonance (EPR) signals.
Signal frequencies in magnetic resonance are, to a very good approximation, proportional to the magnetic moment (unless the external magnetic field becomes very weak or in the case of large quadrupolar splittings).
In a typical nuclear magnetic resonance (NMR) experiment one would thus observe either the proton or the $^{13}$C carbon NMR signal (much like listening to different FM radios). For a 10 Tesla magnet, these would have frequencies of approximately 400 MHz and 100 MHz, respectively. It is possible to excite proton or carbon NMR simultaneously, but this requires two channels, tuned to the respective (radio) frequencies.
On the other hand, an electron spin would precess at a (microwave) frequency approaching 300 GHz, requiring different excitation and detection pathways (waveguides rather than coaxial cables, and cavity resonators rather than $LC$-resonators).
However, the presence of free electron spins may manifest itself in the NMR detection via reducing the relaxation time $T_1$, a phenomenon known as paramagnetic relaxation enhancement (PRE).
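A quick numerical check of the frequencies quoted in this answer, using the Larmor relation $\nu = \gamma B / 2\pi$ (a Python sketch; the gyromagnetic ratios are standard tabulated values):

```python
from math import pi

B = 10.0  # tesla, as in the answer above

gammas = {                 # gyromagnetic ratios in rad s^-1 T^-1 (tabulated values)
    "1H (proton)": 2.675e8,
    "13C":         6.728e7,
    "electron":    1.761e11,
}

for species, gamma in gammas.items():
    nu = gamma * B / (2 * pi)   # Larmor frequency
    print(f"{species}: {nu / 1e6:,.0f} MHz")
# ~426 MHz, ~107 MHz and ~280,000 MHz (~0.28 THz), matching the "approximately
# 400 MHz", "100 MHz" and "approaching 300 GHz" figures quoted above.
```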
> since every sample has both protons and electrons, and all have magnetic spin

But the electron spins usually cancel out, because normal matter only has electrons in Pauli pairs. EPR is restricted to radicals in organic chemistry, or transition metal complexes, or O2 gas :=)
• To add a couple of sentences, the electrons are typically paired in shared orbitals - can't help think of the slight similarity that only nuclei with an odd number of protons are observable by NMR. – nsandersen Apr 16 '16 at 11:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7651034593582153, "perplexity": 591.8853832075374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574182.31/warc/CC-MAIN-20190921022342-20190921044342-00473.warc.gz"}
|
https://mathoverflow.net/questions/249183/a-graph-spectra-problem/249211
|
# A graph spectra problem?
The composition $G=G_1[G_2]$ of graphs $G_1$ and $G_2$ with disjoint point sets $V_1$ and $V_2$ and edge sets $X_1$ and $X_2$ is the graph with vertex set $V_1 \times V_2$, in which $u=(u_1,u_2)$ is adjacent to $v=(v_1,v_2)$ whenever $u_1$ is adjacent to $v_1$, or $u_1=v_1$ and $u_2$ is adjacent to $v_2$.
Does anyone know how the spectrum of this graph is related to the eigenvalues of $G_1$ and $G_2$?
The adjacency matrix of the product is $A_1 \otimes J + I \otimes A_2$, where $J$ is the all ones matrix of size $n = |V(G_2)|$ and $I$ is the identity matrix of size $m = |V(G_1)|$. The two matrices in the sum commute if and only if $G_2$ is regular, and in this case you can compute the eigenvalues of $G_1[G_2]$ easily. In particular, if $\lambda_1 \ge \ldots \ge \lambda_m$ and $\mu_1 \ge \ldots \ge \mu_n$ are the eigenvalues of $G_1$ and $G_2$ respectively, then whenever $G_2$ is regular the eigenvalues of $G_1[G_2]$ are $\lambda_i n + \mu_1$ for all $i \in [m]$ and $\mu_j$ with multiplicity $m$ for all $j \in [n]\setminus \{1\}$. Note that some of the $\mu_j$'s may be repeated so their actual multiplicity will be some multiple of $m$.
If $G_2$ is not regular then you are probably going to have harder time writing the eigenvalues of the product in terms of the eigenvalues of the factors.
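A quick numerical sanity check of this formula (a sketch in Python/NumPy; the choice of $G_1 = P_3$ and $G_2 = C_4$ is mine, picked because $C_4$ is regular):

```python
import numpy as np

A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # G1 = P3
A2 = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
               [0, 1, 0, 1], [1, 0, 1, 0]], float)        # G2 = C4, 2-regular
m, n = len(A1), len(A2)

# Adjacency matrix of the composition G1[G2]: A1 (x) J + I (x) A2.
A = np.kron(A1, np.ones((n, n))) + np.kron(np.eye(m), A2)

lam = np.sort(np.linalg.eigvalsh(A1))[::-1]   # eigenvalues of G1, descending
mu = np.sort(np.linalg.eigvalsh(A2))[::-1]    # eigenvalues of G2, descending

# Predicted spectrum: lambda_i * n + mu_1 (m values), plus each mu_j, j >= 2,
# with multiplicity m -- in total m + (n-1)m = mn eigenvalues.
predicted = sorted(list(lam * n + mu[0]) + list(np.repeat(mu[1:], m)),
                   reverse=True)
actual = np.sort(np.linalg.eigvalsh(A))[::-1]
print(np.allclose(predicted, actual))          # True
```

Note the count in the comment: $m + (n-1)m = mn$, so the formula does produce the right number of eigenvalues.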
• Roberson I have a confusion here. In this product we should have $mn$ eigenvalues with multiplicity. But with your computation we'll have $m+nm=m(n+1)$. What is the reason? Can you clear that for me? Vahid – user91523 Sep 10 '16 at 15:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9486737847328186, "perplexity": 57.33387936791676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00029.warc.gz"}
|
https://asmedigitalcollection.asme.org/FEDSM/proceedings-abstract/FEDSM2002/36150/985/298203
|
Heating up experiments at the secondary pools side of the NOKO test facility were performed, to investigate mixed convection phenomena. The NOKO test facility was designed to investigate the heat transfer capability of an emergency condenser and was operated in the Research Centre Jülich. In the Forschungszentrum Rossendorf the heating up tests were analyzed by CFD-simulations using the AEA-Technology code CFX-4. Applying the Boussinesq approximation the simulation of the heating up process is possible, at least qualitatively. Using the laminar approach, temperature oscillations caused by plumes could be simulated. A further test series performed at Forschungszentrum Rossendorf deals with the investigation of transient boiling. Heating up a 10 l water tank from the side walls, the temperatures and the void fractions at different locations in the tank were measured. CFX-4 simulations using the implemented boiling model reproduce and explain the observed phenomena. Convergence problems occurred with higher vapor volume fractions.
This content is only available via PDF.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881755471229553, "perplexity": 2795.8762462167624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00181.warc.gz"}
|
https://encyclopediaofmath.org/wiki/Artin-Schreier_theorem
|
Artin-Schreier theorem
2010 Mathematics Subject Classification: Primary: 12E10 [MSN][ZBL]
The Artin–Schreier theorem for extensions $K$ of degree $p$ of a field $F$ of characteristic $p>0$ states that every such Galois extension is of the form $K = F(\alpha)$, where $\alpha$ is the root of a polynomial of the form $X^p - X - a$, an Artin–Schreier polynomial.
The function $A : X \mapsto X^p - X$ is $p$-to-one since $A(x) = A(x+1)$. It is in fact $\mathbf{F}_p$-linear on $F$ as a vector space, with kernel the one-dimensional subspace generated by $1_F$, that is, $\mathbf{F}_p$ itself.
Suppose that $F$ is finite of characteristic $p$. The Frobenius map is an automorphism of $F$ and so its inverse, the $p$-th root map, is defined everywhere, and $p$-th roots do not generate any non-trivial extensions.
If $F$ is finite, then $A$ is exactly $p$-to-1 and the image of $A$ is a $\mathbf{F}_p$-subspace of codimension 1. There is always some element $a \in F$ not in the image of $A$, and so the corresponding Artin-Schreier polynomial has no root in $F$: it is an irreducible polynomial and the quotient ring $F[X]/\langle A_\alpha(X) \rangle$ is a field which is a degree $p$ extension of $F$. Since finite fields of the same order are unique up to isomorphism, we may say that this is "the" degree $p$ extension of $F$. As before, both roots of the equation lie in the extension, which is thus a splitting field for the equation and hence a Galois extension: in this case the roots are of the form $\beta,\,\beta+1, \ldots,\beta+(p-1)$.
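A minimal brute-force illustration of these statements over $F_5$ (plain Python; the prime $p=5$ and the choice $a=1$ are arbitrary):

```python
p = 5
A = lambda x: (x ** p - x) % p           # the Artin-Schreier map on F_p

print(sorted({A(x) for x in range(p)}))  # [0]: x^p = x on F_p (Fermat), so A vanishes
# Hence any a != 0 lies outside the image of A, and X^p - X - a has no root in F_p:
a = 1
print([x for x in range(p) if (x ** p - x - a) % p == 0])   # []
# Since an Artin-Schreier polynomial either splits or is irreducible, "no root"
# already makes X^p - X - a irreducible, and F_p[X]/(X^p - X - a) is "the"
# degree-p extension of F_p described above, with roots beta, beta+1, ..., beta+(p-1).
```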
If $F$ is a function field, these polynomials define Artin–Schreier curves, which in turn give rise to Artin–Schreier codes (cf. Artin–Schreier code).
References
[La] S. Lang, "Algebra", Addison-Wesley (1974)
Comment
This is also a name for the theorem that a field is formally real (can be ordered) if and only if $-1$ is not a sum of squares.
References
• J.W. Milnor, D. Husemöller, Symmetric bilinear forms, Ergebnisse der Mathematik und ihrer Grenzgebiete 73, Springer-Verlag (1973) p.60 ISBN 0-387-06009-X Zbl 0292.10016
This article was adapted from an original article by M. Hazewinkel (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9692834615707397, "perplexity": 212.53200420632706}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00018.warc.gz"}
|
https://www.deepdyve.com/lp/springer_journal/generalized-derivations-on-some-convolution-algebras-3vUgiLPVUx
|
# Generalized derivations on some convolution algebras
aequationes mathematicae, Volume 92 (2) – Jan 15, 2018
19 pages
Publisher
Springer International Publishing
Copyright © 2018 by Springer International Publishing AG, part of Springer Nature
Subject
Mathematics; Analysis; Combinatorics
ISSN
0001-9054
eISSN
1420-8903
D.O.I.
10.1007/s00010-017-0531-6
### Abstract
Let G be a locally compact abelian group, $$\omega$$ be a weighted function on $${\mathbb {R}}^+$$, and let $$\mathfrak {D}$$ be the Banach algebra $$L_0^\infty (G)^*$$ or $$L_0^\infty (\omega )^*$$. In this paper, we investigate generalized derivations on the noncommutative Banach algebra $$\mathfrak {D}$$. We characterize $$\textsf {k}$$-(skew) centralizing generalized derivations of $$\mathfrak {D}$$ and show that the zero map is the only $$\textsf {k}$$-skew commuting generalized derivation of $$\mathfrak {D}$$. We also investigate the Singer–Wermer conjecture for generalized derivations of $$\mathfrak {D}$$ and prove that the Singer–Wermer conjecture holds for a generalized derivation of $$\mathfrak {D}$$ if and only if it is a derivation; or equivalently, it is nilpotent. Finally, we investigate the orthogonality of generalized derivations of $$L_0^\infty (\omega )^*$$ and give several necessary and sufficient conditions for orthogonal generalized derivations of $$L_0^\infty (\omega )^*$$.
### Journal
aequationes mathematicae, Springer Journals
Published: Jan 15, 2018
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792493581771851, "perplexity": 1625.090092207823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158279.11/warc/CC-MAIN-20180922084059-20180922104459-00198.warc.gz"}
|
https://bmjopen.bmj.com/content/9/12/e031874.long
|
Emerging cancer incidence, mortality, hospitalisation and associated burden among Australian cancer patients, 1982 – 2014: an incidence-based approach in terms of trends, determinants and inequality
1. Rashidul Alam Mahumud1,2,3,4,
2. Khorshed Alam1,2,
3. Jeff Dunn1,5,6,
4. Jeff Gow1,2,7
1. 1 Health Economics and Policy Research, Centre for Health, Informatics and Economic Research, University of Southern Queensland, Toowoomba, Queensland, Australia
2. 2 School of Commerce, University of Southern Queensland, Toowoomba, Queensland, Australia
3. 3 Health Economics Research, Health Systems and Population Studies Division, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh
4. 4 Health and Epidemiology Research, Department of Statistics, University of Rajshahi, Rajshahi, Bangladesh
5. 5 Cancer Research Centre, Cancer Council Queensland, Fortitude Valley, Queensland, Australia
6. 6 Prostate Cancer Foundation of Australia, St Leonards, New South Wales, Australia
7. 7 School of Accounting, Economics and Finance, University of KwaZulu-Natal, Durban, South Africa
1. Correspondence to Mr Rashidul Alam Mahumud; Rashed.Mahumud{at}usq.edu.au
## Abstract
Objective Cancer is a leading killer worldwide, including Australia. Cancer diagnosis leads to a substantial burden on the individual, their family and society. The main aim of this study is to understand the trends, determinants and inequalities associated with cancer incidence, hospitalisation, mortality and its burden over the period 1982 to 2014 in Australia.
Settings The study was conducted in Australia.
Study design An incidence-based study design was used.
Methods Data came from the publicly accessible Australian Institute of Health and Welfare database. This contained 2 784 148 registered cancer cases over the study period for all types of cancer. Erreygers’ concentration index was used to examine the magnitude of socioeconomic inequality with regards to cancer outcomes. Furthermore, a generalised linear model was constructed to identify the influential factors on the overall burden of cancer.
Results The results showed that cancer incidence (annual average percentage change, AAPC=1.33%), hospitalisation (AAPC=1.27%), cancer-related mortality (AAPC=0.76%) and burden of cancer (AAPC=0.84%) all increased significantly over the period. The same-day (AAPC=1.35%) and overnight (AAPC=1.19%) hospitalisation rates also showed an increasing trend. Further, the ratio (least-most advantaged economic resources ratio, LMR of mortality (M) and LMR of incidence (I)) was especially high for cervix (M/I=1.802), prostate (M/I=1.514), melanoma (M/I=1.325), non-Hodgkin's lymphoma (M/I=1.325) and breast (M/I=1.318), suggesting that survival inequality was most pronounced for these cancers. Socioeconomically disadvantaged people were more likely to bear an increasing cancer burden in terms of incidence, mortality and death.
Conclusions Significant differences in the burden of cancer persist across socioeconomic strata in Australia. Policymakers should therefore introduce appropriate cancer policies to provide universal cancer care, which could reduce this burden by ensuring curable and preventive cancer care services are made available to all people.
• cancer incidence
• mortality
• cancer burden
• socioeconomic inequalities
• remoteness
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
### Strengths and limitations of this study
• This study examined the trends, determinants and inequality in terms of incidence, mortality, hospitalisation and associated burden of cancer (eg, years life lost, years lost due to disability and disability-adjusted life years) in the Australian context over a 33 year period.
• This study did not capture in detail inequalities regarding cancer survivorship in terms of stage, treatment procedures and utilisation of healthcare.
• Although we have limited understanding of what is driving these changes in cancer outcomes as reported here, they may reflect random variation or changes in unknown risk factors, and therefore highlight the need for more research into the aetiology of cancer.
## Background
Non-communicable diseases (NCDs) account for the majority of global deaths.1 Cancer is expected to rank as the most significant global public health problem and a leading cause of death and illness in the world in the 21st century,2–6 including Australia.7 In 2019, it is estimated that almost 145 000 new cases of cancer will be diagnosed in Australia, and 35% of these individuals will eventually die from the disease.7 Cancer accounts for the highest burden of disease of any illness, at approximately 18% (19% for males; 17% for females), followed by cardiovascular disease (14%), musculoskeletal (13%) and mental health (12%).8 Approximately 40% of cancer patients are of working age in Australia.7 Among those in employment, 46% are unable to return to work after an episode,9 and 67% return to employment or change their job after being diagnosed.10 The majority of surviving cancer patients depend on family, relatives and friends for physical and economic support during their treatment and/or in the last stages of the disease.9–12 Cancer-related illness results in a substantial number of patients experiencing economic hardship due to high out-of-pocket expenses (eg, medicines and treatments, including diagnostics), lost productivity, loss/reduction of household income and other induced expenditure.9 10 12 13 The economic burden of cancer is of growing concern for policymakers, healthcare practitioners, physicians, employers and society overall.10 12 Furthermore, the magnitude of the cancer burden increases significantly with remoteness from treatment sources and for individuals in depressed socioeconomic circumstances.14–16 Considerable progress has been made in recent decades in terms of cancer survival and reduced mortality rates17 18 through several initiatives, including introducing primary preventive strategies and effective collaboration with non-government organisations and other stakeholders. Therefore, a reduction of cancer incidence, along with improvements in cancer treatments and therefore survival rates, is essential to reduce the burden of the disease.
Economic disparities between socioeconomically advantaged and disadvantaged individuals and groups are worsened by the increasing burden of cancer in Australia.15 The lack of appropriate services is significantly worse in resource-poor settings, including geographically disadvantaged areas, compared with more advantaged people and communities with easier access to a greater range of cancer services, increased knowledge and awareness of cancer prevention and better and more easily accessible health facilities and resources.15 19 20 Other common reasons for such disparities include limited affordability and accessibility of cancer care services for individuals from socioeconomically disadvantaged groups,16 and their inadequate utilisation of healthcare.21 Thus, increased cancer incidence leads to a higher overall burden for the individual, family and society, which is exacerbated for the more disadvantaged.
In the recent past, disparities related to cancer outcomes have become the subject of international focus and new service initiatives.2 6 In 2016, the WHO Executive Board recommendation was to strengthen health systems to ensure early detection and diagnosis, as well as accessible, affordable, appropriate and quality healthcare services for all patients with cancer.22 Only a few studies have focused explicitly on socioeconomic inequality of cancer care and healthcare utilisation in Australia. This study therefore aims to provide data and analysis on trends in cancer incidence, mortality rates, hospitalisation and associated burden (years life lost, YLL; years lost due to disability, YLD and disability-adjusted life years, DALYs) for the most prevalent malignancies among Australians, by sex, state, remoteness and socioeconomic status, using routinely collected health data for the period of 1982 to 2014.
There is an extensive body of research on the many different dimensions of cancer. In recent decades, the cancer incidence has increased,5 17 23–25 which has been more pronounced among adolescents and young adults,26 and older adults,27 yet cancer-related mortality rates have slightly dropped.28 Some types of cancer in Australia are the highest in the world: melanoma,26 keratinocyte and melanocyte.29 Australia and New Zealand together have the highest rates for Merkel cell carcinoma.30 31 A number of studies have focused on geographical or socioeconomic disparities in cancer care and survival.32–38 These have usually been conducted in small settings at the Australian state level. No previous studies have attempted to measure the trends, associated determinants and magnitude of socioeconomic inequalities of cancer outcomes (eg, incidence, mortality, hospitalisation and burden of cancer - YLL, YLD, DALYs) over time. Therefore, national level trends, the differential socioeconomic inequality of cancer outcomes, as well as influential factors associated with the cancer burden in Australia are unclear.
Furthermore, the study’s findings will provide authorities with national evidence about the trends and magnitude of the inequalities in cancer burden and hopefully assist in developing low-cost interventions to reduce this burden. This study thus aims to examine the trends, associated determinants and magnitude of socioeconomic inequality as related to incidence, mortality, YLL, YLD and DALYs, as a result of cancer.
## Methods
### Study design
An incidence-based approach was used to examine the trends and socioeconomic inequalities associated with adverse cancer outcomes in Australia. A health system perspective was adopted and cancer-related data were accessed from organisations that are committed to promote, restore or maintain health and well-being.39 40 The study population represented different population subgroups using characteristics such as sex, geographical distribution and economic circumstances.
### Australian health system
The Australian health system (AHS) provides quality and affordable healthcare services for all Australians. It is operated by three levels of government: federal (financing), state and territory (funding and service delivery) and local (service delivery).41 The foundation of the AHS is the publicly funded national universal health insurance scheme, Medicare, and its predecessor Medibank, which commenced in the 1970s to promote universal healthcare by providing safe and affordable healthcare services for Australians. Through Medicare, patients are able to access medical services and treatment in public hospitals free of charge, and receive subsidised out-of-hospital treatment and medicines. Those eligible to access healthcare services through Medicare include: Australian and New Zealand citizens, permanent residents of Australia and individuals who have applied for a permanent visa.42 On the other hand, overseas student health cover is mandatory for all international students, to ensure they and their dependents can access affordable healthcare while living and studying in Australia. Patients are provided a rebate benefit for out-of-hospital healthcare services.
The rebate amount is case dependent. For example, for a consultation with a general practitioner (GP), specialist or consultant physician of at least 10 min duration on a patient with cancer to develop a multidisciplinary treatment plan, the schedule payment is $A81.50 in 2019, and the benefit is 100% of the schedule fee; hyperbaric oxygen therapy associated with treatment of localised non-neurological soft tissue radiation injuries has a schedule fee of $A254.75 and the benefit is 75% of the schedule fee, or $A191.10.42 While public hospitals are free of charge, the majority of out-of-hospital healthcare services are provided by private health providers. The actual fees for service are set by the providers themselves and are not regulated, meaning that private healthcare providers can set their fees above the schedule payment. Any difference between the amount of the provider's fee for a service and the amount of the rebate is paid by the patient out-of-pocket (OOP). For example, if the amount charged by a provider is $A81.50 for a diagnostic (eg, blood) test, Medicare would provide a rebate of $A69.30 (85% of the schedule fee), leaving the patient to pay $A12.20. Medicare has additional policies to protect patients from catastrophic OOP healthcare payments. In this context, healthcare cards are provided to welfare recipients and low income earners, and other eligible patients, who pay a lower OOP payment for prescription medicines.43 The ‘Medicare Safety Net’ and ‘Extended Medicare Safety Net’ Programmes also provide higher rebates if an individual or family group reaches a certain level of total expenditure on OOP fees within a calendar year. Any subsequent services or prescriptions will have a higher proportion subsidised for the rest of that calendar year.44 Under the ‘Medicare Safety Net’, once the threshold is reached then 100% of the schedule fee for all healthcare services is rebated; and under the ‘Extended Medicare Safety Net’ 80% of the actual OOP payments are rebated.45
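To make the rebate arithmetic concrete, the following minimal sketch (our own illustrative helper, not an official Medicare API or the study's code) computes a patient's OOP payment:

```python
# Illustrative sketch of the Medicare rebate arithmetic described above.
# out_of_pocket() is our own helper, not an official API; amounts are examples.

def out_of_pocket(provider_fee: float, schedule_fee: float, rebate_rate: float) -> float:
    """Patient's out-of-pocket cost: provider's fee minus the Medicare rebate."""
    rebate = rebate_rate * schedule_fee
    return provider_fee - rebate

# The diagnostic-test example from the text: $A81.50 charged, 85% rebate.
print(out_of_pocket(81.50, 81.50, 0.85))  # ~12.20 (rebate ~69.30, after rounding)
```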
### Data sources
Various cancer-related national data sources were accessed. Data on cancer incidence, mortality and hospitalisation were extracted from the publicly accessible Australian Institute of Health and Welfare (AIHW) online database7 and cancer-related published reports.8 46 The AIHW accumulates data from the Australian Cancer Database (ACD), National Mortality Database (NMD) and National Hospital Morbidity Database (NHMD). The ACD has accumulated and managed all types of cancer data from each Australian state and territory under legal mandate since 1982. Different types of hospitals (eg, government and non-government), clinics, laboratories, and other organisations and institutions are required to report all cancer cases to the central cancer registry (CCR). The CCR data is delivered to the AIHW on an annual basis, where it is accumulated into the ACD. The NMD includes information supplied by the registries of births, deaths and marriages and the national coronial information system. These data are then coded by the Australian Bureau of Statistics (ABS) and are incorporated into the NMD. The NHMD is an accumulation of episode-level records of hospitalised patient morbidity data collection systems (eg, all acute and psychiatric hospitals, freestanding day hospital facilities and alcohol and drug treatment centres). Further, cancer burden-related data is collected via the Australian Burden of Disease Study (ABDS). Data were retrieved from the published reports of ABDS-2011 and ABDS-2015, the last two that explicitly included cancer.8 46 Death caused by cancer was considered as the fatal burden (ie, YLL) and this data was sourced from the NMD. The non-fatal cancer burden data emanated from different administrative sources including the NHMD, ACD, NMD and some epidemiological studies. The ABDS amassed data on some other parameters from the Global Burden of Disease studies of 2010 and 2013, which covered the standard life table for the fatal burden (YLL), health status and disability weights for the non-fatal burden (YLD), and relative risks and risk factor attribution.8 46 The present study used these national level accumulated data in the analysis.
### Study population
A total of 2 784 148 registered cancer cases (male=1 537 882; female=1 246 265) were accessed, based on data from 1982 to 2014 in Australia (table 1). In addition, to reveal the trends of cancer-related mortality over the same period, a total of 1 165 552 cancer-related deaths (male=659 105; female=506 447) were considered. Due to the paucity and availability of data related to cancer outcomes, a total of 591 631 registered cancer cases during the period from 2008 to 2012 and a total of 217 349 cancer-related deaths during 2010 to 2014 were used to examine inequality in cancer incidence and cancer-related mortality in Australia.
Table 1
Characteristics of the study parameters
### Measurement of cancer parameters
The age-standardised cancer incidence, or mortality rate, was measured using the number of new cases diagnosed or deaths for a specific age group, divided by the mid-year population of the same age group and year. Similarly, cancer incidence or mortality rate was estimated from the total number of new cases diagnosed or deaths across all age groups combined, divided by the mid-year population. These rates were interpreted as the number of new cases of cancer or deaths per 100 000 population. Cancer related burden estimation was undertaken using the burden of disease methodology.8 46 In the ABDS, the burden of cancer was calculated through the DALY by summing up the fatal burden (ie, YLL) due to premature cancer-related mortality and the non-fatal burden (ie, YLD) for patients surviving the condition.
(1) YLL = n × L(YLL), or with a discount rate r > 0, YLL = (n/r) × (1 − e^(−r·L(YLL)))
(2) YLD = I × DW × L(YLD), or with discounting, YLD = ((I × DW)/r) × (1 − e^(−r·L(YLD)))
(3) DALY = YLL + YLD
where n = number of deaths; L(YLL) = standard life expectancy at the age of death in that year; I = number of people with each type of cancer; DW = disability weight; r = discount rate; L(YLD) = duration of disability in years.
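A minimal sketch of these calculations in Python (illustrative numbers and helper names only, not the study's data):

```python
import math

def rate_per_100k(cases: int, population: int) -> float:
    """Crude rate: new cases (or deaths) per 100 000 mid-year population."""
    return cases / population * 100_000

def yll(n_deaths: float, life_exp: float, r: float = 0.0) -> float:
    """Fatal burden (eq. 1); with r > 0, future years of life are discounted."""
    return n_deaths * life_exp if r == 0 else n_deaths * (1 - math.exp(-r * life_exp)) / r

def yld(n_cases: float, dw: float, duration: float, r: float = 0.0) -> float:
    """Non-fatal burden (eq. 2): incident cases x disability weight x duration."""
    return n_cases * dw * duration if r == 0 else n_cases * dw * (1 - math.exp(-r * duration)) / r

print(rate_per_100k(145_000, 25_000_000))   # ~580 new cases per 100 000
print(yll(100, 20) + yld(500, 0.3, 5))      # DALY = YLL + YLD = 2750 (eq. 3)
```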
### Definition of some potential factors
#### Index of economic resources
The magnitude of inequality in cancer outcomes was examined using an index of relative socioeconomic disadvantage (IRSD). The IRSD was developed by the ABS using potential factors like average household income, education level and unemployment rates.47 It is a geographical area-based estimate of socioeconomic status where small geographical settings of Australia are categorised from economically disadvantaged to wealthy. This index is employed as a proxy for the socioeconomic status of the people living in different geographical settings in Australia. The cut-off values for each of the quintiles are as follows: Q1 (IRSD ≤927.0), Q2 (927.0 < IRSD ≤965.8), Q3 (965.8 < IRSD ≤1001.8), Q4 (1001.8 < IRSD ≤1056.0) or Q5 (IRSD >1056.0).47 The most disadvantaged socioeconomic quintile (Q1) corresponds to geographical settings covering the 20% of the population in the least advantaged socioeconomic areas, and the fifth quintile (Q5) refers to the 20% of the population in the most advantaged socioeconomic areas.
#### Remoteness
Remoteness classifications exist in each state and territory of Australia and are based on the accessibility of services and the Accessibility/Remoteness Index of Australia, which is constructed by the Australian Population and Migration Research Centre at the University of Adelaide.48 Remoteness was classified into six groups: major cities, inner regional, outer regional, remote, very remote and migratory. Migratory was excluded from the current analysis due to the paucity of information. The category of major cities included Australia's capital cities, except Darwin and Hobart, which were treated as inner regional.
### Data analysis
#### Trend analysis
Trend analyses of cancer incidence, cancer-related mortality rates, hospitalisations and burden of cancer were performed using the ACD (from 1982 to 2014), NMD (1982 to 2014), NHMD (2000 to 2015) and ABDS (2011 to 2015) population data sets, respectively. Trend analyses were done across sex, state and socioeconomic status over these periods. To identify changes in cancer parameter trends, joinpoint regression analysis was performed using the Joinpoint Regression Program, V.4.5.0.1.49 The annual percentage change (APC) in rates between trend-change points (ie, joinpoint segments) was calculated, along with the average annual percentage change (AAPC) over the whole study period. A negative APC indicates a decreasing trend whereas a positive APC indicates an increasing trend. Furthermore, increases or decreases in the APC of cancer-related outcomes were examined in terms of the magnitude of cancer's impact over the period.
To measure the APC, the following log-linear model was used:
(4) log(Y_x) = b_0 + b_1·x
where log(Y_x) is the natural logarithm of the rate in year x. Then, the APC from year ‘x’ to year ‘x+1’ was:
(5) APC = [(Y_{x+1} − Y_x)/Y_x] × 100 = (e^{b_1} − 1) × 100
Then, AAPC was estimated as a weighted average of the estimated APC in each segment, using the segment lengths as weights:
(6) AAPC = {exp(Σ_i S_i·ln(1 + APC_i/100) / Σ_i S_i) − 1} × 100
where S_i = ith segment length (i = 1, 2, 3, …, n) and APC_i = ith annual percentage change.
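As an illustration, the following sketch (illustrative rates, not the study's data) computes the APC for one segment and the AAPC across segments as in equations (4) to (6):

```python
import numpy as np

def apc(rates) -> float:
    """APC within one joinpoint segment: fit log(rate) = b0 + b1*x (eqs. 4-5)."""
    x = np.arange(len(rates))
    b1 = np.polyfit(x, np.log(rates), 1)[0]
    return (np.exp(b1) - 1) * 100

def aapc(segment_lengths, segment_apcs) -> float:
    """AAPC (eq. 6): segment-length-weighted average of APCs on the log scale."""
    s = np.asarray(segment_lengths, dtype=float)
    b = np.log(1 + np.asarray(segment_apcs, dtype=float) / 100)
    return (np.exp(np.sum(s * b) / np.sum(s)) - 1) * 100

print(apc([100.0, 101.3, 102.7, 104.1]))   # ~1.3% per year (increasing trend)
print(aapc([12, 20], [2.1, 0.9]))          # weighted towards the longer segment
```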
#### Measuring socioeconomic inequality
The Index of Economic Resources (IER) was measured in quintiles, with the first quintile (Q1) representing the lowest 20% of the total population living in the most impoverished socioeconomic areas, and the fifth quintile (Q5) representing the top 20% of the total population living in the most prosperous socioeconomic areas. Inequality analyses were constructed for cancer incidence, cancer-related mortality and DALYs across the different IER quintiles. The absolute and relative differences (eg, the least advantaged-most advantaged difference, LMD, and the least advantaged-most advantaged ratio, LMR) in cancer incidence, cancer-related mortality, YLL, YLD and DALYs were calculated to examine the magnitude and direction of the cancer outcomes across different socioeconomic groups. A high value of the LMR and LMD represents a high degree of socioeconomic inequality.16 The ratio of cancer mortality and incidence (M/I) was measured to capture the survival inequality of cancer patients. The concentration index (CI; Erreygers' corrected CI) was used to examine the magnitude of socioeconomic inequality and the trends in adverse cancer outcome changes during the period.50
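A minimal sketch of these inequality measures (the quintile figures below are illustrative, not the study's estimates):

```python
import numpy as np

def lmd_lmr(rate_least: float, rate_most: float):
    """Absolute (LMD) and relative (LMR) least-vs-most advantaged differences."""
    return rate_least - rate_most, rate_least / rate_most

def erreygers_ci(prop, pop_share):
    """Erreygers' corrected concentration index for a proportion in [0, 1].
    Quintiles ordered from least (Q1) to most advantaged (Q5); a negative value
    means the outcome is concentrated among the least advantaged."""
    p = np.asarray(pop_share, dtype=float)
    h = np.asarray(prop, dtype=float)
    rank = np.cumsum(p) - p / 2            # fractional rank of each quintile
    mu = np.sum(p * h)
    ci = (2 / mu) * np.sum(p * h * (rank - 0.5))   # standard concentration index
    return 4 * mu * ci                              # Erreygers' correction

print(lmd_lmr(523.3, 483.1))   # e.g. rates per 100 000 -> LMD ~40.2, LMR ~1.083
print(erreygers_ci([0.006, 0.0055, 0.0052, 0.005, 0.0048], [0.2] * 5))  # < 0
```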
#### Multivariate analysis
The fatal cancer burden (eg, YLL) was considered as the outcome variable in the analytical exploration. YLL is characterised by a large cluster of data and a right-skewed distribution; zero values were excluded from the analysis. The natural logarithm of YLL was used to reduce the effects of the skewed nature of the burden of cancer data. In the multivariate analysis, natural logged YLL was predicted using different patients' characteristics related to demographics (eg, sex), state, socioeconomic position and geographical distribution (eg, remoteness). A generalised linear model (GLM) was constructed to examine these associations. The model was tested for sensitivity by including and excluding specific variables and estimating robust SEs. A series of diagnostic tests was performed, such as tests for the presence of heteroscedasticity, multicollinearity and omitted variables. The Breusch-Pagan/Cook-Weisberg test was used to check for the presence of heteroscedasticity in the model. A Variance Inflation Factor test was performed to examine the presence of multicollinearity. The Ramsey Regression Equation Specification Error Test (RESET) was used to check for omitted variable bias in the model. The outcome of the GLM analysis is presented as adjusted regression coefficients with robust SEs along with 95% CIs. Data management and all statistical analyses were performed using Stata/SE 13.0 (StataCorp, College Station, Texas, USA). A p value <0.05 was considered statistically significant.
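A minimal sketch of such a model (using Python's statsmodels rather than the Stata code actually used; the data and the variable names male, remote and ier_q1 are simulated stand-ins for the characteristics in table 6):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "male":   rng.integers(0, 2, n),
    "remote": rng.binomial(1, 0.2, n),
    "ier_q1": rng.binomial(1, 0.2, n),
})
# Simulated natural-logged YLL with small positive effects, as in table 6.
df["log_yll"] = (3 + 0.04 * df.male + 0.25 * df.remote + 0.05 * df.ier_q1
                 + rng.normal(0, 1, n))

# Gaussian GLM (identity link) on the logged outcome with robust (HC1) SEs.
fit = smf.ols("log_yll ~ male + remote + ier_q1", data=df).fit(cov_type="HC1")
print(fit.params, fit.bse)
```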
### Ethics
This study was conducted using the publicly accessible AIHW online data sources and cancer-related published reports. Ethical approval was not required from an institutional review board because the patient information was de-identified.
### Patient and public involvement
Patients and the public were not involved in the design or planning of this study.
## Results
### Trends in cancer incidence and cancer-related mortality
The overall incidence of cancer among males significantly increased from 1982 to 1994, and then increased exponentially until 2014 (figure 1). The rate of cancer incidence among females also showed an increasing trend from 1982 to 2014. The cancer incidence rate increased from 1984 (2507 cases) to 1991 (3896 cases) in South Australia, after which the rate increased slightly during the period 1992 (3994 cases) to 2002 (4127 cases), and then increased again until 2014 (5392 cases). A similar trend was observed for males in New South Wales and Western Australia. A sharp reduction of cancer incidence was seen during 1994 (1333 cases) to 1997 (1100 cases), and the overall rate increased during 1998 to 2008 (1124 cases to 1889 cases) in Tasmania. In the Northern Territory and Australian Capital Territory, the incidence of cancer increased exponentially for both males and females throughout the period. The overall cancer-related mortality rate also increased for both males (eg, 5000 cases in 1982 to 8470 cases in 2014) and females (eg, 3952 cases in 1982 to 6490 cases in 2014) in New South Wales from 1982 to 2014. Further, a similar trend was observed for males and females in Victoria, Queensland, Western Australia, South Australia and Tasmania during the period 1982 to 2014 (figure 2). However, in the Northern Territory and Australian Capital Territory, little change in the trend was observed.
Figure 1
Trends of cancer incidence by sex and state, Australia, 1982 to 2014. ACT, Australian Capital Territory; NSW, New South Wales; NT, Northern Territory; QLD, Queensland; SA, South Australia; WA, Western Australia.
Figure 2
Trends of cancer mortality by sex and state, Australia, 1982 to 2014. ACT, Australian Capital Territory; NSW, New South Wales; NT, Northern Territory; QLD, Queensland; SA, South Australia; WA, Western Australia.
### Distribution of average annual percentage change in cancer incidence and cancer-related mortality
Cancer incidence was measured as an AAPC over the period 1982 to 2014 (figure 3). Cancer incidence increased by an AAPC of 1.33% over the period 1982 to 2014, with the AAPC slightly higher for males (1.38%) compared with females (1.29%). The highest AAPC was found in the Northern Territory (2.57%), followed by the Australian Capital Territory (1.78%) and Western Australia (1.65%). In New South Wales (NSW), the rate of cancer incidence increased steadily from 1982 to 1994 and then oscillated until 2013. Similarly, the percentage change of the cancer incidence rate increased among females over time. The cancer mortality rate rose by an AAPC of 0.76% from 1982 to 2014, and the mortality rate increase among females (0.78%) was slightly higher compared with males (0.73%). In the Northern Territory, the cancer-related mortality rate increase was comparatively very high among males (1.98%), while cancer-related mortality rate increases were comparatively highest among females in Queensland (1.21%) and the Australian Capital Territory (1.13%).
Figure 3
Distribution of cancer outcomes in Australia, 1982 to 2014.
### Trends in cancer-related hospitalisation
A total of 13 213 340 cancer-related hospitalisation cases were observed, of which 66.91% were for same-day treatment and 33.09% were overnight hospitalisations (figure 4). Overall cancer-related hospitalisations increased by an AAPC of 1.27%, with same-day and overnight hospitalisations increasing by 1.35% and 1.19%, respectively, over the period. The share of overnight hospitalisations fell over the period, with a corresponding increase in the share of same-day hospitalisations.
Figure 4
Distribution of cancer-related hospitalisations by same-day and overnight status in Australia, 2000 to 2015. AAPC, average annual percentage change; ACT, Australian Capital Territory; NSW, New South Wales; NT, Northern Territory; QLD, Queensland; SA, South Australia; TAS, Tasmania; VIC, Victoria; WA, Western Australia.
### Trends in fatal cancer burden
An upward trend of the fatal burden of cancer was observed over the 2011 to 2015 period (figure 5). Males experienced a relatively higher burden (AAPC=0.89%) compared with females (AAPC=0.78%). The magnitude of the burden also varied across the states. For example, the rate of years of life lost increased by 9950 YLL (AAPC=1.16%) in Queensland, 2612 YLL (AAPC=0.22%) in NSW, 5838 YLL (AAPC=1.42%) in Western Australia, 2034 YLL (AAPC=0.63%) in South Australia and 1253 YLL (AAPC=2.57%) in the Australian Capital Territory. A major reduction in the fatal burden of cancer occurred among females (11 339 YLL, AAPC=−1.53%) in Tasmania and for males (3532 YLL, AAPC=−0.72%) in Victoria.
Figure 5
Trends of fatal burden of cancer across states, Australia, 2011 to 2015.
### The magnitude of socioeconomic inequality for cancer patients
Cancer incidence was highest among the poorest quintile (table 2). Similarly, the age-specific cancer incidence was marginally highest among the poorest group. Furthermore, the poorest were 1.083 times more likely to be exposed to cancer than the richest. The cancer-related mortality rate difference was even starker, with an LMR of 1.513 and an LMD of 17 770 cases/100 000 persons. The overall ratio of the LMR of mortality to the LMR of incidence was high (M/I=1.276). Again, nearly 34% more people in the least advantaged group experienced cancer-related mortality compared with those in the most advantaged group. The negative concentration indices for cancer incidence (CI=−0.029, p<0.01) and the cancer-related mortality rate (CI=−0.011, p<0.05) indicate that both were concentrated in the least advantaged group.
Table 2
Socioeconomic inequality of cancer incidence and mortality in Australia
This skewed distribution was also true for the individual types or sites of cancer (table 3). The highest contributors to the socioeconomic inequality-mortality gap were colorectal (LMR=1.327 times), pancreas (LMR=1.336 times), lung (LMR=1.965 times), cervix (LMR=1.363 times), kidney (LMR=1.344 times), bladder (LMR=1.433 times) and unknown primary cancer (LMR=1.660 times). Further, the ratio of (LMR of mortality) to (LMR of incidence) was especially high for cervix (M/I=1.802), prostate (M/I=1.514), melanoma (M/I=1.325), non-Hodgkin's lymphoma (M/I=1.325) and breast (M/I=1.318), suggesting that survival inequality was most pronounced for these cancers. The high negative values of the concentration index (CI) for different cancers, such as lung (CI=−0.060), melanoma (CI=−0.087), breast (CI=−0.104), prostate (CI=−0.076) and non-Hodgkin's lymphoma (CI=−0.078), indicate that cancer incidence was disproportionately distributed in the least advantaged economic resources quintile. In addition, a high degree of inequality in cancer-related mortality occurred across the different economic resources quintiles. Significant negative CIs of mortality for different types of cancer, such as lung (CI=−0.066), melanoma (CI=−0.034), breast (CI=−0.048), cervix (CI=−0.095) and unknown primary cancer (CI=−0.043), reflect that mortality due to these types of cancers was more highly concentrated among the least advantaged economic resources group. Likewise, the number of deaths related to all types of cancer was highest among the least advantaged group. As a result, the LMR is greater than 1, and the LMD is positive, for all types of cancer-related mortality.
Table 3
Distribution of cancer cases (2008 to 2012) and cancer-related mortality (2010 to 2014) by cancer site/type and socioeconomic status in Australia
Table 4
Burden of cancer (YLL, YLD and DALY/1000) across socioeconomic groups, 2011 to 2015
Table 5
Trends in socioeconomic inequality of fatal cancer burden (years lost life/1000) by cancer type, 2011 to 2015
### Factors influencing the fatal burden of cancer
The regression coefficients were interpreted as the percentage change in YLL associated with each patient characteristic (table 6). These results show that male cancer patients had a slightly higher YLL, by 3.87% to 4.19% across the period. In very remote areas the YLL was 32.05% higher in 2011, but this excess fell to 22.75% by 2015.
Table 6
Association of fatal cancer burden (natural logged of years of life lost) with sex, remoteness, location and socioeconomic resources
However, the cancer burden was significantly increased for those who lived in remote, inner or outer regional areas during the period. In terms of geographical distribution, patients from New South Wales (32%) experienced a significantly higher burden, followed by Victoria (30%) and Queensland (25%), but the changes were stable during this period. In Western Australia and Tasmania, the burden of cancer significantly increased, by 15.72% to 20.80% and 6.29% to 7.90%, respectively. However, the burden of cancer declined for others, including the Northern Territory from 3.77% to 2.43%, and South Australia from 18.65% to 16.65%. Similarly, the magnitude of the cancer burden increased for those in the least advantaged economic resource quintiles.
## Discussion
This study aimed to reveal the trends in cancer incidence, related mortality and cancer burden, as well as measure the magnitude of inequality in cancer mortality, incidence and DALYs during the period of 1982 to 2014 in Australia. The study used an incidence-based design from a health system perspective. Overall incidence and mortality showed an upward trend over the period, and the highest average increase in incidence was found in the Northern Territory, Australian Capital Territory and Western Australia. Also, the proportion of cancer-related hospitalisations has increased and is dominated by same-day hospitalisations. Further, the survival inequality in terms of the LMR of mortality and the LMR of incidence was especially high for prostate, cervix, melanoma, non-Hodgkin's lymphoma and breast cancers, suggesting that survival inequality was most pronounced for these cancers. Overall, the fatal burden of cancer exhibited an increasing trend over the period.
The study’s findings support a growing body of research evidence that has found the incidence of cancer and cancer-related mortality to be increasing in other country settings.14 51–54 These increasing trends have been pronounced in the last couple of decades globally.6 52 53 The WHO55 and the Sustainable Development Goals56 have outlined the increasing burden of non-communicable diseases, which include cancer, and have promoted initiatives to control and prevent future increases through action plans. Still, the burden of cancer has been growing in Australia over recent decades.24 Four driving forces have contributed to this: first, increased exposure to risk factors (for example, unbalanced and industrialised-type diets)57 as well as a high prevalence of obesity58 59; second, improved health outcomes (eg, life expectancy)4 and demographic transition (eg, ageing and growth of population)5 have reduced death rates from other causes; third, widespread urbanisation (responsible for the change in lifestyles),60 exposure to smoking61 and alcohol consumption60 are contributing to higher cancer risk;60 62 and fourth, overdiagnosis is considered another potential driving force for increasing cancer incidence and related mortality. It is evident from past studies that overdiagnosis has played a significant role in increasing the burden of cancer,63 but the rising magnitude of cancer burden among Australians may not be entirely explained by overdiagnosis.64 Therefore, further research that explores the potential risk factors may contribute to a deeper understanding of the reasons behind the increasing burden of cancer in Australia.
This study found that survival inequality was most pronounced for prostate cancer, consistent with previous studies.65 66 Evidence about the underlying causes of inequalities in prostate cancer survival remains limited. Some possible explanations can be considered, such as factors associated with the tumour (eg, stage at diagnosis, biological characteristics), the patient (comorbidity, health behaviour, psychosocial factors) and the healthcare (treatment, medical expertise, screening).65–67 Furthermore, the utilisation rate of screening services is lower among prostate cancer patients with disadvantaged socioeconomic status.68 69 Moreover, patient factors such as comorbidity or health behaviour can interact with treatment modalities or disease stage and additionally have a potential impact on inequalities in survival.70 71 Further, patients with severe comorbidity were more likely to receive surveillance as treatment, while radical prostatectomy was significantly less likely to be offered.65 66 69 70 Some studies conducted in England,72 Australia73 and the USA74 also revealed that socioeconomically disadvantaged patients have a reduced likelihood of having radical prostatectomy and more regularly receive hormone therapy, active surveillance, watchful waiting and, in part, radiation. There is an ongoing debate regarding the significant role of healthcare management as a contributing factor to inequalities in survival among prostate cancer patients.67
Moreover, low productivity, loss or reduction of household income and increased expenditure due to illness further disadvantage the poorest. Growing socioeconomic inequalities in cancer outcomes need the attention of governments, health systems and decision-makers. Initiatives should aim for universal cancer care in all states. A sustained reduction of socioeconomic inequalities, spanning poverty, gender, education and health, should promote universal equality in health and well-being and further enhance both socioeconomic and human development.
The present study has also identified that the fatal burden of cancer was high in 2011 among patients in very remote areas, but it was reduced by 2015. Similarly, the burden of cancer was high in New South Wales, Victoria and Queensland; however, the magnitude of fatal burden was unchanged during 2011 to 2015. Some previous studies have shown consistent findings, confirming that patients in geographically disadvantaged or low-resource settings had a higher cancer burden than their more advantaged counterparts.75 76 Socioeconomic inequalities in terms of poorer survival for geographically isolated patients were observed for cancer types in Australia including breast and colorectal cancer.86 Several issues might be associated with a high burden of cancer among patients in regional and remote Australia, including a lack of appropriate skills among health professionals and a lack of adequate resources being available in remote areas and smaller cities.15 33 87 A recent study conducted in regional Australia identified that there was a paucity of medical professionals with expertise and appropriate cancer training in regional areas.68 The study also confirmed that a lack of communication and coordination persisted between different medical professionals (such as oncologists and GPs) and across geographical locations (major vs regional centres).
Residents of rural and remote communities in Australia face difficulty in the accessibility and availability of appropriate cancer care services,87 yet around 30% of the population lives outside the major cities.88 The federal government has committed to improving cancer infrastructure by building a network of new and enhanced regional cancer centres in regional Australia.89 Furthermore, innovative cancer care models, including mobile clinics incorporating video conferencing and tele-oncology, have been introduced in order to address the challenges of distance. Advanced technology-based services such as tele-oncology have been implemented in Western Australia and North Queensland, allowing regional cancer patients to use the latest treatments including specialist consultations and chemotherapy.90 91 These models have also been implemented in the USA and Canada to ensure maximum access to services among people in limited-resource settings, with high levels of satisfaction and acceptance of services.90–94
This study contributes to the existing literature by providing first-hand evidence on the trends of incidence, mortality and burden of cancer, using Australian nationally representative population-based data. This study used large national level data sets covering all states over the past 33 years. Due to the paucity of survival data, this study has not captured in detail inequalities in cancer survivorship. However, there is limited understanding of what is driving the changes in cancer outcomes reported here, which may reflect random variation or changes in unknown risk factors, and this highlights the need for more research into the aetiology of cancer.
## Conclusions
The overall burden of cancer is substantial in Australia across all socioeconomic strata and geographical regions. Compared with socioeconomically advantaged people, disadvantaged people had a substantially higher risk of cancer incidence and cancer-related mortality. Those living in remote areas also bear a higher burden than those in urban areas who are closer to prevention and treatment services. The findings of this study can inform efforts by healthcare policymakers and those involved in healthcare systems to improve cancer survival in Australia. This work also suggests that the provision of universal cancer care can reduce the burden by ensuring curative and preventive cancer care services are accessible for all people regardless of socioeconomic status or location.
## Acknowledgments
The study is part of the first author’s PhD research. The PhD program was funded by the University of Southern Queensland, Australia. We would also like to thank the Australian Institute of Health and Welfare, Central Cancer Registry, Australian Bureau of Statistics and Australian Burden of Disease Study. We would like to gratefully acknowledge the reviewers and editors of our manuscript.
## Footnotes
• Contributors Conceptualised the study: RAM. Contributed data extraction and analyses: RAM, under the guidance of KA and JG. Result interpretation: RAM, under the guidance of KA and JG. Prepared the first draft: RAM. Contributed during the conceptualisation and interpretation of results and substantial revision: RAM, KA, JG and JD. Revised and finalised the final draft manuscript: RAM, KA, JD and JG. All authors read and approved the final version of the manuscript.
• Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
• Competing interests None declared.
• Patient consent for publication Not required.
• Provenance and peer review Not commissioned; externally peer reviewed.
• Data availability statement Data were extracted from the publicly accessible Australian Institute of Health and Welfare (AIHW) online sources (https://www.aihw.gov.au/reports-data/health-conditions-disability-deaths/cancer/data).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35799649357795715, "perplexity": 6676.320067549185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00782.warc.gz"}
|
http://math.stackexchange.com/questions/108023/metric-on-the-unit-cube
|
# Metric on the unit cube
Let $X$ be $\mathbb{R}^3$ with the sup norm $\|\cdot\|_{\infty}$. Let $Y=\{x\in X: \|x\|_{\infty}=1\}$. For $x,y\in Y,y\neq -x$ define $d(x,y)$ to be the arc length of the path $$Y\cap \{\lambda x+\mu y: \lambda\ge 0, \mu\ge 0\}.$$ Define $d(x,-x)=4$. Note that the arc lengths are computed using the sup norm. My question is: Does $d$ define a metric on $Y$? A related question was answered in Shortest path on unit sphere under $\|\cdot\|_\infty$
-
In fact the answer can be found in the given link. Let $A=(1,3/4,1/4), B=(3/4,1,3/4), C=(1,1,1/2)$ and $M=(1,1,4/7)$. Then $d(A,C)=1/4, d(C,B)=1/4$ and $d(A,B)=d(A,M)+d(M,B)=4/7$. So $d(A,C)+d(C,B)<d(A,B)$, proving that $d$ is not a metric. – TCL Feb 11 '12 at 5:00
TCL: Why not post an answer? – Jonas Meyer Feb 11 '12 at 5:58
@TCL I'm not sure what the relevance is more than a year later, but it's better to ask a new question instead of editing in this case -- your edit made this question wholly different from its original. – Lord_Farin Nov 11 '13 at 12:54
Let: $$A=(1,3/4,1/4),\ B=(3/4,1,3/4),\ C=(1,1,1/2)\text{ and }M=(1,1,4/7).$$ Then: $$d(A,C)=1/4,\ d(C,B)=1/4\text{ and }d(A,B)=d(A,M)+d(M,B)=4/7.$$ So $d(A,C)+d(C,B)<d(A,B)$, proving that $d$ is not a metric.
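This can also be checked numerically by approximating each arc length with a fine polyline, with lengths measured in the sup norm (a quick sketch):

```python
# Numerical sanity check: approximate d(x, y) as the sup-norm length of a fine
# polyline along Y ∩ {λx + μy : λ, μ ≥ 0} (the segment projected onto the cube).
import numpy as np

def d(x, y, n=20000):
    t = np.linspace(0.0, 1.0, n)[:, None]
    v = (1 - t) * np.asarray(x, float) + t * np.asarray(y, float)
    p = v / np.max(np.abs(v), axis=1, keepdims=True)   # radial projection onto Y
    return np.abs(np.diff(p, axis=0)).max(axis=1).sum()

A, B, C = (1, 3/4, 1/4), (3/4, 1, 3/4), (1, 1, 1/2)
print(d(A, C), d(C, B), d(A, B))   # -> 0.25, 0.25, 0.5714... = 4/7
```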
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.992591142654419, "perplexity": 151.2188524911933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115868812.73/warc/CC-MAIN-20150124161108-00141-ip-10-180-212-252.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/questions/80341/blackbody-radiation
|
Looking at the radiation from the sun (T = 5800 K), I got a little surprise which I do not understand.
I first calculated the energy density $u$ and also the number of photons per unit volume $n_g$. From this one gets the average energy per photon:
$$E_g = \frac{u}{n_g}$$
Converted to a wavelength one gets:
$$\tag{1} \lambda = 920\:\mathrm{nm}$$
The average frequency using
$$\nu_{\text{av}} = \frac{\int \nu \cdot u_{\nu} \cdot \mathrm{d}\nu}{\int u_{\nu} \cdot \mathrm{d}\nu}$$
Converted to a wavelength I get $\lambda_{\text{av}} = 650\:\mathrm{nm}$.
Next I calculate the average wavelength using $$\lambda_{\text{av}} = \frac{\int \lambda\cdot u_\lambda \cdot \mathrm{d}\lambda }{ \int u_\lambda \cdot \mathrm{d}\lambda}$$ One gets $\lambda_{\text{av}2} = 920\:\mathrm{nm}$
Questions:
I was expecting some value which would be of course different from the value of the frequency distribution. This is actually the case. The same is of course true for the max of the distributions.
However, to my surprise this is also equal to the value of eq. (1). Why?
What is the physical meaning of the average frequency $\nu_{\text{av}}$?
If there is no particular meaning of $\nu_{\text{av}}$, it looks like the wavelength distribution has more physical meaning, which I think is nonsense.
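Edit: here is a quick numerical check of the three averages with scipy. (One observation: $\int \lambda\,u_\lambda\,\mathrm{d}\lambda \propto \int u_\nu/\nu\,\mathrm{d}\nu \propto n_g$, so the $u_\lambda$-weighted mean wavelength is exactly $hc\,n_g/u = hc/E_g$, which may be why it coincides with eq. (1).)

```python
import numpy as np
from scipy.constants import c, h, k
from scipy.integrate import quad

T = 5800.0
kT = k * T
hi = 100 * kT / h                                     # frequency cutoff

u_nu = lambda nu: nu**3 / np.expm1(h * nu / kT)       # ∝ spectral energy density
n_nu = lambda nu: nu**2 / np.expm1(h * nu / kT)       # ∝ photon number density

u = quad(u_nu, 0, hi)[0]
n = quad(n_nu, 0, hi)[0]
print(c * n / u * 1e9)                                # h*c/E_g ≈ 918 nm, eq. (1)

nu_av = quad(lambda nu: nu * u_nu(nu), 0, hi)[0] / u
print(c / nu_av * 1e9)                                # ≈ 647 nm

u_lam = lambda lam: lam**-5 / np.expm1(h * c / (lam * kT))
num = quad(lambda lam: lam * u_lam(lam), 1e-8, 1e-3)[0]
den = quad(u_lam, 1e-8, 1e-3)[0]
print(num / den * 1e9)                                # ≈ 918 nm again
```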
-
I $\TeX$ed your formulas (please learn to do that yourself), but I couldn't make much sense of your calculations. Check them back, perhaps I misunderstood a few things... – leftaroundabout Oct 10 '13 at 23:48
Frank, what @leftaroundabout has done is write the math in the input language of the MathJax rendering engine which is a pretty good implementation of $\LaTeX$'s mathmode and which we have running on the site. You should examine the raw text to understand what he did (just click the edit button). – dmckee Oct 10 '13 at 23:49
You are working your statistics out from the Plank law, right? Off the top of my head, I should expect the mean energy to be $h$ times the mean frequency, since the expectation is a linear operator. – WetSavannaAnimal aka Rod Vance Oct 10 '13 at 23:57
What formula are you using to calculate the number of photons per unit volume? Don't confuse this with the density of states. – WetSavannaAnimal aka Rod Vance Oct 11 '13 at 1:43
Please revise your first formula. It does not make any sense. – mcodesmart Oct 11 '13 at 6:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8958512544631958, "perplexity": 433.0320739377664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132646.40/warc/CC-MAIN-20140914011212-00227-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
http://aas.org/archives/BAAS/v31n5/aas195/607.htm
|
AAS 195th Meeting, January 2000
Session 52. Absorption in the Intergalactic Medium
Display, Thursday, January 13, 2000, 9:20am-6:30pm, Grand Hall
## [52.08] High Resolution Echelle Spectroscopy of Low Redshift Intervening O VI Absorbers with the Space Telescope Imaging Spectrograph
T.M. Tripp, D.V. Bowen, E.B. Jenkins (Princeton Obs.), B.D. Savage (U. Wisconsin-Madison)
We present high resolution FUV echelle spectroscopy of several low z intervening O VI absorbers (z < 0.3) in the spectra of H1821+643 and PG0953+415. The data were obtained with the Space Telescope Imaging Spectrograph at a resolution of ~45,000 (7 km/s FWHM). We also present selected new measurements of galaxy redshifts in the 10' field centered on H1821+643. The observations provide several clues about the nature of these absorbers: (1) In the case of the strong O VI system at z = 0.2250 in the spectrum of H1821+643, we detect multicomponent Si II and Si III absorption as well as O VI and several Lyman series lines of H I. Multiple components are evident in the O VI profiles, but the components have different velocities than the Si III and Si II lines. Furthermore, the Si II and Si III lines are quite narrow, and the O VI lines are broader and spread over a larger velocity range. This evidence strongly indicates that this is a multiphase absorber. (2) We also detect 'high velocity' O VI in the z = 0.2250 system. High velocity H I is also seen in the Lyα profile, but substantially offset in velocity from the O VI. This high velocity O VI may be analogous to the highly ionized high velocity clouds seen near the Milky Way. (3) We also present systems at other redshifts including very weak O VI absorption lines accompanied by weak and narrow H I absorption. (4) In all cases, several galaxies are close to the sight lines at the redshift of the O VI systems. We examine whether the O VI absorption can be attributed to the ISM of a single galaxy or the intragroup medium.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8623226881027222, "perplexity": 4837.032219258477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772265.125/warc/CC-MAIN-20141217075252-00102-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://www.nanoscalereslett.com/content/8/1/242
|
Nano Express
# One-qubit quantum gates in a circular graphene quantum dot: genetic algorithm approach
Gibrán Amparán12, Fernando Rojas12* and Antonio Pérez-Garrido1
Author Affiliations
1 Departamento de Física Aplicada, Antiguo Hospital de la Marina, Campo Muralla del Mar, UPCT, Cartagena, 30202, Murcia, Spain
2 Departamento de Física Teórica, Centro de Nanociencias y Nanotecnologías, Universidad Nacional Autónoma de México, UNAM, Apdo, Postal 14, Ensenada, Baja California 22830, México
Nanoscale Research Letters 2013, 8:242 doi:10.1186/1556-276X-8-242
Received: 15 November 2012 Accepted: 18 April 2013 Published: 16 May 2013
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
### Abstract
The aim of this work was to design and control, using a genetic algorithm (GA) for parameter optimization, the one-charge-qubit quantum logic gates σx, σy, and σz in a circular graphene quantum dot in a homogeneous magnetic field, using two bound states as the qubit space. The method employed for the proposed gate implementation is the quantum dynamic control of the qubit subspace with an oscillating electric field and an onsite (inside the quantum dot) gate voltage pulse with amplitude and time width modulation, which introduce relative phases and transitions between states. Our results show that we can obtain values of fitness or gate fidelity close to 1, avoiding the leakage probability to higher states. The system evolution for the gate operation is presented with the dynamics of the probability density, as well as a visualization of the pseudospin current, characteristic of a graphene structure. Therefore, we conclude that it is possible to use the states of the graphene quantum dot (selecting the dot size and magnetic field) to design and control the qubit subspace, with these two time-dependent interactions, and to obtain the optimal parameters for a good gate fidelity using the GA.
### Background
Quantum computing (QC) has played an important role as a modern research topic because the quantum mechanics phenomena (entanglement, superposition, projective measurement) can be used for different purposes such as data storage, communications and data processing, increasing security, and processing power.
The design of quantum logic gates (or quantum gates) is the basis for the QC circuit model. There have been proposals and implementations of the qubit and quantum gates for several physical systems [1], where the qubit is represented as charge states using trapped ions, nuclear magnetic resonance (NMR) using the magnetic spin of ions, with light polarization as the qubit, or spin in solid-state nanostructures. Spin qubits in graphene nanoribbons have also been proposed. Some obstacles are present in every implementation, related to the properties of the physical system, like short coherence time in spin qubits and charge qubits, or null interaction between photons, which is necessary to design two-qubit quantum logic gates. Most of the quantum algorithms have been implemented in NMR, such as Shor's algorithm [2] for the factorization of numbers. Any quantum algorithm can be done by the combination of one-qubit universal quantum logic gates like arbitrary rotations over Bloch sphere axes (X(ϕ), Y(ϕ), and Z(ϕ)) or the Pauli gates (σx, σy, σz), and two-qubit quantum gates like the controlled NOT, which is a genuine two-qubit quantum gate.
The implementation of gates using graphene to make quantum dots seems appropriate because this material is naturally low dimensional, and the isotope 12C (most common in nature) has no nuclear spin because the spins of the particles in the nucleus cancel. This property can be helpful to increase the coherence time, as seen in the proposal of graphene nanoribbons (GPNs) [3] and Z-shaped GPNs for spin qubits [4].
In this work, we propose the implementation of three one-qubit quantum gates using the states of a circular graphene quantum dot (QD) to define the qubit. The control is made with pulse width modulation and coherent light which induces an oscillating electric field. The time-dependent Schrodinger equation is solved to describe the amplitude Cj(t) of being in a QD state. Two bound states are chosen to be the computational basis |0〉 ≡ |ψ1/2〉 and |1〉 ≡ |ψ−1/2〉, with j = 1/2 and j = −1/2, respectively, which form the qubit subspace. In this work, we studied the general n-state problem with all dipolar and onsite interactions included, so that the objective is to optimize the control parameters of the time-dependent physical interactions in order to minimize the probability of leaking out of the qubit subspace and achieve the desired one-qubit gates successfully. The control parameters are obtained using a genetic algorithm which efficiently finds the optimal values for the gate implementation, where the genes are: the magnitude (ϵ0) and direction (ρ) of the electric field, the magnitude of the gate voltage (Vg0), and the pulse width (τv). The fitness is defined as the gate fidelity at the measurement time, so that the best fitness means the best control parameters were found to produce the desired quantum gate. We present our findings and the evolution of the charge density and pseudospin current in the quantum dot under the gate effect.
### Methods
#### Graphene circular quantum dot
The nanostructure we used consists of a graphene layer grown over a semiconductor material which introduces a constant mass term Δ [5]. This allows us to make a confinement (made with a circular electric potential of constant radius R) where a homogeneous magnetic field (B) is applied perpendicular to the graphene plane in order to break the degeneracy between Dirac's points K and K', distinguished by the term τ = +1 and τ = −1, respectively.
The Dirac Hamiltonian with magnetic vector field in polar coordinates is given by [6]:
(1) Hτ = v σ · (p + eA) + τΔσz, with A = (Br/2)φ̂ the vector potential of the homogeneous field in the symmetric gauge
where v is the Fermi velocity (10^6 m/s), b = eB/2, and j, a half-odd integer, is the quantum number of the total angular momentum operator Jz. We need to solve the eigenvalue problem HτΨ = EΨ. Eigenfunctions have a pseudospinor form:
(2) Ψj(r, φ) = e^{ijφ} ( e^{−iφ/2} χA(r), i e^{iφ/2} χB(r) )^T
where χ are hypergeometric functions M (a,b,z) and U (a,b,z) inside or outside of radius R (see [6] for details) (Figure 1).
Figure 1. Radial probability density (lowest states) and qubit subspace density and pseudospin current. (a) Radial probability density plot for the four lowest energy states inside the graphene quantum dot with R = 25 nm and under a homogeneous magnetic field of magnitude B = 3.043 T. The selected computational basis (qubit subspace) is inside the red box. Qubit subspace spatial probability density plot and vector field of the pseudospin current in (b) |0〉 = |ψ1/2〉 and (c) |1〉 = |ψ−1/2〉, respectively.
Due to the constant mass term and broken degeneracy, we obtain two independent Hilbert spaces. Therefore, we can choose the space K for the definition of the computational basis of the qubit to implement the quantum gates and to make the dynamic control following a genetic algorithm procedure.
The wave function in graphene can be interpreted as a pseudospinor of the sublattice of atom type A or B. In order to visualize the physical evolution due to the gate operation, we calculate the pseudospin current as the local expectation value of the Pauli matrices, ⟨σ⟩ = Ψ†σΨ.
The selected states that we choose to form the computational basis for the qubit have energies (Ej): E1/2 = .2492 eV and E−1/2 = .2551 eV (the corresponding radial probability distributions are shown in Figure 1a). The energy gap is E01 = E−1/2 − E1/2 = 5.838 meV. To achieve transitions between these two states with coherent light, the required wavelength is λ = hc/E01 ≈ 212 μm, which is in the range of far-infrared lasers. Also, by controlling the magnetic field B, it is possible to modify this energy gap. We present as a reference point the plots of the probability density and the pseudospin current for the two-dimensional computational basis |0〉 = |ψ1/2〉 (Figure 1b) and |1〉 = |ψ−1/2〉 (Figure 1c), where a change of direction of the pseudospin current and the creation of a hole (null probability near r = 0) are induced when one goes from qubit 0 to 1.
Figure 2. Diagram of genetic algorithm. Initial population of chromosomes randomly created; the fitness is determined for each chromosome; parents are selected according to their fitness and reproduced by pairs, and the product is mutated until the next generation is completed to perform the same process until stop criterion is satisfied.
#### Quantum control: time-dependent potentials
First of all, we have to calculate the matrix representation of the time-dependent interactions in the QD basis. Then, we use the interaction picture to obtain the ordinary differential equations (ODEs) for the time-dependent coefficients, whose squared moduli give the probability of being in each QD state at time t, and finally we obtain the optimal parameters for the gate operation.
#### Electric field: oscillating
These transitions can be induced by a laser directed at the QD carrying a wavelength that resonates with the qubit states in order to trigger and control transitions in the qubit subspace. We introduce an electric dipole interaction [7] using a time-periodic Hamiltonian with frequency ω: Vlaser(t) = eϵ(t)·r, with parameters ϵ(t) = ϵ0 cos ωt, ϵ0 = ϵ0(cos ρ, sin ρ), and r = r(cos φ, sin φ), where ρ is the direction and ϵ0 the magnitude of the electric field, both constant in time. To determine the matrix of dipolar transitions on the basis of the QD states, the following overlap integrals must be calculated:
(3) (Vlaser)lj(t) = eϵ0 cos(ωt) ∫ Ψl†(r, φ) r cos(φ − ρ) Ψj(r, φ) r dr dφ
where l and j are the state indices. In Equation 3, the radial part defines the magnitude of the matrix component, the angular part defines the transition rules, and as a result we get a non-diagonal matrix; this indicates that transitions are only permitted between neighbouring states. The matrix components are complex numbers; for ϵ0 directed along ŷ the component is a pure imaginary number, and along x̂ it is a real number.
#### Voltage pulse on site
This interaction can be applied as a gate voltage inside the QD. In order to modify the electrostatic potential, we use a square pulse of width τv and magnitude Vg0. The Hamiltonian is
(4) Vgate(r, t) = Vg0 θ(R − r) [θ(t) − θ(t − τv)]
(5) (Vgate)lj(t) = δlj Vg0 [θ(t) − θ(t − τv)] ∫0^R (|χA(r)|² + |χB(r)|²) r dr
The matrix components in Equation 5 are diagonal, so this interaction only modifies the on-site energies. Since the Heaviside function θ depends on r in Equation 4, the matrix components are proportional to the probability of being inside the quantum dot, which differs for each eigenstate; this difference can introduce relative phases inside the qubit subspace.
#### One-qubit quantum logic gates
Therefore, we have to solve the dynamics of the QD problem with N states involved, where the control has to minimize the probability of leaking to states outside the qubit subspace in order to approximate the dynamics to the ideal state and implement the one-qubit gates correctly. The total Hamiltonian for both the quantum dot and the time-dependent interactions is H(t) = HQD + Vlaser(t) + Vgate(t), where HQD is the quantum dot part (Equation 1) and Vlaser(t) and Vgate(t) are the time control interactions given by Equations 3 and 4.
We expand the time-dependent solution in terms of the QD states (Equation 2) as Ψ(t) = Σl Cl(t) e^{−iElt/ħ} Ψl. Therefore, the equations for the evolution of the amplitude Cl(t) of being in state l at time t, in the interaction picture, are given by:
(6) iħ dCl(t)/dt = Σj e^{i(El − Ej)t/ħ} [Vlaser(t) + Vgate(t)]lj Cj(t)
The control problem of how to produce the gates becomes a dynamic optimization one, where we have to find the combination of the interaction parameters that produces the one-qubit gates (Pauli matrices). We solve it using a genetic algorithm [8], which allows us to avoid local maxima and converges in a short time over a multidimensional space (four control parameters in our case). The steps in the GA approach are presented in Figure 2, where the key elements that we require to define for our problem are the chromosomes and the fitness.
In our model, the chromosomes in the GA are the arrays of values {Vg0, τv, ϵ0, ρ}, where Vg0 is the voltage pulse magnitude, τv is the voltage pulse width, ϵ0 is the electric field magnitude, and ρ is the electric field direction. The fitness function, as a measure of the gate fidelity, is a real number from 0 to 1 that we define as fitness(tmed) = |⟨Ψobj|Ψ(tmed)⟩|² × |⟨Ψ0|Ψ(2tmed)⟩|², where |Ψobj〉 is the objective or ideal vector state, which is the product of the gate operation (Pauli matrix) on the initial state |Ψ0〉. Then, we evolve the dynamics to the measurement time tmed to obtain |Ψ(tmed)〉. The gate fidelity is the probability of being in the objective vector state at tmed. The fitness combines the gate fidelity at tmed with the probability of being back in the initial state at 2tmed. This gives a number between 0 and 1 indicating how effective the transformation is in taking an initial state to the objective state and back to the initial state in twice the time (the reset phase).
The initial population of chromosomes ({Vg0, τv, ϵ0, ρ}) is randomly created; then the fitness is determined for each chromosome (which requires evolving the coefficients Cl(t) up to the measurement time); parents are selected according to their fitness and reproduced by pairs, and the product is mutated until the next generation is completed; the same process is repeated until a stop criterion is satisfied, as sketched below.
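A minimal, self-contained sketch of this loop (toy matrices, bounds and selection scheme of our own choosing, not the paper's actual parameters or code; simple truncation selection plus mutation stands in for the crossover of [8]):

```python
# Toy GA sketch of the scheme above: chromosomes {Vg0, tau_v, eps0, rho},
# fitness from integrating eq. (6); all matrices and bounds here are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
N, HBAR = 6, 0.6582                                          # hbar in eV*fs
E = np.array([0.2492, 0.2551, 0.262, 0.271, 0.280, 0.290])   # toy energies (eV)
W = (E[1] - E[0]) / HBAR                                     # resonant laser frequency
up = np.diag(np.ones(N - 1), 1)
DX, DY = up + up.T, 1j * (up - up.T)                         # toy dipole couplings (eq. 3)
P = np.linspace(0.95, 0.7, N)                                # toy on-site probabilities (eq. 5)
TMED = 3000.0                                                # measurement time (fs)

def fitness(genes, target, c0):
    vg0, tau_v, eps0, rho = genes
    def rhs(t, c):
        V = eps0 * np.cos(W * t) * (np.cos(rho) * DX + np.sin(rho) * DY) \
            + np.diag(vg0 * P * (t < tau_v))                 # Hermitian control term
        phase = np.exp(1j * (E[:, None] - E[None, :]) * t / HBAR)
        return (-1j / HBAR) * ((phase * V) @ c)              # interaction picture, eq. (6)
    sol = solve_ivp(rhs, (0.0, 2 * TMED), c0.astype(complex),
                    t_eval=[TMED, 2 * TMED], rtol=1e-7, atol=1e-9)
    cm, ce = sol.y[:, 0], sol.y[:, 1]
    return abs(np.vdot(target, cm))**2 * abs(np.vdot(c0, ce))**2

def ga(target, c0, pop=12, gens=10):
    lo = np.zeros(4); hi = np.array([0.05, TMED, 2e-4, 2 * np.pi])
    genes = rng.uniform(lo, hi, (pop, 4))
    for _ in range(gens):
        fit = np.array([fitness(g, target, c0) for g in genes])
        best = genes[np.argsort(fit)[::-1][: pop // 2]]      # truncation selection
        genes = best[rng.integers(0, len(best), pop)]        # clone parents
        genes = np.clip(genes + rng.normal(0, 0.02, genes.shape) * (hi - lo), lo, hi)
        genes[0] = best[0]                                   # elitism
    return best[0], fit.max()

c0 = np.zeros(N); c0[0] = 1.0                                # |0>
target = np.zeros(N); target[1] = 1.0                        # sigma_x: |0> -> |1>
print(ga(target, c0))
```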
### Results and discussion
The control dynamics were computed considering N = 6 states, two of which are used as the qubit basis, so that the effect of the interaction stays inside the qubit subspace. The gate operation is completed in a time window that depends on ϵ0, and the control parameters are defined to achieve the operation inside a determined time window. The possible values of the electric field direction ρ are set from 0 to 2π, the pulse width τv domain is set from 0 to the time window, and the magnitude Vg0 is set from 0 to an arbitrary value. The genetic algorithm procedure is executed for the quantum gates σx and σy. The fitness reaches a value close to 1 within about 30 generations for both gates. The optimal parameters found for quantum gate σx are Vg0 = .0003685, τv = 4215.95, ϵ0 = .0000924, and ρ = .9931π; for σy, they are Vg0 = .0355961, τv = 326.926, ϵ0 = .0000735, and ρ = 1.5120π. For the quantum gate σz, the genetic algorithm is not needed because in this case ϵ0 = 0, so Equation 6 is an uncoupled ordinary differential equation (ODE) with an explicit solution. To achieve this gate transformation in a determined time window, we can calculate Vg0 directly, so that the control values for this quantum gate are Vg0 = .1859, τv = 5,000, ϵ0 = 0, and ρ = 0. In Figure 3, we plot the time evolution of the gate fidelity or fitness for the three gates. We observe good convergence close to 1 at the measurement time and again at the reset phase. To see the state transition and the quantum gate effect in space, it is convenient to plot the probability density in the quantum dot and the corresponding pseudospin current, where we see how the wave packet follows a different time trajectory according to the gate transformation. For instance, note the direction and time of creation of the characteristic hole (null probability) in the middle of the qubit one state, which corresponds more or less to an equal superposition of the qubit zero and one (column 2 and row 2 in Figure 4, right). This process has to be different for σy because it introduces an imaginary phase in the evolution, which shows up as the change of the arrow directions in the pseudospin current. The same situation arises for σz (result not shown), but in this case we use an equal superposition of the qubit zero and one as the initial state, which is similar to the plot of column 2 and row 2 in Figure 4 (left); the gate effect of introducing the minus sign in the one state then appears explicitly, reaching a rotated state similar to the plot of column 2 and row 2 in Figure 4 (right).
Figure 3. Time evolution of gate fidelity or fitness for the three gates. Gate fidelity for σx (top), σy (middle), and σz (bottom); the gate fidelity (FσI in blue, where I is {x,y,z}) is the probability of being in the objective vector state; the measurement time is shown in orange.
Figure 4. Time evolution of probability density and pseudospin current for the quantum gate σx and σy operation. Time evolution of the probability density and pseudospin current due to the effect of the produced quantum gate σx (left side) and σy (right side), with initial state |Ψ0〉 = |0〉 (Figure 1b).
### Conclusions
We show that with a proper selection of time-dependent interactions, one is able to keep the leakage probability out of the qubit subspace in a graphene QD small. We have been able to optimize the control parameters (electric field and gate voltage) with a GA in order to keep the electron inside the qubit subspace and successfully produce the three one-qubit gates. In our results, we see that with the genetic algorithm one can achieve good fidelity, and we found that small voltage pulses are required for σx and σy and improve gate fidelity, therefore making our proposal of the graphene QD model for quantum gate implementation viable. Finally, in terms of the physical process, the visualization of the effects of quantum gates σx and σy is very useful, and clearly both achieve the ideal states. The difference between them (Figure 4) is seen in the different trajectories made by the wave packet and pseudospin current during the evolution, due to the relative phase introduced by gate σy.
### Competing interests
The authors declare that they have no competing interests.
### Authors’ contributions
The work presented here was carried out in collaboration among all authors. FR and APG defined the research problem. GA carried out the calculations under FR and APG's supervision. All of them discussed the results and wrote the manuscript. All authors read and approved the final manuscript.
### Acknowledgments
The authors would like to thank DGAPA and project PAPPIT IN112012 for financial support and sabbatical scholarship for FR and to Conacyt for the scholarship granted to GA.
### References
1. Ladd TD, Jelezko F, Laflamme R, Nakamura Y, Monroe C, O’Brien JL: Quantum computers (review). Nature 2010, 464:45-53.
2. Vandersypen LM, Steffen M, Breyta G, Yannoni CS, Sherwood MH, Chuang IL: Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance. Nature 2001, 414:883-887.
3. Trauzettel B, Bulaev DV, Loss D, Burkard G: Spin qubits in graphene quantum dots. Nature Physics 2007, 3:192-196.
4. Guo G-P, Lin Z-R, Tao T, Cao G, Li X-P, Guo G-C: Quantum computation with graphene nanoribbon. New Journal of Physics 2009, 11:123005.
5. Zhou SY, Gweon G-H: Substrate-induced band gap opening in epitaxial graphene. Nature Materials 2007, 6:770-775.
6. Recher P, Nilsson J, Burkard G, Trauzettel B: Bound states and magnetic field induced valley splitting in gate-tunable graphene quantum dots. Physical Review B 2009, 79:085407.
7. Fox M: Optical Properties of Solids. Appendix B: Quantum theory of radiative absorption and emission. Oxford: Oxford University Press; 2001:266-270.
8. Chong EKP, Zak SH: An Introduction to Optimization. Chapter 14: Genetic Algorithms. 2nd edition. Weinheim: Wiley; 2001.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.927214503288269, "perplexity": 1021.1792272954405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065375.30/warc/CC-MAIN-20150827025425-00054-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://www.ck12.org/book/CK-12-Trigonometry-Concepts/r1/section/1.2/
|
# 1.2: Identifying Sets of Pythagorean Triples
While working as an architect's assistant, you're asked to utilize your knowledge of the Pythagorean Theorem to determine if the lengths of a particular triangular brace support qualify as a Pythagorean Triple. You measure the sides of the brace and find them to be 7 inches, 24 inches, and 25 inches. Can you determine if the lengths of the sides of the triangular brace qualify as a Pythagorean Triple? When you've completed this Concept, you'll be able to answer this question with certainty.
### Guidance
Pythagorean Triples are sets of whole numbers for which the Pythagorean Theorem holds true. The most well-known triple is 3, 4, 5. This means that 3 and 4 are the lengths of the legs and 5 is the hypotenuse. The largest length is always the hypotenuse. If we were to multiply any triple by a constant, this new triple would still represent sides of a right triangle. Therefore, 6, 8, 10 and 15, 20, 25, among countless others, would represent sides of a right triangle.
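As a quick numerical check of both facts, the defining equation and the scaling property, a one-line function suffices (a small illustrative sketch; the function name is our own):

```julia
# true when a² + b² = c², with c the hypotenuse (the largest length)
ispythagorean(a, b, c) = a^2 + b^2 == c^2

ispythagorean(3, 4, 5)      # true
ispythagorean(6, 8, 10)     # true: 2 × (3, 4, 5)
ispythagorean(15, 20, 25)   # true: 5 × (3, 4, 5)
```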
#### Example A
Determine if the following lengths are Pythagorean Triples.
7, 24, 25
Solution: Plug the given numbers into the Pythagorean Theorem.
$$\begin{aligned} 7^2 + 24^2 &\overset{?}{=} 25^2 \\ 49 + 576 &= 625 \\ 625 &= 625 \end{aligned}$$
Yes, 7, 24, 25 is a Pythagorean Triple and sides of a right triangle.
#### Example B
Determine if the following lengths are Pythagorean Triples.
9, 40, 41
Solution: Plug the given numbers into the Pythagorean Theorem.
$$\begin{aligned} 9^2 + 40^2 &\overset{?}{=} 41^2 \\ 81 + 1600 &= 1681 \\ 1681 &= 1681 \end{aligned}$$
Yes, 9, 40, 41 is a Pythagorean Triple and sides of a right triangle.
#### Example C
Determine if the following lengths are Pythagorean Triples.
11, 56, 57
Solution: Plug the given numbers into the Pythagorean Theorem.
$$\begin{aligned} 11^2 + 56^2 &\overset{?}{=} 57^2 \\ 121 + 3136 &\overset{?}{=} 3249 \\ 3257 &\ne 3249 \end{aligned}$$
No, 11, 56, 57 do not represent the sides of a right triangle.
### Vocabulary
Pythagorean Triple: A Pythagorean Triple is a set of three whole numbers $a, b$ and $c$ that satisfy the Pythagorean Theorem, $a^2 + b^2 = c^2$.
### Guided Practice
1. Determine if the following lengths are Pythagorean Triples.
5, 10, 13
2. Determine if the following lengths are Pythagorean Triples.
8, 15, 17
3. Determine if the following lengths are Pythagorean Triples.
11, 60, 61
Solutions:
1. Plug the given numbers into the Pythagorean Theorem.
$$\begin{aligned} 5^2 + 10^2 &\overset{?}{=} 13^2 \\ 25 + 100 &\overset{?}{=} 169 \\ 125 &\ne 169 \end{aligned}$$
No, 5, 10, 13 is not a Pythagorean Triple and not the sides of a right triangle.
2. Plug the given numbers into the Pythagorean Theorem.
$$\begin{aligned} 8^2 + 15^2 &\overset{?}{=} 17^2 \\ 64 + 225 &= 289 \\ 289 &= 289 \end{aligned}$$
Yes, 8, 15, 17 is a Pythagorean Triple and sides of a right triangle.
3. Plug the given numbers into the Pythagorean Theorem.
$$\begin{aligned} 11^2 + 60^2 &\overset{?}{=} 61^2 \\ 121 + 3600 &= 3721 \\ 3721 &= 3721 \end{aligned}$$
Yes, 11, 60, 61 is a Pythagorean Triple and sides of a right triangle.
### Concept Problem Solution
Since you know that the sides of the brace have lengths of 7, 24, and 25 inches, you can substitute these values in the Pythagorean Theorem. If the Pythagorean Theorem is satisfied, then you know with certainty that these are indeed sides of a triangle with a right angle:
$$\begin{aligned} 7^2 + 24^2 &\overset{?}{=} 25^2 \\ 49 + 576 &= 625 \\ 625 &= 625 \end{aligned}$$
The Pythagorean Theorem is satisfied with these values as the lengths of the sides of a right triangle. Since each of the sides is a whole number, this is indeed a set of Pythagorean Triples.
### Practice
1. Determine if the following lengths are Pythagorean Triples: 9, 12, 15.
2. Determine if the following lengths are Pythagorean Triples: 10, 24, 36.
3. Determine if the following lengths are Pythagorean Triples: 4, 6, 8.
4. Determine if the following lengths are Pythagorean Triples: 20, 99, 101.
5. Determine if the following lengths are Pythagorean Triples: 21, 99, 101.
6. Determine if the following lengths are Pythagorean Triples: 65, 72, 97.
7. Determine if the following lengths are Pythagorean Triples: 15, 30, 62.
8. Determine if the following lengths are Pythagorean Triples: 9, 39, 40.
9. Determine if the following lengths are Pythagorean Triples: 48, 55, 73.
10. Determine if the following lengths are Pythagorean Triples: 8, 15, 17.
11. Determine if the following lengths are Pythagorean Triples: 13, 84, 85.
12. Determine if the following lengths are Pythagorean Triples: 15, 16, 24.
13. Explain why it might be useful to know some of the basic Pythagorean Triples.
14. Prove that any multiple of 5, 12, 13 will be a Pythagorean Triple.
15. Prove that any multiple of 3, 4, 5 will be a Pythagorean Triple.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 11, "texerror": 0, "math_score": 0.7236358523368835, "perplexity": 773.9435072281684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929012.53/warc/CC-MAIN-20150521113209-00127-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://discourse.julialang.org/t/specify-jumps-in-a-heston-like-model/42422
|
# Specify jumps in a Heston-like model?
I am working with a model similar to the Heston model, and would like to add jumps. I would like to have the jump rate which depend on the ordinary volatility as
`rate1(u,p,t) = λ0 .+ λ1.*exp(u[2]/2.0)`
I understand that this should be specified as a variable rate jump. However, I can’t seem to get the code to run if I do that. It does run if I specify it as a constant rate jump, but I’m doubting if the solution is correct when I do that. The code for the jump part follows. Any pointers would be welcome!
(this post is substantially similar to an earlier post, but with the diffeq tag added, sorry for the noise)
```julia
## jump in price
rate1(u,p,t) = λ0 .+ λ1.*exp(u[2]/2.0) # volatility dependent jump rate
# jump is normal with st. dev. equal to λ1 times current st. dev.
affect1!(integrator) = (integrator.u[1] = integrator.u[1].+randn(size(integrator.u[1])).*λ2.*exp(integrator.u[2]./2.0))
# this works:
jump1 = ConstantRateJump(rate1,affect1!)
# this does not
#jump1 = VariableRateJump(rate1,affect1!)
jump_prob = JumpProblem(prob,Direct(), jump1)
```
You may also want to look at https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl for simulating this, using a trivial flow. At least, you will have a way to compare to another numerical solution.
That doesn’t support SDEs?
@mcreel I was planning to look at your post today. Sorry about that!
He did not give the flow in between jumps, I assumed it is constant
It’s from his other post, and the solver is `SRIW1`
Here’s the whole file, if it would help. As is, it runs and gives output that looks as expected, I’m just not sure if the jumps are occurring at the proper rate. I also don’t know how to determine when a jump has occurred.
```julia
using DifferentialEquations, Plots
function MyProblem(μ0,μ1,κ,α,σ,ρ,u0,tspan)
f = function (du,u,p,t)
du[1] = μ0 + μ1*(u[2]-α)/σ # drift in log prices
du[2] = κ*(α-u[2]) # mean reversion in shocks
end
g = function (du,u,p,t)
du[1] = exp(u[2]/2.0)
du[2] = σ
end
Γ = [1.0 ρ;ρ 1.0] # Covariance Matrix
noise = CorrelatedWienerProcess!(Γ,tspan[1],zeros(2),zeros(2))
sde_f = SDEFunction{true}(f,g)
SDEProblem(sde_f,g,u0,tspan,noise=noise)
end
function main()
# assume trading period is 1/3 of day (8 hours)
# but that latent price evolves continuously
# observed return is daily difference of log price at
# closing time
TradingDays = 1000 # total days in sample
Days = Int(TradingDays*7/5) # calendar days
MinPerDay = 1440 # minutes per day
MinPerTic = 5 # minutes between tics, lower for better accuracy
tics = Int(MinPerDay/MinPerTic) # number of tics in day
dt = 1/tics # divisions per day
closing = Int(floor(tics/3)) # closing tic: closing happens after 1/3 of day
# parameters
μ0 = 0.0
μ1 = 0.0
κ = 0.1
α = 0.15
σ = 0.15
ρ = -0.7
λ0 = 1.0 # constant in jump rate
λ1 = 1.0 # slope in jump rate
λ2 = 3.0 # size of jumps
σme = 0.05 # standard dev of measurement error in returns
u0 = [0;α]
prob = MyProblem(μ0, μ1, κ, α, σ, ρ, u0, (0.0,Days+1.0)) # pad one day so the final close falls inside tspan
## jump in price
rate(u,p,t) = λ0 .+ λ1.*exp(u[2]/2.0) # volatility dependent jump rate
# jump is normal with st. dev. equal to λ1 times current st. dev.
affect1!(integrator) = (integrator.u[1] = integrator.u[1].+randn(size(integrator.u[1])).*λ2.*exp(integrator.u[2]./2.0))
# this works:
jump = ConstantRateJump(rate,affect1!)
# this does not
#jump = VariableRateJump(rate,affect1!)
jump_prob = JumpProblem(prob,Direct(), jump)
sol = solve(jump_prob, SRIW1(), dt=dt, adaptive=false) # SRIW1, the solver named in the related thread
# get log price at end of trading days
z = zeros(TradingDays+1) # log price at the close of each trading day
global j = 0 # counter for day of week
global k = 0 # counter for trading days
for i = 0:(Days)
# set day of week, and record if it's a trading day
j +=1
if j<6
k += 1
z[k]=sol(i + closing*dt)[1] # interpolate the solution at that day's closing time
end
if j==7 # restart the week if Sunday
j = 0
end
end
z[2:end]-z[1:end-1] + σme*randn(TradingDays) # returns are diff of log price
end
z = main()
plot(z)
```
This one is going to be quite tough. The first thing to work out is that VariableRateJumps grow the size of the system, so the problem was that you needed to define a 3 noise process for this to work:
```julia
using StochasticDiffEq, DiffEqJump, DiffEqNoiseProcess, Plots
function MyProblem(μ0,μ1,κ,α,σ,ρ,u0,tspan)
f = function (du,u,p,t)
du[1] = μ0 + μ1*(u[2]-α)/σ # drift in prices
du[2] = κ*(α-u[2]) # mean reversion in shocks
end
g = function (du,u,p,t)
du[1] = exp(u[2]/2.0)
du[2] = σ
end
Γ = [1 ρ 0;ρ 1 0;0 0 0] # Covariance Matrix
noise = CorrelatedWienerProcess!(Γ,tspan[1],zeros(3),zeros(3))
sde_f = SDEFunction{true}(f,g)
SDEProblem(sde_f,g,u0,tspan,noise=noise)
end
μ0 = 0.0
μ1 = 0.0
κ = 0.05
α = 0.2
σ = 0.2
ρ = -0.7
λ0 = 2.0
λ1 = 3.0
u0 = [0;α]
dt = 0.01
prob = MyProblem(μ0, μ1, κ, α, σ, ρ, u0, (0.0,1000.0))
## jump in price
#rate1(u,p,t) = λ0 # constant jump rate
rate1(u,p,t) = λ0.*exp(u[2]) # volatility dependent jump rate
# jump is normal with st. dev. equal to λ1 times current st. dev.
affect1!(integrator) = (integrator.u[1] = integrator.u[1].+randn(size(integrator.u[1])).*λ1.*exp(integrator.u[2]./2.0))
# this works:
#jump1 = ConstantRateJump(rate1,affect1!)
# this does not
jump1 = VariableRateJump(rate1,affect1!)
jump_prob = JumpProblem(prob,Direct(), jump1)
```
However, this now errors at the jump stage because continuous callbacks require pulling back the noise process, so it requires the definition of a bridging distribution. If you're willing to work out the bridging distribution for the correlated noise process then we're done. Basically, we need the distribution of Wt in the middle, given that you know W0 and Wh.
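For reference, for a Wiener process with covariance Γ the bridge is Gaussian with linearly interpolated mean and covariance (t(h-t)/h)Γ. A minimal sampling sketch (the function name is illustrative, and it assumes Γ is positive definite; a rank-deficient Γ like the 3-noise one above would need a reduced factorization):

```julia
using LinearAlgebra

# Sample W(t) given W(0) = w0 and W(h) = wh, 0 < t < h, for a correlated
# Wiener process with (positive definite) covariance matrix Γ.
function bridge_sample(w0, wh, t, h, Γ)
    μ = w0 .+ (t/h) .* (wh .- w0)                  # linearly interpolated mean
    L = cholesky(Symmetric((t*(h-t)/h) .* Γ)).L    # factor of the bridge covariance
    return μ .+ L * randn(length(w0))
end
```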
Thanks for the explanation. I will fall back to using a constant rate jump, which will be good enough for my purpose, which is simply to illustrate an econometric estimator. I do have one remaining question, though: how can one see the realization of jumps in a simulation, their timing and size? Thanks again.
If you look for non-unique time points you can see the jump at those spots.
Thanks, that does the trick. Here’s a plot of returns with the jumps highlighted in red:
In this example, a jump is a normally distributed r.v. with mean zero, so not all jumps are outliers.
If anyone would like to see how to find jumps, here's the approach: check for repeated time points in the solution and take the state difference across them.
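A minimal sketch of that check, assuming the `sol` object returned from solving `jump_prob` above (this is illustrative, not the original snippet):

```julia
# Jump callbacks save the state twice at each jump time (just before and
# just after), so repeated entries in sol.t mark where jumps occurred.
jumpidx   = [i for i in 1:length(sol.t)-1 if sol.t[i+1] == sol.t[i]]
jumptimes = sol.t[jumpidx]
jumpsizes = [sol.u[i+1][1] - sol.u[i][1] for i in jumpidx]  # jump in log price
```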
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8844525814056396, "perplexity": 4652.930442880202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740423.36/warc/CC-MAIN-20200815005453-20200815035453-00396.warc.gz"}
|
https://scicomp.stackexchange.com/questions/2294/large-scale-generalized-eigenvalue-problem-with-low-rank-lhs-matrix?rq=1
|
# Large-scale generalized eigenvalue problem with low rank LHS matrix
Assume that we have generalized eigenvalue problem:
$B^HB\textbf{x} = \lambda A\textbf{x}$
where $A$ is an $n \times n$ Hermitian sparse matrix ($n$ is very large, so we do not have $A^{-1}$ but can solve using iterative methods) and full-rank, and $B$ is a $2 \times n$ matrix such that $B^HB$ is also $n \times n$ but only rank 2. Thus, we know that this problem can only have 2 non-zero eigenvalues. Is there any simple way of finding the two eigenpairs corresponding to nonzero eigenvalues by taking advantage of the very low rank of $B^HB$? Assume that we have the two eigenvectors of $B$.
If I am only interested in the eigenvector corresponding to the largest eigenvalue, is there a faster way of finding it than using simple power iteration on the transformed standard eigenvalue problem: $A^{-1}B^HB\textbf{x} = \lambda\textbf{x}$?
Thanks!
This answer is essentially a fix of the approach suggested by @WolfgangBangerth, as there is not enough space in the comments.
Starting from $$B^H B x = \lambda A x,$$ if we are interested in eigenpairs corresponding to nonzero eigenvalues, then we must have that $B^H B x$ lies in the range of $A$, and $Ax$ lies in the range of $B^H B$, which is to say that, since $A$ is invertible, $$B^H B x \in \mathrm{Range}(A) = \mathbf{C}^n,$$ and $$Ax \in \mathrm{Range}(B^H B) = \mathrm{span}(B^H).$$ Now, the first constraint is trivially satisfied, but we must ensure that $Ax \in \mathrm{span}(B^H)$, which is equivalent to the constraint $$x \in \mathrm{span}(A^{-1} B^H).$$ Then if the columns of a unitary matrix $Q$ span the columns of $A^{-1}B^{H}$, we have that $$x = Q Q^H x$$ for any eigenvector corresponding to a nonzero eigenvalue.
We are now ready to use the mechanism from Wolfgang's approach:
1. Compute $W := A^{-1} B^H$ through two (preconditioned) Krylov solves
2. Compute $[Q,R]=\mathrm{qr}(W)$
3. Form $K := (B Q)^H (B Q)$ and $M := Q^H (A Q)$
4. Solve the $2 \times 2$ eigenvalue problem $K U = M U \Lambda$
5. Form the interesting global eigenvectors, $Z := Q U$.
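For concreteness, here is a sketch of these five steps in Julia. The dense backslash solve stands in for the two preconditioned Krylov solves one would actually use when $A$ is large and sparse, and the function name is illustrative:

```julia
using LinearAlgebra

# A: n×n Hermitian (full rank), B: 2×n; returns the two nonzero
# eigenpairs of B^H B x = λ A x.
function lowrank_gen_eigs(A, B)
    W  = A \ Matrix(B')        # 1. W := A^{-1} B^H (two solves)
    Q  = Matrix(qr(W).Q)       # 2. orthonormal basis for range(W)
    BQ = B * Q
    K  = BQ' * BQ              # 3. K := (BQ)^H (BQ), a 2×2 matrix
    M  = Q' * (A * Q)          #    M := Q^H A Q, also 2×2
    F  = eigen(K, M)           # 4. 2×2 generalized eigenproblem K U = M U Λ
    Z  = Q * F.vectors         # 5. lift the eigenvectors back to C^n
    return F.values, Z
end
```

All other eigenvalues of the pencil are zero, so these two pairs are the complete nontrivial spectrum.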
• Thanks, Jack! Are you sure you need to do QR? I don't think the two vectors comprising the W matrix need to necessarily be orthonormal? (A quick test shows that it works even if you solve $W^HB^HBW\textbf{y} = \lambda W^HAW\textbf{y}$ ) – Costis May 22 '12 at 2:08
• The QR decomposition for an $m \times n$ matrix, $m \ge n$, is $O(mn^2)$. In this case, $n=2$, so the cost is linear and should be dominated by the Krylov solves. I am skeptical of how this would work without a QR decomposition. – Jack Poulson May 22 '12 at 2:26
• $W=QR$, so substituting: $R^HQ^HB^BQR\textbf{y}=\lambda R^HQ^HAQR\textbf{y}$. Multiply both sides by $R^{-H}$ and substitute $\textbf{x}=R\textbf{y}$. You end up with $Q^HB^BBQ\textbf{x}=\lambda Q^HAQ\textbf{x}$ which has the same eigenvalues as if you just used W instead of Q. – Costis May 22 '12 at 2:34
• I can get the eigenvector by just doing: $\textbf{w}=W\textbf{y}$ which implicitly multiplies by R since $W=QR$. Just tried a quick test case and it seems to work, although as you said I think it would be trivial do QR as compared to doing the Krylov solves. – Costis May 22 '12 at 2:45
• Ah, good point! I would still rather work with $Q$ though, as the cost of computing it is insignificant, and it will be more numerically stable. – Jack Poulson May 22 '12 at 2:48
If $B$ is $2\times n$, then the only two non-trivial eigenvectors (i.e. the eigenvectors corresponding to the two non-zero eigenvalues) can be written as linear combinations of the vectors that form the two rows of $B$. Let's call these two vectors $b_1, b_2$ so that $B=\left[\begin{matrix}b_1^T\\b_2^T\end{matrix}\right]$.
Now, let $P \in {\mathbb R}^{2\times n}$ be the projector from ${\mathbb R}^n$ onto the two-dimensional space spanned by $b_1,b_2$. Since we are only interested in vectors in this space, we know that the two non-trivial eigenvectors must satisfy $x = P^TPx$. The eigenvalue problem can then be written as $$B^H B P^T P x = \lambda A P^T P x.$$ Even though this linear system has $n$ rows, it is really only a two-dimensional problem since we can only determine only two components of $x$. The remainder of the linear system is over-determined, but we can select the two independent equations by projecting onto the non-trivial subspace: $$P B^H B P^T P x = \lambda P A P^T P x.$$
In other words, you only have to solve the $2 \times 2$ eigenvalue problem $$P B^H B P^T y = \lambda (P A P^T) y.$$ This is easy to solve since the matrices involved are only $2\times 2$ and the matrix on the right can easily be computed using just two matrix-vector and two vector-vector products.
• I don't think your assumption that $x=P^T Px$ is valid for non-trivial $A$. – Jack Poulson May 21 '12 at 20:14
• I think that $P$ will need to be modified to span a space including the columns of $B^H$ and $A^{-1} B^H$, which is at most rank 4, and only requires two solves with $A$ to set up. – Jack Poulson May 21 '12 at 21:37
• Hmmm.. I think you only need the columns of $A^{-1}B^H$ actually, so if you take $P=A^{-1}B^H$ Wolfgang's approach will work. – Costis May 22 '12 at 1:14
• @Costis: I think you are right. I have a nice explanation of why which I will post as an answer, as there is not enough space here. – Jack Poulson May 22 '12 at 1:31
• I feel like the nomenclature is slightly unclear. $P$ cannot be a projector, because $P$ is neither square, nor idempotent. For the sake of clarity, the relevant (orthogonal) projector appears to be $P^{T}P$. Your explanation also seems to implicitly rely on knowing that the eigenvectors of $B^{H}B$ form an orthonormal basis, which is why an orthogonal projector is appropriate. – Geoff Oxberry May 22 '12 at 2:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532991647720337, "perplexity": 193.05996823646095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991514.63/warc/CC-MAIN-20210518191530-20210518221530-00569.warc.gz"}
|
https://physics.stackexchange.com/questions/108394/r-symmetry-commutator
|
# R-symmetry commutator
I've seen the claim made several placed; Terning's "Modern Supersymmetry" p. 5 on N=1 SUSY algebra states it as well as anyone:
The SUSY algebra is invariant under a multiplication of $Q_\alpha$ by a phase, so in general there is one linear combination of $U(1)$ charges, called the $R$-charge, that does not commute with $Q$ and $Q^\dagger$:
$[Q_\alpha, R] = Q_\alpha, \qquad [Q^\dagger_{\dot{\alpha}}, R] = -Q^\dagger_{\dot{\alpha}}$
The first statement is straightforward to see. But
(1) Why is there is one linear combination of charges that does not commute?
(2) How do we arrive at these commutators? (I imagine that the generators can be rescaled to give the coefficient $\pm1$, but I would like a clearer explanation.)
I spoke to a peer who said that the commutation relations could be found in a very general, mathematically heavy treatment of the most general possible SUSY algebra. Is there some easier way to understand?
The point is that the SUSY algebra, \begin{align} & \left\{ Q_\alpha, Q_\beta \right\} = \left\{ \bar{Q}_{\dot{\alpha}}, \bar{Q}_{\dot{\beta}} \right\} = 0 \\ & \left\{ Q_\alpha, \bar{Q}_{\dot{\beta}} \right\} = 2 \sigma^\mu_{\alpha \dot{\beta}} P_\mu \end{align} is invariant under multiplication of $Q_\alpha$ by a phase, \begin{align} Q_\alpha & \rightarrow e^{-i\phi} Q_\alpha \\ \bar{Q}_{\dot{\alpha}} & \rightarrow e^{i\phi} \bar{Q}_{\dot{\alpha}} \end{align} This means that you can have a SUSY invariant theory but still have an additional symmetry which differentiates between bosons and fermions (since it doesn't need to commute with $Q_\alpha$).
To see this explicitly, consider the effect of an $R$ symmetry transformation on $Q_\alpha$: \begin{align} e^{-iR\phi} Q_\alpha e^{iR\phi} &= e^{-i\phi} Q_\alpha \\ \left(1 - iR\phi + \dots\right) Q_\alpha \left(1 + iR\phi + \dots\right) &= \left(1 - i\phi + \dots\right) Q_\alpha \\ i\left[Q_\alpha, R\right]\phi &= -i\phi Q_\alpha \\ \left[Q_\alpha, R\right] &= -Q_\alpha \end{align} where the third line follows from matching the terms of first order in $\phi$. Thus this phase shift symmetry implies that the commutator between the $R$ symmetry generator and $Q_\alpha$ is nontrivial, and hence bosons and fermions can have different $R$ charges. This is what makes $R$ symmetries so special. This discussion is likely more complicated for ${\cal N} > 1$.
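Since $R$ is Hermitian, the companion relation for $\bar{Q}_{\dot{\alpha}}$ follows by taking the Hermitian conjugate of this result (the overall signs relative to the convention quoted in the question just correspond to the opposite choice of phase, $\phi \to -\phi$): \begin{align} \left[Q_\alpha, R\right]^\dagger &= \left(Q_\alpha R - R Q_\alpha\right)^\dagger = R \bar{Q}_{\dot{\alpha}} - \bar{Q}_{\dot{\alpha}} R = -\left[\bar{Q}_{\dot{\alpha}}, R\right] \\ \left(-Q_\alpha\right)^\dagger &= -\bar{Q}_{\dot{\alpha}} \quad \Rightarrow \quad \left[\bar{Q}_{\dot{\alpha}}, R\right] = \bar{Q}_{\dot{\alpha}} \end{align}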
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999788224697113, "perplexity": 764.8367787836249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00583.warc.gz"}
|
http://www.ck12.org/geometry/Applications-of-Cosine/lesson/Determine-and-Use-the-Cosine-Ratio/r8/
|
# Applications of Cosine
# Determine and Use the Cosine Ratio
Do you know how to use cosines when problem solving? Take a look at this dilemma.
A triangle has a hypotenuse of 4.5 inches. Angle A is equal to 40 degrees. Find the length of the adjacent side.
To figure this out, you will need to know how to use angle measures and cosines. You will learn how to accomplish this task in this Concept.
### Guidance
A trigonometric ratio for a specific angle will remain constant no matter how large or small the triangle is. The idea is that the sides will always be in proportion to each other. So, if you know the measure of an angle (and can therefore identify the value of a trigonometric ratio) and the value of one side, you can use trigonometry to calculate the lengths of other sides.
The trick is to use good algebra technique, and make sure that every time you set up a ratio, you are putting the values and variables in the correct places.
You can find trigonometric ratios by using your calculator.
You understand trigonometric ratios and have had a chance to practice reading specific values out of a table.
You can find the ratio for any trigonometric value using your calculator. Take a moment to locate the buttons for sine, cosine, and tangent on the calculator. Keep in mind that sine is usually abbreviated as sin, cosine as cos, and tangent as tan.
Press the key of the ratio you want to find, and enter the angle in question. If you hit enter, or calculate, the calculator will show you the value of that specific ratio.
Let's look at finding the cosine ratio by using a calculator.
$\text{cosine} \ 23^{\circ}$
You can find the values for each ratio using your calculator. When dealing with large decimals values, it is usually best to round the numbers to the nearest thousandth. It gives you a reasonably accurate value without being too long of a number to work with.
The cosine of $23^{\circ}$ is 0.92050485345244..., or about 0.921.
Now as we work with cosines, we will be using given information to find the length of the adjacent side of a right triangle.
Remember that the adjacent side is the side next to the angle that we are working with. As you recall, the ratio of cosine is $\frac{adjacent}{hypotenuse}$ .
If you know the cosine value of the angle in question, and the length of the hypotenuse, you can find the measure of the adjacent side.
Look at the algebraic situation below.
$$\begin{aligned} \text{cosine}\ \angle X &= \frac{\text{adjacent}}{\text{hypotenuse}} \\ \text{cosine}\ \angle X \times \text{hypotenuse} &= \frac{\text{adjacent}}{\text{hypotenuse}} \times \text{hypotenuse} \\ \text{cosine}\ \angle X \times \text{hypotenuse} &= \text{adjacent} \end{aligned}$$
If you multiply the cosine of any angle $X$ and the length of the hypotenuse, the result is the length of the adjacent side.
Write this statement in your notebook. Be sure to include that it is for cosines.
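For readers who like to check such calculations numerically, here is a small illustrative snippet (the helper name is our own; Julia's built-in `cosd` takes the angle in degrees, matching the lesson):

```julia
# adjacent = cos(angle) × hypotenuse, with the angle given in degrees
adjacent(angle_deg, hypotenuse) = cosd(angle_deg) * hypotenuse

adjacent(14.5, 5)    # ≈ 4.84, the worked example below
adjacent(40.0, 4.5)  # ≈ 3.45 (the lesson rounds cos 40° to .77 first, giving 3.46)
```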
Take a look at this situation.
What is the length of side $BC$ in the triangle below?
Use the following equation to find the length of the side adjacent to angle $B$ . Notice that to find the length of the adjacent side that you will first need to find the cosine for angle $B$ . Then you can multiply that answer with the length of the hypotenuse. This will give you the measurement of the side next to or adjacent to the angle.
$$\begin{aligned} \text{cosine}\ \angle B \times \text{hypotenuse} &= \text{adjacent} \\ \text{cosine}\ 14.5^{\circ} \times 5 &= \text{adjacent} \\ 0.968 \times 5 &= \text{adjacent} \\ 4.84 &= \text{adjacent} \end{aligned}$$
The length of side $BC$ is 4.84 units.
Use a calculator to find each cosine. You may round to the nearest hundredth.
#### Example A
Cosine $45^{\circ}$
Solution: $.71$
#### Example B
Cosine $62^{\circ}$
Solution: $.47$
#### Example C
Cosine $22^{\circ}$
Solution: $.93$
Now let's go back to the dilemma from the beginning of the Concept.
To work through this dilemma, we can use the following equation and solve.
$$\begin{aligned} \text{cosine}\ \angle A \times \text{hypotenuse} &= \text{adjacent} \\ \text{cosine}\ 40^{\circ} \times 4.5 &= \text{adjacent} \\ .77 \times 4.5 &= \text{adjacent} \\ 3.46 &= \text{adjacent} \end{aligned}$$
The missing length of the adjacent side is 3.46 inches.
### Vocabulary
Sine
a ratio between the opposite side and the hypotenuse of a given angle.
Cosine
a ratio between the adjacent side and the hypotenuse of a given angle.
Tangent
a ratio between the opposite side and the adjacent side of a given angle.
Trigonometric Ratio
used to find missing side lengths of right triangles when angle measures have been given.
### Guided Practice
Here is one for you to try on your own.
A triangle has a hypotenuse of 7.5 inches. Angle A is equal to 55 degrees. Find the length of the adjacent side.
Solution
To do this, we can use the following equation.
$$\begin{aligned} \text{cosine}\ \angle A \times \text{hypotenuse} &= \text{adjacent} \\ \text{cosine}\ 55^{\circ} \times 7.5 &= \text{adjacent} \\ .57 \times 7.5 &= \text{adjacent} \\ 4.27 &= \text{adjacent} \end{aligned}$$
The length of the adjacent side is 4.27 inches.
### Practice
Directions: Use a calculator to find each cosine. You may round to the nearest hundredth.
1. $\text{Cosine} \ 33^{\circ}$
2. $\text{Cosine} \ 29^{\circ}$
3. $\text{Cosine} \ 73^{\circ}$
4. $\text{Cosine} \ 88^{\circ}$
5. $\text{Cosine} \ 50^{\circ}$
6. $\text{Cosine} \ 67^{\circ}$
7. $\text{Cosine} \ 42^{\circ}$
8. $\text{Cosine} \ 18^{\circ}$
9. $\text{Cosine} \ 9^{\circ}$
Directions: Find the length of the adjacent side.
10. A triangle has a hypotenuse of 7 inches. Angle A is equal to 60 degrees. Find the length of the adjacent side.
11. A triangle has a hypotenuse of 12 inches. Angle B is equal to 45 degrees. Find the length of the adjacent side.
12. A triangle has a hypotenuse of 8 inches. Angle A is equal to 35 degrees. Find the length of the adjacent side.
13. A triangle has a hypotenuse of 12 inches. Angle A is equal to 28 degrees. Find the length of the adjacent side.
14. A triangle has a hypotenuse of 6 inches. Angle A is equal to 33 degrees. Find the length of the adjacent side.
15. A triangle has a hypotenuse of 14 inches. Angle A is equal to 72 degrees. Find the length of the adjacent side.
16. A triangle has a hypotenuse of 11 inches. Angle A is equal to 80 degrees. Find the length of the adjacent side.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 27, "texerror": 0, "math_score": 0.9156146049499512, "perplexity": 307.2398334657971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830323.35/warc/CC-MAIN-20140820021350-00091-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://blog.spidey01.com/2008/04/
|
## Wednesday, April 30, 2008
Only in my home.... Could it take more than 6 hours of work, to complete some thing that *shouldn't* take longer than 20 minutes.
Well... at least it's done, documented, and due for having a report written tomorrow lol.
Oh joy, I can just bet good money if I had any on how long it'll take to do that...
## Tuesday, April 29, 2008
I find myself in a place surrounded by lions.
My muscles may tense with the rage that powers them
Yet I cannot bring myself to strike at my enemies.
I yearn to strike back, rend them limb from limb as they do me.
Yet irregardless of how much I burn, how much I stir,
My arms will not budge, they will not strike.
Oh how I long... To let them feel as I,
To forget my humanity, to strike without remorse..
With that same animal fury as they, so unrelenting.
But to do so, is to walk into the same corrupting fires
That blaze about them, oh how those fires burn.
Even as I bid my heart, become as stone..
It constantly reminds me it is made of flesh.
And not made for such dark ends...
Whether 'tis to be the greatest fool of them all
or weakest creature of all, I bid it to end
before I look back upon all my memories
as just another picture to burn.
-- Tue Apr 29 23:06:04 UTC 2008
### Stuck workin'
I'm trying to code and what song do they have on the radio lol.
Work, work all week long
Punchin’ that clock from dusk till dawn.
Countin’ the days till Friday night
That’s when all the conditions are right.
For a good time
I need a good time.
Yea, I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
It’s time for a good time
I cashed my check, cleaned my truck
Put on my hat, forgot about work
Sun goin’ down, head across town
Pick up my baby and turn it around
Good time,
Aahh, I need a good time
I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
Time for a good time
HEY!
Pig in the ground, beer on ice
Just like ole Hank taught us about
Singin’ along, Bocephus songs
Rowdy friends all night long
Good time
Lord, we’re having a good time,
Yea, I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
It’s time for a good time
Whew
Heel toe dosey doe
Scootin’ our boots, swingin’ doors
B & D Kix and Dunn
Honkin’ tonk heaven, Double shotgun
Good time,
Lord, we’re havin’ a good time
Cause I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
It’s time for a good time
Shot of Tequila, beer on tap
Sweet southern woman set on my lap
G with an O, O with a D
T with an I and an M and an E
And a good time
Shhheww, good time
I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
It’s time for a good time
Ahh, turn it up now.
A Shot of Tequila.
Beer on tap.
A good looking woman.
To set on my lap.
A G with an O, an O with a D
A T with an I an M with an E
That spells good time
A good time
Ohh, I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
Time for a good time
Twelve o’clock, two o’clock three o’clock four
Five o’clock we know were that’s gonna go
Closing the door, shuttin’ em down
Head for that Waffle House way across town
Good time
Ohh, we’re havin’ a good time.
Ohh, I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
It’s time for a good time
Ohh, I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
It’s time for a good time
Ohh, I’ve been workin’ all week
And I’m tired and I don’t wanna sleep
I wanna have fun
It’s time for a good time
Ohh, yea, a good time.
I need a good time.
Yea, a good time.
-- "Good Time", Alan Jackson.
Haha !
Managed to get off work early today, that's great from my pov since I usually leave work 15min late on Monday's lol.
Unfortunately it's after dark and I've still not gotten any thing done, I really wish my family would /remember/ I'm not a fuk'n servant boy some days.
Break time though... starving lol.
FOOD now, work later... work in morning, 'special' operation in evening, so much for taking it easy.
## Monday, April 28, 2008
### before work...
Just a few minutes until it's time to get ready for work today...
Things have a nice way of arranging it so I've got no time before hand lol. I don't really mind having to work afternoons on Mondays, but I would appreciate it if I _could_ get stuff _done_ during afternoons I'm home, rather than having to do things after dark until I pass out, or not at all in this place...
Be stuck at work all day so that leaves the until-crack-of-dawn or point-of-no-return mark (to be sleeping by or hate getting up for work tomorrow) in order to get things finished. That or cram it all into deadline day and really be driven out of my fscking skull trying to get it done with my family around.
Hmm... Wouldn't it be fun to throw everyone out for about 6 hours... ? Lol
## Sunday, April 27, 2008
### m/work/
Well, if nothing else over the past two days... I've at least gotten a chapter done with my book, and taken care of two issues on the website (y). I also managed to weasel through a little bit of code that might just go along way at making improvements hehe.
Spent a few hours relaxing in the proving grounds, didn't want to interrupt the training in the other server. I dunno if it's the time of day I usually /get/ to play or what, but PG#1 has been very laggy for me lately. Although I haven't had any problems on TG#3 which also should be out in England lol.
For tonight my plans are working on a small analytical script for ringing alarm bells, this should be interesting lol.
## Saturday, April 26, 2008
The more I'm here, the worse I feel
It's like wasting away, on a full stomach.
Every thing points to the past, will it never die?
As I long for the future, to hold all I seek in my hands.
Little more then that dream keeps my bones from breaking.
Oh LORD, why am I tormented so...
I know almost no peace, my enemies no relent, till I am bound and gagged.
I keep my mouth shut, less it bring the rain.
While the wild beasts graze around me, in search of flesh.
All I can see, is the emptiness here, so far am I from my goals.
Even the slightest thought of them, makes me crazy.
Just to concentrate a moment on that sensation...
Of all the things that could be, my life fulfilled.
Why can't I have release? Even for a time of rest, just to be at peace.
They come at me, as if thieves by night to take my life.
Fore they take every thing and leave nothing behind,
Nothing save the corpse of my heart.
Where there was once joy, they bring pain.
Where there was once love, they bring sorrow.
Where there was once tranquility, they bring rage.
I find nothing of what I seek here, only a battlefield.
Layered with mines and entrenched with barbwire.
Some times I think, I would throw it all away...
Even my very dignity, my very sanity just for a few seconds.
But I know, that path seals fates but never frees them.
So I continue to stand, although I may crack and crumble.
My body refuses to break completely, surrender is not a word I know.
Yet sadly, nether is peace a word I'm familiar with either.
I am given sweet bread to eat, but nothing else in this place
Where I feel bitterness around me, as the chains that bind me
Are forged ever stronger about me. Whether 'tis by my destruction
Or by my freedom, be they even entwined:
I yearn to walk freely through the light again.
-- 2008-04-26
A very fine Italian expression comes to mind... couldn't spell it correctly to save my life but it fits like a glove. In English, it roughly means:
Damn the misery !
## Friday, April 25, 2008
### Ninja Class, 2008-04-25T2000Z
Conducted my basics level 'Ninja' Class today in our Raven Shield server. Originally planned for Thursday, 2000 Zulu but moved to Friday 2000 Zulu due to business reasons 8=), then moved from [SAS] Training Grounds #1 to [SAS] Proving Grounds #1 for technical reasons :-(.
I think things went fairly well, first time I've done a training session in a good while now... first time in a long time I've done one without a lesson plan either, you could say I winged it lol. Split things into three phases, because working with Recruit Shadow's stealth training was the primary reason behind the session. I tried to keep things basic enough for recruits, lest Capt'n Rouge have my head, but still keeping the session interesting hehe.
Started off easy trying to get people's minds into the right mind set. Mindful of one's surroundings, the possible cause and effect of one's actions, and thinking about how we can be more stealthy in what we do so often. And how to adapt from our usual "see tango, shoot tango" functioning; while that works under Green Light conditions, it doesn't the rest of the time. And when you're doing recon ops, leaving 10 or 20 dead bodies behind doesn't help things. Believe it or not, there is a lot more to stealth than slapping on a suppressor and trying not to blow any thing sky high on the way in.
For the second part of training, I moved us onto the warehouse level so I could give them some practical practice time. Cleared the immediate area of threats and had everyone form an element while I set up a patrol route in between the two main buildings. They had to sneak past me -- I made the patrol route simple, so it was not very hard but still took some effort to complete.
My favorite part was when I turned around on my patrol and saw Ambu standing in the open and shouted 'tango spotted' and fired a warning shot, only to turn left and go 'tango spotted' as I saw Ghost -- good to see that if I had been a real tango, Ghost was ready to put'em down before Ambu could be harmed. Another fun time was when they snuck all ~5 or so of them right under my nose and then I turn around and they're all standing behind me hahaha. I also got Ghost to try the patrol route I set up, and I gave it a go myself and snuck past to the designated objective on my first go.
A lot of times when I do training, I'll try to challenge people to do better. Like back with Rct Boone and Rct Mando two years ago, they thought it was impossible to do this one room without blowing the heck out of it with tactical aids. So I double dared them into taking the room without any tactical aids, full dynamic assault, and without using suppressed weapons to approach the target room. I tagged along with a light machine gun shooting at them trying to distract them and they pulled off the entry, cleared the room without a hitch, and were like "Wow, we actually did it". While I'm standing here with a smile on my face, thinking "I knew you lads could do it".
I don't like to ask any one to do some thing I'm not prepared to do myself, that's why in my training sessions I'll often try and arrange for me to have a turn at tricky things. I can't do worse then fall flat on my face in front of the recruits, and at best I can show it's actually possible (y).
I remember I once set up a 'room clearing challenge' that was modeled after my personal training sessions but I made it even harder and posted it. Nobody was able to complete the challenge, not even me for a long time... so I started doing more dynamic training myself and I eventually scored a respectable spot on the score board, only name there, but that one really nagged at me. How could I ask the recruits to complete a task I couldn't? So I set out to train until I could, and I did it..
For the last part of training, I wanted to give the guys a chance to relax yet still keep on learning. So I set up a mini live fire scenario where we had to do a double-hostage rescue, rules of engagement red (aka fire on command), minimising both loss of stealth and the number of enemies neutralized as much as possible.
It took maybe 4 or 5 tries but we eventually did the mission, 4 man element, about 3 tangos killed, two hostages rescued, but one casualty in the process :-(
What I really liked about the live fire scenario is I got to see my teammates at work. Setting up angles of fire on the risky threats while we snuck past, so if any one got seen, the tango would have a nice double tap to the head before they could fire. And communicating the positions of the enemy patrols among each other and adopting our plan and formation to the situation.
We also found a few bugs in the map that really made sneaking in some spots harder and once we got 'stepped on' so to speak which resulted in the entire element being either gunned down or blown up lol. Eventually though we did it with flying colours (y) but we got plenty of good practice in the middle hehe.
It's nice to do training again, I really love to have a chance to teach people. Of all the tasks that have eventually found their way to me in [SAS], the one I've always carried out the happiest is trying to pass on my experience, and what was passed onto me, to a new generation.
In a lot of ways, I think I'm really starting to get to be an 'old man' of sorts among my teammates... Although most of them are older than me by a good margin or just a few years younger. I've been a part of this team for almost 2 and a half years now. I think in a way, I'm kind of like how people such as En4cer or Shield were to my generation of recruits. I'm not the best teacher but I do sincerely try to help us move into the right direction.
You're only at your best, when you accept the limit of your current abilities as such rather than seeking a way to better yourself.
And in the [SAS], we always seek to improve ourselves for the future.
### Writer's Block: Happy Friday
What are you most looking forward to this weekend?
Live Journals Writer's Block
That if _anyone_ tries to gloat how they've flooded my schedule,
I have planned nothing for the weekend :-P
### So Winucking funny, it's pitiful !!!
I honestly don't know if I should laugh or cry, it's really that bad but I'm laughing my ass off right now lol.
Like last week I set up Microsoft's Services for UNIX 3.5 on my XP machine, configured user/group maps from my Windows XP user account to my account on the OpenBSD server with the NFS shares. I followed the documentation that came with the software to get it set up.
If I try to access \\vectra\srv\nfs through Windows Explorer I either get an error message or I get the files, or I get BOTH. And trying to even right click to highlight any thing in Windows Explorer causes lock ups for several seconds. If I use the IP address rather than the alias 'vectra' that I set up in %SystemRoot%\System32\Drivers\etc\hosts, it works slightly faster when I try to use the dir command in the command prompt, which is stupid.
When I try to map the share to a network drive in Windows Explorer it dies with an error at \\vectra\srv\nfs, but I can 'browse' for it and then use it some times. It also ignores the maps I set up in the graphical SFU admin program so I can't access files -- and still buggers up when I tell it the login data.
So finally pissed off after a week of this lag & lock crap, I open a command prompt with SFU's shell and check the mount command's documentation, which tells me to use the Windows Uniform Naming Convention (UNC) syntax for the file paths.
mount \\vectra\srv\nfs N:
And I get an error message about \\vectra\srv\nfs being an invalid command line argument to mount. So for the hell of it I try the unix style host:share syntax to see if that works.
mount vectra:/srv/nfs N:
and BOOM it friging works !!!
I open windows explorer and go to N:\ in the nav bar and it works QUICKLY just like the NFS Shares mounted on my PC-BSD system do. Now my NFS shares are working through Windows Explorer properly, not like a piece of garbage as it was when doing through the GUI on Windows.
THE IRONY OF IT ALL !?
Microsoft Windows is noted by some people for giving easy, graphical ways to do things, while 'unix' systems are supposed to lack quality documentation.
I used the 'easy', 'graphical' interfaces in Windows to do what takes 2 seconds in Unix which is 'supposed' to lack documentation and it works like shit or not at all in Windows.
I used the 'hard', 'command line' like way on Windows, only to find that the 'supposed' good documentation is wrong, and guess what -- Doing it from the command line on Windows works ____better____ than the GUI once you figure it out.
Time to roll on the freaking floor laughing until my sides hurt !!!!
## Thursday, April 24, 2008
Well, it's been almost 4 hours or more since I took a 'break' from SWAT4... A few min to lay down, then get back to work was the plan.
Family has such a great way of fscking you over, don't they?
### Writer's Block: Define Cheater
What is your definition of cheating?
Live Journals Writer's Block
Hmm, guess it depends on the subject.
Strangely when I've got to use mg or emacs instead of vim/vi for some thing I often feel like I'm cheating on an old friend... LOL
Generally I define cheating as unfair advantage, being able to fire 2*as fast for example or shooting through walls, spawn raping, etc.
And as to the 'other' kind of cheating that comes to mind -- not worth doing.
### Who flipped the crash override?
Almost 0900 UTC (local 5am, ESD) and time to leave for work in less then ~4 hours... Wide awake and ain't life wonderful?
More to do tomorrow... Because it ain't done tonight +S
### Groaning spider
So flib'n tired....
Ma is pissed off because the A/C isn't running a lot, so she can't sleep, for crying out loud how did she ever stand Florida for 30 years?
The less Ma sleeps, the less work _I_ get done, which is bad because that is more time she spends trying to piss me off. I think it's fair to say at this point, the more time I spend around my family -- the more miserable they make me... Productivity isn't even possible around them.
I wish I knew some thing about electrical wiring and manipulating A/C units the 'hard' way, maybe if I did, I could try to hot wire the bloody thing into running for her or fry myself in the process: which would solve the problem too LOL.
Oh what joy it would be... To actually be able to have a /productive/ day and get some sleep at night too, sheesh... I can't remember the last time I could get a decent night's sleep, maybe the early 1990s or late 1980s? For some one born in '88 that sucks lol.
*Sighs* with luck I can at least have time to deal with my mail tonight, it would be kind of nice if I could get some work done tonight, as opposed to having to go to work in the morning, be driven batty, and kept from doing any thing worth doing until after dark... As f'ing usual in this rats nest, I always got stiffed on time to do things but dragged into hop'n, skip'n, and jump'n about to do every thing else but what I've got to get done.
I need a freaking vacation
Whew it's been a long day.
Been trying to get stuff done on the website in between instant messages; I really wish we could pick up the pace but we're kinda split between three countries and our associated jobs.
Had to conduct a tryout in SWAT4 today, first time I've actually done it in SWAT4. Usually I only get to observe but not conduct, this time though it was a situation where I got asked to take care of business. Don't get me wrong, I've done plenty of tryouts and been to a ton of them in both RvS and SWAT but it's still a tryout.
I don't really like conducting tryouts but I've done enough that I can get them running quickly and smoothly when not training others about the process. You've got to be able to judge the recruit's performance, and I don't really like to judge people. When it comes to doing tryouts I am extremely strict; that is more or less how I was trained.
Some of my friends would say I'm just a right * proper bastard, suppose I am really lool. But when it comes to tryouts, even more so, because I understand how important they are -- and why we should be strict. I don't have any problem with failing some one on their tryout, who said it was easy? But I'm also as fair as the Tryout SOPs allow me to be, things have to be done _correctly_ but honorably (y).
Although, to be perfectly honest I don't think I've met any recruit that didn't have it in them to pass a Troopers Tryout. If they didn't, odds are they wouldn't have been granted Recruit tags yet.
Heh, I remember my first clan back during my MechWarrior career. When I ended up with [SAS] down in RvS, I was like "Huh? I'm getting trained and I've not been given a tryout yet?". [SAS]'s selection course I think is as close to the real thing in terms of being more like the military than the average group of gamers.
I'm really an easy going guy for the most part, but tryout conductor == iron fisted stickler for the tiniest detail. When I conduct a tryout, you can generally be sure that who ever passes got a solid tryout or the ship went down trying!
The one thing I do like about conducting tryouts, is getting to debrief the recruit when they've passed. That's the good part before the paper work and admin work pile up when they've passed haha.
## Wednesday, April 23, 2008
GRRRRRRRRRRRRRRRRRRRRRRR
PC-BSD update for 1.5->1.5.1 went smoothly if slowly but broke flock :-(
Well, at least they are starting to figure out you don't have to nuke every installed port/pkg to do upgrades... They however still don't seem to have fixed the flib'n syntax error in their sound detection systems XML file, which has been there since the new sound detection system hit.
## Tuesday, April 22, 2008
Hmm, forced to get up early before work.
Interrupted while trying to get 'work' done before having to go to work
Shouted at that it's time to go once I finally get to sit down in peace >_>
What a way to start a work day, I love my family!!!
### Playing with perl
Ok, so I got bored and wound up with a new toy :-P
Hmm, now to hunt down some snacks...
#!/usr/bin/perl

use strict;
use warnings;

use Getopt::Long;
use Image::Size;
use Pod::Usage;

=pod

=head1 NAME

image-size.pl

=head1 SYNOPSIS

image-size.pl [options] [file ...]

Options:
  -f, --ask-file    include output from file(1)
  -h, --help        print usage help info
  --man             display this manual page
  -s, --silent      ignore errors
  -v, --verbose     more verbose output
  -d, --die         blow up on first error

=head1 DESCRIPTION

A quick perl script to do what I've always wanted, tell me the height and width
of an image file. Without having to open a graphical program (X11) just for the
sake of finding out! File formats supported are based on the Image::Size module
which is required for this script to function.

Special thanks to the creators of the llama book for mentioning perls
Image::Size module and thanks to the creators of that module!

=head1 OPTIONS

=over 8

=item B<-f, --ask-file>

Politely ask the systems file utility about the files format. This option
requires the file program installed and accessible through your environments
PATH.

=item B<-h, --help>

Print out a summary of command line options and exit.

=item B<--man>

Displays this manual page using the provided Plain Old Documentation.

=item B<-s, --silent>

Ignore failed files and continue, combine with -v or --verbose to include the
file name but still skip printing error messages.

=item B<-v, --verbose>

Print the file name along with its width, height, and type (if known). Each
field is also separated by a new line and ordered in a more elaborate format.

=item B<-d, --die>

Default behavior for image-size.pl is to print a simple warning message if any
specified file can not be operated on.

When the -d or --die switches are given, the program will halt execution
with an appropriate exit status instead of continuing.

This is useful for when you do not wish to continue after an error when
processing a list of files.

Refer to the perl documentation for details about how the exit status is
affected.

=back

=head1 EXIT STATUS

The image-size.pl utility exits 0 on success or returns via perls die() if -d or
--die was passed on the command line.

=head1 SEE ALSO

L<perl(1)>, L<perldoc(1)>, L<file(1)>

=head1 COPYRIGHT

Copyright (c) 2008, TerryP <snip>

Permission to use, copy, modify, and distribute this software for any purpose
with or without fee is hereby granted, provided that the above copyright notice
and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
THIS SOFTWARE.

=cut


# message for pod2usage()
my $usage_msg = "$0 -- figure out the height and width of image files\n";

# message to display on error getting the image size
my $warn_msg = "File does not exist or cannot be opened: ";

my ($deadly, $help, $verbose, $man, $silent, $ask) = undef;

{
    Getopt::Long::Configure('bundling');
    GetOptions(
        'f|ask-file' => \$ask,
        'h|help|?'   => \$help,
        'man'        => \$man,
        's|silent'   => \$silent,
        'v|verbose'  => \$verbose,
        'd|die'      => \$deadly,
    ) or $help++;

    pod2usage(-msg => $usage_msg, -output => \*STDOUT,
              -exitval => 1, -verbose => 0) if $help;
    pod2usage(-verbose => 2, -exitval => 1) if $man;

    exit 1 unless @ARGV;

    # check if we are reading file names off stdin
    if ($ARGV[0] eq '-') {
        while (<>) {
            chomp;
            &print_size(imgsize($_), $_)
                if -f $_ or $silent ? next : &handle_error and next;
        }
    } else {
        foreach (@ARGV) {
            &print_size(imgsize($_), $_)
                if -f $_ or $silent ? next : warn $warn_msg."$_\n" and next;
        }
    }
}

sub print_size() {
    my ($x, $y, $type, $file) = @_;

    $x = 'unknown' unless $x;
    $y = 'unknown' unless $y;

    # keep it simple stupid
    my $std_msg = "width-x: $x\theight-y: $y\tfile type: $type\n";

    # a more elaborate layout for the --verbose flag
    my $verb_msg = "file name: $file\n" .
                   "width-x: $x\nheight-y: $y\n" .
                   "file type: $type\n\n";

    $verbose ? print $verb_msg : print $std_msg;

    print "running file(1) ...\n\n", `file $_`, "\n" if $ask;
}

sub handle_error() {
    $deadly ? die $! : warn $warn_msg."$_\n";
}
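For the curious, the heavy lifting is all Image::Size; stripped to the bone, using the module looks some thing like this (the file name here is made up, of course):

use strict;
use warnings;
use Image::Size;

# imgsize() returns width, height, and the file type,
# or (undef, undef, an error string) when it can't cope
my ($x, $y, $type) = imgsize('example.png');
print defined $x ? "$x x $y ($type)\n" : "error: $type\n";

And since the script reads file names off stdin when the first argument is '-', some thing like find ~/Pictures -name '*.png' | perl image-size.pl - works too.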
## Monday, April 21, 2008
I've almost finished Learning Perl with about 10 pages left, honestly it makes me itch to find a copy of the Alpaca book just to find out more lol.
It's rare that I read about any given language beyond its documentation or tutorials but I've rather enjoyed this O'Reilly; it's understandable why the company has a good reputation (y). I even noticed a module in the appendix that might be handy for implementing a program I've always wanted but have never found before.
Perl is a very fun language to put to work, I don't think I'd want to do any thing lengthy in Perl (~1000s of lines) but for getting stuff done it's quite handy. It also comes with a ****load of documentation hehe, not to mention I like the POD (Plain Old Documentation) style for doing things.
### Melon Popping through BF2
Some recreation for the day hehe: found a decent infantry only server of BF2 running my favorite city map. Rotated between weapons as I usually do when I'm not in a squad that is 'serious', eventually wound up as a Squad Leader going helmet popping with an Accuracy International.
I had a couple major runs, just found a nice covered spot and started sighting targets. After pushing the ammo counter to its limit I racked up a Veteran Sniper Combat badge +S. Normally I aim for centre of mass, some times the heart or lungs of a target specifically when sniping. But I usually go for a head shot when dealing with a threat; in BF2 I find them quite easy to manage on other snipers in particular.
It's rare that I stream through targets like that though: 20 some rounds and almost as many kills later, I'm still sniping lol. Given a good mark to shoot at and a secure firing position, I'll usually be able to hit what ever I can see... Having to do that while every one and their dog is trying to shoot you is a different story >_>
### urxvt & utf-8
Found an interesting problemo tonight with using vim in rxvt-unicode. Since the German umlauts and the old double-S (ä ö ü ß) are a bit tricky for me to make without copy/pasting them, where needed I usually use the alternatives (ae oe ue ss) where possible. Since vim 7 has spell checking, I've got spelllang set to handle US and British English plus German in the spell checker, which really works very nicely because my spelling is a bit of a hodge podge for those 'differences' in English spelling.
While I was working on the translation last night, I employed both Vims spell checker and a translator program to help me with the grammar. Vims spell checker has the lovely ability to correct things, taking the form I can easily get out of a US QWERTY board and replacing it with the proper characters (ä ö ü ß). I knew there was some thing I loved about vim xD
The only thing is, trying to open the file again with vim caused it to display weird, all of the umlauts replaced with strange characters. I checked Vims idea of the files encoding and it was UTF-8, just like my system locale settings should be saying.
Yet (n)vi, cat, and other utilities were showing them fine. Setting the terminal encoding in vim, or launching it with LANG=de_DE.ISO8859-1, got them to display properly, which worked but made no sense :\. My ~/.zshrc sets LANG to en_US.UTF-8; why nothing seemed to work right I dunno. Forcing urxvt (rxvt-unicode) to run with the C locale set (LC_CTYPE="en_US.UTF-8") got it working fine.
I'm not familiar with that end of C++ but I wouldn't be surprised if it relied on the same setlocale() routine as C apps tend to. The FreeBSD handbook said to set LANG and MM_CHARSET and not the LC_* variables for the environment. I've fixed it so the system kicks urxvt off with the right locale settings, so the problem is solved.
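For my own notes, the fix boils down to forcing the ctype before urxvt starts; some thing like this little wrapper (the wrapper itself is just a sketch I'm jotting down, the variable is the part that mattered):

#!/bin/sh
# force a UTF-8 ctype for rxvt-unicode, since my LANG
# setting alone wasn't being honored for some reason
LC_CTYPE="en_US.UTF-8"; export LC_CTYPE
exec urxvt "$@"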
still a little odd imho lol
## Sunday, April 20, 2008
Nothin ever stops all these thoughts n the pain attached to them
Sometimes I wonder why this is happenin
Its like nothin I can do would distract me when
I think of how I shot myself in the back again
cuz from the infinite words I can say i
Put all pain you gave to me on display
But didnt realize instead of settin it free i
Took what I hated and made it a part of me
It never goes away
It never goes away
And now
You've become a part of me
You'll always be right here
You've become a part of me
You'll always be my fear
I cant separate
Myself from what I've done
Giving up a part of me
I've let myself become you
Hearin your name the memories come back again
I remember when it started happenin
I see you n every thought I had and then
The thoughts slowly found words attached to them
And I knew as they escaped away
I was committin myself to em n everyday
I regret sayin those things cuz now I see that i
Took what I hated and made it a part of me
It never goes away
It never goes away
And now
You've become a part of me
You'll always be right here
You've become a part of me
You'll always be my fear
I can't separate
Myself from what I've done
Giving up a part of me
I've let myself become you
It never goes away
It never goes away
Get away from me!
Give me my space back you gotta just Go!
Everything comes down the memories of You!
I kept it in without lettin you Know!
I let you go so get away from Me!
Give me my space back you gotta just Go!
Everything comes down the memories of You!
I kept it in without lettin you Know!
I let you go
And now
You've become a part of me
You'll always be right here
You've become a part of me
You'll always be my fear
I cant separate
Myself from what I've done
Giving up a part of me
I've let myself become you
I've let myself become you
I've let myself become lost inside these thoughts of you
Giving up a part of me
I've let myself become you
Dropped off around 0630 last night :\
Spent most of it hacking on the SOP Rewrites and chatting with friends. The room clearing section is quite difficult; it needs to be sufficiently normalized, which is a sticky operation. Trying to balance short and sweet with completeness makes managing its verbosity tricky too.
I also spent some time doing a little translation work. I quite like trying to translate small portions of text between English and German, because it gives me a chance to get a greater feel for the language. You've got to learn to think a different way when using another language; I generally try to be as accurate as I can but enjoy the 'soft spots' where the idea becomes clear but trying to express it in the other language is hard.
I would love, some day, the opportunity to learn it in a lot more depth. I'm generally able to read well enough when I have a dictionary to help, but... trying to express things well is a bit more challenging. It takes time and experience to learn a language's grammar. It's not quite like learning a programming language... Given decent documentation I generally can pick up most common programming languages in an hour or two depending on their size; I especially love it when some kind of Backus–Naur Form (BNF) listing is available, makes learning a programming language much faster lol.
Conventional languages for people on the other hand, ain't quite so simple :\
## Friday, April 18, 2008
I've had a special briefing to give, a Rct to train with, two SNCOs to join in the shoot house for a few things, a ban/unban issue, yukes live op that almost launched, finishing [most of] my auditing and tending to a few site matters, and setting up a reminder should any thing remaining slip my mind.
I'm finishing my report and the last of my work for the day if I've gotta lock myself in a closet to do it >_>
Hopefully this time I won't be working on it at 0500Q lol.
The good news: I've got the first draft done on my report (yay!). The bad news: it is 0550Q and I'm less tired than I was at 1300Q loool. All that is left, I guess, is one or two more sections that I need to write.
One thing I really need to do is start organizing more of my common things into packages, I freaking love TeX :-).
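For any one who hasn't played with it: a package is just a .sty file on TeX's search path, some thing as dumb as this sketch (the file name and macro here are placeholders I'm inventing, not my real file):

% mystuff.sty -- made up name for the example
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{mystuff}[2008/04/18 my common macros]
\RequirePackage{graphicx}
% e.g. typeset call signs in small caps every where
\newcommand{\callsign}[1]{\textsc{#1}}

Then one \usepackage{mystuff} in the preamble and I never have to go hunting through old documents for that one macro again.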
Also good news is I've gotten auth for putting TeX Live PBI's on their testing server so that means I can get cracking on field testing it here, then start trying to push it on to PBIDir.
With luck, tomorrow might be a SWAT4 Live Op so I'll have time to finish work on the site and get some more interesting work done too xD.
I was trained in RvS; I took that route for the Selection Course and wouldn't change that decision for any thing. But I do rather feel at home with SWAT4. I've been playing S4 since the beta and bought the game shortly after it came out. The thing I dislike about RvS is the door bugs. SWAT4.. hey, some suspects might draw faster than Jesse James but at least they eventually go down when you shoot them; can't say the same for Raven Shields occasional super tangos dancing between bullets ^_^.
What pisses me off is I paid $50 for SWAT4 and $30 for SWAT4:TSS, and SWAT4:TSS, although it is an expansion pack, is implemented through the (very shitty compared to SWAT3s) mod system, which is probably the ONLY reason they ever released the SDK. Because the change in dev-teams meant a need for a quick way of hacking in the other 40% of the freaking game!
What a company...
Oy...
Set to work on preparing my report, down to the AFK-again-the-second-half-a-butt-cheek-is-in-the-chair level of interruptions as normal 8=). Finally took a break for dinner, a nice double whammy of Whoppers.
Crashed on the couch for awhile until Coco finally succeeded in guilting me out of my spot, jeeze it's only been _my_ spot for the last decade ^_^.
Dropped off to bed, never made it through most of Mission To Mars and woke up ~0300 in search of some Raspberry Danish.
May as well get back to work eh?
Some times I wonder if I ever sleep without being ready to drop :\
## Thursday, April 17, 2008
### task.done() ? rest : work;
Not an overly busy day compared to the last few but the tiredness is catching up. Current plan of action is 5-10min to lay down and then start getting to work on things.
I need to finish evaluating things, start preparing my report, and get things done as necessary lol. I've also got a related appointment for this weekend so I've noted that my report should be considered due by Monday.
Managed to get in the server for some SWAT4, *finally* game time this week. Only had time for two or three rounds though; got to move through on Point and First Cover positions. Been awhile since I've done any serious point work, much of my time as SSM/RSM had me in EL's boots lol. Glad to see I'm still effective though, was able to keep a dynamic pace as Point with an MP5 and as First Cover with the M4A1 and still keep accuracy up.
Firing the M4A1 in SWAT4 when on the move is not very hard but takes practice; you gotta learn to control and manage the weapon. The recoil in the game is unrealistically appalling but it makes taking threats out as you move to your Point of Domination a little more challenging than in RvS or using one of the SMGs.
It's also nice to see how far one of our new Troopers has come, Big12... He's come a long way since the first day he set foot in our server hehe.
It's just the way I am I guess, work a bit, game a bit, work more. When I've got a big task list it tends to weigh on my mind to much to spend an entire day off totally lol.
Today also is my Fathers birthday, he was born in 1946 iirc so it would have been his 62nd.
## Wednesday, April 16, 2008
Been in my usual hyper-jumping multi-tasking state.
I've had enough IM windows open simultaneously of late between friends, business, and other matters to be aiming for a new personal record. It's a good thing I didn't expect to get much done as Virtual WO1 on day 1 or day 2 looool.
Some times my systems load is light, at the least I usually will have a web browser, instant messenger, and a command prompt open some where (local or network). Other times even on my desktops 1600x1200 resolution screen space can be hard to get hehe.
That's just the way I am though, I'm usually available about 14-18 hours a day or more depending on working hours lol. Most people know: don't call me, messenger me! One thing I do love about IMs over phone calls is you don't have to respond instantly, and the message buffer handles incomings between AFK spikes... It's sort of necessary with the way life at home is lol.
A few things on my agenda are already in the works... Most of which have to do with the new AoR in [SAS] and the SOP Rewrites. You could say taking over a larger scope of things calls with it a bit of nose poking around hehe. With the SOPs I've been trying to get work done & keep the RSM informed.
Another thing I would like to do is find some time to learn Perls OO features. I'm no real fan of Object Oriented programming although I believe it is just a means of doing the right things a certain way. In this case it really is just the simple fact that I don't know much about Perls Object Oriented syntax and I'm not interested in waiting for Perl6 to learn it lol.
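Mostly so future-me has a starting point, my understanding is that classic Perl 5 OO boils down to packages and bless; a minimal sketch (the class and names are invented):

#!/usr/bin/perl
use strict;
use warnings;

package Critter; # a made up class

sub new {
    my ($class, %args) = @_;
    my $self = { name => $args{name} || 'anon' };
    return bless $self, $class; # marry the hash to the class
}

sub speak {
    my $self = shift;
    print $self->{name}, " says hello\n";
}

package main;

my $pet = Critter->new(name => 'Willow');
$pet->speak;

No promises that's idiomatic; that's rather the point of sitting down with the docs.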
For me Perl is getting to be kind of like an old friend in the tool chest, like a trusty hammer or a favorite screw driver. I often find my self using Python, Perl, and Ruby for scripting tasks or random programming that I either can't use some thing else for or just don't have time/energy for using another language for.
And for languages and tools I often put to work, I like to know them like a well read book ;-)
What can I say but I love to learn!
## Tuesday, April 15, 2008
Having dived down the rabbit hole an RSM and landed in Wonderland a Warrant Officer Class 1, I know the red pill was the best choice.
I'm not really sorry to see the RSMs post go, it is really a lot less to worry about. I still have responsibilities but of a different nature; the one thing I really do like about the WO1 task set is the potential jack of all trades nature it holds.
Some people don't do things by halves, including me. I know many things but am master of few, yet I often study well beyond my skill sets own domain.
I've been transferring as many files as I think relevant over to our new RSM but I haven't had time to formally report for my own orders yet. Now being the most healthy one here, I'm stuck on call every 2 minutes at the drop of a pin instead of the more normal 5 minutes :-(. First time I've seen my mother throwing up in almost 20 years; whether she has a cold or not, I know all I've got is allergies. Ended up with blood in the output of one of my allergy attacks today, not sure if that is a good sign or a bad sign but I know one thing -- my sinuses feel a hell of a lot better !
With luck the down time won't be too much... I'm usually like a butler around this house any way :\. I'm not exactly the best person for the job imho but there is no other course of action. What would my family do if I didn't act the way I do.... lol.
## Monday, April 14, 2008
### Dixie reborn
I find the lipstick style that PC-BSD uses by default a little yucky to stare at all day so I set it to my favorite (Keramik). I have installed a ton of colour schemes off kde-look.org but hate most of them..... One that I found was essentially an emulation of Ubuntu's "Human" setup, which I do like very much or else I wouldn't be using a modified form of it.
The colour scheme and GTK+ widgets is actually the only good thing I can say about Ubuntu 6.06 when I tested it last year. At first I thought I might try a custom colour scheme with a red title bar, give KDE a nice little FreeBSD flair ;-) But I couldn't get a shade of red that I could live with, like using, and not be distracted by in the same colour. PC-BSDs default window decor, 'Crystal', didn't match well with the Human colour scheme so I changed it repeatedly trying to find one that did match well and that I could live with. I couldn't find one I liked, so as usual I wound up with Keramik haha. No matter what I do I always find that window decor attractive 0.o. I also installed the Human_KDE icon set to match the Human colour scheme.
I copied over the KMenu and Konqueror icons from PC-BSDs default theme into a copy of Human_KDE and I made a clone of the Human colour scheme. Then changed the desired portion of the title bar to use PC-BSDs default colours instead, adding some contrast. I loved the match up and it is much more pleasing to my eyes :-)
A bit of both muahuaha !
As far as the screen shot, the background is my 'choice picture of the day', rxvt-unicode is running and displays a listing of my home directory and the system versioning. Normally my desktop is some what dominated by a terminal emulator and a web browser with a few IM windows for icing on the cake. Below urxvt is linux-flock open to a live journal page. Lower left hand corner is XMMS blasting music while the lower right hand corner is a 'KasBar' which provides a replacement for the usual taskbar. While still giving me some thing similar to how Window Maker solves the problem hehe. There are no icons on the desktop only the panel.
I placed the main panel on top because with a laptop + touch pad I find it easier to use and more comfortable on my eyes with the widescreen display. From left to right on the top panel there is the K-Menu button, System [folders] Menu, Settings Menu, Web Browser (flock), Terminal (~/sh/urxvt big), Network Folders, the system tray applet which shows PC-BSDs battery monitor, Klipper the clipboard app I wish Windows XP had, KMix (volume/mixer control), PC-BSDs update manager, KOrganizer (which may be getting the ax soon), Pidgin (AIM/MSN/YIM/ICQ/XMPP chat), and Konversation (IRC). Over to the righter' side is a desktop pager, lock/logout buttons, and a clocklet.
I feel the system has a bit of a Gnome / Ubuntu look and feel to it but I'm finding it quite comfortable. Because I like the pleasant feel of it plus it matches my work flow while still being KDE3 and FreeBSD powered instead xD.
## Sunday, April 13, 2008
Well after a bit of work the system is now fully operational and I can pass out >_>
Managed to get to bed at a nice early post 0415, only for a crazy set of dreams. I dreamed that my allergies were so bad I could barely breath and my throat so dry it was choking me to death. Yet as much water as I drank, it was as if it never touched my tongue :\
It's kind of strange but when I dream, I usually know I'm dreaming pretty quickly so I wasn't afraid just uncomfortable.
Transition to leading a SEAL team on a dockside assault with an M4 in hand and MP5 slung. Sent the team below while I took down the ships bridge, left the 'abnormal' terrorist leader with a few 9x19mm in the head after I figured out a way around his personal energy shield, and regrouped.
Some talk about a dead mans switch and time to evacuate. The SEAL team pulled out while I went to check on the status of the lower level, only to find the NSA and Nurses tending to the hostages.
Transition yet again to being stuck in the middle of the desert with just a pistol in each hand, Tomb Raider style, and a bet on who makes it out of there first. Only to end up with a psychopath trying to get there first, and a fairly attractive brunette in tow but horribly useless in a gun fight, in the race to the LZ lol.
Dang man, I have strange dreams lol.
My allergies have not been too bad today but I haven't eaten much all day... There is nothing to take; even the stuff that comes most highly recommended doesn't do squat. Most of them are just 10mg of loratadine which is pretty useless IMHO. With the way I've been feeling I think a decongestant might be helpful but not exactly worth the price tag. I can't wait for winter to come back !!!!!!!!!
I've spent most of my time working on the laptop and chatting with friends. Still haven't gotten much of productive use done today. Next on my list is restoring TeX Live from backup which I can do tonight. If Martínez ever gets back to me about the PBI Testing ftp server I might be able to get a TeX Live PBI set ready to rock & roll; it's a little too freaking big for any of the places I have storage on >_>. Once a working PBI is out, I can see what I can do about making a port of it once the PBI's out of my hair.
### Reinstalling all the software
still to do:
mencoder -> build from source
konverter -> I still ain't used it but want it installed just in case
linux-flock -> from ports (rpm)
linux-realplayer -> from ports (rpm)
linux-mplayerplug-in -> install after flock
libdvdcss -> build from source
portupgrade -> needed for Neo Ports Manager development (it's the backend)
emacs or xemacs -> from source; rarely use emacs but I like to have a fat and micro sized emacsen installed.
Ports/Packages that PC-BSD actually saves me time on are perl, python, ruby, gtk2, subversion, kdegames, xv, kdegraphics, kdepim, libdvdread, libdvdnav, cdrtools, mplayer, and X.Org ;-)
I've been using Window Maker for a long time now; I think I'll go back to KDE3 for awhile. I've always liked using KDE3, even though I love Window Maker hehe. I've thought about switching to a less 'common' window manager as well but lack the time to RTFM and bend it to my wishes, especially since the ones that interest me can be quite keyboard driven hehe. I can use just about any window manager but I'm partial to Window Maker, the Box family, KDE, and Gnome. The only window manager I've used that I don't like (depending on what one considers 'explorer.exe' any way ^_^) is TWM; I used to use it over VNC to my test machine back in the PC-BSD 1.0RC2 days... I find it very much less than pretty. I don't care much for FVWM1/2 and most of its variants either, but would prefer them to TWM for using 24 * 7 * 365 !
Oh, and I also have to reinstall TeX Live 2007, but that I have backed up to beat the bands hehe.
All that is left for tonight is to configure and build Vim before I hit the hay.
./configure --with-features=big --with-x --enable-gui=gtk2 --enable-xfontsel --enable-rubyinterp --enable-pythoninterp --enable-perlinterp --enable-cscope && gmake -j4 && gmake install
Technically I could leave it at just --with-features=big and --enable-gui=gtk2 but I usually supply the other args to the configure script instinctively.
Tomorrow I can finish installing the remaining apps since most of it is just waiting on me to install a ports tree. I also need to get the NFS/SMB shares sorted on Vectra & SAL1600, and look up my ICQ# as it seems I lost both my KDM Theme and Pidgin settings by lack of foresight :-(. No matter, I actually like the more Gnome'ish PC-BSD KDM theme lol. I also remember the logins for my AIM/M$N/Y!M/XMPP so that one is not a big problem hehe. And of course, as always, to playfully mold KDE to match my work flow, muhauahuaha !

/*
 * list of software I've installed tonight:
 */

// languages
gcc43 // including the GNU Compiler for Java
javavmwrapper, JRE, and JDK
rubygem-rtags
rubygem-rake
guile
scheme48

// libraries
Qt4

// development tools
gmake // needed for vim, gtk+, qt3/4, and my tex makefiles
ctags // extended multi-language ctags, *BSD has a C based one
cscope and kscope
webcpp

// games
xgalaga
prboom with freeware doom-data
wesnoth

// graphics software
gimp with animation package (gimp-gap)
inkscape

// browsers
lynx

// local mail clients just in case
thunderbird
thunderbird-i18n
mutt

// chat
konversation // worlds best irc client
pidgin // aim/msn/yim/icq/etc
pidgin-hotkeys
pidgin-guifications
pidgin-libnotify
pidgin-otr
pidgin-encryption
teamspeak_client // linux version

// multimedia
libdvdplay
xmms
xmms-skins
xmms-pipe // control xmms from a named pipe

// documents
gnumeric
abiword
koffice

// personal
zsh
docker
rxvt-unicode
terminus-font
mg // micro gnu emacs, openbsds alternative to vi

### Reinstalling PC-BSD

I completed my backups during dinner, so when I booted my laptop tonight I compared the MD5 checksums on the PC-BSD v1.5 CD#1 ISO file and burned the disk. I had K3B installed from PBI when I installed PC-BSD from a 2-Disk set awhile ago but I've never actually used K3B to do things lol. So I put a blank CD-R in my laptops acd0 and looked around on how to burn the ISO.

cdrecord -scanbus # find out my 'dev'ice
Cdrecord-Clone 2.01 (i386-unknown-freebsd6.2) Copyright (C) 1995-2004 Jörg Schilling
Using libscg version 'schily-0.8'.
scsibus2:
2,0,0 200) 'PHILIPS ' 'DVD+-RW SDVD8441' 'PA48' Removable CD-ROM
2,1,0 201) *
2,2,0 202) *
2,3,0 203) *
2,4,0 204) *
2,5,0 205) *
2,6,0 206) *
2,7,0 207) *

cdrecord -v -pad speed=1 dev=2,0,0 PCBSD1.5-x86-CD1.iso # with very nice verbose output ;-)

I've never used my laptops DVD+-RW drive for burning disks before; normally I use the install of Nero that came with my Desktop, but good ol'Dixie ain't let me down, the CD-ROM came out great.

I did an install with the decision to use the entire disk and a custom disk label. The dang gum installer still doesn't have an option to set the time zone to UTC so I set it to Europe/London GMT 0000 which is close enough (my .zshrc sets TZ).

I noticed three problems with the custom disk label part of the installer. The first is, although PC-BSD finally fixed their default of 1024MB SWAP to instead use a more dynamic algorithm... it allotted 512MB of SWAP when my laptop has 512MB of PC2700 RAM. My previous install had that much RAM and when under the 'worst loads of its life' top some times reported ~300-400MB swap usage. The installer wouldn't let me create a second swap partition, so I upped the size to 1024MB. Normally I double check my values with a calculator since the installer seems to lack fdisks ability to handle K, M, and G suffixes, but I found bc was gone. I didn't have one handy so I started an XTerm only to find out that 'bc' was not on the install disk :-( so I did it manually.
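For the record the math it wanted is nothing fancy; assuming it is the usual 512-byte sectors underneath, a megabyte count just gets multiplied by 2048, the sort of one liner bc (or any shell with $(( )) ) would have handled:

echo $((1024 * 2048)) # 1024MB as 512-byte sectors = 2097152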
The other two problems are that I created /usr, /home, /var, and /tmp partitions. It converted the /home mount point to /usr/home and made /home a symlink; the only problem is I created /home before /usr in the installer. So when I rebooted I found a nice surprise: /usr/home was not mounting because /usr was not mounted yet :-(. Also, although I made a /tmp partition, the PC-BSD installer failed to disable tmpmfs in rc.conf, so I had to do that manually. I know rc.conf.local is supposed to be a bit out dated on FreeBSD and the proper way on OpenBSD... But I always use /etc/rc.conf.local for changing rc.conf on PC-BSD, less trouble ;-).

Started PC-BSD, noted the boot menu now shows FreeBSD instead of PC-BSD like in the last release, and the splash screen was gone, which is fine by me. I usually would clear it when booting but was always too lazy to disable it 8=)

Setup the display for 1280x800 24-bit with 'ati-3d-enable' and switched to a vtty with control+alt+F2 and logged in as root. I had to change roots password, because mine is too strong to 'pass' the PC-BSD installers concept of an acceptable multinational password lol. And to add my personal user; during install I only added 'rstaff' because I wanted to create my user 'Terry' with the same UID and GID settings as on my OpenBSD machine, tired of remapping stuff...

passwd # fix roots pw
adduser # add my user

Then I realized that there was one fatal flaw in my plan: all the backups were on Vectra, including the copy of my wpa_supplicant.conf file used for an internet connection via wireless. There's more ways than one to solve a problem ;-)

Since I don't have a USB Flash Drive, I booted my desktop into Windows and stuck in my spare SD Memory card in the hopes of copying the backup of /etc to it, but Windows couldn't access the bloody file shares, *Grrr* so I used PuTTY to SSH into Vectra and used cat, copy, and paste to create a new wpa_supplicant file.

Since my laptops card reader is not supported on FreeBSD 6.3, I swapped memory cards in my camera and attached the USB cable; I keep it set to 'Mass Storage' mode rather than PTP so I can transfer pictures to my laptop. I plugged in the cable, turned on the camera, and in the time it took for me to type ls /dev | grep da the entire computer locked up, frozen solid on 'ls /d' so I had to shutdown with the magic on/off button :-(

So this time I turned off the camera and started my laptop again, turning on the camera during the kernel probe so it would stay in umass mode. Booted into single user mode and did a fsck -y then mounted the camera so I could get the file.

mount -t msdosfs /dev/da0s1 /mnt
cp /mnt/wpa_* /etc/
umount /mnt
# exit single user mode

Logged into KDE with my main user, 'Terry', and decided to give PC-BSDs networking tool a try to set up my wireless card. It failed to detect my wireless access point so I specified the SSID manually and cat, copy, and pasted my passphrase from wpa_supplicant into the GUI.

I then proceeded with my master plan: mount my stored backups off Vectra via NFS and start restoring files. So I booted into single user mode again and set to work; I knew I'd need single user mode because with X running things would get fucked soon if I didn't get my xorg.conf back!

I rarely write out a mission plan in that much detail when I am 'playing' with one of my computers, but I've kept a log of my actions using vi to write /root/fixit.log and have ordered and commented the entries in a more logical order; I just did them in the order I thunk of them hehe.
fsck -y
mount -u -o rw /
mount -a
/etc/rc.d/netif start # start the network connection
# and mount my backup files on /mnt
mount_nfs -r 8192 -w 8192 xxx.xxx.xxx.xxx:/srv/nfs/Backups/today /mnt
bash # /bin/sh lacks a bit on tab-completion

cd /tmp
tar -xf /mnt/etc.tar
cd etc
cp ssh/ssh*_config /etc/ssh/
cp /etc/X11/xorg.conf /etc/X11/xorg.conf.pcbsd15.install
cp X11/xorg.conf /etc/X11/
cp rc.conf.local /etc/ && vi /etc/rc.conf.local # trim my rc.conf
cp pf.conf /etc/pf.conf.my-old
vi /etc/fstab # create fstab entries for the NFS shares

cd /
tar -xf /mnt/local-share-ri.tar # install ruby docs pc-bsd lacks
tar -xf /mnt/local-etc /usr/local/etc/sudoers # restore my sudo config

cd /usr/home/Terry
# add nfs-users and smb groups
pw groupadd -g 7778 -n nfs-users -M rstaff,Terry
pw groupadd -g 19132 -n smb -M rstaff,Terry
pw groupmod -n operator -m Terry # add myself to the operator group

su - Terry
mv Images Pictures # I prefer that name ;-)
mkdir code
# adjust the ownership of my dirs
chown Terry:nfs-users {Documents,Music,Pictures,code,Videos}
tar -xf /mnt/my-home-backups.tar # various files, extracts as 'backups/'
# restore the stuff I want saved
mv backups/GNUstep ~/
mv backups/sh ~/
mv backups/misc ~/
mv backups/konversation ~/.kde/share/apps/
mv backups/knode ~/.kde/share/apps/
mv backups/.* ~/ # restore selected 'dot' files

# connect to my file server and create a new dir for nfs
ssh -p 22222 -i .ssh/mykey Terry@vectra
su - root
mkdir -m 1770 /srv/nfs/code # I'll extract files later
groupadd -g 7778 nfs-users
vi /etc/group # added my user to nfs-users
cd /srv/nfs
chown -R Terry:nfs-users ./*
^D # exit vectra root shell
^D # exit vectra Terry's shell
^D # back to working as root on dixie in single user mode

cd /tmp
tar -xf /mnt/root-home.tar
cd root
# restore a few files I want there
cp *.ogg ~/
cp .login ~/
cp *-supfile ~/
reboot

On reboot I set out to work with molding KDE into shape and installing PC-BSD updates, with no lockups within the first half hour of operation.

## Saturday, April 12, 2008

Well, downloading a PC-BSD v1.5 install disk via KGet... Looks like a reinstall / repair is probably going to be the only way to fix Linux GTK+ apps without spending more time and effort than it pays to on the issue. I even tried booting off my FreeBSD 7 partition and setting up linux-flock there. Much more successful than PC-BSD, but it died due to a missing gnome library, which is probably what I get for installing gnome2, gtk2, linux-gtk2, and mutual friends from packages >_>

I actually like KGet as far as download utilities go. I'm used to using FreeBSDs fetch command which just wraps around a few library routines. What I like most about kget is it just stays out of my way, sits in the system tray, and doesn't take a Ph.D to figure it out ;-) It's been awhile since I've tried the konqueror integration but it probably would be nice. I do rather like keeping downloads separated from my browser when it's a _big_ file though. That way at least if my browser crashes the download won't get FUBAR'd on me.

So here I sit, downloading the remaining ~500MB of the ISO image and watching The Negotiator, which is one of my favorite thrillers. I remember I once caught it on cable one night and had to get the VHS when the chance came up. Now I enjoy the movie twice as much while I watch crooked SWAT team members break almost every damn rule there is to hostage rescue. To quote Kevin Spacey's character, "You want to kill him on national television now!?".
The whole point of SWAT is to *_save_* lives, even the suspects if you can... but never, ever do you jeopardize the lives of hostages like that.

I need to get my system files backed up, shouldn't take long: it's mostly the /etc folder, the parts of my home dir that are still local, and a few things in /usr/local/{share,etc} that I might want to keep. Guess it's time to update my partitioning scheme while I'm at it....

## Friday, April 11, 2008

### Writer's Block: I Left My Heart in...

What do you love about where you live?

Live Journals Writer's Block

It might rain cats and dogs, thunder and lightning to beat the bands, blow the car cover off the car, and even occasionally hail, but the buildings still stand !

Up until last year there was one road where we used to have a lot of clients in that area. That place is like Twisters Ville: if there is a storm, it'll get whacked. Several years ago I remember we were at the Church's mid-week service and the sound of the wind and thunder was some of the worst I've ever heard in my life. And that is a place where you can probably hear the music and singing down to the highway lol. When service was over the Pastor told everyone they could stay there if they didn't feel up to leaving. It had to be at least 30-45min before we filtered out. I think the people that left probably wanted to see if they still had a house to go back to lol.

On the way home we took that one road, and it was seriously blown away man. Dark, what few lights there were probably out of power, trees down and debris every where, and it was still storming. We finally had to turn around and take the highway after bumping into a crew trying to handle a downed power line in the middle of the road.

Where we live usually doesn't get hit that bad... You might have water halfway into your shoes some days, or feel like a tumbleweed in the ol'west, but it's generally safer than most other places in this city. Although to be honest, given the number of tornadoes per year in Georgia, I think I'd take my chances with the Hurricanes back in Florida if we ever moved closer to that one road lol.

I don't mind rain, I suppose it is strange but I generally like rainy weather; more adventure to travel. It's the tornadoes that worry me. Thunder and lightning I don't care very much for but I've never concerned myself a lot with it. I figure, if the LORD wants me dead, I'm as good as dead whether or not I get hit by lightning. And if it ain't time to go yet, probably not a lot to worry about. I don't really have a problem with death I suppose, I know we should always be ready to go but there are a few things I'd like to do/see in life first... The only thing of it is, if I gotta go, I'd rather not take any one with me, and a twister is usually not that picky about its targets.

It's always been my expectation that I'll be as good as alone when the end comes, dunno why but that is my suspicion on the matter.

I've had a lot of rolling about on [SAS] related business today, generally productive as far as the NCO and RSM matters go. I also posted a file with my 'musings' on a few tactical matters in an appropriate place. Not sure if it was a good idea to share my thoughts in this case but I've never really cared much about what others think of me, no point starting now 8=).

I've had some time to play with my little music management toy hehe. Basically the idea is to track filename changes in my music collection and then update my playlist files. It's not meant to be pretty or optimal, just effective.
If I ever get it finished I'll probably leave it running on the file server so it can look after my music files. Every now and then I do like to rearrange files in ~/Music and it always breaks my playlists, even when using Amarok for the excellent playlist editing... It kind of sucks to have to redo them manually through Amaroks collection browser; it's a great system but my playlists can some times get quite large.

I've also managed to get my server and laptop set up to use NFS instead of SSHFS, since it seems I can't count on SSHFS; it has already incurred a 'price tag'. Samba's mount_smbfs is too much bother on FreeBSD atm, and the program that comes with the fusefs-smbnetfs port seems to be as good as KiA if you want to know any thing more about it beyond the sample config file and source code... That leaves NFS and AFS, not familiar very greatly with AFS but NFS a bit more so. The BSDs seem to do things a bit differently with the exports file than what I've encountered before; the fbsd handbook / obsd faq also leaves a little to be desired compared to some nice BNF notation ^_^.

I might be able to do some thing with a SSH tunnel later, right now some thing that works is all that is important...

My allergies have been tremendous lately but it's that time of year again. One thing I really did like about living in Fl: it was too damn hot for most of the stuff that makes me sneeze! It was kind of one of those places you are soaked to the bone just crossing the parking lot to get to the car lol.

Time to get some rest, tomorrows another day... Primary objectives for the near future are getting more work done on the SOP Rewrites and trying to fix my laptops problem with Linux GTK+ apps.. grrr.

### Tactical Wonderland

An old friend appeared before me with a question before my new orders come, in each hand he held a pill. One Red, one Blue, each representing a course of action. Towards service or my own ends, my reply?

Loyal to the end

## Thursday, April 10, 2008

### days rumble

Well I've put a few thoughts through LaTeX and even discovered that, at least with the version in my MiKTeX install on WinXP, some times pdflatex.exe/latex.exe will have an endless loop rather than die with the normal error message if you accidentally delete the \end{document} at the end of a doc ^_^

I think I'll probably post the file tomorrow in the members forum, I zipped it with a simple password chosen at random. The question is will I remember what it is later rofl.

I like using tex/latex quite a lot so far, works much better than XHTML+CSS at giving decent output without *eventually* getting annoying to edit and maintain. I've been slowly building up a .sty file for things I use a lot so I don't have to worry about finding the last document I did some thing to when I can't remember a specific.

Probably will pass out in a few hours next to Learning Perl, glad I'm off work tomorrow... need some rest :\ Maybe I can even catch up with the 30++ messages in my in box haha!

I've done enough for the night; in the future I need to play around with the Linux ABI on my laptop and NFS. I have a FreeBSD 7 partition, I think I will see if I can use that to help fix the PC-BSD one... Either way it would probably take less than 2 hours to reinstall my laptop and restore files once I've got a set of PC-BSD v1.5 disks handy... Most of the time of course spent transferring and extracting files on low end hardware lool.
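Note to self on those exports gripes: roughly what the same share looks like in each dialect. The host network and paths here are made up, and this is from memory, so double check exports(5) on the machine in question before trusting me:

# Linux style /etc/exports
/srv/nfs 192.168.1.0/24(rw)

# BSD style /etc/exports (FreeBSD flavour; OpenBSD's mountd is close kin)
/srv/nfs -network 192.168.1.0 -mask 255.255.255.0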
NFS I've always avoided, and OpenBSD seems to use a different syntax for /etc/exports than I learned on Linux, but as long as it works... SSHFS and SMB/CIFS seem to have failed me, and the only remaining options I know available to me are NFS and AFS, neither of which I've had time to test fully yet. I guess that can wait for later. *passes out*

It is kind of amazing, I posted a review of PC-BSD 1.2 and people occasionally still post comments on it lol. I think I'll do some thing similar when PC-BSD v2.0 is released, will be a good chance to take a deeper look into KDE4.

Hit over the Library today and checked out the Llama book (Learning Perl), I figure I've been using Perl for a lot of odds and ends lately. And I may as well inhale the book a bit since I'm usually finding myself skimming manual pages, perldoc'ing, and STFW'ing for any thing I don't remember. Hmm, I think Perl was my second programming language but I never went too deep into it back then.

Also pulled a few on PHP, dunno if I'll have time to read them and I *hate* PHP even though it is a language I've had to employ on more than one occasion.

I was also lucky enough to find a copy of the Art Of Computer Programming, Volume 4, Fascicle 4. That will make for a bit of an interesting read. If I had the cash, I think I'd love to get all of the volumes that have been published. Considering that Donald Knuth is like 70 years old and the first volume was published ~1968, I don't see how he can possibly live long enough to finish TAOCP when there are at least 2 or 3 more volumes left... but I'd love to read'em some day! Never really been a big one for computer related books, I'm more of a Science Fiction kind of guy. But hey, I could live at the library xD.

## Wednesday, April 9, 2008

### Writer's Block: Lost & Found

What have you lost that you wish you still had?

Live Journals Writer's Block

I could go on for paragraphs about what comes to mind ... wouldn't do me much good. Since I recently 'lost' it, I'd settle for my web browser back ! As to the other matter, well maybe it was for the best.

### exhaustion mounting

Oy.. I've spent most of the week with an almost constant headache, barely been on the computer the past few days and even then mostly AFK. Managed to get some training in today and some work on the SOP rewrites as well, but it's still a very tiring week. I'm looking forward to the time off coming up so I can catch up with stuff; about all I have gotten done this week so far is talking to Noer and Rasa lol.

## Monday, April 7, 2008

### A crashing BSD

Ok, now I dunno what is worse: that my very stable laptop has gone nuts, or that I'm not surprised by it at all.

mentally back tracing events:

using urxvt with zsh
vim running in background during perl file editing session
linux-flock playing my favorite radio station via linux-mplayerplug-in and native mplayer
mv ./myfile.mp3 /tmp/ -> trying to move a file off a sshfs mount to /tmp
system locked up with sound stuck replaying a single note
tried to switch to vtty1
system auto-rebooted, never saw the vtty

On reboot I restarted flock and tried to move the file again; the system locked up and rebooted when I tried to switch to vtty0... Now linux-flock segfaults when I run it, and the only other linux app I know that is handy, realplayer, also segfaults. I ain't seen any thing informative in /var/ yet either.

Now, my Windows XP machine Blue Screens of Death and occasionally Black Screens of Death!
On me all the time when listening to music while using the server browser in Raven Shield, if I use any thing other than WMP: trying WinAmp == instant death, and often the same with MPlayer using the usual DirectX related sound/video opts.

So why do I find it sad that, for me, it is not so much of a shocker that with a third party kernel module installed from a pre-compiled binary (fuse) that was ported from another OS, moving data from a mounted network file system (sshfs) to the local hard drive through SSH and said driver, while running binary programs designed for an entirely different system (linux flock+mplayerplug-in), could possibly cause a system to crash?

At least it's got a better damn reason than Windows XP has got looooool

I've tried fsck'ing the drive but the Linux ABI still seems FUBAR.. All things considered with SSHFS and SMB/CIFS, I am seriously considering putting both NFS and AFS into testing here to see if either will fill the gap.

## Sunday, April 6, 2008

Some musings from ~/Music/Playlists/manager/ideas.outline. The file outlines a format for the stored records and as much of the scripts operation as my mind can think of right now, it is 0420 already...

tags file format
| every song listed as 'file := checksum'

daemon/script
| for all files in collection loop
| | if md5 == known then
| | | if filename == known then
| | | | continue
| | | else
| | | | if md5(filename) in file then
| | | | | generate new entry in file
| | | | else
| | | | | update file with new name
| | | | | update playlists with new name
| | | | end
| | | end
| | end
| done

I like to outline my ideas every now and then for later reference; especially when I'm very tired it helps me make sense of my notes next week. There is a vim plugin for outlining but that is kind of overkill for me.

Vim has a 'listchars' setting that alters (from Vi's behavior) how and what it displays when 'list' is set. I have a function named My_OutlineMode() and an autocmd that calls the function whenever creating a new file or reading a new buffer with a file.outline:

setl listchars=tab:\|\  " Mark \t's with |'s
setl list

That makes each level of indentation be highlighted, displaying a pipe symbol '|' at each tab stop (e.g. indentation level) without inserting it into the file. I find it a tad distracting while coding -- indentation is used in programs for a reason after all! But for outlining ideas, I find it really helps to visually display the collation of ideas to indents. Maybe because I use blank lines and tabs to order thoughts in my outlines, but what ever works hehe.

I remember I first learned about list/listchars when trying to help someone in #vim that wanted code to display each indent level with leading dots, kind of like how KATE can be set to show a '.' at each tab stop. So I made a note of it in case I would want to do something like that myself later.

After looking at Vim 7.x's omni completion during a conversation in #vim on irc.freenode.net I got an inkling to try making Vim auto complete the end of HTML tags without pressing ctrl-x + ctrl-o each time. I'm sorry to say, it only took about 3 minutes:

imap </ </<c-x><c-o>

which tells Vim to enter </ then press control+x followed by control+o, which is the insert mapping vim uses by default to do auto completion. Yes I'm that bored... Haha

Hmm, I did think of an interesting idea: a 'collection monitoring playlist updater' that would use MD5 checksums (or a faster algorithm due to the size of some of Wiz's mixes).
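If I ever sit down to write it, my guess is the outline above shakes out to some thing near this rough Perl sketch (the paths, file layout, and the playlist-fixing sub are all invented on the spot, and it is completely untested):

#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5;
use File::Find;

my $root  = "$ENV{HOME}/Music"; # collection root, made up
my %known = ();                 # checksum => last known file name
# ... load %known from the flat 'file := checksum' tags file here ...

find(sub {
    return unless -f $_;
    open my $fh, '<', $_ or return;
    binmode $fh;
    my $sum = Digest::MD5->new->addfile($fh)->hexdigest;
    close $fh;

    if (exists $known{$sum}) {
        return if $known{$sum} eq $File::Find::name; # nothing moved
        # the file was renamed: fix the playlists, then the record
        # rewrite_playlists($known{$sum}, $File::Find::name); # stub, unwritten
        $known{$sum} = $File::Find::name;
    } else {
        $known{$sum} = $File::Find::name; # brand new entry
    }
}, $root);
# ... write %known back out to the tags file here ...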
The idea would be a sort of tags-file / flat-file database that maps MD5 checksums of all known files to current file names. If the file name of a checksummed file has changed, update all playlists with the 'new' file name. An interesting idea, especially since I use a mixture of XMMS and MPlayer these days rather than ol'Amarok. As it would probably take more than ~15 minutes to md5 1.9GB of music files over a sshfs mount on my laptop, it'll have to wait for another night but what an interesting idea xD

### a python filled sigh

Some how it makes perfect sense... While every one is sound asleep, even the dogs... I'm spending my time with the music blasting and digging into code on my laptop and being quite happy for the duration. Until Ma is off the couch, then I'm already miserable 8=)

At least I've managed to finish most of the work I was trying to do on Neo Ports Manager. Just committed changes to how it handles port build options; I'm still not happy with it from a design standpoint... But it should be suitable for being activated in NPMs Beta Release whenever that occurs. Actually getting any thing else done in the next four or five hours is questionable.

## Saturday, April 5, 2008

### Musings of an RSM.

Well, I expect I'll probably get flamed over it but I've posted a WARNO in the SNCO forum. I need to know exactly what is going on so I know just how much needs to be done. Perhaps my tone and style of writing is not the best for the situation, never has been one of my strong points for that matter. I think it serves the purpose well though, warno := helps me to order my thoughts and instructions in a manner for others to read and understand without a headache trying to figure them out.

The selection of a "warning order" I think was also appropriate: it's advance warning of coming instructions, and provides as much information and instruction as I can now. It's also a point of fact that we need to get going hehe.

Once every thing is done, I want to grab the SSMs, sit down, discuss it and work on an OPORD (Operations Order), and get with any Sgts who have more to say. Once that is done, typeset the operations order and issue it. I figure it is probably the best way for me to handle the situation. I've mentally allotted about a week for start and a week for finish; after that.. if nothing comes in I guess it is RSM from hell time. Because, whether I like it or not (and I don't), I need to make sure things are running smoothly and push things in the proper direction as necessary... I want to make sure of where we are on the map before I push, too hard or softly...

One thing I do want to do, and Noer seems to be a step ahead of me, is a consistency to what we teach. I want, as part of the OPORD or an attachment to it, a listing of every thing 'big' on that issue, boiled down to its simple form for routing through the SNCO and NCOs. I could write that part in 5 minutes, have it typeset in another 5, but I want to make sure nothing is missing from our current environment.

I am not going to sit and watch another generation of [SAS] Members face the same things mine had to during training... As the motto I would put on my sigblock if I could (bloody forum rules and all): Lead me, follow me, or get the fuck out of my way.

In this case I think we probably have a better group for getting the tasks we need done than we have had since Randoms time as Cpl loool.

note to self, send our new webcoder some instructions and concept mock-ups on the new 'project' I have in mind for the website hehehe.
## Friday, April 4, 2008

I'll never understand it... If I try to do anything while others are awake here, it causes my productivity to shrink and my headache to grow exponentially. Is this some kind of universal law I've never heard of? Or is it just that my family likes to bitch when I don't get stuff done but loves to drive me freaking crazy whenever I *try* to get stuff done? Lol... I so need to get out of here... Marching in the thunderstorm would probably be more peaceful :\

After having been wanting to for ages, I have finally fixed up my OpenBSD machine's partitions. I had an 80GB hard drive formatted (wd1a), moved /usr/local/ onto it, and put my SMB shares on it for the free space. Since wd0 is an 8GB disk split into a, b, h, d, g, and e partitions, the biggest is wd0g mounted on /usr with ~6GB free, but I had almost 10GB of files on /usr/local (wd1a). So I had to copy my backups and videos to the Windows machine via Samba/Network Neighborhood before I could move all of my files in /usr/local/srv to a temporary place in /usr and then archive the rest of the directory.

    cd /usr
    mkdir storage
    mv local/srv storage
    tar -cf /var/tmp/local.tar local

I had to relabel the disk and then format the partitions; I created wd1a and wd1d to use as /usr/local and /srv, with ~15GB more free space in case I need it.

    umount -f local
    disklabel -E wd1
    newfs wd1a
    newfs wd1d

During the disklabel I changed the 'disk geometry' (g d), deleted the a partition (d a) and created the a and d partitions (c a and c d), keeping with the prompts and specifying 12G and 45G for the partition sizes. Fixed my fstab and then mounted the partitions.

    vi /etc/fstab

    # 8GB Primary Master, PATA drive
    # device mount type opts dump fsck
    /dev/wd0a / ffs rw 1 1
    /dev/wd0h /home ffs rw,nodev,nosuid 1 2
    /dev/wd0d /tmp ffs rw,nodev,nosuid 1 2
    /dev/wd0g /usr ffs rw,nodev 1 2
    /dev/wd0e /var ffs rw,nodev,nosuid 1 2

    # 80GB Primary Slave, PATA drive
    # device mount type opts dump fsck
    /dev/wd1a /usr/local ffs rw,nodev 1 2
    /dev/wd1d /srv ffs rw,nodev,nosuid 1 2

    mount -o rw,nodev /dev/wd1a /usr/local
    mount -o rw,nodev,nosuid /dev/wd1d /srv

I'm somewhat tempted to mark wd1d 'noexec', but I may wish to run scripts from there later if I ever move ~/code over. After that it was just a quick hop, skip, and jump to restore my files.

    tar xpf /var/tmp/local.tar
    mv storage/srv/smb /srv/
    vi /etc/samba/smb.conf

I corrected all of my shares in smb.conf from command line mode:

    :1,$s/\/usr\/local\/srv/\/srv/g
I could've used ex but I rather like paging up/down with ^U and ^D instead of using 'addr1,addr2p' in ex.
# mount
/dev/wd0a on / type ffs (local)
/dev/wd0h on /home type ffs (local, nodev, nosuid)
/dev/wd0d on /tmp type ffs (local, nodev, nosuid)
/dev/wd0g on /usr type ffs (local, nodev)
/dev/wd0e on /var type ffs (local, nodev, nosuid)
/dev/wd1a on /usr/local type ffs (local, nodev)
/dev/wd1d on /srv type ffs (local, nodev, nosuid)
# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/wd0a 147M 30.4M 110M 22% /
/dev/wd0h 393M 35.6M 337M 10% /home
/dev/wd0d 98.3M 2.0K 93.4M 0% /tmp
/dev/wd0g 6.7G 398M 6.0G 6% /usr
/dev/wd0e 148M 84.1M 56.2M 60% /var
/dev/wd1a 11.8G 76.9M 11.1G 1% /usr/local
/dev/wd1d 44.3G 5.1G 37.0G 12% /srv
Windows wouldn't see the file shares, and sending the HUP signal to Samba to reread its conf file immediately didn't help any. So I gave Vectra a reboot to double check my fstab entry (yes, I am paranoid); I could've just killed the processes and reloaded them manually for the same effect.
# uptime
9:14PM up 19 days, 3:29, 1 user, load averages: 4.12, 4.16, 3.86
# reboot
I love OpenBSD :-)
EDIT:
To prevent some nasty timeouts.
vi /etc/ssh/sshd_config
ClientAliveInterval 15
ClientAliveCountMax 45
vi ~/.ssh/config # or /etc/ssh/ssh_config for all clients
ServerAliveInterval 15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2708226144313812, "perplexity": 4570.770275038378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573011.59/warc/CC-MAIN-20190917020816-20190917042816-00310.warc.gz"}
|
https://onepetro.org/REE/search-results?qb=%7B%22Keywords1%22:%22system+stiffness%22%7D
|
1-1 of 1
Keywords: system stiffness
Journal Articles
SPE Res Eval & Eng 25 (04): 704–718.
Paper Number: SPE-210560-PA
Published: 16 November 2022
... and the hydraulic fracture form a single system, for a conventional DFIT (sequence of pump-in/shut-in) or a DFIT-FBA (sequence of PIFB), the trend of pressure with time depends on the system stiffness and changes in system volume ( Raaen et al. 2001 ; McClure et al. 2016 ). Therefore, the pressure derivative can...
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9350617527961731, "perplexity": 8960.968562697844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00860.warc.gz"}
|
https://artofproblemsolving.com/wiki/index.php?title=2014_AMC_8_Problems/Problem_21&diff=prev&oldid=111078
|
# Difference between revisions of "2014 AMC 8 Problems/Problem 21"
## Problem
The $7$-digit numbers $74A52B1$ and $326AB4C$ are each multiples of $3$. Which of the following could be the value of $C$?
## Solution 1
The sum of a number's digits is congruent to the number $\pmod 3$. $74A52B1$ must be congruent to $0 \pmod 3$, since it is divisible by $3$. Therefore, $7+4+A+5+2+B+1 \equiv A+B+19$ is also congruent to $0 \pmod 3$. Since $19 \equiv 1 \pmod 3$, we get $A+B \equiv 2 \pmod 3$. As we know, $326AB4C$ is also divisible by $3$, so $3+2+6+A+B+4+C \equiv A+B+C+15 \equiv A+B+C \equiv 0 \pmod 3$. We can substitute $2$ for $A+B$, so $2+C \equiv 0 \pmod 3$, and therefore $C \equiv 1 \pmod 3$. This means that $C$ can be $1$, $4$, or $7$, but the only one of those that is an answer choice is $\boxed{1}$.
## Solution 2
Since both numbers are divisible by $3$, the sum of their digits has to be divisible by $3$. $7 + 4 + 5 + 2 + 1 = 19$, so in order for the first number to be a multiple of $3$, $A + B$ has to be $2$ or $5$ or $8$... and so on. We add up the known digits in the second number: $3 + 2 + 6 + 4 = 15$. Adding one of the selected values, say $5$, to $15$ gives $20$, and we see that $C$ must be $1, 4, 7, 10$... and so on, otherwise the number will not be divisible by $3$. Adding $8$ to $15$ gives $23$, which again shows that $C = 1$ or $4$ or $7$... and so on. Taking the values common to these cases, $C$ could be $1$, $4$, or $7$. However, among the answer choices there is no $4$ or $7$, but there is a $1$, so $\boxed{1}$ is the answer.
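As a sanity check, here is a short brute-force enumeration (my own sketch, not from the wiki page; it tries every digit choice and relies only on divisibility by 3):

    valid_c = set()
    for a in range(10):
        for b in range(10):
            for c in range(10):
                n1 = int("74%d52%d1" % (a, b))
                n2 = int("326%d%d4%d" % (a, b, c))
                if n1 % 3 == 0 and n2 % 3 == 0:
                    valid_c.add(c)
    print(sorted(valid_c))  # prints [1, 4, 7]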
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8014805316925049, "perplexity": 116.44116114019641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519395.23/warc/CC-MAIN-20210119135001-20210119165001-00701.warc.gz"}
|
http://boards.fireden.net/sci/page/15/
|
(40 replies)
## Why Don't We Have Hair and/or Skin Of Unusual Colors Naturally?
No.10650384
Unusual hair colors in people and fur colors in animals, blue, green, pink, blood-red, etc, are fairly common in fiction, particularly fantasy. But why don’t we see people with naturally green, or blue, or pink, silver, etc, hair in reality? What would be needed for such colors to exist? How could they evolve or be created by genetic engineering? What terminology would they use? What about people with blue, red, or green skin?
35 posts omitted
(20 replies)
No.10670091
Is data science a meme? Is it a science?
Isn't it just stats?
Am I better off with a masters in stats or a masters in data science?
15 posts omitted
(5 replies)
## string worldsheet theory
No.10667450
so, the theory that describes a string worldsheet is conformally invariant. haha, that sounds like Weyl, right?
anyhow, let's talk string theory. you know, normal topics:
Swampland Conjecture
KKLT mechanism
BPS states
i'll start.
KKLT and the string multiverse are a Kachru/Kallosh/Linde/Trivedi/Susskind/Stanford clownsquad meme. stanford idiots need to be rejected by all the major journals from now on. lenny's club proven to be a bunch of publicity-craving shills. no more.
/sci/ is a Vafa board, fuck those stanford fucks
only harvard, IAS, MIT, Princeton, etc. can be trusted. west coasters like JH Schwarz can go eat a dick.
(7 replies)
No.10666563
What books should I read to be able to understand Vsauce videos?
2 posts omitted
(5 replies)
## Probability calculations or statistics?
No.10670042
Which one is more boring?
(5 replies)
No.10670547
Does /sci/ think science and technology are advancing at a pace too fast for society to keep up with, or to notice the negative effects until long after their implementation?
(319 replies)
## /med/
No.10654055
"What the fuck is wrong with teddy?" edition
old: >>10645251
We discuss research, offer advice (Just see your family physician), make fun of premeds, discuss residency and different specialities but we mostly shitpost
If you want to discuss vaccines, please make your own thread because it takes a lot of replies and the discussion degenerates.
>What's the best speciality for research?
Path, clinical lab, onc, rad/onc, anaesthesia
>What are the best specialities lifestyle wise?
314 posts and 33 images omitted
(5 replies)
## Doppler shift
No.10670436
I'm a physics major, first year. We were discussing Doppler shift, so I started putting it in terms of limits (the limit as vsource -> 0 is unshifted, obviously). But then I tried vsource -> v and vsource -> c and things got weird. When it approaches the speed of sound it approaches an infinite pitch shift, i.e., when the source of sound moves towards you at the speed of sound it sounds really high pitched until it cannot be heard. But as vsource approaches the speed of light, the frequency is multiplied by a shift that goes to 0.
So, the frequency goes very high towards the speed of sound, then past the speed of sound the frequency is negative (traveling in the opposite direction?), gradually slowing as it approaches the speed of light? Am I missing something here? Graph and reasoning attached
(58 replies)
No.10667200
Which Great Filter hypothesis do you find most compelling?
Personally, I like Rare Earth. What a twist that would be, after centuries of demolishing anthropocentrism...
53 posts and 4 images omitted
(10 replies)
No.10670404
If E_k equals (1/2)mv^2 and delta E_k equals force times distance, then F*s equals (1/2)mv^2. But m*(1/2)v^2 is nothing else but mass times the integral of v, which equals s (the distance). That means m equals F. But F=ma.
So can someone explain to me this retarded ass formula immediately?
5 posts omitted
|
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1683504730463028, "perplexity": 5733.826390415833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258620.81/warc/CC-MAIN-20190526004917-20190526030917-00010.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/proc.2009.2009.857
|
# American Institute of Mathematical Sciences
2009, 2009(Special): 857-868. doi: 10.3934/proc.2009.2009.857
## Asymptotical dynamics of the modified Schnackenberg equations
1 Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620
Received June 2008 Revised February 2009 Published September 2009
The existence of a global attractor in the $L^2$ product phase space for the solution semiflow of the modified Schnackenberg equations with the Dirichlet boundary condition on a bounded domain of space dimension $n\le 3$ is proved. This reaction-diffusion system features two pairs of oppositely-signed nonlinear terms so that the dissipative sign-condition is not satisfied. The proof features two types of rescaling and grouping estimation in showing the absorbing property and the uniform smallness in proving the asymptotical compactness by the approach of a new decomposition.
Citation: Yuncheng You. Asymptotical dynamics of the modified Schnackenberg equations. Conference Publications, 2009, 2009 (Special) : 857-868. doi: 10.3934/proc.2009.2009.857
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36449965834617615, "perplexity": 4478.763861645076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738819.78/warc/CC-MAIN-20200811180239-20200811210239-00513.warc.gz"}
|
https://huggingface.co/speechbrain/asr-crdnn-transformerlm-librispeech
|
# CRDNN with CTC/Attention and RNNLM trained on LibriSpeech
This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (EN) within SpeechBrain. For a better experience, we encourage you to learn more about SpeechBrain.
The performance of the model is the following:
| Release | Test clean WER | Test other WER | GPUs |
|:--------:|:--------------:|:--------------:|:-----------:|
| 05-03-21 | 2.90 | 8.51 | 1xV100 16GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
1. Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech.
2. Neural language model (Transformer LM) trained on the full 10M words dataset.
3. Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of N blocks of convolutional neural networks with normalization and pooling on the frequency domain. Then, a bidirectional LSTM with projection layers is connected to a final DNN to obtain the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling transcribe_file if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
pip install speechbrain
### Transcribing your own audio files (in English)
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-transformerlm-librispeech", savedir="pretrained_models/asr-crdnn-transformerlm-librispeech")
asr_model.transcribe_file("speechbrain/asr-crdnn-transformerlm-librispeech/example.wav")
### Inference on GPU
To perform inference on the GPU, add run_opts={"device":"cuda"} when calling the from_hparams method.
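For example, combining the transcription snippet above with the extra argument:

    from speechbrain.pretrained import EncoderDecoderASR

    # Same call as above, with run_opts added to place the model on the GPU.
    asr_model = EncoderDecoderASR.from_hparams(
        source="speechbrain/asr-crdnn-transformerlm-librispeech",
        savedir="pretrained_models/asr-crdnn-transformerlm-librispeech",
        run_opts={"device": "cuda"},
    )
    asr_model.transcribe_file("speechbrain/asr-crdnn-transformerlm-librispeech/example.wav")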
## Parallel Inference on a Batch
Please, see this Colab notebook to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (Commit hash: 'eca313cc'). To train it from scratch follow these steps:
1. Clone SpeechBrain:
git clone https://github.com/speechbrain/speechbrain/
2. Install it:
cd speechbrain
pip install -r requirements.txt
pip install -e .
3. Run Training:
cd recipes/LibriSpeech/ASR/seq2seq
python train.py hparams/train_BPE_5000.yaml --data_folder=your_data_folder
You can find our training results (models, logs, etc) here.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# Citing SpeechBrain
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15998585522174835, "perplexity": 19442.884146188826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00340.warc.gz"}
|
https://www.physicsforums.com/threads/micro-structure-from-equilibrium-diagrams.566747/
|
# Micro structure from equilibrium diagrams
1. Jan 11, 2012
### aiat_gamer
I have this question:
"What micro-structural features of metal alloys can be studied by using equilibrium diagrams and how? Can the equilibrium diagrams be used for some other purposes? What are the limitations?"
I think that basically only the composition of the micro-structure can be predicted at a certain temperature in an alloy. They don't actually give the shape of the micro-structure, do they?
2. Jan 11, 2012
### pukb
Phase diagrams do not give the exact microstructure, because the rate of cooling/heating assumed in phase diagrams is extremely slow, such that an equilibrium state is attained at every point. Practical situations involve cooling rates that are much higher than the equilibrium cooling rates.
The composition of each phase can be determined at any temperature using the lever rule (see the formula below).
An approximate development of micro-structure can be understood using phase diagrams.
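For reference, the lever rule mentioned above takes the standard textbook form (not spelled out in the thread): for an overall composition $C_0$ lying between the phase compositions $C_\alpha$ and $C_\beta$ on a tie line, the weight fractions of the two phases are

$$W_\alpha = \frac{C_\beta - C_0}{C_\beta - C_\alpha}, \qquad W_\beta = \frac{C_0 - C_\alpha}{C_\beta - C_\alpha}.$$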
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8589815497398376, "perplexity": 1848.3267124925367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830479.82/warc/CC-MAIN-20181219025453-20181219051453-00165.warc.gz"}
|
https://brilliant.org/discussions/thread/report-a-problem-what-it-means/
|
# Report A Problem - What It Means
[Image: the reported-problem banner]
We have released a way to keep you informed about problems which have potential issues in them. When several people have reported a problem, the above banner will be displayed.
As a problem solver,
• be aware that the answer may not be correct.
• this problem can still affect your ratings. However, if there are issues with the problem, then those who got it wrong will have their ratings refunded (see below).
• help us identify such problems by actively reporting them.
For your reference, you can look at Example Of Reported Problem.
As a problem creator, respond to clarification requests and disputes in a timely manner by editing or deleting the problem. Once you see such a banner appear on your problem, you may no longer edit or delete the problems in the usual way. Instead, you choose from the following options in a dropdown menu:
[Image: the dropdown menu of options]
1) Delete the problem
This deletes the problem from your feed, and refunds the ratings for those who got it wrong.
2) Update the answer to the problem.
Remember to explain why. You may reference the solution discussions. Once the answer is updated (after review), the corresponding rating changes will occur.
3) Adjust the wording of the problem.
Use this for significant changes which affect the problem. E.g. You used “integer” but meant “real number”. This will also refund the ratings for those who got it wrong.
4) Make no changes, explain existing problem to moderator.
Remember to explain why. You may reference the solution discussions. You can then make minor edits to the question, without affecting the ratings of others.
Once you have made the corresponding change, it will be reviewed.
Note by Calvin Lin
5 years, 10 months ago
Thank you for giving us the ability to change the answer to our problems! I've needed this don a couple times, and it was quite embarrassing while the answer was incorrect. This is a great change!
- 5 years, 10 months ago
As always, if you want to update your answer (especially if it hasn't been reported), please send me an email ([email protected])
Staff - 5 years, 10 months ago
This is really cool.....Brilliant is getting more brilliant day by day!!!!!!!!!!!!
- 5 years, 9 months ago
Good work.. I appreciate team Brilliant for their vibrant, challenging problems. All your changes are welcome.
- 5 years, 10 months ago
Can you see if any of your problems are flagged without clicking on it?
- 5 years, 10 months ago
As this is a new feature, we have not built out nice capabilities like this. If reporting works well, we will make it much easier to review. This could include being sent an email (digest?) for problems that are being reported.
Staff - 5 years, 10 months ago
Hmm. I think it'd be nice if the problems themselves had a nice glowing, light red color when viewed from a feed or a set.
- 5 years, 10 months ago
Very nice feature.
But what if a valid problem is reported by multiple members who didn't solve it correctly? Will only a moderator be able to change a problem's status to "flagged", or is it an automatic feature?
- 5 years, 10 months ago
If a problem is "wrongly" reported (or has a history of being wrongly reported), we have a way to prevent further banners from showing up. This would apply in cases where people easily misunderstand the conditions / misread the problem.
Unless the problem is very badly worded, the problem creator should not need to make repeated edits. This would make it fairer to people who are working on the problem.
Staff - 5 years, 10 months ago
thnkx for making this change...
- 5 years, 10 months ago
Is there a way to change the options (not necessarily the answer) of a multiple choice question?
- 5 years, 10 months ago
No. My suggestion would be to delete the problem and start over.
It is unlikely that we would implement such a change.
Staff - 5 years, 10 months ago
But at least can we see what the options were after posting the problem? It helps in writing a solution, explaining why the other options are wrong...
- 5 years, 10 months ago
We are working on giving everyone the ability to see the options in a condensed way after they solved the problem.
Staff - 5 years, 10 months ago
That's nice..
- 5 years, 9 months ago
I have a question: How many is 'multiple members'?
- 5 years, 9 months ago
It depends on how 'reputable' the source is.
For reliably good members (a clear example would be me), 1 report would be sufficient to trigger the banner. For new members with a low rating, it could take 3 reports before a Level 1 problem is flagged, and several more before a Level 5 problem is flagged.
We're also playing around with this algorithm, and will consider making it stricter or more relaxed. I'm currently against "banner once anyone reports", since reports do arise from misreading / misunderstanding the problem / terminology.
Staff - 5 years, 9 months ago
To Brilliant: sir, I have reported a bug in this Brilliant website. This is not only affecting the person doing the problem but also letting a person gain more points by a wrong method. A person is creating 2 accounts and he/she is using one account as a guide and the other as an original account. They are typing the answer in the 1st account, and if the answer is wrong then they are seeing the correct answer and typing the answer in the other account. Due to this, the people who have really worked to get an answer are really of no use. So I request you to immediately take action on this issue and solve this problem. HOPE TO GET A BETTER RESULT.
- 5 years, 1 month ago
Hi Vishwa, Thanks for raising your concerns. Unfortunately, I am not able to police how everyone uses the internet, nor am I enable to enforce any kind of moral code amongst everyone. If I could, I would happily be ending all wars (and thus not be working here).
At the end of the day, how one person decides to use resources is up to him/her. The person who cheats and merely gets the momentary satisfaction of a green screen, is not going to have any long term value. The person who puts the effort into thinking about these interesting questions and deciding that math and science is actually useful and fun, would have a whole world of opportunities open to them. Furthermore, the Brilliant community praises, and values, written solutions to a problem, which help shed light on the thought process. This is something that the latter can contribute and benefit from, but not the former. As such, I strongly disagree with "the people who have really worked to get an answer are really of no use".
Staff - 5 years, 1 month ago
Knowing an answer is not important; how we arrived at an answer is important. Bluffing will not serve any purpose. It is a cool place where we get a challenge. Face it. Along the way you learn many things. If anybody cheats, that is his problem; he is not here to learn. Points are not the real count. What we learn remains with us always.
- 5 years, 1 month ago
Yes sir, you are truly right! It is hard work that makes them succeed...
- 5 years, 1 month ago
When I open it (the wrong problem), will it be added to the list of started problems? And would it be better if the wrong problem had a sign like ⛔ or 🚫 beside its headline, to notify us that there are some issues with this problem before we open it?
- 5 years, 10 months ago
As with any other problem, it will be added to your list of started problems.
As mentioned below, we have not built out any nice capabilities as yet.
Staff - 5 years, 9 months ago
What if I found a good problem somewhere (website, olympiad, etc.) and I can't find the right answer because it's above my ability to solve, but I know others will like it... what am I supposed to do then? Put a random answer, then wait for someone to report it so I can correct it?
- 5 years, 9 months ago
You can post it as a note, and ask others how they would approach it.
Staff - 5 years, 9 months ago
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.847981333732605, "perplexity": 1724.3662809419036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145960.92/warc/CC-MAIN-20200224132646-20200224162646-00126.warc.gz"}
|
https://tricks11.com/how-to-remove-passwords-from-pdf-files/
|
# How To Remove Passwords From PDF Files Easily
Friends, many times you find that you have some PDF files which are locked or have some restrictions, and because of these restrictions you are not able to open these files. And many times we forget the password of a file. So in this post I will teach you how to remove passwords from PDF files and open your important PDF files without a password, easily. Just follow the steps below to remove passwords from PDF files easily.
### How to Remove Passwords from PDF Files :-
1. First of all Click Here
2. Now select the option to upload your file
3. Upload the PDF file which you want to unlock.
4. Wait for the file to upload
5. After the upload completes, click on the Unlock button
6. After the process is completed, download your unlocked PDF file.
7. That's all
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478166103363037, "perplexity": 7267.853245383611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529737.79/warc/CC-MAIN-20190723215340-20190724001340-00225.warc.gz"}
|
https://math.stackexchange.com/questions/2445992/in-how-many-ways-can-we-choose-3-objects-from-28-objects-on-circle-such-that-the
|
# In how many ways can we choose 3 objects from 28 objects on circle such that they are neither adjacent nor diametrically opposite?
Suppose 28 objects are placed along a circle at equal distances. In how many ways can 3 objects be chosen from among them so that no two of the three chosen objects are adjacent nor diametrically opposite? I have done this problem in the following manner: there are 28 ways to choose the first object. For the second object, there are two cases:
Case 1:
It is the next-to-next object from the first object. This can be done in two ways. For each of these two ways, we can choose the third object in 21 ways, excluding 7 positions as per the restrictions (1 for the position of the first object, 1 for the second object, 3 for their adjacent positions and 2 for their opposite positions). Therefore, for this case, there are 28*2*21 ways to choose the objects.
Case 2:
The second object occupies any other position except for the next-to-next positions of the first object. There are then 22 ways to choose the second object. Now for each of these ways we can choose the third object in 18 ways. So the number of ways is 28*18*22.
Now the answer is 28*(18*22+2*21) ways, which is way larger than the given answer: 2268. The given solution uses the complementary method. Though I understood the solution, I am not able to figure out the flaw in my approach, but I understand that something is seriously wrong.
Kindly help me figure out the mistake.
## 2 Answers
Suppose $28$ objects are placed along a circle at equal distances. In how many ways can three objects be chosen from among them so that no two of the three chosen objects are adjacent nor diametrically opposite?
There are $\binom{28}{3}$ ways to select three of the $28$ objects. From these, we must exclude those cases in which two or more objects are adjacent or diametrically opposite.
There are $28$ ways to select a pair of adjacent objects since there are $28$ possible starting points as we move clockwise around the circle. For each such pair, there are $26$ ways to choose the third object.
The only way to have two pairs of adjacent objects is to select three consecutive objects, which can be done in $28$ ways since, again, there are $28$ possible starting points as we move clockwise around the circle.
By the Inclusion-Exclusion Principle, there are $$\binom{28}{3} - 28 \cdot 26 + 28$$ ways to select three objects so that no two of them are adjacent.
In considering the case of one pair of adjacent objects, we have already excluded those cases in which the third object is diametrically opposite one of the objects in the adjacent pair. We still need to remove those cases in which two objects are diametrically opposite each other but no two of the objects are adjacent.
There are $14$ pairs of diametrically opposite objects. For each such pair, there are $22$ ways of selecting a third object that is not adjacent to either of these objects. Hence, there are $14 \cdot 22$ pairs of diametrically opposite objects which we have not previously excluded.
Hence, the number of permissible selections is $$\binom{28}{3} - 28 \cdot 26 + 28 - 14 \cdot 22 = 2268$$
I am not able to find the flaw in my approach.
You are counting the same arrangements multiple times.
Suppose we number the objects from $1$ to $28$ as we proceed clockwise around the circle.
Case 1: There are two subcases, each of which you have counted multiple times.
Type 1: There is exactly one pair of next-to-next objects.
You count each such selection twice. For example, you count the selection $1, 3, 7$:
• once when you select $1$, then select $3$ as the next-to-next object, and $7$ as the additional object
• once when you select $3$, then select $1$ as the next-to-next object, and $7$ as the additional object
Type 2: There are two pairs of next-to-next objects. You count these selections four times.
For example, you count the selection $1, 3, 5$:
• once when you select $1$, then select $3$ as the next-to-next object, and $5$ as the additional object
• once when you select $3$, then select $1$ as the next-to-next object, and $5$ as the additional object
• once when you select $3$, then select $5$ as the next-to-next object, and $1$ as the additional object
• once when you select $5$, then select $3$ as the next-to-next object, and $1$ as the additional object
There are $28$ cases of the second type, corresponding to the $28$ possible starting points as we move clockwise around the circle.
For the first type, there are $28$ pairs, corresponding to the $28$ possible starting points as we move clockwise around the circle. There are $19$ ways to place the additional object so that it satisfies the restrictions and does not fall into the second type.
We can separate your $21$ ways of selecting the third object into $19$ ways of the first type and $2$ of the second type (choosing the third object to be one that is next-to-next of one end of the pair). Hence, there are actually $$\frac{1}{2} \cdot 28 \cdot 2 \cdot 19 + \frac{1}{4} \cdot 28 \cdot 2 \cdot 2 = 28 \cdot 19 + 28 = 28 \cdot 20 = 560$$ such cases.
Case 2: You count each such selection six times, once for each permutation of the three objects you selected.
However, you have to be careful how you count this case. Notice, for instance, that the selections $1, 4, 19$ and $1, 7, 20$ both satisfy your criteria. However, $1$ and $4$ share excluded spaces, while $1$, $7$, and $20$ do not. Whenever two of the objects are separated by two or three other objects, they share excluded spaces, which you did not take into account.
Taking the circle to be numbered, but the $3$ objects to be identical, we can divide the inadmissible cases out of $\binom{28}3$ total placements into three disjoint types for ease of computation:
• all three together: $28$
• exactly two together: $28\cdot24$
• two diametrically opposite and the third apart: $14\cdot22$
Thus admissible placements $= \binom{28}3 - (28+28\cdot 24+ 14\cdot 22) = 2268$
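A quick brute-force confirmation of the count (my own sketch, assuming the positions are numbered 0 through 27 around the circle):

    from itertools import combinations

    n = 28
    count = 0
    for trio in combinations(range(n), 3):
        ok = True
        for i, j in combinations(trio, 2):
            d = (i - j) % n
            if d in (1, n - 1) or d == n // 2:  # adjacent or diametrically opposite
                ok = False
                break
        if ok:
            count += 1
    print(count)  # prints 2268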
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6088883280754089, "perplexity": 172.2090271665477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148671.99/warc/CC-MAIN-20200229053151-20200229083151-00407.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=43&t=21446
|
## Relationship between Electronegativity and Orbital Energy
Joe Rich 1D
Posts: 32
Joined: Fri Jun 23, 2017 11:39 am
Been upvoted: 1 time
### Relationship between Electronegativity and Orbital Energy
The answer for question 4.57 says that a higher electronegativity for an atom makes its orbitals have lower energy. Why is that?
Dabin Kang 1B
Posts: 22
Joined: Fri Jun 23, 2017 11:39 am
Been upvoted: 1 time
### Re: Relationship between Electronegativity and Orbital Energy
Electronegativity is directly proportional to effective nuclear charge, so when electronegativity is high, the nuclear charge is high as well. The high nuclear charge strongly attracts the electrons and pulls them in, decreasing the energy of the orbitals.
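As a rough quantitative sketch (the hydrogen-like approximation, not from the course material): the orbital energy scales as

$$E_n \approx -\frac{Z_{\mathrm{eff}}^{2}}{n^{2}}\,(13.6\ \mathrm{eV}),$$

so a larger effective nuclear charge $Z_{\mathrm{eff}}$ makes $E_n$ more negative, i.e., it lowers the orbital energy.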
JD Malana
Posts: 21
Joined: Wed Nov 16, 2016 3:02 am
### Re: Relationship between Electronegativity and Orbital Energy
Can you apply the same logic to a relationship between electronegativity and atomic size? As in would more electronegative atoms be smaller in size?
Justin Lai 1C
Posts: 50
Joined: Fri Sep 29, 2017 7:04 am
### Re: Relationship between Electronegativity and Orbital Energy
I think that the more electronegative an atom is, the more easily it attracts electrons. This goes up in a diagonal trend, increasing toward the right across a period and toward the top of a group, disregarding noble gases. The atomic radius increases toward the left across a period and toward the bottom of a group. There is not necessarily a correlation, however, because there may be exceptions.
Chem_Mod
Posts: 17501
Joined: Thu Aug 04, 2011 1:53 pm
Has upvoted: 393 times
### Re: Relationship between Electronegativity and Orbital Energy
As an example, consider the most electronegative element, F. When it comes to effective nuclear charge, smaller atoms tend to have higher effective nuclear charge because the nucleus is less shielded; therefore its nucleus pulls electrons toward it more easily than, say, Cl, which is also quite electronegative, but less so than F, O, and N. I do not know that having a higher effective nuclear charge makes the orbitals have lower energy; rather, these especially electronegative atoms at the top of the p-block with high effective nuclear charge simply, intrinsically, have lower energy orbitals.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150625467300415, "perplexity": 2709.758342591537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677964.40/warc/CC-MAIN-20191018055014-20191018082514-00535.warc.gz"}
|
https://engineering.stackexchange.com/questions/16385/reinforcing-beam-and-slab-with-correct-steel-bar-structure
|
# Reinforcing beam and slab with correct steel bar structure
This question might look silly. I do not know the correct engineering terms to use in this question, so I am trying to explain using the words I know in English.
We have hired some construction workers for a job in our backyard. I have some doubts about reinforcing beams for slabs. I came across the following 2 models:
1. Model one
2. Model two
According to my perception, I believe the second one is much better in terms of strength, because the beam is supporting the weight of the slab. But my friend suggested to me that the first one is the right way, and I am not aware of why the first one would be better than the second. Could anyone please help me out in this case to identify the better one?
## Model 1 is always better, but may need some modifications
Whenever you have two reinforced concrete elements, you always need to facilitate the transfer of internal stresses between them. This is done by "mixing" their reinforcement.
For instance, model 2 will have no steel between the slab steel and the beam steel. This means that the connection between them will be very fragile and weak to horizontal shear, meaning that there's a risk of a horizontal crack like this:
With Model 1, this isn't a risk because the vertical steel from the beam ("stirrups") will resist these forces.
Model 1 also has an advantage in that it makes the beam itself stronger, since it allows the beam to behave as a much taller beam (including the height of the slab). Indeed, it even allows the beam to behave not as a rectangular beam, but as a stronger T-shape beam, using some of the slab to resist some of the beam's internal forces (how much depends on your country's standards and codes).
If the beam and the slab will be poured simultaneously, then that's it, and you can stop reading this answer.
However, if there's a chance that the beam will be built first and then the slab will be poured over it, then that means that the beam's stirrups and negative reinforcement (the longitudinal bars at the top of the beam) will be outside of the concrete when the beam is initially built but the slab hasn't yet been poured. This means that the beam's weight and that of the freshly-poured slab (before it gains enough stiffness to do anything) will have to be resisted by the "short" beam alone (without the added height from the slab), so some additional steel will be necessary:
You are basically reinforcing the same beam twice: once considering it as the short beam (without the slab), and once as the tall beam (with the slab). To avoid confusion as to how to define how much reinforcement goes into each position, here's a list:
• positive reinforcement (longitudinal steel at the bottom of the beam): calculate the necessary reinforcement for the short beam to resist the beam's and the slab's self-weight, and then calculate the reinforcement for the tall beam to resist any additional loads (dead loads, live loads, etc). Add these two numbers and you have the total necessary positive reinforcement.
• negative reinforcement (longitudinal steel at the top of the beam) and the shear reinforcement (stirrups) for the short beam: calculate the necessary reinforcement for the short beam to resist the beam's and the slab's self-weight. You may need to just put the minimum reinforcement.
• negative reinforcement (longitudinal steel at the top of the beam) and the shear reinforcement (stirrups) for the tall beam: calculate the reinforcement for the tall beam to resist any additional loads (dead loads, live loads, etc). You may need to just put the minimum reinforcement.
• appreciated. Clear and understood. Jul 23 '17 at 13:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015920519828796, "perplexity": 970.3238970483626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585181.6/warc/CC-MAIN-20211017175237-20211017205237-00523.warc.gz"}
|
https://ask.sagemath.org/question/60699/finding-a-unique-integer-solution-to-a-set-of-inequalities/?answer=60725
|
# Finding a unique integer solution to a set of inequalities [closed]
I am trying to find the unique integer solution to a set of equalities and inequalities. The equations I have are 0<=k0<p-1 and k=k0+j*(p-1). I know that for p a prime >=5 and k an integer, there is a unique pair of integers k0 and j such that these equations are satisfied. However, I can't seem to implement this.
What I have is
k0,j = var('k0 j')
solve([0<=k0<p-1, k==k0+j*(p-1), k0 in ZZ, j in ZZ], k0, j)
Where I added the "in ZZ" parts later to try and force it to give me an answer, but no matter what I plug in for p and k, the output is always just
(k0, j)
What exactly am I doing wrong?
### Closed for the following reason: the question is answered, right answer was accepted by Rune (close date 2022-01-22)
Sage doesn't work that way, e.g. k0 in ZZ and j in ZZ evaluate to False because they are symbolic variables. You meant assume(k0, 'integer') and assume(j, 'integer'). Still, solve doesn't seem very good at your problem. Instead, you can define your solution set as a Polyhedron and ask for its integral points:
sage: p = 5
sage: k = 1
sage: Polyhedron(ieqs=[[0,1,0], [p-2,-1,0]], eqns=[[-k, 1,p-1]]).integral_points()
((1, 0),)
Thanks, this works. I'm still new to Sage, so I'm still learning how it works.
The given conditions imply that k0 and j are simply the remainder and quotient of division of k by p-1. Correspondingly, they can be computed as:
j, k0 = k.quo_rem(p-1)
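A quick plain-Python check (not from the original answer) that this matches the Polyhedron result above, which listed the point as (k0, j); divmod is the plain-Python counterpart of quo_rem:

```python
p, k = 5, 1

# divmod returns (quotient, remainder), i.e. (j, k0) with k = k0 + j*(p-1)
j, k0 = divmod(k, p - 1)

assert 0 <= k0 < p - 1
assert k == k0 + j * (p - 1)
print(k0, j)  # -> 1 0, matching the integral point (1, 0) found above
```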
https://scoop.eduncle.com/78-the-rate-law-of-rone-of-the-mechanism-of-the-pyrolysis-of-ch-cho-at-520-c-and-0-2-bar-is-2-rate-ch
IIT JAM · July 9, 2021
78. The rate law of one of the mechanisms of the pyrolysis of CH3CHO at 520 °C and 0.2 bar is Rate = k(2)[k(1)/k(4)]^(1/2) [CH3CHO]^(3/2). The overall activation energy E_a in terms of the rate law is: (a) E_a(2) + E_a(1) + 2E_a(4) (b) E_a(2) + E_a(1) - E_a(4) (the remaining two options are illegible in the source)
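Assuming the stem is the standard Rice-Herzfeld rate law for acetaldehyde pyrolysis (the garbled source is consistent with this but does not confirm it), the overall activation energy follows by writing each elementary rate constant in Arrhenius form:

```latex
k_{\mathrm{eff}} = k_2\left(\frac{k_1}{k_4}\right)^{1/2},
\qquad k_i = A_i\, e^{-E_a(i)/RT}
\;\Rightarrow\;
k_{\mathrm{eff}} = A_2\left(\frac{A_1}{A_4}\right)^{1/2}
\exp\!\left[-\frac{E_a(2) + \tfrac{1}{2}\bigl(E_a(1) - E_a(4)\bigr)}{RT}\right],
\quad\text{so}\quad
E_a(\mathrm{overall}) = E_a(2) + \tfrac{1}{2}\bigl[E_a(1) - E_a(4)\bigr].
```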
http://www.physicsforums.com/showthread.php?p=3872477
# spin of stars
by Sean Pan
Tags: spin, stars
P: 10 It is said that massive stars spin faster than less massive ones, and I have always wondered why. Could someone please tell me the reason? Thanks a lot.
P: 1,262 Hi Sean Pan, welcome to PhysicsForums. The basic idea is that more massive stars formed from larger molecular clouds. Larger molecular clouds had more angular momentum, and that angular momentum is (largely) conserved in the star-formation process. Thus you end up with a faster-spinning star.
P: 10 Thanks, but there are many processes in the formation of stars that can reduce the angular momentum of the central star. Maybe I should have paid more attention to its initial angular momentum, but other factors should also be considered. Since star formation lasts a very long time, I think the final state may not depend largely on the original state.
PF Gold
P: 3,072
## spin of stars
We would expect all stars to spin rapidly, because we believe there is always ample angular momentum in the molecular cloud. So the question is not so much why do massive stars spin faster, it is why do low-mass stars spin slower. This is an ongoing research question, but one idea is that they tend to have a strong magnetic coupling with the gas that is forming them, and this coupling involves magnetic field lines that connect the rotating star to gas that is very far away from the star, which is in orbit. Kepler's laws say that the farther away gas is, the longer is its orbital period, so you have a rotating star with a short rotation period connected to gas with a long orbital period, and this tends to rob the star of angular momentum (and send it out to that gas way out there). Then you need a mechanism to get much of the high-angular-momentum gas to escape the system, and you can "spin down" your star (since this can happen with an accretion disk, it is also called "disk locking"). I'm not sure what the present status is of understanding how reliable this mechanism is, but no doubt many questions remain unanswered.

For one thing, we might imagine that high-mass stars could also lose angular momentum in similar ways, so then we'd be back to asking why they spin so fast. It is thought that high-mass stars are even more likely to form in close binaries, which can then merge and convert the orbital angular momentum of the merging stars into spin. But that can happen to low-mass stars too, so then we are back to asking why low-mass stars spin so slowly!

If you look at young low-mass stars, you find the younger they are, the faster they spin, so they are losing rotational angular momentum long after they have formed. Here interactions between magnetic fields and the winds from the stars are thought to play a key role, but you then have to explain why the winds are so strong in young stars. So you see, there is plenty of grist for the research mill here!
PF Gold P: 11,047 Regardless of the mechanism for shedding angular momentum, I would guess that one of the basic reasons is that more massive stars simply have much more momentum to shed to slow down to a given rotation rate.
P: 10
Quote by Ken G: We would expect all stars to spin rapidly [...] there is plenty of grist for the research mill here!
Thanks a lot for your very detailed analysis!
http://mathhelpforum.com/number-theory/121189-i-have-problem-about-primes-help-me-please.html
If p>=q>=5 and p and q are both primes, prove that 24|(p^2-q^2).
Thank.
2. Originally Posted by konna
If p>=q>=5 and p and q are both primes, prove that 24|(p^2-q^2).
Thank.
What have you tried yourself? Of course, $p^2 - q^2 = (p-q)(p+q)$. Since p and q are both primes larger than 3, they are both odd, and so both p-q and p+q are even; their product is, at least, divisible by 4. Now see if you can find another factor of 2. If p = 2m+1 and q = 2n+1, then p+q = 2m+2n+2 = 2(m+n+1) and p-q = 2m-2n = 2(m-n). Consider what happens to m+n+1 and m-n if m and n are both even, both odd, or one even and the other odd. Notice that so far we have only required that p and q be odd, not that they be prime.
Once you have done that the only thing remaining is to show that $p^2- q^2$ must be a multiple of 3 and that means showing that either p-q or p+q is a multiple of 3.
3. Originally Posted by konna
If p>=q>=5 and p and q are both primes, prove that 24|(p^2-q^2).
Thank.
$p^2-q^2=(p-q)(p+q)$ , and as:
1) $x^2 \equiv 1 \pmod 3\,\,\,\forall\,x$ not a multiple of 3 ;
2) either $p-q$ or $p+q$ is divisible by 4, and the other one by 2.
The result follows at once.
Tonio
4. I had tried it by myself, but I was not sure about my process.
Thanks so much.
5. Hello, konna!
I don't know if this qualifies as a proof.
If $p \geq q \geq 5$ and $p$ and $q$ are both primes,
. . prove that: . $24\,|\,(p^2-q^2)$
Any prime greater than or equal to 5 is of the form: . $6m \pm 1$
. . for some integer $m.$
Let: . $\begin{array}{ccc}p &=& 6m \pm 1 \\ q &=& 6n \pm1\end{array}$
Then: . $\begin{array}{ccc}p^2 &=& 36m^2 \pm 12m + 1 \\ q^2 &=& 36n^2 \pm12n + 1 \end{array}$
Subtract: . $p^2-q^2 \;=\;36m^2-36n^2 \pm 12m \mp 12n \;=\;36(m^2-n^2) \pm12(m - n)$
. . . . . . . $p^2-q^2 \;=\;36(m-n)(m+n) \pm12(m - n)$
. . . . . . . $p^2-q^2 \;=\;12(m-n)\bigg[3(m+n) \pm 1\bigg]$
We see that $p^2-q^2$ is divisible by 12.
We must show that either $(m-n)$ or $[3(m+n) \pm1]$ is even.
If $m$ and $n$ has the same parity (both even or both odd),
. . then $(m-n)$ is even.
If $m$ and $n$ have opposite parity (one even, one odd),
. . then $(m+n)$ is odd.
And . $3(m+n)$ is odd.
. . Then: . $3(m+n) \pm1$ is even.
Therefore, $p^2-q^2$ is divisible by 24.
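As a quick empirical sanity check of the result (not part of the original thread), a brute-force scan over small primes in Python:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [n for n in range(5, 500) if is_prime(n)]

for i, q in enumerate(primes):
    for p in primes[i:]:          # enforces p >= q >= 5
        assert (p * p - q * q) % 24 == 0, (p, q)

print("24 | p^2 - q^2 holds for all prime pairs checked")
```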
https://codereview.stackexchange.com/questions/172448/find-frequency-of-combinations-in-large-data-set
# Find frequency of combinations in large data set
My concern is the performance of the code: it takes much longer than I would like. I would be willing to give up some memory for higher performance.
I have a JTable with serial numbers in column 1 and options in column 2, with each option the person has chosen on a new row.
Each serial number can only have one of each option. Each serial number usually has around 60 options.
My dataset is very large (100k-1M rows).
I have a total of 350 options, but I realize that it is impractical to enumerate combinations of all of them, as that alone would be 7084700 three-element combinations.
My combinations are not position specific.
ex:
option1 option2 option3
is the same as
option2 option1 option3
Here is an Example of how my data looks:
1 a
1 b
1 c
2 a
2 b
2 c
3 a
3 b
4 b
4 c
4 d
5 a
5 b
5 c
5 d
6 c
6 d
6 e
8 d
8 e
And here is the output after calculation:
a b 57%
a c 42%
a d 14%
a e 0%
b c 42%
b d 14%
b e 0%
c d 14%
c e 0%
d e 0%
a b c 42%
a b d 14%
a c d 14%
b c d 14%
a b e 0%
a c e 0%
a d e 0%
b c e 0%
b d e 0%
c d e 0%
Here is my code to get the frequency of combinations of 3 options.
// used to get the number of combinations: n!/(r!(n-r)!)
static BigInteger binomial(final int N, final int K) {
BigInteger ret = BigInteger.ONE;
for (int k = 0; k < K; k++) {
ret = ret.multiply(BigInteger.valueOf(N - k)).divide(BigInteger.valueOf(k + 1));
}
return ret;
}
// method for calculating combinations of 3
public void calc3(JTable table) {
currentSerialNums2 = new ArrayList<Object>(1500);
options = new ArrayList<Object>(table.getRowCount());
serialNum = new ArrayList<Object>(table.getRowCount());
currentSerialNums = new ArrayList<Object>(1000);
DistinctOptions = new ArrayList<Object>();
// Loop through all rows and extract options and serial numbers to lists
for (int i = 0; i < table.getRowCount(); i++) {
// add all objects from 2nd column in table to options list
options.add(table.getValueAt(i, 1));
// add all objects from first column in table to serialNum list
serialNum.add(table.getValueAt(i, 0));
}
// Extract unique options and save to List<Object> DistinctOptions. This is a github library called jstreams but I could use hashset aswell
DistinctOptions = Stream.create(options).distinct().toList();
// declaring these ArrayLists with custom capacity to avoid constant resizing
previousCombos = new ArrayList<List<Object>>(binomial(DistinctOptions.size(), 3).intValue());
SerialNumsPerOption = new ArrayList<List<Object>>(DistinctOptions.size());
// Adding all serial numbers for each option in a List<List<Object>>
for (int i = 0; i < DistinctOptions.size(); i++) {
for (int y = 0; y < table.getRowCount(); y++) {
if (DistinctOptions.get(i).equals(options.get(y))) {
// collect every serial number that has this option
currentSerialNums.add(serialNum.get(y));
}
}
if (!currentSerialNums.isEmpty()) {
if (i == 0) {
}
currentSerialNums2 = new ArrayList<Object>(currentSerialNums);
// keep this option's serial numbers; the outer list is iterated below
SerialNumsPerOption.add(currentSerialNums2);
currentSerialNums.clear();
}
}
for (int i = 0; i < SerialNumsPerOption.size(); i++) {
compareSerialNums = new HashSet<>(SerialNumsPerOption.get(i));
System.out.println("next");
outerloop2: for (int j = 0; j < SerialNumsPerOption.size(); j++) {
if (DistinctOptions.get(i).equals(DistinctOptions.get(j))) {
continue;
}
currentCombo = new ArrayList<Object>();
for (int a = 0; a < previousCombos.size(); a++) {
if (previousCombos.get(a).containsAll(currentCombo)) {
continue outerloop2;
}
}
compare2 = new HashSet<>(SerialNumsPerOption.get(j));
compare2.retainAll(compareSerialNums);
innerloop: for (int j2 = 0; j2 < SerialNumsPerOption.size(); j2++) {
if (DistinctOptions.get(i).equals(DistinctOptions.get(j2))) {
continue;
}
if (DistinctOptions.get(j).equals(DistinctOptions.get(j2))) {
continue;
}
currentCombo = new ArrayList<Object>(4);
for (int a = 0; a < previousCombos.size(); a++) {
if (previousCombos.get(a).containsAll(currentCombo)) {
continue innerloop;
}
}
SerialNumsPerOption.get(j2).retainAll(compare2);
}
}
}
System.out.println("Saving to list");
tabledata = new Object[previousCombos.size()][4];
for (int i = 0; i < previousCombos.size(); i++) {
for (int j = 0; j < 4; j++) {
if (previousCombos.get(i).get(j) != null) {
tabledata[i][j] = previousCombos.get(i).get(j);
}
}
}
System.out.println("Calculations completed");
}
What is taking all the time:
Comparing the two lists to see how many serial numbers the two/three options share, using retainAll. Is there an alternative here? Since the values are all unique, converting to a set works, but it takes more time to convert the values than it takes to do retainAll on two Lists.
Comparing the current combination with all the previous ones to eliminate ones that I already have (order doesn't matter):
for (int a = 0; a < previousCombos.size(); a++) {
if (previousCombos.get(a).containsAll(currentCombo)) {
continue innerloop;
}
}
EDIT:
• Just to clarify: Each serial number can have an arbitrary number of options associated with it, and it's just a characteristic of the input format that a line contains just one option for a serial number? Aug 9 '17 at 20:41
• Also, I am confused how you got the number $5668650$, could you elaborate on that please? ${350\choose 3}=7084700$, and ${350\choose 2}=61075$, and $2^{350}=2.29349861599007 \cdot 10^{105}$, so maybe I misunderstood something in your question. Aug 9 '17 at 21:43
• I don't want permutations. I am interested in combinations, and I don't want position-specific results or repetition. The formula I am using is n!/(r!(n-r)!) Aug 10 '17 at 5:46
• That is correct Stingy. Each serial number has about 60 lines. 1 for each of its options. And it can't have the same option twice. Aug 10 '17 at 5:48
• $n\choose r$ is equivalent to $\frac{n!}{r!\cdot(n-r)!}$. It doesn't count permutations. So the question how you got 5668650 still stands. Aug 10 '17 at 7:16
First, a general note on this answer: I am going to use the type names SerialNumber and Option, even though in your code they're both represented by Objects, because it's easier to understand that way.
• The first thing that struck me as a potential source of unnecessary complication is the fact that, in your code, you mimic the input format of the data (i.e. two columns) by storing the serial numbers and options in two separate lists, even though this format doesn't reflect the actual relationship between serial numbers and options at all. Why don't you, instead, store the data in a Map<SerialNumber, Set<Option>>? I think that this alone would make everything easier, even if it's just by making the code more readable. The same is true for the reverse. Instead of making a List<List<SerialNumber>> whose indexes just happen to be related to the indexes of DistinctOptions, it would be much clearer if you used a Map<Option, Set<SerialNumber>>, because then, the relationship you want to represent would be directly reflected by the code, reducing the potential for both bugs and headaches when trying to figure out the large for loop where you compare the "combos".
• Regardless of the above, you make SerialNumsPerOption a List<List<SerialNumber>>, but later, when you access the elements of this List, you convert them to a Set before using them. So why don't you make SerialNumsPerOption a List<Set<SerialNumber>> in the first place (which implies making currentSerialNums2 and currentSerialNums Sets instead of Lists)? Creating these effectively unnecessary Lists just to convert them to a Set later on when you need their contents makes the code unnecessarily complicated.
• Also, where are all those variables used in calc3(JTable) (currentSerialNums2, options etc.) declared? Judging by the code you provided, they seem to fulfill the purpose of local variables because you assign them at the beginning of the method, yet I see no declarations of them. I shudder to think what kind of object (or class, in case they are static fields) this must be that holds all those fields.
• Regardless of whether they are local variables or fields, what makes your code appear more intimidating than it actually is is where you first use these variables in the method. For instance, the variables currentSerialNums2 and currentSerialNums are needed exclusively in the for loop accompanied by the comment "Adding all serial numbers for each option in a List<List<Object>>" (at least within the method calc3(JTable), who knows what horrifying scenarios outside it call for them …), and they don't even have to keep their state from one loop iteration to the next, so you might as well "introduce" (or declare if they should indeed be local) them inside this for loop.
• I had a "?????" moment when I saw this line:
if (DistinctOptions.get(i).equals(DistinctOptions.get(j))) {
I thought DistinctOptions was supposed to contain only distinct options. So how can DistinctOptions.get(i) and DistinctOptions.get(j) be equal? Right, if i and j are equal. But then, why not just do this:
if (i == j) {
Actually, you can save yourself this trouble entirely by initializing j as int j = i + 1 instead of int j = 0, since the order of options doesn't matter and, for your purpose, i and j are interchangeable.
• What is this:
currentCombo.add("");
A red herring? But wait, it gets even more confusing:
currentCombo.add(compare2.size());
So now, currentCombo also contains an Integer apart from two Options and a mysterious empty String. Ah, the integer is just the number of serial numbers that have both of the options. But then, what is this integer doing in currentCombo itself? It is not an Option but just a value associated with a combination of Options and therefore should not be placed on the same level as the Options. Again, a better way to represent this relationship would be a Map<Set<Option>, Integer> (note the use of Set instead of List).
• After looking at the innerloop, the for loop where you declare j2, it finally becomes apparent what the purpose of this mysterious empty String observed earlier was, and the revelation is not relieving at all. It turns out that you seem to try to hard code a recursive process using multiple nested loops. Apart from the fact that this is a form of code duplication, what will you do if you want to include combinations of 4 options instead of only 3? Create another nested loop and add more empty Strings in the outer loops? This simply cannot be it. You are implementing a recursive process, so the code design should reflect this.
Taking all of the above into account, a final remark about your code in general. I think your code would be clearer if you separated the logic of generating all possible n-size option combinations from the logic of determining their frequency in your input data. Actually, once you have the first part working, the second part should be a piece of cake, since, for every combination, you only have to go through every serial number and check if the combination is contained in the serial number's options.
But note that it will only be a piece of cake if the types of your variables actually reflect what you are trying to represent with them. The loop construct you yourself criticized in your question is a prime example:
for (int a = 0; a < previousCombos.size(); a++) {
if (previousCombos.get(a).containsAll(currentCombo)) {
continue innerloop;
}
}
Here, both previousCombos.get(a) and currentCombo should be a Set that only contains options (a Set fits here perfectly, because it cannot contain duplicate elements and the order of the elements doesn't matter). The number of serial numbers containing this combination is only a value that is associated with this combination and not a part of the combination itself, and an empty String belongs here even less, because it is only relevant for the output format and has nothing to do with the combinatorial logic itself. If you apply these changes, the above code could be replaced by this:
if (previousCombos.contains(currentCombo) {
continue innerloop;
}
And the simpler the design of your code is, the easier it will be to spot opportunities where you can improve performance, which is, after all, what your question was originally about.
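To make the suggested separation concrete, here is a minimal sketch (written in Python for brevity rather than the question's Java): the data layout follows the Map<SerialNumber, Set<Option>> idea above, and the input is the example data from the question.

```python
from itertools import combinations

# serial number -> set of chosen options (the example data from the question)
options_by_serial = {
    1: {"a", "b", "c"}, 2: {"a", "b", "c"}, 3: {"a", "b"},
    4: {"b", "c", "d"}, 5: {"a", "b", "c", "d"}, 6: {"c", "d", "e"},
    8: {"d", "e"},
}

all_options = sorted(set().union(*options_by_serial.values()))
n_serials = len(options_by_serial)

# Part 1: generate every combination; Part 2: count how often it occurs.
for size in (2, 3):
    for combo in combinations(all_options, size):
        wanted = set(combo)
        count = sum(wanted <= opts for opts in options_by_serial.values())
        print(" ".join(combo), f"{100 * count // n_serials}%")
```

Percentages here use the total number of serial numbers as the denominator; adjust the counting convention to whatever the real requirement is.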
• This is a great answer and it points out my terrible shortcuts ex. currentCombo.add("");. Your point about Mapping options and serialnumbers to better reflect the relationship was also spot on and I will look into it. Aug 11 '17 at 11:32
I don't know what your whole setup is, but I would recommend that you use the model/view separation. Your data is stored somewhere as decent Java objects in memory or some database; your JTable is purely for displaying the data and not for storing and retrieving data (Object ouch!).
Another tip that could be useful: look at BitSet. You could fill those up with your options and they easily compare different bitsets.
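The BitSet idea in a nutshell, sketched with plain integer bitmasks (Python here for brevity; java.util.BitSet works the same way, and the option universe below is hypothetical): give each option one bit, encode every serial number's option set as a mask, and the containment test collapses to a single AND.

```python
# Give each option one bit.
options = ["a", "b", "c", "d", "e"]
bit = {opt: 1 << i for i, opt in enumerate(options)}

def mask(opts):
    """Encode a set of options as an integer bitmask."""
    m = 0
    for o in opts:
        m |= bit[o]
    return m

serial_masks = [mask(s) for s in ({"a", "b", "c"}, {"b", "c", "d"}, {"d", "e"})]

combo = mask({"b", "c"})
# combo is contained in a serial's option set iff all of its bits survive the AND
count = sum((m & combo) == combo for m in serial_masks)
print(count)  # -> 2
```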
Soo ... some egregious pain points in your code. I strongly recommend you return to get another review:
1. <Object> on the right hand side of an assignment is a really bad idea. Instead of using something messy like this, you should be using the "Diamond Operator" (RHS Type Inference) like so:
currentSerialNums2 = new ArrayList<>(1500);
2. 1500 is a magic number. Why 1500? The same applies for 1000.
3. DistinctOptions as well as SerialNumsPerOption should be written as lowerCamelCase
4. Because Swing is older than most university undergraduates by now it's generally considered to be sucky and bad. First and foremost, it doesn't support proper strong typing for input retrieval from Tables and similar components. You'll have significantly cleaner code when you move from Swing to JavaFX which is integrated significantly better with the "new" language features like Generics (Java 6) and syntactic sugars.
5. The declarations for all these more or less local variables you have there seem to be on the class-level, which is ... bad. Declare variables in the smallest possible scope to reduce how much things a maintainer / reader needs to juggle in their head to understand the code.
6. Comments should describe the Why not the What. The What is always already described by the code. When the code mismatches with the comments, what are you going to do? Trust the code and delete the comment, because the code is always right? Or Adjust the code to the comment, because comments are business requirements?
7. Don't use System.out.println to communicate with the user when you have a GUI. Swing has Dialogs and JOptionPane for this exact purpose. Similar facilities exist for JavaFX
• the System.out.println is just for me to get an idea of speed while debuging. but the rest I completely agree with. Aug 10 '17 at 7:43
• Care to not change the speed when observing it. System.out.println is horrendously slow compared to doing everything else in that loop. Generally avoid IO (as is printing to console) like the plague in "high performance" loops. Aug 10 '17 at 7:45
• I will keep that in mind. Right now I have the System.out.println in the outer loop and not the inner so it should make a big difference for the overall speed. Aug 10 '17 at 7:50
• should not make a big difference* Aug 10 '17 at 7:57
• I find your point on using <Object> on the right hand side of an assignment a bit unclear. Is it the use of Object as a generic parameter that you criticize, or the fact that a type parameter is explicitly specified in the constructor even though it could be inferred by the compiler? Your suggestion of using the diamond operator suggests the latter. If this is what you meant, maybe you could elaborate on why you think it is a "really bad idea", because, apart from it being redundant, I can't think of any harm done by it. Aug 10 '17 at 22:49
I made some changes and managed to speed up the function 10x.
This is the loop that was taking all the time. Its purpose was to check whether the current combination had already been evaluated.
Old code:
for (int a = 0; a < previousCombos.size(); a++) {
if (previousCombos.get(a).containsAll(currentCombo)) {
continue outerloop2;
}
}
New code:
public static Comparator<Object> comboSorter = new Comparator<Object>() {
public int compare(Object o1, Object o2) {
String object1 = o1.toString();
String object2 = o2.toString();
return object2.compareTo(object1);
}};
Collections.sort(currentCombo, comboSorter);
if (previousCombosList.contains(currentCombo.hashCode())) {
continue innerloop;
}
• This is not a solution to your problem, but gambling. The only guarantee you can get from hash codes is that objects with different hash codes are unequal (provided that hashCode() and equals(Object) are implemented correctly). But the reverse is not true. Objects with equal hash codes are not necessarily equal. So in your new code, you run the risk of disregarding an option combination if another combination produces the same hash (however unlikely this may be). Aug 11 '17 at 7:05
• Is there a good hash function for 64 bit that you recommend? There shouldn't be any collisions in a 64-bit hash Aug 11 '17 at 12:01
• Where did you get the information that equals(Object) "uses" hashCode() from? It just has to obey the contract of hashCode(). Furthermore, I think there are more efficient approaches to your problem than trying to design a hash-function that is collision proof for your usage. Firstly, if you find that a combo with a hash equal to that of currentCombo has already been evaluated, you can still compare the combos themselves. It's just that […] Aug 11 '17 at 13:30
• […] you don't need to compare two objects when the hashes don't match, because equal objects must have equal hashes. And this is what a HashSet effectively does anyway, so you don't need to re-invent the wheel. But apart from that, if you initialize j to i + 1 instead of 0, then you will never encounter the same combination twice in the first place (provided the List you're iterating over doesn't contain duplicates), rendering the variable previousCombos completely unnecessary and, in turn, automatically solving your problem. Aug 11 '17 at 13:30
• This is not meant for finding duplicate options (i.e. a combo like option1-1-2) but for finding whether the combo is the same but in another order, e.g. option1-2-3 is the same as 2-1-3. So initializing j to i + 1 would not stop me from encountering the same combo in a different order, as far as I can tell Aug 11 '17 at 13:36
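For what it's worth, the collision-safe version of this order-insensitive dedup is to store a canonical, hashable form of each combo in a hash set and let full equality (not the hash alone) decide; a minimal sketch in Python:

```python
combos = [("opt1", "opt2", "opt3"), ("opt2", "opt1", "opt3"), ("opt1", "opt3", "opt4")]

seen = set()
for combo in combos:
    canonical = frozenset(combo)  # order-insensitive and hashable
    if canonical in seen:         # the hash is only a fast filter; equality decides
        continue
    seen.add(canonical)
    print(combo)                  # the second, reordered combo is skipped
```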
https://discourse.flucoma.org/t/supercollider-justintime-nmf-example-look-mum-no-sendreply/1328
# SuperCollider JustInTime NMF example: look mum no SendReply
Hi! I figured out Just In Time analysis can be done in SuperCollider without using SendReply… this is awesome! Thanks a lot! It’s great to be able to trigger NRT analysis directly from the server, without passing through those extra OSC messages (especially when working with a “remote” server I guess).
So I’ve rewritten the JiT NMF example to do it, and wanted to share it with you:
(
~nmfNumComponents = 3;
~nmfDur = 0.5;
// we need two buffers to store the output of nmf analysis
// we need two because when one is playing, the other is being rewritten with new analysis data
// (and we are going to concatenate them in the SynthDef below)
~nmfBuffers = 2.collect { Buffer.alloc(s, ~nmfDur * s.sampleRate, ~nmfNumComponents) };
// input buffer to nmf analysis: we need twice our analysis size
~analBuffer = Buffer.alloc(s, ~nmfDur * 2 * s.sampleRate);
)
(
// abstract away the alternating players mechanism, for clarity
~alternatingPlayers = { |bufs, bufChannels = 3|
// alternating triggers for players
var playDur = BufDur.ir(bufs[0]);
var playTrigs = Impulse.kr(
freq: (playDur * bufs.size).reciprocal,
phase: Array.series(bufs.size, 0, bufs.size.reciprocal)
);
var players = PlayBuf.ar(bufChannels, bufs, trigger: playTrigs, loop: 0);
players * envs;
}
};
SynthDef(\becauseIcan) { |analBuffer = 0|
var nmfBufnums = \nmfBuffers.kr(0!2);
var in = \in.kr(0);
var audioIn = In.ar(in);
// trigDelay can be 0 if phasors are audiorate
// otherwise, 1 blockSizes will do (trig needs to go to zero for at least one blockSize)
var trigDelay = 0; //BlockSize.ir();
writeSamples = BufFrames.kr(analBuffer);
writeHead = Phasor.ar(0, 1, 0, writeSamples);
// trigger nmf buf 0 when writeHead is at analBuf half
(writeSamples / 2 + trigDelay),
// trigger nmf buf 1 when writeHead is at analBuf beginning
(0 + trigDelay)
];
// trigger NRT analysis from the server
nmfBufnums.do { |destBuf, n|
FluidBufNMF.kr(analBuffer, resynth: destBuf, trig: analTrigs[n],
startFrame: BufFrames.ir(destBuf) * n,
numFrames: BufFrames.ir(destBuf),
components: ~nmfNumComponents, fftSize: 1024, windowSize: 512, hopSize: 256
);
};
players = SynthDef.wrap(~alternatingPlayers.value(nmfBufnums, ~nmfNumComponents));
Out.ar(0, Mix(Splay.ar(players)));
)
// instantiate the player
Ndef(\play) { PlayBuf.ar(1, ~audioBuffer, loop: 1) };
// instantiate the processor
(
y = Synth.after(Ndef(\play), \becauseIcan, [
analBuffer: ~analBuffer, nmfBuffers: ~nmfBuffers,
in: Ndef(\play).bus
]);
)
// stop it all
[~analBuffer, ~nmfBuffers, ~audioBuffer].do(_.free);
As a side-note, this JiT approach is different from the one with the circular buffer, because here we have fixed analysis windows, so we don’t need to Latch positions or assure we can always read from a contiguous block (it is already assured).
Greatness, that's a second JiT “recipe” that adds to the circular-buffer one
And here is a more “composable” version of the same SynthDef, where the JiT mechanism is isolated so that it could be reused. It might be too early for me to ship this abstraction (feels like it’s very tied to this example’s resynth case), but nevertheless I thought it might be inspiring?
// - records audioIn to a local buffer (2 * analDur)
// - analyzes 2 halves of that buffer (each lasting analDur * analChannels), alternating:
// when first half is being recorded, analyzes the second, and vice-versa
// - runs a custom function to actually perform nrt analysis, providing necessary arguments
// (see arguments below)
// - returns analysis buffers
~fluidJiTFixed = { |audioIn, analDur = 0.5, analChannels = 1, analFunc|
var analBuffer = LocalBuf(analDur * 2 * SampleRate.ir);
var segmentBuffers = {LocalBuf(analDur * SampleRate.ir, analChannels)} ! 2;
writeSamples = BufFrames.ir(analBuffer);
writeHead = Phasor.ar(0, 1, 0, writeSamples);
// trigger analysis for first segment when writeHead is at analBuf half
writeSamples / 2,
// trigger analysis for second segment when writeHead is at analBuf beginning
0
];
segmentBuffers.do { |destBuf, n|
analFunc.value(
/*source: */ analBuffer,
/*startFrame:*/ BufFrames.ir(destBuf) * n,
/*numFrames: */ BufFrames.ir(destBuf),
/*destBuf: */ destBuf,
/*trig: */ analTrigs[n]
)
};
segmentBuffers
};
SynthDef(\nmfSplay) {
var audioIn = In.ar(\in.kr(0));
var nmfBufnums = ~fluidJiTFixed.value(audioIn, ~nmfDur, ~nmfNumComponents) {
|source, startFrame, numFrames, destBuf, trig|
FluidBufNMF.kr(source, resynth: destBuf, trig: trig,
startFrame: startFrame, numFrames: numFrames,
components: ~nmfNumComponents, fftSize: 1024, windowSize: 512, hopSize: 256
);
};
var players = SynthDef.wrap(~alternatingPlayers.value(nmfBufnums, ~nmfNumComponents));
Out.ar(0, Mix(Splay.ar(players)));
it’s been on my todo list for the week - I will read it tomorrow in my hotel room and come back to you with what was my all-server solution for a JIT NMF classifier - coded in Max and Pd, and the SC version is looooong overdue!
ok I had a flash that I had actually done it and that it had disappeared from the examples folder… so I found it again and it works!
This is exactly the same thing as the Max and Pd equivalent. I think the code is verbose, and I am certain @tedmoore would make it better and trigger it all on the server side, but this might be inspiring for now. Using NMF on a portion of a circular buffer, as a just-in-time classifier, is quite fun. It would be much faster all on the server, so I might give it a spin. In the meantime, I hope this helps!
// using nmf in 'real-time' as a classifier
// how it works: a circular buffer is recording and attacks trigger the process
// if in learning mode, it does a one-component nmf, which makes an approximation of the base. 3 of those will be copied into 3 different positions of our final 3-component base
// if in guessing mode, it does a three-component nmf from the trained bases and yields the 3 activation peaks, on which it thresholds the resynthesis
//how to use:
// 1. start the server
// 2. select the code between the parentheses below and execute it. You should get a window with 3 pads (bd sn hh) and various menus
// 3. train the 3 classes:
// 3.1 select the learn option
// 3.2 select which class you want to train
// 3.3 play the sound you want to associate with that class a few times (the left audio channel is the source)
// 3.4 click the transfer button
// 3.5 repeat (3.2-3.4) for the other 2 classes.
// 3.x you can observe the 3 bases here:
~classify_bases.plot(numChannels:3)
// 4. classify
// 4.1 select the classify option
// 4.2 press a pad and look at the activation
// 4.3 tweak the thresholds and enjoy the resynthesis. (the right audio channel is the detected class where classA is a bd sound)
// 4.x you can observe the 3 activations here:
~activations.plot(numChannels:3)
/// code to execute first
(
var circle_buf = Buffer.alloc(s,s.sampleRate * 2); // b
var input_bus = Bus.audio(s,1); // g
var classifying = 0; // c
var cur_training_class = 0; // d
var train_base = Buffer.alloc(s, 65); // e
var activation_vals = [0.0,0.0,0.0]; // j
var thresholds = [0.5,0.5,0.5]; // k
var activations_disps;
var analysis_synth;
var osc_func;
var update_rout;
~classify_bases = Buffer.alloc(s, 65, 3); // f
~activations = Buffer.new(s);
// the circular buffer with triggered actions sending the location of the head at the attack
Routine {
SynthDef(\JITcircular,{arg bufnum = 0, input = 0, env = 0;
duration = BufFrames.kr(bufnum) / 2;
halfdur = duration / 2;
// circular buffer writer
audioin = In.ar(input,1);
trig = FluidAmpSlice.ar(audioin, 10, 1666, 2205, 2205, 12, 9, -47,4410, 85);
// cue the calculations via the language
Out.ar(0,audioin);
// drum sounds taken from original code by snappizz
// https://sccode.org/1-523
// produced further and humanised by PA
SynthDef(\fluidbd, {
|out = 0|
var body, bodyFreq, bodyAmp;
var pop, popFreq, popAmp;
var click, clickAmp;
var snd;
// body starts midrange, quickly drops down to low freqs, and trails off
bodyFreq = EnvGen.ar(Env([Rand(200,300), 120, Rand(45,49)], [0.035, Rand(0.07,0.1)], curve: \exp));
bodyAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.005,Rand(0.08,0.085),Rand(0.25,0.35)]), doneAction: 2);
body = SinOsc.ar(bodyFreq) * bodyAmp;
// pop sweeps over the midrange
popFreq = XLine.kr(Rand(700,800), Rand(250,270), Rand(0.018,0.02));
popAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.001,Rand(0.018,0.02),Rand(0.0008,0.0013)]));
pop = SinOsc.ar(popFreq) * popAmp;
// click is spectrally rich, covering the high-freq range
// you can use Formant, FM, noise, whatever
clickAmp = EnvGen.ar(Env.perc(0.001,Rand(0.008,0.012),Rand(0.07,0.12),-5));
click = RLPF.ar(VarSaw.ar(Rand(900,920),0,0.1), 4760, 0.50150150150) * clickAmp;
snd = body + pop + click;
snd = snd.tanh;
Out.ar(out, snd);
SynthDef(\fluidsn, {
|out = 0|
var pop, popAmp, popFreq;
var noise, noiseAmp;
var click;
var snd;
// pop makes a click coming from very high frequencies
// slowing down a little and stopping in mid-to-low
popFreq = EnvGen.ar(Env([Rand(3210,3310), 410, Rand(150,170)], [0.005, Rand(0.008,0.012)], curve: \exp));
popAmp = EnvGen.ar(Env.perc(0.001, Rand(0.1,0.12), Rand(0.7,0.9),-5));
pop = SinOsc.ar(popFreq) * popAmp;
// bandpass-filtered white noise
noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.13,0.15), Rand(1.2,1.5),-5), doneAction: 2);
noise = BPF.ar(WhiteNoise.ar, 810, 1.6) * noiseAmp;
click = Impulse.ar(0);
snd = (pop + click + noise) * 1.4;
Out.ar(out, snd);
SynthDef(\fluidhh, {
|out = 0|
var click, clickAmp;
var noise, noiseAmp, noiseFreq;
// noise -> resonance -> expodec envelope
noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.28,0.3), Rand(0.4,0.6), [-20,-15]), doneAction: 2);
noiseFreq = Rand(3900,4100);
noise = Mix(BPF.ar(ClipNoise.ar, [noiseFreq, noiseFreq+141], [0.12, 0.31], [2.0, 1.2])) * noiseAmp;
Out.ar(out, noise);
// makes sure all the synthdefs are on the server
s.sync;
// instantiate the JIT-circular-buffer
analysis_synth = Synth(\JITcircular,[\bufnum, circle_buf, \input, input_bus]);
train_base.fill(0,65,0.1);
// instantiate the listener to cue the processing from the language side
osc_func = OSCFunc({ arg msg;
// when an attack happens
if (classifying == 0, {
// if in training mode, makes a single component nmf
FluidBufNMF.process(s, circle_buf, head_pos, 128, bases:train_base, basesMode: 1, windowSize: 128);
}, {
// if in classifying mode, makes a 3 component nmf from the pretrained bases and compares the activations with the set thresholds
FluidBufNMF.process(s, circle_buf, head_pos, 128, components:3, bases:~classify_bases, basesMode: 2, activations:~activations, windowSize: 128, action:{
// we are retrieving and comparing against the 2nd activation, because FFT processes are zero-padded on each side, so the complete 128 samples are in the middle of the analysis.
~activations.getn(3,3,{|x|
activation_vals = x;
if (activation_vals[0] >= thresholds[0], {Synth(\fluidbd,[\out,1])});
if (activation_vals[1] >= thresholds[1], {Synth(\fluidsn,[\out,1])});
if (activation_vals[2] >= thresholds[2], {Synth(\fluidhh,[\out,1])});
defer{
activations_disps[0].string_("A:" ++ activation_vals[0].round(0.01));
activations_disps[1].string_("B:" ++ activation_vals[1].round(0.01));
activations_disps[2].string_("C:" ++ activation_vals[2].round(0.01));
};
});
};
);
});
// make sure all the synths are instantiated
s.sync;
// GUI for control
{
var win = Window("Control", Rect(100,100,610,100)).front;
Button(win, Rect(10,10,80, 80)).states_([["bd",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidbd, [\out, input_bus], analysis_synth, \addBefore)});
Button(win, Rect(100,10,80, 80)).states_([["sn",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidsn, [\out, input_bus], analysis_synth, \addBefore)});
Button(win, Rect(190,10,80, 80)).states_([["hh",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidhh, [\out, input_bus], analysis_synth,\addBefore)});
StaticText(win, Rect(280,7,85,25)).string_("Select").align_(\center);
classifying = value.value;
if(classifying == 0, {
train_base.fill(0,65,0.1)
});
});
cur_training_class = value.value;
train_base.fill(0,65,0.1);
});
Button(win, Rect(375,65,85,25)).states_([["transfer",Color.black,Color.white]]).mouseDownAction_({
if(classifying == 0, {
// if training
FluidBufCompose.process(s, train_base, numChans:1, destination:~classify_bases, destStartChan:cur_training_class);
});
});
StaticText(win, Rect(470,7,75,25)).string_("Acts");
activations_disps = Array.fill(3, {arg i;
StaticText(win, Rect(470,((i+1) * 20 )+ 7,80,25));
});
StaticText(win, Rect(540,7,55,25)).string_("Thresh").align_(\center);
3.do {arg i;
TextField(win, Rect(540,((i+1) * 20 )+ 7,55,25)).string_("0.5").action_({|x| thresholds[i] = x.value.asFloat;});
};
win.onClose_({circle_buf.free;input_bus.free;osc_func.clear;analysis_synth.free;});
}.defer;
}.play;
)
// thanks to Ted Moore for the SC code cleaning and improvements!
http://math.stackexchange.com/users/49253/jens-na?tab=summary
jens-na
### Question (1)
1 How to prove that $0.\overline{9} \neq 1$? [duplicate]
### Accounts (5)
- Stack Overflow: 1,488 rep
- Super User: 186 rep
- Server Fault: 126 rep
- Mathematics: 106 rep
- Meta Stack Exchange: 101 rep
https://toc.seas.harvard.edu/seminar14-15
# TOC Seminar '14-'15
## The Independent Set problem on Degree-d Graphs
Anupam Gupta, CMU
Monday, May 11, 2015 - 1:00pm to 2:30pm
Pierce 213
The independent set problem on graphs with maximum degree d is known to be Omega(d/log^2 d) hard to approximate, assuming the unique games conjecture. However, the best approximation algorithm was worse by about an Omega(log d) factor. In a recent breakthrough, Bansal showed how to use a few rounds of the SA+ hierarchy to estimate the size of the optimal independent set to within O~(d/log^2 d), essentially closing the gap. Some questions remained: could we find such an IS? And did we really need the SA+ lifting step?
In this talk, we show two results. Firstly, the standard SDP, based on the Lovász theta function, gives an O~(d/log^{3/2} d) approximation without using any lift/project steps. Secondly, using SA+, one can convert Bansal's algorithm for IS size estimation into an approximation algorithm. Both results, just like Bansal's, are based on Ramsey-theoretic results of independent interest.
This is based on joint work with Nikhil Bansal (TU Eindhoven) and Guru Guruganesh (CMU).
## Dynamic Maximum Matching and Related Problems
Shay Solomon, Weizmann Institute of Science
Monday, May 4, 2015 - 1:00pm to 2:30pm
Pierce 213
Graph matching is one of the most well-studied problems in combinatorial optimization, with applications ranging from scheduling and object recognition to numerical analysis and computational chemistry. Nevertheless, until recently very little was known about this problem in real-life dynamic networks, which aim to model the constantly changing physical world.
In the first part of the talk we will discuss our work on the dynamic maximum matching problem. In the second part of the talk we will highlight some of our work on a few related problems in both centralized and distributed systems.
## Multidimensional $\epsilon$-Approximate Agreement and Computability in Byzantine Systems
Hammurabi Mendes, Brown University
Monday, April 27, 2015 - 1:00pm to 2:30pm
Pierce 213
This talk is divided in two parts. In the first part, we discuss a problem called multidimensional $\epsilon$-approximate agreement. Consider a distributed system with asynchronous communication and Byzantine (i.e., arbitrary) failures. Each process inputs a value in $\mathbb{R}^d$ with $d \ge 1$, and all non-faulty processes must finish with values where: (1) outputs lie within a distance $\epsilon$ of each other; and (2) outputs are in the convex hull of non-faulty process inputs. This problem generalizes the traditional $\epsilon$-approximate agreement of 1983/1986, and has implications to computability for more general Byzantine tasks.
In the second part, we characterize computability in Byzantine, asynchronous systems by using tools adapted from combinatorial topology. Tasks are formalized with a pair of combinatorial structures called simplicial complexes, one for non-faulty process inputs (the input complex), and another for non-faulty process outputs (the output complex). A map between the input complex and the output complex defines task semantics. We see how a Byzantine asynchronous task is solvable if and only if a "dual" asynchronous, crash-failure task is solvable as well. We are then able to characterize computability for Byzantine asynchronous tasks with a concise, topology-based language.
## Recent progress in structure of large treewidth graphs and some applications
Chandra Chekuri, University of Illinois, Urbana-Champaign
Monday, April 13, 2015 - 1:00pm to 2:30pm
Pierce 213
The seminal work of Robertson and Seymour on graph minors developed and utilized several important properties of tree decompositions and treewidth. Treewidth has since become a fundamental tool for structural and algorithmic results on graphs. One of the key results of Robertson and Seymour is the Excluded Grid Theorem, which states that there is an integer-valued function f such that every graph G with treewidth at least f(k) contains a k x k grid as a minor.
In this talk we will discuss some recent developments on the structure of graphs with large treewidth. In particular, Julia Chuzhoy and the author showed that f can be chosen to be a polynomial, improving previous bounds that were at least exponential. We will discuss this and related structural results and some applications to approximation algorithms for routing, fixed-parameter tractability and Erdos-Posa type theorems.
## Secretary Problems with Non-Uniform Arrival Order
Bobby Kleinberg, Cornell and MSR New England
Monday, March 30, 2015 - 1:00pm to 2:30pm
Pierce 213
For a number of problems in the theory of online algorithms, the assumption that elements arrive in uniformly random order enables the design of algorithms with much better performance guarantees than under worst-case assumptions. The quintessential example of this phenomenon is the secretary problem, in which an algorithm attempts to stop a sequence at the moment it observes the element of maximum value. As is well known, if the sequence is presented in uniformly random order there is an algorithm that succeeds with probability 1/e, whereas no non-trivial performance guarantee is possible if the elements arrive in worst-case order.
In many applications of online algorithms, it is reasonable to assume there is some randomness in the input sequence, but unreasonable to assume that the arrival ordering is uniformly random. This work initiates an investigation into relaxations of the random-ordering hypothesis, by focusing on the secretary problem and asking what performance guarantees one can prove under relaxed assumptions. Along the way to answering this question we will encounter some tools, such as coding theory and approximation theory, not normally associated with the analysis of online algorithms.
## Relational verification of Differential Privacy and Mechanism Design...for a theory audience
Marco Gaboardi, University of Dundee and Visiting Scholar at Harvard CRCS
Monday, March 9, 2015 - 1:00pm to 2:30pm
Pierce 213
Programming language research has developed a wide collection of techniques useful for reasoning about different correctness properties of programs. Some of these techniques can be tailored to formally verify differential privacy, and mechanism design properties like bayesian incentive compatibility. In this talk I will introduce the basic ingredient of these techniques by emphasizing the reasoning principles they capture and why they are useful for reasoning about differential privacy and mechanism design. I will also discuss the limitations of this approach and the motivations for further works in this area.
## Higher lower bounds from the 3SUM conjecture
Tsvi Kopelowitz, University of Michigan
Monday, March 2, 2015 - 1:00pm to 2:30pm
Pierce 213
The 3SUM hardness conjecture has proven to be a valuable and popular tool for proving conditional lower bounds on the complexities of dynamic data structures and graph problems. This line of work was initiated by Patrascu [STOC 2010] and has received a lot of recent attention. Most of these lower bounds are based on reductions from 3SUM to a special set intersection problem introduced by Patrascu, which we call Patrascu's Problem. However, the framework introduced by Patrascu that reduces 3SUM to Patrascu's Problem suffers from some limitations, which in turn produce polynomial gaps between the achievable lower bounds via this framework and the known upper bounds.
We address these issues by providing a tighter and more versatile framework for proving 3SUM lower bounds via a new reduction to Patrascu's Problem. Furthermore, our framework does not become weaker if 3SUM can be solved in truly subquadratic time, and provides some immediate higher conditional lower bounds for several problems, including for set intersection data-structures. For some problems, the new higher lower bounds meet known upper bounds, giving evidence to the optimality of such algorithms.
During the talk, we will discuss this new framework, and show some new (optimal) lower bounds conditioned on the 3SUM hardness conjecture. In particular, we will demonstrate how some old and new triangle listing algorithms are optimal for any graph density, and prove a conditional lower bound for incremental Maximum Cardinality Matching which introduces new techniques for obtaining amortized lower bounds.
## For-all Sparse Recovery in Near-Optimal Time
Yi Li, Harvard
Monday, January 26, 2015 - 1:00pm to 2:30pm
Pierce 213
An approximate sparse recovery system in $\ell_1$ norm consists of parameters $k$, $\epsilon$, $N$, an $m$-by-$N$ measurement matrix $\Phi$, and a recovery algorithm $\mathcal{R}$. Given a vector $\mathbf{x}$, the system approximates $\mathbf{x}$ by $\widehat{\mathbf{x}} = \mathcal{R}(\Phi\mathbf{x})$, which must satisfy $\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq (1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1$. We consider the "for all" model, in which a single matrix $\Phi$, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals $\mathbf{x}$.
The best existing sublinear algorithm uses $O(\epsilon^{-3} k\log(N/k))$ measurements and runs in time $O(k^{1-\alpha}N^\alpha)$ for any constant $\alpha > 0$. In this paper, we improve the number of measurements to $O(\epsilon^{-2} k \log(N/k))$, matching the best existing upper bound (attained by super-linear algorithms), and the runtime to $O(k^{1+\beta}\,\mathrm{poly}(\log N,1/\epsilon))$, with a modest restriction that $\epsilon \leq (\log k/\log N)^{\gamma}$, for any constants $\beta,\gamma > 0$. When $k\leq \log^c N$ for some $c>0$, the runtime is reduced to $O(k\,\mathrm{poly}(\log N,1/\epsilon))$.
## Private Information Retrieval with 2-Servers and sub-polynomial communication
Zeev Dvir, Princeton
Monday, December 15, 2014 - 1:00pm to 2:30pm
Pierce 213
A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the i'th bit of an n-bit database replicated among two servers (which do not communicate) while not revealing any information about i to either server. The privacy of the user is information theoretic and does not rely on any cryptographic assumptions. In this work we construct a new 2-server PIR scheme with total communication cost sub-polynomial in n. This improves over the currently known 2-server protocols which require n^{1/3} communication and matches the communication cost of known 3-server PIR schemes. Our improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives.
Joint work with Sivakanth Gopi (Princeton).
## Approximating the best Nash Equilibrium in n^{o(log n)}-time breaks ETH
Omri Weinstein, Princeton
Monday, December 8, 2014 - 1:00pm to 2:30pm
Pierce 213
The celebrated PPAD hardness result for finding an exact Nash equilibrium in a two-player game initiated a quest for finding *approximate* Nash equilibria efficiently, and is one of the major open questions in algorithmic game theory. We study the computational complexity of finding an epsilon-approximate Nash equilibrium with good social welfare. Hazan and Krauthgamer, and subsequent improvements, showed that finding an epsilon-approximate Nash equilibrium with good social welfare in a two-player game, and many variants of this problem, is at least as hard as finding a planted clique of size O(log n) in the random graph G(n,1/2). We show that any polynomial-time algorithm that finds an epsilon-approximate Nash equilibrium with good social welfare refutes (the worst-case) Exponential Time Hypothesis of Impagliazzo and Paturi. Specifically, it would imply a 2^{O(sqrt(n))} algorithm for SAT. Our lower bound matches the quasi-polynomial-time algorithm of Lipton, Markakis and Mehta for solving the problem. Our key tool is a reduction from the PCP machinery to finding Nash equilibria via free games, the framework introduced in the recent work by Aaronson, Impagliazzo and Moshkovitz. Techniques developed in the process may be useful for replacing planted-clique hardness with ETH-hardness in other applications.
Joint work with Mark Braverman and Young Kun Ko.
## Randomized Symmetry Breaking and the Constructive Lovász Local Lemma
Seth Pettie, University of Michigan
Monday, December 1, 2014 - 1:00pm to 2:30pm
Pierce 213
Symmetry breaking problems pervade every area of distributed computing. The devices in a distributed system are often assumed to be initially undifferentiated, except for having distinct IDs. In order to accomplish basic tasks they must break this initial symmetry. In distributed graph algorithms (where the communications network and the input graph are identical) some symmetry breaking tasks include computing maximal matchings, vertex- and edge-colorings, and maximal independent sets (MIS). Running time is measured in rounds of communication.
In this talk I'll present two general randomized methods for solving symmetry breaking problems. The first can be applied to "decomposable" problems (such as MIS or maximal matching). The second can be applied to a broader class of problems; it uses an efficient distributed version of the constructive Lovász Local Lemma.
Joint work with Hsin-Hao Su, Kai-Min Chung, Leonid Barenboim, Michael Elkin, and Johannes Schneider. Preliminary results appeared in FOCS 2012 and PODC 2014.
## Fast algorithms for optimization of submodular functions
Monday, November 24, 2014 - 1:00pm to 2:30pm
Pierce 213
Much progress has been made on problems involving optimization of submodular functions under various constraints. However, the resulting algorithms, in particular the ones based on the "multilinear relaxation", are often quite slow. In this talk, I will discuss some recent efforts on making these algorithms faster and more practical.
We design near-linear and near-quadratic algorithms for maximization of submodular functions under several common constraints. The techniques that we use include ground set sparsification, a coarse variant of the continuous greedy algorithm, and multiplicative weight updates. Some of the new algorithms have been implemented and showed improvements on instances arising in active set selection for nonparametric learning and exemplar-based clustering.
Based on recent works with
1. C. Chekuri and T.S. Jayram
2. B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi and A. Krause
## Exponential Separation of Information and Communication
Gillat Kol, IAS
Monday, November 17, 2014 - 1:00pm to 2:30pm
Pierce 213
In profoundly influential works, Shannon and Huffman showed that if Alice wants to send a message X to Bob, it is sufficient for her to send roughly H(X) bits (in expectation), where H denotes Shannon's entropy function. In other words, the message X can be compressed to roughly H(X) bits, the information content of the message. Can one prove similar results in the interactive setting, where Alice and Bob engage in an interactive communication protocol?
We show the first gap between communication complexity and information complexity, by giving an explicit example of a partial boolean function with information complexity O(k), and distributional communication complexity > 2^k. This shows that a communication protocol cannot always be compressed to its internal information, answering (the standard formulation of) the above question in the negative. By a result of Braverman, our example gives the largest possible gap.
By a result of Braverman and Rao, our example gives the first gap between communication complexity and amortized communication complexity, implying that strong direct sum does not hold for distributional communication complexity, answering a long standing open problem.
## Graph Sparsification in the Streaming Model
Christopher Musco, MIT
Monday, November 10, 2014 - 1:00pm to 2:30pm
Pierce 213
Streaming algorithms have received significant attention for their power in processing dynamically changing data under space constraints. As graph datasets grow (e.g. social networks) and graphs are increasingly used to model non-linked datasets (e.g. spectral clustering), a rich subfield has developed around streaming algorithms specifically designed for processing graphs.
One powerful tool for compressing the storage required for a graph is "graph sparsification". It is possible to eliminate most of the edges from a dense graph while still maintaining much of the graph's structural information. This talk will focus on computing graph sparsifiers in the streaming model. Specifically, we consider how to dynamically maintain a graph sparsifier as edges are continually inserted into and deleted from a graph.
I will review graph sparsification and prior work on streaming algorithms for the problem. Then I'll discuss our recent result, which is the first algorithm for computing spectral graph sparsifiers from a stream of edge insertions and deletions in essentially optimal space. The result can be viewed as a sparse-recovery type algorithm for graph data, extending a powerful technique introduced by Ahn, Guha, and McGregor.
(Joint work with Michael Kapralov, Yin Tat Lee, Cameron Musco, and Aaron Sidford. See: http://arxiv.org/abs/1407.1289)
## Learning Halfspaces with Noise
Pranjal Awasthi, Princeton
Monday, November 3, 2014 - 1:00pm to 2:30pm
Pierce 213
We study the problem of learning halfspaces in the malicious noise model of Valiant. In this model, an adversary can corrupt an η fraction of both the label part and the feature part of an example. We design a polynomial-time algorithm for learning halfspaces in R^d under the uniform distribution with near optimal noise tolerance.
Our results also imply the first active learning algorithm for learning halfspaces that can handle malicious noise.
Joint work with Nina Balcan and Phil Long.
## Constant-time Testing and Learning Algorithms for Image Properties
Monday, October 27, 2014 - 1:00pm to 2:30pm
Pierce 213
We initiate a systematic study of sublinear-time algorithms for image analysis that have access only to labeled random samples from the input. Most previous sublinear-time algorithms for image analysis were query-based, that is, they could query pixels of their choice. We consider algorithms with two types of input access: sample-based algorithms that draw pixels independently at random, and block-sample-based algorithms that draw pixels from independently chosen random square blocks of the image. We investigate three basic properties of black-and-white images: being a half-plane, convexity and connectedness. For the first two properties, all our algorithms are sample-based, and for connectedness they are block-sample-based. All algorithms we present have low sample complexity that depends only on the error parameter, but not on the input size.
We design algorithms that approximate the distance to the three properties within a small additive error or, equivalently, tolerant testers for being a half-plane, convexity and connectedness. Tolerant testers for these properties, even with query access to the image, were not investigated previously. Tolerance is important in image processing applications because it allows algorithms to be robust to noise in the image. We also give (non-tolerant) testers for convexity and connectedness with better complexity than implied by our distance approximation algorithms. Our testers are faster than previously known query-based testers.
To obtain our algorithms for convexity, we design two fast proper PAC learners of convex sets in two dimensions that work under the uniform distribution: one non-agnostic and one agnostic.
(Joint work with Piotr Berman and Meiram Murzabulatov)
## Path-Finding Methods for Linear Programming
Aaron Sidford, MIT
Monday, October 6, 2014 - 1:00pm to 2:30pm
Pierce 213
In this talk I will present a new algorithm for solving linear programs. Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(sqrt(n) L) iterations each consisting of solving Õ(1) linear systems and additional nearly linear time computation. Our method improves upon the convergence rate of previous state-of-the-art linear programming methods which required solving either Õ(sqrt(m)L) linear systems [R88] or consisted of Õ((mn)^(1/4)) steps of more expensive linear algebra [VA93].
Interestingly, our algorithm not only nearly matches the convergence rate of the universal barrier of Nesterov and Nemirovskii [NN94], but in the special case of the linear programming formulation of various flow problems our methods converge at a rate faster than that predicted by any self-concordant barrier. In particular, we achieve a running time of Õ(|E| sqrt(|V|) log^2 U) for solving the maximum flow problem on a directed graph with |E| edges, |V| vertices, and capacity ratio U, thereby improving upon the previous fastest running time for solving this problem whenever the graph is not too sparse, i.e. when |E| > |V|^{1+epsilon} for any constant epsilon > 0.
This talk will assume little exposure to linear programming algorithms, convex optimization, or graph theory and will require no previous experience with the universal barrier or self-concordance.
This talk reflects joint work with Yin Tat Lee. See http://arxiv.org/abs/1312.6677 and http://arxiv.org/abs/1312.6713.
## The Coordinated-Attack Problem Revisited
Eli Gafni, UCLA
Monday, September 29, 2014 - 1:00pm to 2:30pm
Pierce 213
The Coordinated Attack problem was posed and proved impossible in 1975. It held the promise of creating a new kind of reasoning about computing: the field of distributed algorithms.
It did not pan out that way. Only old-timers still remember this problem. There was not much to learn from it, since unlike the impossibility result of Fischer, Lynch and Paterson (FLP) for asynchronous agreement with a single fault that followed in 1983, straightforward variants of the underlying problem turned out to be trivial and uninteresting. In contrast, with FLP, the model of just a single fault still allowed for lesser coordination mechanisms than agreement that were still non-trivial. Indeed, the discovery a decade later of the connection between algebraic topology and distributed algorithms can be traced back to the FLP result. Thus FLP, rather than the Coordinated Attack, delivered on the original promise.
In this talk, Dr. Gafni will show a simple tweak to the Coordinated Attack problem that allows for some coordination. This possible coordination leads directly and simply to the FLP impossibility result, and the subsequent connection between distributed computing and algebraic topology.
## Local Reductions
Emanuele Viola, Northeastern University and Visiting Scholar at Harvard
Monday, September 22, 2014 - 1:00pm to 2:30pm
Pierce 213
We reduce non-deterministic time T > 2^n to a 3SAT instance phi of size |phi| = T polylog T such that there is an explicit circuit C that, on input an index i of log |phi| bits, outputs the i-th clause, and each output bit of C depends on O(1) input bits. The previous best result was C in NC^1. Even in the simpler setting of |phi| = poly(T) the previous best result was C in AC^0.
We also somewhat optimize the complexity of PCP reductions.
As an application, we tighten Williams' connection between satisfiability (or derandomization) and circuit lower bounds. The original connection employed previous reductions as black boxes. If one instead employs the reductions above as black boxes then the connection is more direct.
Based on joint works with Hamid Jahanjou and Eric Miles, and with Eli Ben-Sasson.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7474380731582642, "perplexity": 1176.9977746753184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937074.8/warc/CC-MAIN-20180419223925-20180420003925-00226.warc.gz"}
|
https://www.onemathematicalcat.org/Math/Precalculus_obj/ellipseDefn.htm
|
# Definition of an Ellipse
by Dr. Carol JVF Burns (website creator)
Ellipses were introduced in Introduction to Conic Sections, as one of several different curves (‘conic sections’) that are formed by intersecting a plane with an infinite double cone. Identifying Conics by the Discriminant introduced the general equation for any conic section, and gave conditions under which the graph would be an ellipse.

In this current section, we present and explore the standard definition of an ellipse. This definition facilitates the derivation of standard equations for ellipses.
Recall that the notation ‘$\,d(P,Q)\,$’ denotes the distance between points $\,P\,$ and $\,Q\,$.
DEFINITION (ellipse)

An ellipse is the set of points in a plane such that the sum of the distances to two fixed points is constant. More precisely:

Let $\,F_1\,$ and $\,F_2\,$ be points; they are called the foci of the ellipse (pronounced FOE-sigh). (The singular form of ‘foci’ is ‘focus’.) Let $\,k\,$ be a positive real number, with $\,k > d(F_1,F_2)\,$. In this section, $\,k\,$ is referred to as the ellipse constant.

The ellipse determined by $\,F_1\,$, $\,F_2\,$ and $\,k\,$ is the set of all points $\,P\,$ in a plane such that: $$\overbrace{d(P,F_1) + d(P,F_2)}^{\text{the sum of the distances to two fixed points}} \quad \overbrace{=\strut}^{\text{is}}\quad \overbrace{k}^{\text{constant}}$$

Here, $\,P\,$ is a general point on the ellipse, and $\,d(P,F_1) + d(P,F_2) = \text{constant}\,$.
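As a quick numerical check of this definition, consider the ellipse $\,\frac{x^2}{25} + \frac{y^2}{16} = 1\,$. Taking as given two standard facts (justified when standard equations are derived), its foci are $\,(\pm 3, 0)\,$ and its ellipse constant is $\,k = 10\,$. The short Python sketch below samples points on the curve and verifies that the sum of the distances to the foci is constant:

import math

# Ellipse x^2/25 + y^2/16 = 1: semi-axes a = 5, b = 4.
# Assumed facts (from the standard theory): foci at (+/-c, 0) with
# c = sqrt(a^2 - b^2) = 3, and ellipse constant k = 2a = 10.
a, b = 5.0, 4.0
c = math.sqrt(a * a - b * b)
F1, F2 = (-c, 0.0), (c, 0.0)

def d(P, Q):
    """Distance between points P and Q."""
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

for t in [0.0, 0.7, 1.6, 2.5, 3.9, 5.2]:
    P = (a * math.cos(t), b * math.sin(t))   # a point on the ellipse
    print(round(d(P, F1) + d(P, F2), 10))    # prints 10.0 every time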
## Old-Fashioned Playing with the Definition of an Ellipse
Got a piece of cardboard, paper, tape, string/cord (not stretchy), and pen/pencil? Then you can create your own ellipse:

• Tape the paper to the cardboard (at the corners is sufficient).
• Punch two small holes through the paper/cardboard at the desired foci.
• Put the cord/string through the two holes from front to back, and tie securely on the back. The length of the string that protrudes in the front, when held taut by a pen (see photo), is the ellipse constant.
• Keeping the string taut, trace the ellipse. (The string gets a bit twisted near the line through the two foci. So, draw one continuous motion for the upper half, then re-position and draw the lower half.)

(The sunflower in a vase is optional. ☺ I grew my own sunflowers from seed in 2017, when I was writing this section!)
## More Playing with the Definition of an Ellipse
You can also play with ellipses using the dynamic JSXGraph at right:

• $\,F_1\,$ and $\,F_2\,$ are the foci. Move them around! As you hover over each focus, you can see the coordinates of the point.
• The slider at the top sets the ellipse constant. The slider can be set to numbers between $\,0\,$ and $\,20\,$ with increments of $\,0.5\,$. The starting value is $\,12\,$ (refresh the page as needed).
• The current distance between the foci is displayed near the top. Watch this distance change as you move the foci around. The starting value for $\,d(F_1,F_2)\,$ is $\,10\,$ (refresh the page as needed).
• In order to see an ellipse, the ellipse constant (slider value) must be greater than the distance between the foci.
• $\,P\,$ is a general point on the ellipse. Move it around!
• When the ellipse constant equals the distance between the foci, the ‘ellipse’ degenerates to a line segment.
• Ignore the line segment that you see when the ellipse constant $\,k\,$ is less than the distance between the foci. As discussed below, in this case there are actually no points $\,P\,$ that satisfy $\,d(F_1,P) + d(F_2,P) = k\,$.
## Notes:
In the definition of ellipse, the ellipse constant $\,k\,$ is required to be strictly greater than the distance between the two foci. Why? As shown below, other values of $\,k\,$ don't give anything that a reasonable person would want to call an ellipse!
### A ‘LINE SEGMENT’ ELLIPSE: $\,k = d(F_1,F_2)\,$
Suppose the ellipse constant, $\,k\,$, equals the distance between the foci: that is, $\,k = d(F_1,F_2)\,$.
In this case, the solution set to the equation $$\color{green}{d(P,F_1)} + \color{red}{d(P,F_2)} = k$$ is the line segment between $\,F_1\,$ and $\,F_2\,$ (including the endpoints).
Most people don't want to call a line segment an ellipse!
This is why $\,k\,$ is not allowed to equal $\,d(F_1,F_2)\,$ in the definition of ellipse.
• $\,\color{green}{d(P,F_1)}\,$ is the length of the green segment
• $\,\color{red}{d(P,F_2)}\,$ is the length of the red segment
• together, the green and red segments give $\,d(F_1,F_2)\,$
### AN ‘EMPTY’ ELLIPSE: $\,k < d(F_1,F_2)\,$
The shortest distance between any two points is a straight line. In particular, the shortest distance from $\,F_1\,$ to $\,F_2\,$ is the length of the line segment between them, and is denoted by $\,d(F_1,F_2)\,$. Thus, any path from $\,F_1\,$ to $\,F_2\,$ must have length greater than or equal to $\,d(F_1,F_2)\,$. In particular (refer to sketch at right), the piecewise-linear path from $\,F_1\,$ to $\,P\,$ and then from $\,P\,$ to $\,F_2\,$ always has length greater than or equal to $\,d(F_1,F_2)\,$.

Therefore, if the ellipse constant $\,k\,$ is strictly less than $\,d(F_1,F_2)\,$, there are no points $\,P\,$ that make the following equation true: $$\overbrace{d(P,F_1) + d(P,F_2)\strut}^{\text{always}\ \ge\ d(F_1,F_2)} \qquad = \qquad \overbrace{\strut k}^{<\ d(F_1,F_2)}$$

You might want to call this an empty ellipse, an invisible ellipse, or an imaginary ellipse! There's nothing there!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9154648184776306, "perplexity": 420.50361023331436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00307.warc.gz"}
|
https://spark.apache.org/docs/2.4.7/api/python/_modules/pyspark/broadcast.html
|
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import gc
import os
import socket
import sys
import threading
from tempfile import NamedTemporaryFile
from pyspark.cloudpickle import print_exec
from pyspark.java_gateway import local_connect_and_auth
from pyspark.serializers import ChunkedStream
from pyspark.util import _exception_message
if sys.version < '3':
import cPickle as pickle
else:
import pickle
unicode = str
__all__ = ['Broadcast']


# Holds broadcast data received from Java, keyed by its id.
_broadcastRegistry = {}


def _from_id(bid):
    # Look up an already-loaded broadcast variable by its id (used on workers).
    from pyspark.broadcast import _broadcastRegistry
    if bid not in _broadcastRegistry:
        raise Exception("Broadcast variable '%s' not loaded!" % bid)
    return _broadcastRegistry[bid]


class Broadcast(object):

    """
    A broadcast variable created with L{SparkContext.broadcast()}.
    Access its value through C{.value}.

    Examples:

    >>> from pyspark.context import SparkContext
    >>> sc = SparkContext('local', 'test')
    >>> b = sc.broadcast([1, 2, 3, 4, 5])
    >>> b.value
    [1, 2, 3, 4, 5]
    >>> sc.parallelize([0, 0]).flatMap(lambda x: b.value).collect()
    [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
    >>> b.unpersist()
    """
    def __init__(self, sc=None, value=None, pickle_registry=None, path=None,
                 sock_file=None):
        """
        Should not be called directly by users -- use L{SparkContext.broadcast()}
        instead.
        """
        if sc is not None:
            # we're on the driver. We want the pickled data to end up in a file (maybe encrypted)
            f = NamedTemporaryFile(delete=False, dir=sc._temp_dir)
            self._path = f.name
            self._sc = sc
            self._python_broadcast = sc._jvm.PythonRDD.setupBroadcast(self._path)
            if sc._encryption_enabled:
                # with encryption, we ask the jvm to do the encryption for us, we send it data
                # over a socket
                port, auth_secret = self._python_broadcast.setupEncryptionServer()
                (encryption_sock_file, _) = local_connect_and_auth(port, auth_secret)
                broadcast_out = ChunkedStream(encryption_sock_file, 8192)
            else:
                # no encryption, we can just write pickled data directly to the file from python
                broadcast_out = f
            self.dump(value, broadcast_out)
            if sc._encryption_enabled:
                self._python_broadcast.waitTillDataReceived()
            self._jbroadcast = sc._jsc.broadcast(self._python_broadcast)
            self._pickle_registry = pickle_registry
        else:
            # we're on an executor
            self._jbroadcast = None
            self._sc = None
            self._python_broadcast = None
            if sock_file is not None:
                # the jvm is doing decryption for us. Read the value
                # immediately from the sock_file
                self._value = self.load(sock_file)
            else:
                # the jvm just dumps the pickled data in path -- we'll unpickle lazily when
                # the value is requested
                assert(path is not None)
                self._path = path
    def dump(self, value, f):
        try:
            pickle.dump(value, f, 2)
        except pickle.PickleError:
            raise
        except Exception as e:
            msg = "Could not serialize broadcast: %s: %s" \
                  % (e.__class__.__name__, _exception_message(e))
            print_exec(sys.stderr)
            raise pickle.PicklingError(msg)
        f.close()
    def load_from_path(self, path):
        # Read and unpickle the broadcast value from a (large, buffered) file.
        with open(path, 'rb', 1 << 20) as f:
            return self.load(f)

    def load(self, file):
        # "file" could also be a socket
        gc.disable()
        try:
            return pickle.load(file)
        finally:
            gc.enable()
    @property
    def value(self):
        """ Return the broadcasted value
        """
        if not hasattr(self, "_value") and self._path is not None:
            # we only need to decrypt it here when encryption is enabled and
            # if its on the driver, since executor decryption is handled already
            if self._sc is not None and self._sc._encryption_enabled:
                port, auth_secret = self._python_broadcast.setupDecryptionServer()
                (decrypted_sock_file, _) = local_connect_and_auth(port, auth_secret)
                self._python_broadcast.waitTillBroadcastDataSent()
                return self.load(decrypted_sock_file)
            else:
                self._value = self.load_from_path(self._path)
        return self._value
    def unpersist(self, blocking=False):
        """
        Delete cached copies of this broadcast on the executors. If the
        broadcast is used after this is called, it will need to be
        re-sent to each executor.

        :param blocking: Whether to block until unpersisting has completed
        """
        if self._jbroadcast is None:
            raise Exception("Broadcast can only be unpersisted in driver")
        self._jbroadcast.unpersist(blocking)
    def destroy(self):
        """
        Destroy all data and metadata related to this broadcast variable.
        Use this with caution; once a broadcast variable has been destroyed,
        it cannot be used again. This method blocks until destroy has
        completed.
        """
        if self._jbroadcast is None:
            raise Exception("Broadcast can only be destroyed in driver")
        self._jbroadcast.destroy()
        os.unlink(self._path)
    def __reduce__(self):
        if self._jbroadcast is None:
            raise Exception("Broadcast can only be serialized in driver")
        self._pickle_registry.add(self)
        return _from_id, (self._jbroadcast.id(),)
"""
def __init__(self):
self.__dict__.setdefault("_registry", set())
def __iter__(self):
for bcast in self._registry:
yield bcast
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4251449704170227, "perplexity": 29784.705762015572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00660.warc.gz"}
|
http://newvillagegirlsacademy.org/math/?page_id=2284
|
# 4.2 – Slope
Key Terms
• Rate of Change – The measure of how much a dependent variable y changes for a given change in the independent variable x. If the graph of the function is a straight line, the rate of change equals the slope of the line.
• Rise – The vertical distance between two points on a line. It equals the difference between the y-coordinates of the two points.
• Run – The horizontal distance between two points on a line. It equals the difference between the x-coordinates of the two points.
• Slope – A measure of the steepness of a line. The slope equals rise divided by run for any two points on the line. A line that rises from left to right has a positive slope. A line that falls from left to right has a negative slope.
Notes
• Horizontal means “left and right”
• x-axis
• Vertical means “up and down”
• y-axis
• Slopes can rise, fall, be zero (horizontal), or be undefined (vertical)
• Rise – UP from Left to Right
• Positive slope
• Fall – DOWN from Left to Right
• Negative slope
• Straight horizontal lines that do not rise or fall
• Zero slope
• Straight vertical lines that are all rise and no run
• Remember: undefined fractions have a zero in the denominator
• Undefined slopes have a zero run (which is the denominator of the formula: rise over run).
• Slopes can be steep (big) or shallow (small)
• Steep slopes have larger positive or negative values
• The steeper the line, the greater the slope
• Ex: slope = 5 is greater than slope = 2
• Ex: slope = 10 is greater than slope = 0.34
• Ex: slope = -4 is steeper than slope = 2
• Even though -4 is negative, its absolute value (4) is larger, so the line is steeper than a line with slope 2.
• Shallow slopes have smaller positive or negative values
• The less steep the line, the smaller the slope
• Ex: slope = 2 is less steep than slope = 2.5
• Ex. slope = 5 is less steep than slope = -10
• Even though -10 is negative, its absolute value (10) is larger (steeper) than 5.
• Parallel Slopes
• Two lines that have the same exact slope are parallel
• Slope Formula
• Rise: vertical (up and down, like an elevator)
• Rise Up: Positive
• Rise Down: Negative
• Run: horizontal (left and right, like walking across a room)
• Run Left: Negative
• Run Right: Positive
• Positive and Negative Rise and Run
• Lines that go UP have a Positive Rise and a Positive Run
• Lines that go DOWN have a Negative Rise and a Positive Run
• Unless you are calculating a line’s run backwards, the Run will always be positive.
• You can do this with careful attention to detail.
• The slope of a line is CONSTANT
• It will never change, no matter what two points you compare!
• The slopes will be the same for ALL points on a line
• $\frac{12}{3}$ is the same slope as $\frac{4}{1}$ if you reduce the fraction! Both will be a slope of 4 over 1.
• The rise will be 4 and the run will be 1.
• Formula for Finding the Slope Using 2 Points
• Each point has an x-value and a y-value
• Choose either point to be Point 1 and call the other point, Point 2
• It’s usually a good idea to pick the point on the Left as Point 1
• Label Point 1: $(x_1,y_1)$
• Label Point 2: $(x_2,y_2)$
• Substitute the values into the formula, reduce the fraction, write your slope in the form of Rise over Run
• Example: Points (2, -4) and (9,10)
• Point 1: (2, -4)
• $x_1=2$ and $y_1=-4$
• Point 2: (9, 10)
• $x_2=9$ and $y_2=10$
• $Slope=\frac{y_2-y_1}{x_2-x_1}=\frac{10-(-4)}{9-2}=\frac{14}{7}$
Answer: the slope is $\frac{14}{7}$, which can be reduced to $\frac{2}{1}$, which can be written as just “2”
• Equation of a Line
• Formula: y = mx + b
• Slope: m
• y-intercept (where the line crosses the y-axis): b
• b can be 0 if the line crosses the y-axis at the origin (0, 0)
• Any x-value (point) on the line: x
• Any y-value (point) on the line: y
• Identify Slope in an Equation
• Slope is the coefficient of the x-value
• Ex. y = 3x + 2 has a slope of 3.
• Ex. y = -12x has a slope of -12.
• Ex. y = 4(x – 2) + 6 has a slope of 4.
• In this case, distribute the 4 into the parenthesis where the x-value is to simplify: y = 4x – 8 + 6. You can also add -8 + 6 to find the y-intercept (b) of -2.
• Real World Example
• A graph is constructed such that time (in seconds) is the x-variable and distance (in miles) is the y-variable.
• If you plot the distance that a space shuttle travels at a speed of 5 miles per second, what is the slope of the graph?
• Note: in the real world, most lines START at the ORIGIN (0,0).
• $x_1=0$, $y_1=0$, $x_2=1$, $y_2=5$
• $Slope=\frac{y_2-y_1}{x_2-x_1}$
• $Slope=\frac{5-0}{1-0}$
• $Slope=\frac{5}{1}$ or just “5”
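If you like to check answers with a computer, here is a minimal Python sketch of the slope formula (an illustration only): it computes rise over run for two points, and returns None when the run is zero, since a vertical line has an undefined slope.

def slope(p1, p2):
    """Slope between points p1 = (x1, y1) and p2 = (x2, y2): rise over run."""
    x1, y1 = p1
    x2, y2 = p2
    rise = y2 - y1
    run = x2 - x1
    if run == 0:
        return None  # vertical line: zero run, so the slope is undefined
    return rise / run

print(slope((2, -4), (9, 10)))  # 2.0  -- the worked example above
print(slope((0, 0), (1, 5)))    # 5.0  -- the space-shuttle example
print(slope((3, 1), (3, 7)))    # None -- vertical line, undefined slope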
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8569758534431458, "perplexity": 1341.986093700253}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120001.0/warc/CC-MAIN-20170423031200-00242-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://nghiaho.com/?page_id=846
|
# Decomposing and composing a 3×3 rotation matrix
This post shows how to decompose a 3×3 rotation matrix into the 3 elementary Euler angles, sometimes referred to as yaw/pitch/roll, and going the other way around. The technique I’m presenting is based off http://planning.cs.uiuc.edu/node102.html.
If you have ever seen Wikipedia’s entry on Rotation matrix or Euler angles, you will have no doubt been swamped to your neck with maths equations all over the place, depending how tall your neck is. It turns out there is no single correct answer on defining a rotation matrix in terms of Euler angles. There are a few ways to accomplish it and all of them are valid. But that’s okay, I’ll just show one way, which should be adequate for most applications. I won’t go into any maths derivation, aiming to keep this post implementation friendly.
# Decomposing a rotation matrix
Given a 3×3 rotation matrix
$R = \left[ \begin{array}{ccc} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{array} \right]$
The 3 Euler angles are
$\theta_{x} = atan2\left(r_{32}, r_{33}\right)$
$\theta_{y} = atan2\left(-r_{31}, \sqrt{r_{32}^2 + r_{33}^2}\right)$
$\theta_{z} = atan2\left(r_{21}, r_{11}\right)$
Here atan2 is the same arc tangent function, with quadrant checking, you typically find in C or Matlab.
# Composing a rotation matrix
Given 3 Euler angles $\theta_{x}, \theta_{y}, \theta_{z}$, the rotation matrix is calculated as follows:
$X = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos\left(\theta_{x}\right) & -\sin\left(\theta_{x}\right) \\ 0 & \sin\left(\theta_x\right) & \cos\left(\theta_{x}\right) \end{array} \right]$
$Y = \left[ \begin{array}{ccc} \cos\left(\theta_{y}\right) & 0 & \sin\left(\theta_{y}\right) \\ 0 & 1 & 0 \\ -\sin\left(\theta_{y}\right) & 0 &\cos\left(\theta_{y}\right) \end{array} \right]$
$Z = \left[ \begin{array}{ccc} \cos\left(\theta_{z}\right) & -\sin\left(\theta_{z}\right) & 0 \\ \sin\left(\theta_{z}\right) & \cos\left(\theta_{z}\right) & 0 \\ 0 & 0 & 1 \end{array} \right]$
$R = ZYX$
# Note on angle ranges
The Euler angles returned when doing a decomposition will be in the following ranges:
$\theta_{x} \rightarrow \left(-\pi, \pi \right)$
$\theta_{y} \rightarrow \left(-\frac{\pi}{2}, \frac{\pi}{2} \right)$
$\theta_{z} \rightarrow \left(-\pi, \pi \right)$
If you keep your angles within these ranges, then you will get the same angles on decomposition. Conversely, if your angles are outside these ranges you will still get the correct rotation matrix, but the decomposed values will be different to your original angles.
Code
The Octave/Matlab script contains the decompose/compose function and a demo on using it. It picks random Euler angles, makes a rotation matrix, decomposes it and verifies the results are the same. An example output
octave:1> rotation_matrix_demo
x = -2.6337
y = -0.47158
z = -1.2795
Rotation matrix is:
R =
0.25581 -0.77351 0.57986
-0.85333 -0.46255 -0.24057
0.45429 -0.43327 -0.77839
Decomposing R
x2 = -2.6337
y2 = -0.47158
z2 = -1.2795
err = 0
Results are correct!
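For readers who prefer Python, here is a minimal NumPy sketch of the same compose/decompose round trip, written directly from the formulas above (a re-implementation, not the linked Octave script):

import numpy as np

def compose_rotation(x, y, z):
    """Build R = Z @ Y @ X from Euler angles in radians, as defined above."""
    X = np.array([[1, 0, 0],
                  [0, np.cos(x), -np.sin(x)],
                  [0, np.sin(x),  np.cos(x)]])
    Y = np.array([[ np.cos(y), 0, np.sin(y)],
                  [ 0, 1, 0],
                  [-np.sin(y), 0, np.cos(y)]])
    Z = np.array([[np.cos(z), -np.sin(z), 0],
                  [np.sin(z),  np.cos(z), 0],
                  [0, 0, 1]])
    return Z @ Y @ X

def decompose_rotation(R):
    """Recover (theta_x, theta_y, theta_z) via the atan2 formulas above."""
    x = np.arctan2(R[2, 1], R[2, 2])
    y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    z = np.arctan2(R[1, 0], R[0, 0])
    return x, y, z

# Round trip with angles inside the ranges noted above:
angles = (-2.6337, -0.47158, -1.2795)
print(decompose_rotation(compose_rotation(*angles)))  # recovers the same angles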
## 12 thoughts on “Decomposing and composing a 3×3 rotation matrix”
1. Hi Nghia,
I believe you misplaced number 1 in the Y rotation matrix. Number 1 should be at the centre of Y. As a result, your derivation of the angles is not generally correct. From the RHS of equation 3.42 of the link you cited (http://planning.cs.uiuc.edu/node102.html), one can easily obtain yaw, pitch and roll angles as follows:
\theta_x = arctan(r_{3,2}/r_{3,3})
\theta_y = -arcsin(r_{3,1})
\theta_z = arctan(r_{2,1}/r_{1,1})
By the way, you other posts and source codes on computer vision are amazing. They are really useful to me.
Cheers,
Chuong
• Hi,
Well spotted!
I came to the same result for theta_y, using arcsin instead of atan2. I wasn’t sure if there was any ‘gotchas’ at the time, so I left it using the original equation I cited.
Glad you found something useful on the site
2. Hi Nghia,
I think this approach has one singularity. If theta_y is +-90 degrees, it will come that r_{1,1}, r_{2,1}, r_{3,2} and r_{3,3} will evaluate to zero due to the cos(theta_y) term. So you end up to atan(0) which, will return always 0, independent of theta_x and theta_z.
If you try this in your matlab code, make sure that you do not have numerical errors on r_{1,1}, r_{2,1}, r_{3,2} and r_{3,3}, after generating the rotation matrix.
Do you have the solution for that case also?
Cheers,
Andre
• I’ve tried putting 90 degrees in my demo program, at line 6
y = pi;
x,z are still random between -180,180 degrees. The matrix changes accordingly. The returned values however in the x,z axis are out by 180 degrees.
Try editing the demo code and see if you get the same results as me.
• I’ve tried to put in line 6:
y = pi*0.5;
and after the line “R = compose_rotation(x,y,z)”, the lines:
R(1,1) = 0;
R(2,1) = 0;
R(3,2) = 0;
R(3,3) = 0
to kill numerical errors of the composition process. If you read these values before, you will get values very close to zero (e-17 in my machine). They actually should be zero, due to the “cos(pi*0.5)” term in that specific case.
You can let x and z random.
• Ooops my bad, wasn’t thinking straight. You’re very right.
I guess the only one around it is to explicitly check for the +- 90 degree situation and use a different rotation composition. Probably one of the many reasons why people hate using matrices to represent rotation.
3. Hi Nghia!
Thank you very much for your article. It helped me very much. For about 2 days I couldn’t solve my problem, I’ve messed up in these rotation-conventions. But when I’ve used your formulas, it worked fine!
4. It’s important to note that the Euler angles described above are applied in the order theta_X -> theta_Y -> theta_Z relative to the world axis, due to the non-commutativity of 3D rotation.
5. thank you nghiaho, you excelent, i see much your paper. your’s result very good.
6. Hello: I have a visualization program which expects 3 values for rotation (x, y, z) between 0 and 360 degrees. Now, my fusion library (reading data from sensors) provides either euler angles or quaternions. I didn’t know how to use quaternions to feed the visualization application, so I’m working with euler angles.
The problem is I see euler angles values range from (0 to 360), (-180 to 180), and (-90 to 90). For the first case is Ok. For the second, I can add 180 and I’m done (range 0 to 360). Now for the last one, I see if doing a 360 movement with the board (sensors), the values are changing from -90 to +90 and +90 to -90 again, so in this case I’m not sure what to do. Adding 90 will end in (0 to 180) range.
How can I convert the euler angles ranges to the right (0 to 360) value range the visualization application requires?
Gus
• If you want to convert all your angles to [0,360] correctly then I would simply do:
if(angle < 0)
angle += 360;
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 12, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37296971678733826, "perplexity": 1281.2070027164145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://www.r-bloggers.com/writing-an-r-package-from-scratch/
|
# Writing an R package from scratch
April 29, 2014
By
(This article was first published on Not So Standard Deviations » R, and kindly contributed to R-bloggers)
As I have worked on various projects at Etsy, I have accumulated a suite of functions that help me quickly produce tables and charts that I find useful. Because of the nature of iterative development, it often happens that I reuse the functions many times, mostly through the shameful method of copying the functions into the project directory. I have been a fan of the idea of personal R packages for a while, but it always seemed like A Project That I Should Do Someday and someday never came. Until…
Etsy has an amazing week called “hack week” where we all get the opportunity to work on fun projects instead of our regular jobs. I sat down yesterday as part of Etsy’s hack week and decided “I am finally going to make that package I keep saying I am going to make.” It took me such little time that I was hit with that familiar feeling of the joy of optimization combined with the regret of past inefficiencies (joygret?). I wish I could go back in time and create the package the first moment I thought about it, and then use all the saved time to watch cat videos because that really would have been more productive.
This tutorial is not about making a beautiful, perfect R package. This tutorial is about creating a bare-minimum R package so that you don’t have to keep thinking to yourself, “I really should just make an R package with these functions so I don’t have to keep copy/pasting them like a goddamn luddite.” Seriously, it doesn’t have to be about sharing your code (although that is an added benefit!). It is about saving yourself time. (n.b. this is my attitude about all reproducibility.)
(For more details, I recommend this chapter in Hadley Wickham’s Advanced R Programming book.)
Step 0: Packages you will need
The packages you will need to create a package are devtools and roxygen2. I am having you download the development version of the roxygen2 package.
install.packages("devtools")
library("devtools")
devtools::install_github("klutometis/roxygen")
library(roxygen2)
Step 1: Create your package directory
You are going to create a directory with the bare minimum folders of R packages. I am going to make a cat-themed package as an illustration.
setwd("parent_directory")
create("cats")
If you look in your parent directory, you will now have a folder called cats, and in it you will have two folders and one file called DESCRIPTION.
You should edit the DESCRIPTION file to include all of your contact information, etc.
Step 2: Add functions

If you’re reading this, you probably have functions that you’ve been meaning to create a package for. Copy those into your R folder. If you don’t, may I suggest something along the lines of:
cat_function <- function(love=TRUE){
if(love==TRUE){
print("I love cats!")
}
else {
print("I am not a cool person.")
}
}
Save this as cat_function.R in your R directory.
Step 3: Add documentation

This always seemed like the most intimidating step to me. I’m here to tell you — it’s super quick. The package roxygen2 makes everything amazing and simple. The way it works is that you add special comments to the beginning of each function, which will later be compiled into the correct format for package documentation. The details can be found in the roxygen2 documentation — I will just provide an example for our cat function.
The comments you need to add at the beginning of the cat function are, for example, as follows:
#' A Cat Function
#'
#' This function allows you to express your love of cats.
#' @param love Do you love cats? Defaults to TRUE.
#' @keywords cats
#' @export
#' @examples
#' cat_function()
cat_function <- function(love=TRUE){
if(love==TRUE){
print("I love cats!")
}
else {
print("I am not a cool person.")
}
}
I’m personally a fan of creating a new file for each function, but if you’d rather you can simply create new functions sequentially in one file — just make sure to add the documentation comments before each function.
Step 4: Process your documentation

Now you need to create the documentation from your annotations earlier. You’ve already done the “hard” work in Step 3. Step 4 is as easy as doing this:
setwd("./cats") # this is Mac/Linux specific
document()
This automatically adds in the .Rd files to the man directory, and adds a NAMESPACE file to the main directory. You can read up more about these, but in terms of steps you need to take, you really don’t have to do anything further.
Step 5: Install!
Now it is as simple as installing the package! You need to run this from the parent working directory that contains the cats folder.
setwd("..") # this is Mac/Linux specific
install("cats")
Now you have a real, live, functioning R package. For example, try typing ?cat_function. You should see the standard help page pop up!
(Bonus) Step 6: Make the package a GitHub repo
This isn’t a post about learning to use git and GitHub — for that I recommend Karl Broman’s Git/GitHub Guide. The benefit, however, to putting your package onto GitHub is that you can use the devtools install_github() function to install your new package directly from the GitHub page.
install_github('cats','github_username')
Step 7-infinity: Iterate
This is where the benefit of having the package pulled together really helps. You can flesh out the documentation as you use and share the package. You can add new functions the moment you write them, rather than waiting to see if you’ll reuse them. You can divide up the functions into new packages. The possibilities are endless!
Additional pontifications: If I have learned anything from my (amazing and eye-opening) first year at Etsy, it’s that the best products are built in small steps, not by waiting for a perfect final product to be created. This isn’t to say that you should “move fast and break things” but rather that it’s best to get a project started and improve it through iteration. R packages can seem like a big, intimidating feat, and they really shouldn’t be.
Additional side-notes: I learned basically all of these tricks at the rOpenSci hackathon. My academic sister Alyssa wrote a blog post describing how great it was. Hadley Wickham gets full credit for envisioning that R packages should be the easiest way to share code, and making functions/resources that make it so easy to do so.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15099681913852692, "perplexity": 1518.2335540407175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646269.50/warc/CC-MAIN-20141024030046-00166-ip-10-16-133-185.ec2.internal.warc.gz"}
|