http://math.stackexchange.com/questions/227652/find-out-for-which-values-of-lambda-the-points-of-the-line-are-inside-the-cir/227660
# Find out for which values of $\lambda$ the points of the line are inside the circle

We have a line (in parametric form): $x = 2\lambda$, $y = 1-\lambda$. Find out for which values of $\lambda$ the points of the line are inside the circle $x^2+4x+y^2-6y+5=0$.

What I did: I rewrote the circle in the form $(x+2)^2 + (y-3)^2 = 8$.

Where my problems/questions are:

• First of all, I have trouble with the parametric representation of a line: how can I rewrite it in, for example, the form $y=ax+b$ or $ax+by=c$?

• There is also the problem of how to continue. I thought of replacing the $x$ and $y$ in the circle equation by $2\lambda$ and $1-\lambda$ respectively, but I get a nonsensical answer: $(2\lambda + 2)^2 + (1-\lambda -3)^2 = 8$, so $4\lambda ^2 + 4\lambda + 4 + \lambda ^2 +4\lambda + 4 = 8$, hence $5\lambda ^2 + 8\lambda =0$, i.e. $\lambda ^2 + 1.6\lambda = 0$. At this juncture I just quit because of the nonsensical answer I would get if I continued.

What am I doing wrong? What am I doing right? How does the parametric representation of a line work, and how can I rewrite it into a different form?

-

Your calculation contains one mistake: it should be $8\lambda$ instead of $4\lambda$, as explained below.

To be inside the circle, the distance from the centre must be less than the radius. So $(2\lambda+2)^2+(1-\lambda -3)^2$ must be $<8$, or $4\lambda^2+8\lambda+4+\lambda^2+4\lambda+4<8$, or $5\lambda^2+12\lambda<0$, or $(\lambda-0)\{\lambda-(-\frac{12}5)\}<0$.

The product of the two terms is $<0$, so one must be $<0$ and the other $>0$. If $(\lambda-0)>0$, i.e., $\lambda>0$, then $\lambda-(-\frac{12}5)<0$, or $\lambda<-\frac{12}5$, which is impossible as $\lambda>0$. If $(\lambda-0)<0$, i.e., $\lambda<0$, then $\lambda-(-\frac{12}5)>0$, or $\lambda>-\frac{12}5$. So $-\frac{12}5<\lambda<0$.

Alternatively, if the equation of the circle is $x^2+y^2+2gx+2fy+c=0 \quad (1)$, or $\{x-(-g)\}^2+\{y-(-f)\}^2=g^2+f^2-c$, then if $(h,k)$ lies inside the circle, $\{h-(-g)\}^2+\{k-(-f)\}^2<g^2+f^2-c$, or $h^2+k^2+2gh+2fk+c<0 \quad (2)$. Here $g=2$, $f=-3$, $c=5$, and $(h,k)$ is $(2\lambda,1-\lambda)$. We can put the values of $(h,k)$ in terms of $\lambda$ into $(2)$ to reach the same destination as in the first method.

-

• And am I right that $(1-\lambda-3)^2 = (-\lambda-2)^2$? – JohnPhteven Nov 2 '12 at 16:48
• @ZafarS, yes, $=(\lambda+2)^2$ too. Modified the answer. – lab bhattacharjee Nov 2 '12 at 16:50
• However, when I try to eliminate $y$, making it $(x+2)^2 + (0.5x+2)^2=*$, I get the answers $x=0$ and $x=-4.8$; however, shouldn't $x=-2.4$ appear too? Can I use the method of elimination too? – JohnPhteven Nov 2 '12 at 17:10
• *=8, I can't edit it after 5 minutes for some reason.. – JohnPhteven Nov 2 '12 at 17:16
• @ZafarS, it's not '=', but '<' – lab bhattacharjee Nov 2 '12 at 17:29

As for converting the parametric form to a regular form, it goes as follows. First express either $x$ or $y$ completely in terms of the parameter. Here, for example, $y=1-\lambda$, so express $\lambda=1-y$ and replace this value of $\lambda$ in the other equation. Doing so, we get $x=2(1-y)$, or $y=\frac{-x}{2}+1$. That's all.

-
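The accepted answer's inequality can be checked mechanically; a small sketch (sympy is assumed to be available):

```python
import sympy as sp

lam = sp.symbols('lambda')
x, y = 2 * lam, 1 - lam

# Substitute the parametric point into (x+2)^2 + (y-3)^2 - 8 < 0.
inside = sp.expand((x + 2) ** 2 + (y - 3) ** 2 - 8)
print(inside)  # 5*lambda**2 + 12*lambda
print(sp.solve_univariate_inequality(inside < 0, lam))  # -12/5 < lambda < 0
```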
2015-04-26 08:15:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892794251441956, "perplexity": 239.72290516412014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654114.44/warc/CC-MAIN-20150417045734-00276-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.sparse.real_rect_sort.html
# naginterfaces.library.sparse.real_rect_sort

naginterfaces.library.sparse.real_rect_sort(m, a, irow, n=None, icol=None, istc=None, dup='S', zer='R')[source]

real_rect_sort sorts the nonzero elements of a real sparse rectangular matrix, represented in coordinate storage (CS) or compressed column storage (CCS) format.

For full information please refer to the NAG Library document for f11zc: https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/f11/f11zcf.html

Parameters

m : int
    The number of rows in the matrix.
a : float, array-like
    The nonzero elements of the matrix. If column indices are supplied via icol, the elements may be in any order. If column starting addresses are supplied via istc, the elements must be ordered by increasing column index. There may be multiple nonzero elements with the same row and column indices.
irow : int, array-like
    The row indices corresponding to the elements supplied in the array a.
n : None or int, optional
    The number of columns in the matrix. If you are providing column starting addresses via istc, then your supplied value of n will be ignored, and it will be inferred from that array instead.
icol : None or int, array-like, optional
    Must be used to supply the column indices corresponding to the elements supplied in the array a, when the matrix is represented in coordinate storage format.
istc : None or int, array-like, optional
    Must be used to supply the starting address of each column, as supplied in the array a, when the matrix is represented in compressed column storage format.
dup : str, length 1, optional
    Indicates how elements in a with duplicate row and column indices are to be treated: 'R': duplicate entries are removed, only the first entry is kept; 'S': the relevant values in a are summed; 'F': the function fails with errno = 12 on detecting a duplicate.
zer : str, length 1, optional
    Indicates how elements in a with zero values are to be treated: 'R': the entries are removed; 'K': the entries are kept; 'F': the function fails with errno = 13 on detecting a zero.

Returns

a : float, ndarray
    The nonzero elements ordered by increasing column index, and by increasing row index within each column. Each nonzero element has a unique row and column index.
irow : int, ndarray
    The row indices corresponding to the elements returned in the array a.
icol : int, ndarray
    The column indices corresponding to the elements returned in the array a.
istc : int, ndarray
    The starting address of each column, as returned in the array a; the final entry holds the address of the last element in a plus one.

Raises

NagValueError
    Raised, with a descriptive errno, on invalid m, n, dup or zer; on inconsistent lengths of a, irow, icol and istc; on row or column indices out of range; when a duplicate entry is found and dup = 'F' (errno 12); when a zero entry is found and zer = 'F' (errno 13); or when the storage format is ambiguous: exactly one of icol and istc must be provided.

Notes

real_rect_sort takes a coordinate storage (CS) representation (see the F11 Introduction), or a compressed column storage (CCS) representation (see the F11 Introduction), of a real sparse rectangular matrix, and reorders the nonzero elements by increasing column index and increasing row index within each column. Entries with duplicate row and column indices may be removed. Alternatively, duplicate entries may be summed, which facilitates sparse matrix addition (see Further Comments). Any entries with zero values may optionally be removed. Both CS and CCS representations of the resulting matrix are output, which allows real_rect_sort to be used to convert between the two formats (see Further Comments).
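A minimal usage sketch, converting a small matrix from coordinate storage to CCS: the matrix values and indices are invented for illustration, while the call signature and return order follow the documentation above (running it requires a licensed NAG installation; indices are 1-based, per the NAG convention):

```python
from naginterfaces.library.sparse import real_rect_sort

# A 3x2 matrix in coordinate storage, in no particular order,
# with a duplicate entry at (row 1, column 1).
m = 3
a = [4.0, 1.0, 2.0, 3.0, 0.5]
irow = [3, 1, 2, 1, 1]
icol = [2, 1, 1, 2, 1]

# dup='S' sums the duplicate at (1, 1); zer='R' would drop explicit zeros.
a_out, irow_out, icol_out, istc = real_rect_sort(
    m, a, irow, n=2, icol=icol, dup='S', zer='R',
)

# a_out is now sorted by column, then by row within each column;
# istc gives the CCS column starting addresses.
print(a_out, irow_out, icol_out, istc)
```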
2022-10-05 05:51:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5557119250297546, "perplexity": 4336.529452934832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00360.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-sum-of-the-infinite-geometric-series-1-x-x-2-x-3-x-4
# How do you find the sum of the infinite geometric series 1 - x + x^2 - x^3 + x^4 ...?

Apr 15, 2018

See explanation.

#### Explanation:

A geometric series is convergent if and only if its common ratio is between $- 1$ and $1$, and its sum is then given by:

## $S = {a}_{1} / \left(1 - q\right)$

So in the given task we have: ${a}_{1} = 1$ and $q = - x$. The convergence condition $|q| = |-x| < 1$ says that if $x \in \left(- 1 , 1\right)$ then the series has a finite sum, and it is:

## $S = \frac{1}{1 - \left(- x\right)} = \frac{1}{1 + x}$
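A quick numeric check of that closed form by partial sums (the value x = 0.3 is arbitrary):

```python
x = 0.3

# Partial sum of 1 - x + x^2 - x^3 + ... (common ratio -x).
partial = sum((-x) ** k for k in range(1000))

print(partial, 1 / (1 + x))  # both ~0.7692307...
```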
2020-10-26 16:33:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414725303649902, "perplexity": 191.74016881627648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891428.74/warc/CC-MAIN-20201026145305-20201026175305-00014.warc.gz"}
https://scikit-hep.org/pyhf/_generated/pyhf.probability.Independent.html
# Independent

class pyhf.probability.Independent(batched_pdf, batch_size=None)[source]

Bases: pyhf.probability._SimpleDistributionMixin

A probability density corresponding to the joint distribution of a batch of identically distributed random variables.

Example

>>> import pyhf
>>> import numpy.random as random
>>> random.seed(0)
>>> rates = pyhf.tensorlib.astensor([10.0, 10.0])
>>> poissons = pyhf.probability.Poisson(rates)
>>> independent = pyhf.probability.Independent(poissons)
>>> independent.sample()
array([10, 11])

__init__(batched_pdf, batch_size=None)[source]

Parameters:
• batched_pdf (pyhf.probability distribution) – The batch of pdfs of the same type (e.g. Poisson)
• batch_size (int) – The size of the batch

Methods

log_prob(value)[source]

The log of the probability density function at the given value. As the distribution is a joint distribution of components of the same type, this is the sum of the log probabilities of each of the distributions that compose the joint.

Example

>>> import pyhf
>>> import numpy.random as random
>>> random.seed(0)
>>> rates = pyhf.tensorlib.astensor([10.0, 10.0])
>>> poissons = pyhf.probability.Poisson(rates)
>>> independent = pyhf.probability.Independent(poissons)
>>> values = pyhf.tensorlib.astensor([8.0, 9.0])
>>> independent.log_prob(values)
-4.26248380...

Parameters: value (tensor or float) – The value at which to evaluate the distribution

Returns: The value of $$\log(f\left(x\middle|\theta\right))$$ for $$x=$$ value
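The claim that the joint log-probability is the sum of the component log-probabilities can be cross-checked independently of pyhf; a sketch using scipy's Poisson log-pmf (scipy is an assumption here, not a pyhf dependency):

```python
from scipy.stats import poisson

rates = [10.0, 10.0]
values = [8.0, 9.0]

# The joint factorizes, so the log-probabilities add.
total = sum(poisson.logpmf(v, mu=r) for v, r in zip(values, rates))
print(total)  # ~ -4.26248380, matching Independent.log_prob above
```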
2023-03-21 21:53:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6426185369491577, "perplexity": 6663.627459513803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00162.warc.gz"}
http://mathhelpforum.com/calculus/155871-algebra-involved-limit-have-work-but-need-explanation.html
# Math Help - Algebra involved in limit...have the work, but need explanation 1. ## Algebra involved in limit...have the work, but need explanation http://i53.tinypic.com/vgr8yp.jpg ^see pic Thank you so much!! 2. They've multiplied by a cleverly disguised $1$. In this case, $\displaystyle\frac{\frac{1}{x}}{\frac{1}{x}}$. So $\displaystyle{\frac{x}{\sqrt{x^2 + x} + x} = \frac{x}{\sqrt{x^2+x}+x}\cdot \frac{\frac{1}{x}}{\frac{1}{x}}}$ $\displaystyle{ = \frac{1}{\frac{\sqrt{x^2+x}+x}{x}}}$ $\displaystyle{ = \frac{1}{\frac{\sqrt{x^2+x}}{x} + 1}}$ $\displaystyle{ = \frac{1}{\frac{\sqrt{x^2+x}}{\sqrt{x^2}} + 1}}$ $\displaystyle{= \frac{1}{\sqrt{\frac{x^2 + x}{x^2}} + 1}}$ $\displaystyle{= \frac{1}{\sqrt{1 + \frac{1}{x}} + 1}}$. 3. Ahh I see, thanks!! I wish I had been that perceptive and noticed that trick :[
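The upshot of the rewriting is that $\sqrt{1 + 1/x} \to 1$ as $x \to \infty$, so the original expression tends to $1/2$. A quick sanity check with sympy (assumed available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = x / (sp.sqrt(x**2 + x) + x)

# The rewritten form and the original agree, and the limit is 1/2.
rewritten = 1 / (sp.sqrt(1 + 1 / x) + 1)
print(sp.simplify(expr - rewritten))  # 0
print(sp.limit(expr, x, sp.oo))       # 1/2
```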
2016-06-29 01:18:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9443687200546265, "perplexity": 8929.017745302797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00149-ip-10-164-35-72.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3134204/how-do-i-find-the-probability-for-this-circuit-to-run-current
# How do I find the probability for this circuit to run current?

The probability of the closing of the ith relay in the circuit below is given by $$p_i$$, $$i$$ = 1,2,3,4,5. If all the relays function independently, what is the probability that a current flows between $$A$$ and $$B$$ for the circuit below?

So far, I have broken it down as: the 4 events that allow current to flow from $$A$$ to $$B$$ are $$P(p_1 p_4)$$, $$P(p_2 p_5)$$, $$P(p_1p_3p_5)$$, and $$P(p_2p_3p_4)$$. Therefore the probability we're looking for is:

[ $$P(p_1 p_4) \bigcup P(p_2 p_5)$$ ] $$\bigcup$$ [ $$P(p_1p_3p_5) \bigcup P(p_2p_3p_4)$$ ]

= [ $$P(p_1 p_4) + P(p_2 p_5)- P(p_1p_4p_2p_5)$$ ] $$\bigcup$$ $$P(p_3)[P(p_1p_5)+P(p_2p_4)-P(p_1p_5p_2p_4)]$$

= [ $$P(p_1 p_4) + P(p_2 p_5)- P(p_1p_4p_2p_5)$$ ] $$+$$ $$P(p_3)[P(p_1p_5)+P(p_2p_4)-P(p_1p_5p_2p_4)]$$ - ( [ $$P(p_1 p_4) + P(p_2 p_5)- P(p_1p_4p_2p_5)$$ ] * $$P(p_3)[P(p_1p_5)+P(p_2p_4)-P(p_1p_5p_2p_4)]$$ )

Is this correct?

• I think it's right, but I wouldn't swear to it. You've got the right idea. – saulspatz Mar 3 '19 at 23:53

1. Suppose that "3" is closed. $$p^{(1)}=(p_1\cup p_2) \cap (p_4 \cup p_5) = (p_1+p_2-p_1p_2)(p_4+p_5-p_4p_5)$$
2. Suppose that "3" is open. $$p^{(2)}=(p_1 \cap p_4) \cup (p_2 \cap p_5) = p_1p_4+p_2p_5-p_1p_4p_2p_5$$

Finally, you get $$p=p_3\cdot p^{(1)} + (1-p_3)p^{(2)}$$

• I like your logic; but if gate 3 is closed, then shouldn't the logic for $$p^{(1)}$$ be: 1 and 5 OR 2 and 4? – Jaigus Mar 4 '19 at 0:19
• @Jaigus Well, it is a circuit problem, so all the rules for circuits apply. The last equation is simply the formula for total probability, i.e. $$p=p_3 \cdot p(\text{flows }|p_3) + \bar p_3 \cdot p(\text{flows }|\bar p_3)$$. – Haris Gušić Mar 4 '19 at 0:35
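The conditioning formula is easy to verify by brute force over all $$2^5$$ relay states. A sketch, assuming the usual bridge topology (relays 1 and 2 on the top rail, 4 and 5 on the bottom, 3 as the crossover) and arbitrary test probabilities:

```python
from itertools import product
from math import prod

p1, p2, p3, p4, p5 = 0.3, 0.5, 0.6, 0.7, 0.2  # arbitrary test values
p = [p1, p2, p3, p4, p5]

def flows(c1, c2, c3, c4, c5):
    # Current flows A->B along 1-4, 2-5, 1-3-5 or 2-3-4.
    return (c1 and c4) or (c2 and c5) or (c1 and c3 and c5) or (c2 and c3 and c4)

# Exact probability: sum over all 32 open/closed configurations.
brute = sum(
    prod(pi if ci else 1 - pi for pi, ci in zip(p, c))
    for c in product([0, 1], repeat=5)
    if flows(*c)
)

# The answer's formula, conditioning on relay 3.
closed3 = (p1 + p2 - p1 * p2) * (p4 + p5 - p4 * p5)
open3 = p1 * p4 + p2 * p5 - p1 * p4 * p2 * p5
formula = p3 * closed3 + (1 - p3) * open3

print(brute, formula)  # the two agree
```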
2021-06-15 05:54:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8675054311752319, "perplexity": 298.10026623919947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487617599.15/warc/CC-MAIN-20210615053457-20210615083457-00084.warc.gz"}
https://cstheory.stackexchange.com/tags/comp-number-theory/hot
# Tag Info

## Hot answers tagged comp-number-theory

27 Disclaimer: I'm not an expert in number theory. Short answer: If you're willing to assume "reasonable number-theoretic conjectures", then we can tell whether there is a prime in the interval $[n, n+\Delta]$ in time $\mathrm{polylog}(n)$. If you're not willing to make such an assumption, then there is a beautiful algorithm due to Odlyzko that achieves $n^{1/\dots}$

19 Here is the construction of such a number. You can argue whether this means such a number is "known". Take any function $f$ from $\mathbb{N}$ to $\{ 1, 2, \ldots, 8 \}$ where the $n$'th digit is not computable in $O(n)$ time. Such a function exists, for example, by the usual diagonalization technique. Interpret $f(n)$ as the $n$'th decimal digit of some ...

15 The following answer was originally posted as a comment on Gil's blog. (1) Let $K=\mathbb{Q}(\alpha)$ be a number field, where we assume $\alpha$ has a monic minimal polynomial $f\in\mathbb{Z}[x]$. One can then represent elements of the ring of integers $\mathcal{O}_K$ as polynomials in $\alpha$ or in terms of an integral basis -- the two are equivalent. ...

14 First of all, there is a formal definition of "quantum-NC", see QNC on the zoo. GCD is indeed a good candidate for a problem that could be shown to be in QNC, but it's not known to be in NC. However, finding a QNC algorithm for GCD is still an open problem. The feeling for which this is believed to be true comes from the fact that the Quantum Fourier ...

12 More generally, for any constant $k\ge1$, there are transcendental numbers computable in polynomial time, but not in time $O(n^k)$. First, by the time hierarchy theorem, there exists a language $L_0\in\mathrm{E}$ not computable in time $O(2^{kn})$. We may assume $L\subseteq\{0,1\}^*$, and we may also assume that all strings $w\in L$ have length divisible by ...

9 First note that this algorithm only computes $\lceil \log_2 v \rceil$, and as the code is written, it works only for $v$ that fit in a $32$-bit word. The sequence of shifts and or-s that appears first has the function of propagating the leading 1-bit of $v$ all the way down to the least significant bit. Numerically, this gives you $2^{\lceil \log_2 v \rceil}$ ...

8 This language is in $\mathsf{LOGSPACE}$ via trial division. It is also known that logarithmic space is necessary ([1]). For a generalization to sparse sets, see "bounded language complete for NSPACE(log n)?". For hardness in the binary case, see "Are the problems PRIMES, FACTORING known to be P-hard?". [1] J. Hartmanis, L. Berman, On tape bounds for single letter ...

8 TL;DR The decimal expansion of a fixed rational number is not pseudorandom in the cryptographic sense, but irrational numbers (are conjectured to) exhibit some weaker but interesting forms of pseudorandom behavior. Roughly speaking, a sequence $s \in \{0, \ldots, B\}^n$ is pseudorandom with respect to distinguishers $\cal A$ if it cannot be distinguished (...

7 Your problem seems a special case of the turnpike reconstruction problem (for which no polynomial time algorithm is known). See for example: Shiteng Chen, Zhiyi Huang, and Sampath Kannan, "Reconstructing Numbers from Pairwise Function Values". Abstract: The turnpike problem is one of the few natural problems that are neither known to be NP-complete nor ...

5 There are essentially only two algorithms that I'm aware of: Use repeated squaring, along the lines you mentioned. Factor $n$ using a state-of-the-art algorithm, then use the Chinese remainder theorem. If $p$ is prime, you can compute $a^{b^c} \bmod p$ efficiently by computing $b^c \bmod (p-1)$ using fast exponentiation; call the result $d$, then compute $a^d \bmod p$ ...

5 Sorry if this answer doesn't tell anything nontrivial, but you don't seem to imply these results in the question. Consider first the problem of computing a modular exponentiation $a^r \bmod m$. You say above that you can compute this by repeated squaring modulo $m$, and that this needs $O(\log r)$ multiplications. This is true, and it's certainly ...

5 The question of how to find computable substructures of algebraic structures was studied by Jens Blanck and myself in the paper "Canonical Effective Subalgebras of Classical Algebras as Constructive Metric Completions". There we give general conditions on what it means for a substructure of an algebraic structure to be computable. Let me give a summary, but ...

5 Update: The description below is for a different problem (in which you have all pairwise distances in a set rather than pairwise distances between two distinct sets). I'll leave it up anyway since it is closely related. This problem is called the beltway problem, and is a special case of the general $d$-torus embedding problem. It is also closely related to ...

5 As mentioned by Daniel, you can find some information in the book A Course in Computational Algebraic Number Theory (link). In particular, there are several ways of representing elements of number fields. Let $K=\mathbb{Q}[\xi]/\langle\varphi\rangle$ be a number field with $\varphi$ a degree-$n$ monic irreducible polynomial of $\mathbb{Z}[\xi]$. Let $\theta$ be any ...

5 Start by putting $A$ into Jordan normal form, i.e., write $A=PJP^{-1}$ where $J$ is the Jordan normal form and $P$ is a suitably chosen invertible matrix. Then $A^k = PJ^k P^{-1}$, so without loss of generality I only need to consider possibilities for $A$ that are already in Jordan normal form. For $2\times 2$ matrices, there are only three interesting ...

5 Some comments (not really an answer). Let's classify 32-bit integers $c$ as follows: Type X: $c$ (as a binary string) is a De Bruijn sequence (for all rotations, bits [27,31] are distinct). An example: 11111011100110101100010100100000. Type Y: bits [27,31] of $2^i \cdot c$ are distinct for $i = 0, 1, ..., 31$. This is what Leiserson et al. use. Examples: ...

4 The state of the art here is: We can decide primality in polynomial time, but the fastest general-purpose algorithm to $\underline{\rm find}$ the factors of an n-bit composite integer takes time $\approx 2^{n^{1/3}\log^{2/3}n}$. More to your question, a primality test is the same thing as a compositeness test. Therefore, we can easily implement the '...

4 Here is a suggestion, for $K = 6$ and $N = 251$. We are given a list of values $a_i - b_j \pmod{N}$. Start by taking one of them, without loss of generality $a_1-b_1$. Without loss of generality $b_1=0$, and we obtain the value of $a_1$. Now take another one, and hope that it is of the form $a_2-b_1$ (this happens with probability $5/35 = 1/7$), and deduce $a_2$. At ...

3 One other way to look at this, which brings in potentially all complexity classes above $\mathsf{E} = \mathsf{DTIME}(2^{O(n)})$, is to consider real numbers in their binary expansion. Any real number whose binary expansion doesn't end with $0^\infty$ or $1^\infty$ - i.e., which is not a dyadic rational - has a unique binary expansion. We can treat this ...

3 Codes/Lattices are certain combinatorial objects that are commonly used within TCS. A basic question for both of them is finding "short" codewords/lattice points, known as the Minimum Distance problem/Shortest Vector problem (MDP/SVP). Both have been known to be NP-hard under randomized reductions for 20+ years. Roughly 10 years ago, the NP-hardness proof ...

3 Here's a different approach, based upon iteratively finding numbers that cannot appear among $\{a_1,\dots,a_6\}$. Call a set $A$ an over-approximation of the $a$'s if we know that $\{a_1,\dots,a_6\} \subseteq A$. Similarly, $B$ is an over-approximation of the $b$'s if we know that $\{b_1,\dots,b_6\} \subseteq B$. Obviously, the smaller $A$ is, the more ...

3 Here's an observation that I think gives you a foothold, possibly enough of one to solve the problem. Suppose we have four differences $a_1-b_1$, $a_1-b_2$, $a_2-b_1$, $a_2-b_2$ that arise as the pairwise differences between two $a$'s and two $b$'s. Call this a quartet of differences. Notice that we have a non-trivial relationship: $(a_1-b_1)-(a_1-b_2) = \dots$

3 I think your question is closely related to the set reconciliation problem, which is solved in this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.5338 The problem of set reconciliation is: given two sets $A, B \subseteq [n]$, find $A \setminus B$ and $B \setminus A$ with as little communication as possible. If $B = [n]$, then you just ...

3 Yes, there are good (efficient) algorithms. This is completely solved, and the algorithms are widely used in the cryptographic community. If $\gcd(n,p-1)=1$, then everything is an $n$th residue. If $n$ divides $p-1$, then $a$ is an $n$th residue if and only if $a^{(p-1)/n} \equiv 1 \pmod p$. If $1<\gcd(n,p-1)<n$, $a$ is an $n$th residue if and only if ...

2 Paul Lemke, Steven S. Skiena, and Warren D. Smith, "Reconstructing Sets From Interpoint Distances", gave a backtracking algorithm that runs in time $O(n^n \log n)$ for the beltway reconstruction problem. As far as I know, this is the best known. The exact complexity of the problem is not known. It is not known to be in $P$ and neither known to be $NP$-complete.

2 Some heuristic evidence: to the best of our knowledge $\pi(n)$ looks like a simple function corrected by random fluctuations. Thus I'd expect a poly-time machine with a $\pi(n)$ oracle to be no stronger than such a machine with a random oracle, and w.r.t. a random oracle $X$ adding a separate random oracle $Y$ to $\mathsf{P}$ gives $\#\mathsf{P}^X$ not ...

2 Both! You may want to read the answers to this related question, and the 1987 paper of Heather Woll, "Reductions among number theoretic problems", Information and Computation 72 (1987) 167-179, cited in Jeffrey Shallit's answer. This paper looks at the reduction between many problems in number theory, including primality, factorization, order-finding, ...

1 Knowing $F_n(1)$ or $F_n(-1)$ gives a good randomized polynomial algorithm for the factorization of $n$. We have $|F_n(1)|=\phi(n)$ and $|F_n(-1)|=\sigma(n)$ by definition. $\phi(n)$ is Euler's totient function and $\sigma(n)$ is the sum-of-divisors function. The paper you link to proves the case for $\sigma(n)$. The same paper cites that the case for ...

1 Where does this constant come from? Quoting: "On December 10, 2009, Mark Dickinson shaved off a couple operations by requiring v be rounded up to one less than the next power of 2 rather than the power of 2". [graphics.stanford.edu/~seander/bithacks.html] This particular constant is a De Bruijn sequence with binary alphabet but with an extra ...

1 "Beltway Reconstruction Problem" - arxiv.org/pdf/1212.2386.pdf may help. Note that you're asking for the function corresponding to $P$ whose autocorrelation is the given function corresponding to $A$. I've often thought that there's some relation to factoring, at least for the turnpike version. You can consider $A$ as an integer $Z=a_1x^1+a_2x^2+\cdots+a_Nx^{\dots}$
2020-08-04 00:49:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9100824594497681, "perplexity": 245.03715948148866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735836.89/warc/CC-MAIN-20200803224907-20200804014907-00525.warc.gz"}
https://gmatclub.com/forum/if-k-0-k-1-and-k-3-k-k-4-2-k-n-k-k-14-then-n-195472.html
If k ≠ 0, k ≠ ±1, and (k^3*k*k^4)^2/(k^n*k) = k^14, then n =

Math Expert (Bunuel), 01 Apr 2015:

If k ≠ 0, k ≠ ±1, and $$\frac{(k^3*k*k^4)^2}{k^n*k}=k^{14}$$, then n =

A. -1
B. 1
C. 3
D. 49
E. 129

Kudos for a correct solution.

Director, 01 Apr 2015:

$$K^{16-n-1} = K^{14}$$, so n = 1.

Retired Moderator, 01 Apr 2015:

The powers give $$(3+1+4)\cdot 2-n-1=14$$, so $$15-n=14$$ and n = 1.

Intern, 01 Apr 2015:

Reducing the equation, we get $$(k^8)^2/k^{n+1} = k^{14}$$, so 16 - (n + 1) = 14 and n = 1. Option B.

SVP, 01 Apr 2015:

$$\frac{(k^3*k*k^4)^2}{k^n*k}=k^{14}$$

$$\frac{(k^8)^2}{k^n * k^1}=k^{14}$$

$$\frac{k^{16}}{k^n * k^1}=k^{14}$$

$$\frac{k^{15}}{k^n}=k^{14}$$

$$k^n = k^{1}$$

n = 1

Math Expert (Bunuel), 06 Apr 2015: MAGOOSH OFFICIAL SOLUTION. Attachment: determinetheexponentII_text.PNG
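A quick numeric check of the exponent arithmetic with an arbitrary base (k = 2; any k with |k| > 1 works, and powers of 2 keep the floating-point arithmetic exact here):

```python
k = 2.0

# (k^3 * k * k^4)^2 / (k^n * k) = k^(16 - n - 1), which equals k^14 only for n = 1.
for n in [-1, 1, 3, 49, 129]:
    lhs = (k**3 * k * k**4) ** 2 / (k**n * k)
    print(n, lhs == k**14)  # True only for n = 1
```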
2019-01-23 08:03:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8167523145675659, "perplexity": 13281.282270775626}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584203540.82/warc/CC-MAIN-20190123064911-20190123090911-00225.warc.gz"}
https://brilliant.org/problems/streak/
# Streak!

Fascinated by the beauty of randomness, a Kaboobly Dooist asks the craftsmen to paint a linear wall consisting of $$2^{16}$$ stones in the following way. For each stone, flip a coin: if the toss results in heads, paint the stone white; if the toss results in tails, paint the stone black.

What is the expected length of the longest contiguous streak of consecutive black stones? If the answer is $$n$$, enter your answer as $$\lfloor n \rfloor$$.
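A Monte Carlo sketch of the quantity being asked for (the trial count is arbitrary; for $$n$$ fair flips the longest run of one face is known to concentrate around $$\log_2 n$$ up to an O(1) term, which the simulation makes visible):

```python
import random

def longest_black_run(num_stones: int) -> int:
    # Length of the longest run of tails (black stones) in fair coin flips.
    best = run = 0
    for _ in range(num_stones):
        if random.random() < 0.5:  # tails -> black
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

trials = 200  # arbitrary; more trials reduce the noise in the estimate
n = 2 ** 16
estimate = sum(longest_black_run(n) for _ in range(trials)) / trials
print(estimate)  # close to log2(n) = 16, minus a small constant
```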
2017-10-19 22:16:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6545222997665405, "perplexity": 4001.8179486714544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823478.54/warc/CC-MAIN-20171019212946-20171019232946-00304.warc.gz"}
http://mathsci.kaist.ac.kr/home/
## Problem of the week ### 2018-21 AM-GM inequality Does there exist a (possibly $$n$$-dependent) constant $$C$$ such that $\frac{C}{a_n} \sum_{1 \leq i < j \leq n} (a_i-a_j)^2 \leq \frac{a_1+ \dots + a_n}{n} - \sqrt[n]{a_1 \dots a_n} \leq \frac{C}{a_1} \sum_{1 \leq i < j \leq n} (a_i-a_j)^2$ for any $$0 < a_1 \leq a_2 \leq \dots \leq a_n$$?
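The two-sided inequality can be made concrete numerically. An exploratory sketch that, for random inputs, prints the window in which such a constant $$C$$ would have to lie (this only illustrates the statement, it does not answer the question; the choice $$n = 4$$ and the sampling range are arbitrary):

```python
import random
from math import prod

n = 4  # arbitrary
for _ in range(5):
    a = sorted(random.uniform(0.1, 10.0) for _ in range(n))
    gap = sum(a) / n - prod(a) ** (1.0 / n)  # AM - GM, always >= 0
    s = sum((a[i] - a[j]) ** 2 for i in range(n) for j in range(i + 1, n))
    # The inequality asks for C with gap * a[0] / s <= C <= gap * a[-1] / s.
    print(gap * a[0] / s, gap * a[-1] / s)
```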
2018-11-15 23:31:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8666507005691528, "perplexity": 1540.5488569226545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742963.17/warc/CC-MAIN-20181115223739-20181116005739-00428.warc.gz"}
http://www-sop.inria.fr/teams/marelle/vu-2020/lesson3.html
Use ALT-(up-arrow) and ALT-(down-arrow) to process this document inside your browser, line-by-line. Use ALT-(right-arrow) to go to the cursor. You can also download your working copy of the file, e.g., for sending it to teachers.

# Dependent Type Theory

## (a brief introduction to Coq's logical foundations)

(notes)

This lesson mostly follows Chapter 3 of the reference book Mathematical Components.

## Types, typing judgments

A typing judgment is a ternary relation between two terms t and T, and a context Γ, which is itself a list of pairs, variable-type:

$$\Gamma ⊢\ t\ :\ T$$

Typing rules provide an inductive definition of well-formed typing judgments. For instance, a context provides a type to every variable it stores:

$$\Gamma ⊢\ x\ :\ T \quad (x, T) \in Γ$$

A type is a term T which occurs on the right of a colon in a well-formed typing judgment. Here is an example of a context, and of judgment checking using the Check command:

Contexts also log the current hypotheses:

The fact that the command for stating a lemma also involves a colon is no coincidence. In fact, statements are types, proofs are terms (of a prescribed type), and typing rules encode rules for verifying the well-formedness of proofs.

## Terms, types, sorts

A type is a term, and therefore it can also be typed in a typing judgment. A sort s is the type of a type:

$$\Gamma ⊢\ t\ :\ T \quad \quad \Gamma ⊢\ T\ :\ s$$

The sort Prop is the type of statements:

Warning: well-typed statements are not necessarily provable.

Types used as data structures live in a different sort, called Set. Of course, a sort also has a type:

And there is in fact a tower of sorts, for consistency reasons which are beyond the scope of today's lecture:

Non-atomic types are types of functions: the source of the arrow prescribes the type of the argument, and the codomain gives the type of the application of the function to its argument.

Reminder: Lesson 2 introduced polymorphic data types, e.g. list:

Polymorphic types are types of functions with a Type source:

A dependent type is a function whose co-domain is Type, and which takes at least one of its arguments in a data type, like nat or bool. Here is for instance a type which could represent matrices (for a fixed type of coefficients), with size prescribed by its arguments:

And here is a function which uses this type as co-domain:

The typing rule for application prescribes the type of arguments:

Note that our arrow -> is just a notation for the type of functions with a non-dependent codomain:

## Propositions, implications, universal quantification

What is an arrow type between two types in Prop, i.e., between two statements? It is an implication statement: the proof of an implication maps any proof of the premise to a proof of the conclusion.

The tactic move=> is used to prove an implication, by introducing its premise, i.e. adding it to the current context:

The tactic apply: allows to make use of an implication hypothesis in a proof. Its variant exact: fails if this proof step does not close the current goal.

The apply: tactic is also used to specialize a lemma:

Note how Coq conveniently computed the appropriate instance by matching the statement against the current formula to be proved.

## Inductive types

So far, we have only (almost) rigorously explained the types Type, Prop, and forall x : A, B. But we have also casually used other constants like bool or nat.

The following declaration:

Inductive bool : Set := true : bool | false : bool

in fact introduces new constants in the language:

$$\vdash \textsf{bool} : \textsf{Set} \quad \vdash \textsf{true} : \textsf{bool} \quad \vdash \textsf{false} : \textsf{bool}$$

The term bool is a type, and the terms true and false are called constructors. The closed (i.e. variable-free) terms of type bool are freely generated by true and false, i.e. they are exactly true and false. This is the intuition behind the definition by (exhaustive) pattern matching used in Lesson 2:

The following declaration:

Inductive nat : Set := O : nat | S (n : nat)

in fact introduces new constants in the language:

$$\vdash \textsf{nat} : \textsf{Set} \quad \vdash \textsf{O} : \textsf{nat} \quad \vdash \textsf{S} : \textsf{nat} \rightarrow \textsf{nat}$$

The term nat is a type, and the terms O and S are called constructors. The closed (i.e. variable-free) terms of type nat are freely generated by O and S, i.e. they are exactly O and the terms S (S ... (S O)). This is the intuition behind the definition by induction used in Lesson 2:

More precisely, an induction scheme is attached to the definition of an inductive type:

Quiz: what is this type used for?

Let us now review the three natures of proofs that involve a term of an inductive type:

• Proofs by computation make use of the reduction rule attached to match t with ... end terms:
• Proofs by case analysis go by exhaustive pattern matching. They usually involve a pinch of computation as well.
• Proofs by induction on the (inductive) definition. Coq has a dedicated elim tactic for this purpose.

## Equality and rewriting

Equality is a polymorphic, binary relation on terms:

It comes with an introduction rule, to build proofs of equalities, and with an elimination rule, to use equality statements.

The eq_ind principle states that an equality statement can be used to perform right-to-left substitutions. It is in fact sufficient to justify the symmetry and transitivity properties of equality.

Note how restoring the coercion from bool to Prop helps with readability. But this is quite inconvenient: the rewrite tactic offers support for applying eq_ind conveniently.

Digression: eq being polymorphic, nothing prevents us from stating equalities on equality types:

## More connectives

See the Coq cheat sheet for more connectives:

• conjunction A /\ B
• disjunction A \/ B
• False
• negation ~ A, which unfolds to A -> False

## Lesson 3: sum up

### A formalism based on functions and types

• Coq's proof checker verifies typing judgments according to the rules defining the formalism.
• Statements are types, and proofs are terms (of the corresponding type).
• Proving an implication is describing a function from proofs of the premise to proofs of the conclusion, proving a conjunction is providing a pair of proofs, etc. This is called the Curry-Howard correspondence.
• Inductive types introduce types that are not (necessarily) types of functions: they are an important formalization instrument.

### Tactics

• Each atomic logical step corresponds to a typing rule, and to a tactic.
• But Coq provides help to ease the description of bureaucracy.
• Matching/unification and computation also help with mundane, computational parts.
• New tactic idioms:
  • apply
  • case: n => [| n] /=; case: l => [| x l] /=
  • elim: n => [| n ihn]; elim: l => [| x l ihl]
  • elim: n => [| n ihn] /=, elim: l => [| x l ihl] /=
  • rewrite
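Outside Coq, the "freely generated by O and S" intuition can be mimicked in a few lines of Python (chosen to match the other examples in this collection; this models only the data and structural recursion, not Coq's typing rules):

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class O:
    """The zero constructor."""

@dataclass(frozen=True)
class S:
    """The successor constructor, wrapping a smaller Nat."""
    pred: 'Nat'

Nat = Union[O, S]

def to_int(n: Nat) -> int:
    # Structural recursion: the only closed terms are O, S(O), S(S(O)), ...
    return 0 if isinstance(n, O) else 1 + to_int(n.pred)

print(to_int(S(S(S(O())))))  # 3
```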
2022-05-24 06:30:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7326183915138245, "perplexity": 2210.476605548203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662564830.55/warc/CC-MAIN-20220524045003-20220524075003-00102.warc.gz"}
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-7-section-7-1-ratio-exercises-page-271/62
Elementary Technical Mathematics

Published by Brooks Cole

Chapter 7 - Section 7.1 - Ratio - Exercises - Page 271: 62

Answer: 112.5 drops/min

Work Step by Step

Flow rate is expressed as drops per minute. Use conversion factors (equal ratios) to convert the given values. $\frac{900\ mL}{2\ hr}=\frac{900\ mL}{2\ hr}\times\frac{15\ drops}{1\ mL}\times\frac{1\ hr}{60\ min}=\frac{13500\ drops}{120\ min}=\frac{13500\div120\ drops}{120\div120\ min}=\frac{112.5\ drops}{1\ min}$
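The same conversion as a few lines of arithmetic (the variable names are just for readability):

```python
ml_total = 900      # total volume in mL
hours = 2           # infusion time
drops_per_ml = 15   # drop factor

# (900 mL / 2 hr) * (15 drops / mL) * (1 hr / 60 min)
drops_per_min = ml_total * drops_per_ml / (hours * 60)
print(drops_per_min)  # 112.5
```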
2021-02-26 01:46:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6268549561500549, "perplexity": 3741.367646833057}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00054.warc.gz"}
http://mathoverflow.net/questions/57825/who-streamlined-kontsevichs-count-of-rational-curves
# Who streamlined Kontsevich's count of rational curves?

Let $N_d$ denote the number of rational curves in $\mathbf P^2$ passing through $3d-1$ points in general position. Maxim Kontsevich discovered a famous recursion for these numbers: $$N_d = \sum_{k+l = d} N_k N_l k^2 l \left( l \binom{3d-4}{3k-2} - k \binom{3d-4}{3k-1}\right).$$

The proof of this recursion goes by interpreting $N_d$ as a Gromov-Witten invariant: one looks at the moduli space $\overline{M}_{0,3d-1}(\mathbf P^2, d)$ and pulls back the class of a point along each evaluation map. Taking the product of all these classes in the Chow ring produces a number. Using that $\mathbf P^2$ is homogeneous one can show that this number actually counts the number of stable maps where the markings are sent to the given points, and it is not hard to see that counting stable maps is the same thing as counting rational curves. Finally, the associativity of the quantum product can be translated to the WDVV differential equations for the Gromov-Witten potential, i.e. the generating function of all Gromov-Witten invariants. These differential equations translate into the above recursion.

At least, this is how the proof is stated in Kontsevich's "Enumeration of rational curves via torus actions" and Kontsevich-Manin "Gromov-Witten classes, quantum cohomology and enumerative geometry", which seem to be the earliest published sources.

However, there is also a beautiful streamlined proof which avoids the use of the quantum product. Here one instead works with $\overline M_{0,3d}(\mathbf{P}^2,d)$ (one more marking) and takes the pullback of the classes of two lines in $\mathbf P^2$ along the first two markings and $3d-2$ classes of points for the remaining ones. The intersection of these is a curve in the moduli space, and one then computes the intersection of this curve with two different linearly equivalent boundary divisors. These two intersection numbers can easily be computed by hand, producing the recursion. The proof that these two boundary divisors are linearly equivalent uses the forgetful map to $\overline{M}_{0,4}$, which is also used in the proof of the WDVV equations, so in some sense it seems like this proof inlines the particular case of WDVV that is needed in a very clever way.

This latter version of the proof appears for instance in the book of Kock and Vainsencher, in Abramovich's "Lectures on Gromov-Witten invariants on orbifolds", and in the lecture notes I found here. But where is it from originally? All these sources just refer to it as "Kontsevich's proof" without attribution. Did Kontsevich also come up with this streamlined version but did not see it as worth publishing?

-

• the link is broken... – Dmitri Mar 8 '11 at 16:29
• Should $N_d$ appear on both sides of the given recursion? – Daniel Litt Mar 8 '11 at 17:18
• That's harmless, since all terms where $k$ or $l$ are equal to $d$ will vanish anyway. Perhaps it would have been clearer to write the sum over all $k+l = d$, $k \geq 1$, $l \geq 1$. – Dan Petersen Mar 8 '11 at 17:33
• @Dan: I think Daniel's referring to the typo $N_d$ on the RHS. – ndkrempel Mar 8 '11 at 19:16
• Oh. Thanks. Yes, that was a typo. – Dan Petersen Mar 8 '11 at 19:49
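For reference, the recursion itself is easy to run; a short sketch seeded with the classical value $N_1 = 1$ (a unique line through two points):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d: int) -> int:
    # Kontsevich's recursion for rational plane curves of degree d
    # through 3d - 1 general points, with N(1) = 1.
    if d == 1:
        return 1
    return sum(
        N(k) * N(d - k) * k**2 * (d - k)
        * ((d - k) * comb(3*d - 4, 3*k - 2) - k * comb(3*d - 4, 3*k - 1))
        for k in range(1, d)
    )

print([N(d) for d in range(1, 6)])  # [1, 1, 12, 620, 87304]
```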
2015-07-08 02:31:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8352667093276978, "perplexity": 331.0098897807971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375635604.22/warc/CC-MAIN-20150627032715-00212-ip-10-179-60-89.ec2.internal.warc.gz"}
http://connection.ebscohost.com/c/articles/44500038/blow-up-examples-yamabe-problem
TITLE: Blow-up examples for the Yamabe problem

AUTHOR(S): Marques, Fernando C.

PUB. DATE: November 2009

SOURCE: Calculus of Variations & Partial Differential Equations; Nov 2009, Vol. 36 Issue 3, p377

DOC. TYPE: Article

ABSTRACT: It has been conjectured that if solutions to the Yamabe PDE on a smooth Riemannian manifold $(M^n, g)$ blow up at a point $p \in M$, then all derivatives of the Weyl tensor $W_g$ of $g$, of order less than or equal to $[\frac{n-6}{2}]$, vanish at $p \in M$. In this paper, we will construct smooth counterexamples to the Weyl Vanishing Conjecture for any n ≥ 25.

ACCESSION #: 44500038

## Related Articles

• Viscosity Solutions to Second Order Parabolic PDEs on Riemannian Manifolds. Zhu, Xuehong // Acta Applicandae Mathematica; Sep 2011, Vol. 115 Issue 3, p279. In this work we consider viscosity solutions to second order parabolic PDEs $\partial_t u + F(t, x, u, du, d^2u) = 0$ defined on complete Riemannian manifolds with boundary conditions. We prove comparison, uniqueness and existence results for the solutions. Under the assumption that the manifold M has...

• An Estimate for a Fundamental Solution of a Parabolic Equation with Drift on a Riemannian Manifold. Bernatska, J. // Siberian Mathematical Journal; May/Jun 2003, Vol. 44 Issue 3, p387. We construct a fundamental solution for a parabolic equation with drift on a Riemannian manifold of nonpositive curvature. We obtain some estimates for this fundamental solution that depend on the conditions on the drift field.

• An addition theorem for the manifolds with the Laplacian having discrete spectrum. Kuz'minov, V. I.; Shvedov, I. A. // Siberian Mathematical Journal; May 2006, Vol. 47 Issue 3, p459. The question of the preservation of discreteness of the spectrum of the Laplacian acting in a space of differential forms under the cutting and gluing of manifolds reduces to the same problem for compact solvability of the operator of exterior derivation. Along these lines, we give some...

• Carleman Estimates for Second-Order Hyperbolic Equations. Romanov, V. // Siberian Mathematical Journal; Jan 2006, Vol. 47 Issue 1, p135. In the space of variables $(x, t) \in \mathbb{R}^{n+1}$, we consider a linear second-order hyperbolic equation with coefficients depending only on x. Given a domain $D \subset \mathbb{R}^{n+1}$ whose projection to the x-space is a compact domain Ω, we consider the question of construction of a...

• Biharmonic maps from a complete Riemannian manifold into a non-positively curved manifold. Maeta, Shun // Annals of Global Analysis & Geometry; Jun 2014, Vol. 46 Issue 1, p75. We consider biharmonic maps $\phi :(M,g)\rightarrow (N,h)$ from a complete Riemannian manifold into a Riemannian manifold with non-positive sectional curvature. Assume that $p$ satisfies $2\le p <\infty$. If for such a $p$, $\int _M|\tau (\phi )|^{p}\,\mathrm{d}v_g<\infty$...

• A LIOUVILLE THEOREM FOR F-HARMONIC MAPS WITH FINITE F-ENERGY. Kassi, M'Hamed // Electronic Journal of Differential Equations; 2006, Vol. 2006, Special section p1. Let (M, g) be an m-dimensional complete Riemannian manifold with a pole, and (N, h) a Riemannian manifold. Let $F : \mathbb{R}_+ \to \mathbb{R}_+$ be a strictly increasing C² function such that F(0) = 0 and $d_F := \sup(tF'(t)(F(t))^{-1}) < \infty$. We show that if $d_F < m/2$, then every F-harmonic map u :...

• Functional Integrals for the Schrodinger Equation on Compact Riemannian Manifolds. Butko, Ya. A. // Mathematical Notes; Jan/Feb 2006, Vol. 79 Issue 1/2, p178. In this paper, we represent the solution of the Cauchy problem for the Schrodinger equation on compact Riemannian manifolds in terms of functional integrals with respect to the Wiener measure corresponding to the Brownian motion in a manifold and with respect to the Smolyanov surface measures...

• Steady Ricci flows. Shevelev, Yu. // Doklady Mathematics; Nov 2015, Vol. 92 Issue 3, p778. Steady solutions for Ricci flows are given. A class of Riemannian 3-manifolds related to the geometry of a surface is considered. The components of the metric tensor, which reproduce the Riemannian space and a triorthogonal coordinate system, are determined by a system of partial differential...

• A Sobolev Poincaré type inequality for integral varifolds. Menne, Ulrich // Calculus of Variations & Partial Differential Equations; Jul 2010, Vol. 38 Issue 3/4, p369. In this work a local inequality is provided which bounds the distance of an integral varifold from a multivalued plane (height) by its tilt and mean curvature. The bounds obtained for the exponents of the Lebesgue spaces involved are shown to be sharp.

• Three-dimensional Riemannian manifolds with constant principal Ricci curvatures... Bueken, Peter // Journal of Mathematical Physics; Aug 96, Vol. 37 Issue 8, p4062. Studies the nonhomogeneous three-dimensional Riemannian manifolds with constant principal Ricci curvatures. Basic differential equations; Curvature invariants and homogeneity; Nonhomogeneous solutions.
2017-12-14 10:30:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7020354270935059, "perplexity": 837.781868288311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948543611.44/warc/CC-MAIN-20171214093947-20171214113947-00676.warc.gz"}
https://proofwiki.org/wiki/Basel_Problem/Historical_Note
# Basel Problem/Historical Note

## Historical Note on Basel Problem

The Basel Problem was first posed by Pietro Mengoli in $1644$. Its solution is generally attributed to Leonhard Euler, who solved it in $1734$ and delivered a proof in $1735$. However, it has also been suggested that it was in fact first solved by Nicolaus I Bernoulli. Jacob Bernoulli had earlier established that the series was convergent, but had failed to work out what it converged to. The problem is named after Basel, the home town of Euler as well as of the Bernoulli family.

If only my brother were alive now. -- Johann Bernoulli
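For reference, the series in question and the value Euler found for it:

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}$$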
2019-11-21 17:38:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.818985104560852, "perplexity": 1129.5581137523648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670921.20/warc/CC-MAIN-20191121153204-20191121181204-00217.warc.gz"}
https://hal.archives-ouvertes.fr/hal-01024287
# Efficient Evaluation of Hyper-Rectangular Blocks of Update Operations Applied to General Data Structures

Abstract: In this paper we present novel solutions for the following problem: We have a general data structure $DS$ and a set of update operations organized into a $D$-dimensional cube of side $N$ (thus, there are $N^D$ update operations). We are interested in efficiently evaluating range queries of the following type: compute the result of applying all the update operations within a hyper-rectangular block $B$ of the $D$-dimensional cube to $DS$ (considering that $DS$ is initially empty). The result of applying the updates consists of computing some aggregate values over the data structure. We consider that the order of applying the updates is irrelevant (i.e. the update operations are commutative) and that the aggregate results corresponding to a block of updates cannot easily be computed by combining the results of a set of sub-blocks whose disjoint union is $B$. However, the results can be efficiently maintained after each update operation, if the operations are performed sequentially in any order.

Document type: Preprints, Working Papers, ... Cited literature [13 references]

https://hal.archives-ouvertes.fr/hal-01024287
Contributor: Mugurel Ionut Andreica
Submitted on: Thursday, July 17, 2014 - 5:01:24 PM
Last modification on: Saturday, June 12, 2021 - 8:30:03 PM
Long-term archiving on: Tuesday, October 20, 2015 - 4:33:30 PM

### Files

Andreica_Grigorean_Parvu_Tapus... Files produced by the author(s)

### Identifiers

• HAL Id: hal-01024287, version 1

### Citation

Mugurel Ionut Andreica, Andrei Grigorean, Andrei Parvu, Nicolae Tapus. Efficient Evaluation of Hyper-Rectangular Blocks of Update Operations Applied to General Data Structures. 2013. ⟨hal-01024287⟩
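As an illustration of the problem setting (my own sketch with made-up names, in the 2-D case; this is not code from the paper): the naive baseline replays every update inside the queried block sequentially and then reads off an aggregate, which is exactly the per-query cost the paper aims to beat.

    # Naive baseline for D = 2: apply all updates inside a block sequentially,
    # then compute an aggregate over the (initially empty) data structure DS.
    def evaluate_block(updates, block):
        """updates[i][j] is a commutative update (key, delta); block = (r1, r2, c1, c2)."""
        r1, r2, c1, c2 = block
        ds = {}  # DS: here just a counter map
        for i in range(r1, r2 + 1):
            for j in range(c1, c2 + 1):
                key, delta = updates[i][j]
                ds[key] = ds.get(key, 0) + delta  # results maintained after each update
        return max(ds.values())  # some aggregate value over DS

    updates = [[(i * j % 3, 1) for j in range(4)] for i in range(4)]
    print(evaluate_block(updates, (0, 2, 1, 3)))  # aggregate over a 3x3 block -> 5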
2022-01-17 23:32:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5778412222862244, "perplexity": 2092.967940740833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00549.warc.gz"}
https://www.scienceforums.net/topic/30907-magnetite-a-simple-method/
# Magnetite, a Simple method!

## Recommended Posts

after reading some of the difficulties making Magnetite in other threads, I took up the challenge of finding a more Simple and near foolproof method that's also much cheaper! rather than using the commonly known FeCl3 method, I'm going an alternative route. you need only 2 readily available chems for this: Iron Sulphate (FeSO4), which you can get from almost Any gardening store as Moss killer for lawns, and it's also pretty easy to make if you wanted to; and simple Washing Soda (Na2CO3). if for some bizarre reason you can't get this, then use baking soda that's been heated to a high heat for a while, this will convert it to washing soda also. here: washing soda on the left, iron sulphate on the right, I recrystallised the iron sulphate myself from Moss killer.

dissolve an equal weight of each in water. now I know that a mole of iron sulphate is a little heavier than a mole of the soda by about 10 grams, but we want the carbonate in excess and also to keep the method simple. mix both solutions whilst stirring well, it will Instantly make a horrible gray/green "Mud" and thicken up a little too, this is normal, keep mixing. when it's all mixed feel free to add more water and mix really well, leave it to stand now and you'll see the "Mud" start to settle to the bottom leaving a murky liquid on the top; pour this liquid off carefully so as not to lose any "mud". keep doing this at least 4 times, making sure you wash out as much of the soluble sodium sulphate as you can.

now you need to filter this "mud"; a plain coffee filter is ideal for this, it'll catch all the Iron Carbonate (Mud) that you've made and get rid of what should be just water by now. keeping it in the filter paper, put it somewhere to dry out; on a sheet of plastic out in the sun is fine. when it's dry it will crumble very easily and look just like Rust powder. you Now have to heat this up very hot to decompose the carbonate, I used a crucible and bunsen burner: the carbonate is on the left, in the crucible is now the Magnetite! you'll notice during heating that the Brown rusty carbonate will go Black, this is normal; keep heating and keep the lid ON during this, don't allow too much or any Air to get in, if you do you'll end up with an impure product. let it cool naturally now (do try to make sure it's cool before touching it!) it'll take on a deep red to black color as shown (it's a bit more red as I took the lid off to watch so I could give you more data). a simple test with a magnet: a little bit on the RED side, but I'll provide a further pic a bit later of the pure Black stuff. yes, you'll notice the magnet is in a plastic bag; how else would I get it off the magnet if the powder decided to cover it! Have Fun!

====================================================================

here's what it Should look like when you're not tempted to take the lid off during the reaction: as you can see the stuff even sticks to the spatula that isn't even Magnetic, at least it Shouldn't be. how it should look with the lid on, and an idea of the heat used: but don't worry if you don't have a crucible, here's another method that works just as well! you can use the lid off another tin can for a cover, but Do burn off all the plastic and paint coating 1st. and for Completeness here are All the iron compounds featured and mentioned in this thread: it gives a nice idea of Color.

Edited by YT2095 multiple post merged

##### Share on other sites

Are you sure you didn't just make maghemite?
http://en.wikipedia.org/wiki/Maghemite

##### Share on other sites

FeCO3 is green. It oxidises very quickly in air to a godforsaken mixture of Fe(II) and Fe(III) carbonates/oxides. I think that you are getting some mixed oxidation state oxide, but I doubt it's close to the stoichiometry for magnetite.

##### Share on other sites

That's Interesting! and would explain why the Mud sludge when made is a green/gray, but when dry goes Brown. it Does react completely with HCl giving off plenty of CO2, leaving a yellow soln behind and no PPT; my Fe2O3 does not, in fact it's pretty inert to most acids (probably calcined). although this: http://hyperphysics.phy-astr.gsu.edu/Hbase/Minerals/siderite.html in the top incarnation looks very much the same as my dried carbonate. and yes, the 1st try with the lid off is a good combination of both oxides, but the last one (the black gray powder) is indeed the magnetite; it's insanely sensitive to even the mildest magnetic field as well, as demonstrated with the spatula. the important thing to remember is that you MUST keep the lid ON during the heating and cooling, sneaking a peek will ruin it. incidentally, the piles of powder on the paper got put into another crucible and a quantity of carbon was added and mixed well. it was then heated and cooled, and left behind a gray black powder as well, but there were also shiny crystalline bits in there too. I guess the thing to do Now is to make some ferrofluid with it and test it out, just need to get some Oleic acid

Edited by YT2095

##### Share on other sites

OK, it starts as Fe(II); it gains some random amount of oxidation as it dries, then you calcine it in the (near) absence of air. How does it know to come out as exactly Fe3O4?

##### Share on other sites

that's a very good question, and one I don't have an exact answer for but can only hypothesise as to Why it works. my idea goes like this: since Fe2O3 is a 70:30 ratio of iron to oxygen and Fe3O4 is a 72.3:27.6 ratio, the Fe3O4 is favoured in a reducing atmosphere, whereas the Fe2O3 would be in a Less reducing (or more Oxidising) atmosphere. which would bear out my latter experiment of mixing all the oxides on that paper with carbon dust and reheating to leave a very magnetic black powder. I predict that heating this again with the lid OFF will result in a Red powder of Fe2O3. in fact my results do show this: in the 1st batch I made, where I took the lid off to observe, I ended up with a reddish powder, but had the Black powder when I didn't take the lid off to peek. And the black powder is Intensely magnet sensitive whereas the reddish attempt (peeking) is only marginal. Also I can safely factor in that my Propane bunsen isn't anywhere near hot enough to affect either Fe2O3 or Fe3O4 directly with heat alone, but needs the chemical reduction/oxidation to make them change. Also on a NASA site I read that iron carbonate will form magnetite on thermal decomp or high power mechanical shock (like an explosion or the likes). Other than that, I really have no Exact idea of how or Why it works

====================================================================

after doing a little more research it seems that my Hypothesis is correct: you can quite easily reduce Fe2O3 to Fe3O4 using H2 and/or other organic substances at temps ranging 270°C - 600°C. furthermore you can reverse this by Oxidation at similar temps. and to test this, I have that pure Fe2O3 here that isn't at all magnetic (not even with a NIB magnet) and have mixed that with a little powdered carbon and heated that in a closed crucible. result: Jet black highly magnetic Powder, as predicted

Edited by YT2095 multiple post merged
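The overall chemistry being described, summarised (my reading of the thread, not the poster's own equations; the CO/CO2 split in the decomposition step is an assumption, chosen so the closed-crucible, self-reducing reaction balances):

$$\mathrm{FeSO_4 + Na_2CO_3 \rightarrow FeCO_3\!\downarrow + Na_2SO_4}$$

$$\mathrm{3\,FeCO_3 \xrightarrow{\ \Delta,\ \text{lid on}\ } Fe_3O_4 + 2\,CO_2 + CO}$$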
##### Share on other sites

to get slightly off topic from the oxidation states, you said you needed oleic acid.... you can get the oleic acid here http://www.hometrainingtools.com/product_categories/70/products/3005-oleic-acid-30-ml and if you are really desperate, don't use vegetable oil, but you could use olive oil. olive oil has a higher concentration of oleic acid. it is of course not a pure source; i wonder the best way to break it down to its individual components... the only problem that I had when I tried to make ferrofluid a while back was finding kerosene to suspend the magnetite oleate (i think that's not the right name for it....), unfortunately they don't sell it where I live, and unfortunately I don't have access to a fractionating distiller column. any other suggestions?

##### Share on other sites

Hmmm... that sounds easily doable if you're REALLY pushed. put Olive oil in the fridge, the Oleic acid will solidify, filter off the liquid portion. saponify the remaining impure Oleic acid with NaOH to make the sodium salt (sodium Oleate), again filter this to remove water and glycerol. then add HCl to leave the insoluble Higher purity Oleic acid. but I think I'll just buy it, I'm in no hurry anyway.

##### Share on other sites

(quoting the earlier post: "…I guess the thing to do Now is to make some ferrofluid with it and test it out, just need to get some Oleic acid")

this is all excellent experimentation and I have to commend you. However, I don't think you've totally understood the ferrofluid synthesis. the reason they synthesise the magnetite like that, in solution, is that they're trying to form a stable colloid, which won't precipitate (flocculate) over time. If you take your powder, which I have no doubt is at least 80 or 90% magnetite, and make a sludge with some oil or other solvent, it'll be magnetic, and act a little bit like a ferrofluid, but when all is said and done it won't be a liquid, or a colloid. At best it will be a temporary suspension. None the less fun for it, though, I imagine.

##### Share on other sites

• 4 months later...
Supposedly modern synthetic engine oils have clever stuff to keep particulates in suspension, so possibly that might work nicely as a ferrofluid like that. However, I don't know if the anti-oxidant additives etc will mess with it.
2021-03-06 00:56:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4571741223335266, "perplexity": 2542.944831531278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374217.78/warc/CC-MAIN-20210306004859-20210306034859-00375.warc.gz"}
https://math.stackexchange.com/questions/3156409/does-the-trigonometric-identity-cos2-theta-sin2-theta-1-apply-even-whe
# Does the trigonometric identity $\cos^2(\theta)+\sin^2(\theta)=1$ apply even when $\theta$ is not in radians or degrees but simply a fraction? I have been trying to solve this question but have so far been unable to do so as the question does not seem to be "cohesive throughout". Here is my reasoning: The question is: given that $$\cos A=−3/5$$, $$\sin B=−5/13$$, and both $$A$$ and $$B$$ are in the 3rd quadrant, find $$\cos^2(A)+\sin^2(A)$$. I know of the trigonometric identity $$\cos^2(\theta)+\sin^2(\theta)=1$$. In this identity however, $$\theta$$ is in place of $$A$$. cos $$A$$ is a fraction in the case of the question, however I often see $$\theta$$ as in the radian or degree form. Does this mean that the trigonometric identity does not apply to the question or is my assumption based on familiarity incorrect? Also, if the identity were to apply to the question, does the question have an answer or not? When I attempted to solve this question, I did not get $$1$$ as the answer. • Why can't a fraction be the measure of some angle in radians? – Hrit Roy Mar 21 at 6:21 • The fraction however in this case is not in radians. – James Mar 21 at 6:22 • You have not mentioned what that fraction is, but it doesn't matter. It will indeed be the measure of SOME angle in radians. (And also some angle in degrees.) – Hrit Roy Mar 21 at 6:26 • Of course it does. – David G. Stork Mar 21 at 6:26 • By the way, you are saying that $A$ is a fraction, but do you mean $\cos A$ is a fraction (note that it is $\color{blue}{\cos}(A)$ that is $-3/5$, not $A$ itself)? – Minus One-Twelfth Mar 21 at 6:28 The question tells you that $$\cos A$$ is a fraction, not that $$A$$ itself is a fraction. Like many of the cases I imagine you've encountered, $$A$$ is just some angle that can be expressed in degrees or radians. While you could calculate the value of $$\cos^2\hspace{-0.9mm}A+\sin^2\hspace{-0.9mm}A$$ by hand, I can assure you that it will be $$1$$ in this case. In general, $$\cos^2(\text{something})+\sin^2(\text{something})=1$$ will be true unless the "something" happens to be an expression that involves division by zero, taking the square root of a negative number, or something else similarly problematic. Here's how you could calculate the value of $$\cos^2\hspace{-0.9mm}A+\sin^2\hspace{-0.9mm}A$$ without relying on that identity: Since you know that $$A$$ is some angle in quadrant III, and you know that $$\cos A=-3/5$$, consider the triangle I've drawn in Desmos here, and let $$A$$ be the angle at the origin. By design, $$\cos A=\frac{\text{adjacent}}{\text{hypotenuse}}=\frac{-3}{5}.$$ From looking at the triangle, it's also evident that $$\sin A=\frac{\text{opposite}}{\text{hypotenuse}}=\frac{-4}{5}.$$ Both of those negative signs come from the fact that we're moving in the negative $$x$$ and $$y$$ directions, since of course it's impossible for a side of a triangle to have a negative length. At this point it's just calculation: $$\cos^2\hspace{-0.9mm}A+\sin^2\hspace{-0.9mm}A=\left(\frac{-3}{5}\right)^2+\left(\frac{-4}{5}\right)^2=\frac{9}{25}+\frac{16}{25}=\frac{25}{25}=1$$ • If possible could you show me how it is done by hand just so I can see the truth behind this identity? I have attempted to do it by hand but it comes out to be approximately 0.5, not 1. – James Mar 21 at 6:33 • Of course; give me a minute to edit my answer. – Robert Howard Mar 21 at 6:38 • In the case of the question, sin A= -5/13. With such a value for sin A, the result is not 1 according to my calculations. 
– James Mar 21 at 6:58 • You said in your question that $\sin B=-5/13$, not $\sin A=-5/13$, suggesting that there are two separate angles $A$ and $B$. – Robert Howard Mar 21 at 7:00 • Thank you Mr. Howard. Your detailed, step-by-step explanations are undoubtedly appreciated by the Mathematics Stack Exchange community. – James Mar 21 at 7:16

The fraction is never in radians or degrees; it is a dimensionless quantity. The input to the trigonometric function is in radians or degrees. So the "identity" $$\sin^2 x+\cos^2 x =1$$ always holds, which is why it is called an identity (something that always holds). \begin{aligned}\sin\theta &=\dfrac{\overbrace{\text{side opposite}}^{\text{units}}}{\underbrace{\text{hypotenuse}}_{\text{units}}}\\\cos\theta&=\dfrac{\overbrace{\text{side adjacent}}^{\text{units}}}{\underbrace{\text{hypotenuse}}_{\text{units}}}\end{aligned}

• Just to clarify, you mean to say that even though x is a fraction that is neither in degree nor radian form, the trigonometric identity will always hold true? – James Mar 21 at 6:37 • Also, why did you say that the input is in radians or degrees? – James Mar 21 at 6:37 • Since the angle has to be defined in terms of some units, it's either radians or degrees, but the ratio is a dimensionless quantity. – Paras Khosla Mar 21 at 7:13
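For completeness, here is the standard general argument (an addition, not part of either answer above): take any point $(x, y)$ on the terminal side of the angle at distance $r = \sqrt{x^2+y^2} \neq 0$ from the origin, so that $\cos\theta = x/r$ and $\sin\theta = y/r$. Then

$$\cos^2\theta + \sin^2\theta = \left(\frac{x}{r}\right)^2 + \left(\frac{y}{r}\right)^2 = \frac{x^2+y^2}{r^2} = \frac{r^2}{r^2} = 1,$$

regardless of what units, if any, the angle is measured in.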
2019-12-10 08:20:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 28, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8852782845497131, "perplexity": 225.8234914777726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527010.70/warc/CC-MAIN-20191210070602-20191210094602-00269.warc.gz"}
https://ask.sagemath.org/questions/38291/revisions/
# Revision history

### How to obtain the resistance distance matrix of a graph?

I tried to compute the resistance distance matrix of a graph g by first evaluating the Moore-Penrose inverse of the Laplacian matrix, but the result is not accurate, the entries are slightly different. I tried with the following algorithm.

    L = g.laplacian_matrix()
    from scipy import linalg
    M = matrix(linalg.pinv(L))
    R = matrix(QQ, g.order())
    for i in range(g.order()):
        for j in range(g.order()):
            if i != j:
                R[i,j] = M[i,i] + M[j,j] - M[i,j] - M[j,i]
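One way to make this exact (a suggestion, not part of the original question): for a connected graph the Moore-Penrose inverse of the Laplacian satisfies $L^+ = (L + \frac{1}{n}J)^{-1} - \frac{1}{n}J$, where $J$ is the all-ones matrix, so the whole computation can stay over QQ instead of going through scipy's floating-point pinv:

    # Exact resistance distances for a connected graph g (Sage sketch).
    # Uses L^+ = (L + J/n)^(-1) - J/n, valid when g is connected.
    n = g.order()
    L = g.laplacian_matrix().change_ring(QQ)
    Jn = matrix(QQ, n, n, lambda i, j: 1/n)   # (1/n) * all-ones matrix
    M = (L + Jn).inverse() - Jn               # exact Moore-Penrose inverse of L
    R = matrix(QQ, n, n, lambda i, j: M[i,i] + M[j,j] - M[i,j] - M[j,i])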
2020-02-22 08:14:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9267852902412415, "perplexity": 1173.763894774988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145654.0/warc/CC-MAIN-20200222054424-20200222084424-00435.warc.gz"}
https://codegolf.stackexchange.com/questions/251919/calculate-pi-unto-a-point-using-the-nilakantha-series
# Calculate Pi unto a point using the Nilakantha series Your task: given a nonzero positive number i, calculate pi using the Nilakantha series unto i terms. The Nilakantha series is as follows: $$\text 3 + \frac{4}{2*3*4} - \frac{4}{4*5*6}+\frac{4}{6*7*8} - ...$$ 3 is the first term, 4/2*3*4 is the second, -4/4*5*6 is the third, and so on. Notice that for the nth term: • $$\text S_1 = 3$$ • $$\text S_n = \frac{4 \times (-1)^n}{2n \times (2n-1) \times (2n-2)}$$ • The approximation of pi by summing up these terms is $$\text S_1 +\text S_2\text + … +\text S_n$$ Test cases: In = Out 1 = 3 2 = 3.16666666667 3 = 3.13333333333 4 = 3.1452381 Notice the pattern of the numbers approximating towards pi. Floating point issues are OK. This is so shortest answer wins! EDIT: by default this is 1-indexed, but if you want 0-indexed no problem, just mention it. And even infinitely printing the approximations with no input is Okay. • May i by 0-based (e.g. 0=3, 1=3.166..., 2=3.133..., 3=3.145...)? Also, is there a reason for overwriting the default sequence rules? Or is outputting an infinite list of all items also allowed, without taking an input? Sep 15 at 13:51 • If you want to, ok. For the sequence issue, here we’re trying to calculate a number using a series, not the terms of the series. Sep 15 at 17:15 • But since you’ve posted an answer assuming it already, yeah you can. Sep 15 at 17:42 • Is it okay to output rational numbers rather than floating point? Sep 15 at 19:51 • Use \times instead of * in mathjax – qwr Sep 16 at 4:15 # Python 3.8 (pre-release), 38 bytes (@xnor) f=lambda n,i=.5:i//n/n or 2/i-f(n,i+1) Try it online! ### Python 3.8 (pre-release), 40 bytes (@xnor) f=lambda n,i=1:i//2//n/n or 4/i-f(n,i+2) Try it online! ### Python, 45 bytes f=lambda n,s=1:4/(2*s-1)-(s//n/s or f(n,s+1)) Attempt This Online! 1-based. n has to be positive. ### Python, 50 bytes f=lambda n,s=0:n and f(n-1,4/(n+n-1)-s-0**s/n)or s Attempt This Online! This uses $$\\frac 4 {2x(2x+1)(2x+2)} =\frac 1 x + \frac 1 {x+1} - \frac 4 {2x+1} \$$ and that inside the full sum the first two terms cancel. 1-based. Can handle 0. # Python 3.8 (pre-release), 45 bytes (@xnor) f=lambda n:0**n*3or(-1)**n/n/(n-~n)/~n+f(n-1) Try it online! ### Python, 50 bytes f=lambda n:0**n*3or 1/(n|1)/(~n-n)/(n%-2^n)+f(n-1) Attempt This Online! 0-based. • Nice methods! It looks like your last one can be simplified to 45 bytes, and probably this can be made shorter. – xnor Sep 15 at 21:50 • @xnor thanks, looks like I had over engineered that one. Sep 15 at 22:03 • Shifting around your first one for 42 bytes – xnor Sep 15 at 22:13 • 38 bytes – xnor Sep 15 at 22:17 • Dammit @xnor I thought I was reasonably good at golfing. Sep 15 at 22:21 # Jelly, 13 bytes RḤrƝP€4÷Ṛḅ-+3 A monadic Link that accepts a positive integer and yields the approximation (up to the floating point accuracy). Try it online! ### How? RḤrƝP€4÷Ṛḅ-+3 - Link: positive integer, n e.g. 4 R - range [1,2,3,4] Ḥ - double [2,4,6,8] Ɲ - for neighbours: r - inclusive range [[2,3,4],[4,5,6],[6,7,8]] P€ - product of each [24,120,336] 4÷ - four divided by those [1/6,1/30,1/84] Ṛ - reverse [1/84,1/30,1/6] ḅ- - convert from base -1 sum([1/84,-1/30,1/6])=0.14523809523809522 +3 - add three 3.14523809523809522 # JavaScript (ES6), 32 bytes This version is based on @loopy-walt's answer, golfed by @xnor. f=(n,i=.5)=>i<n?2/i-f(n,i+1):1/n Try it online! # JavaScript (ES6), 39 bytes f=n=>--n?f(n)+(4-n%2*8)/(n+=n)/++n/~n:3 Try it online! ### Commented f = n => // f is a recursive function taking the input n --n ? 
// decrement n; if it's not 0: f(n) + // do a recursive call and add: (4 - n % 2 * 8) // -4 if n is odd, +4 otherwise / (n += n) // divided by 2n / ++n // divided by 2n + 1 / ~n // divided by -(2n + 2) : // else: 3 // end of recursion: return the integer part # Factor + koszul math.unicode,  68 64 bytes [ ""3 rot [0,b) [ -1^ 4 reach Π / * + [ 2 v+n ] dip ] each ] Attempt This Online! 0-indexed. Note the string literal has the control characters 2, 3, and 4 embedded, making it equivalent to the sequence { 2 3 4 }. You can see these characters on ATO. ! 3 "" ! 3 { 2 3 4 } 3 ! 3 { 2 3 4 } 3 rot ! { 2 3 4 } 3 3 [0,b) ! { 2 3 4 } 3 { 0 1 2 } [ ... ] each <<for each element in { 0 1 2 }...>> <<first iteration>> ! { 2 3 4 } 3 0 -1^ ! { 2 3 4 } 3 1 4 ! { 2 3 4 } 3 1 4 reach ! { 2 3 4 } 3 1 4 { 2 3 4 } Π ! { 2 3 4 } 3 1 4 24 / ! { 2 3 4 } 3 1 1/6 * ! { 2 3 4 } 3 1/6 + ! { 2 3 4 } 3+1/6 [ 2 v+n ] dip ! { 4 5 6 } 3+1/6 <<second iteration>> ! { 4 5 6 } 3+1/6 1 <<and so on>> # Vyxalḋ, 19 bytes ƛune4*nd:‹:‹**/;∑3+ ḋ flag to print rationals in their decimal form Explanation: ƛune4*nd:‹:‹**/;∑3+ ƛ ; Map lambda through inclusive range 1 to input une Push -1 to the power n 4* Multiply by 4 and push that nd: Multiply n by 2 and duplicate ‹: Decrement and duplicate ‹** Decrement and push product of denominator / Divide 4*(-1)**n by 2n*(2n-1)*(2n-2) ∑3+ Sum list and add 3 Try it Online! # R, 51 bytes \(k,n=2:k*2)if(k>1,3+sum(4*1i^n/n/(n-1)/(n-2)),3) Attempt This Online! Uses the fact that $$\(-1)^n=i^{2n}\$$. ### R, 51 bytes \(k,n=2:k)if(k>1,3+sum((-1)^n/n/(2*n-1)/(n-1)),3) Attempt This Online! Uses the formula but with simplifying the fraction to $$\\frac{(-1)^n}{n(2n-1)(n-1)}\$$. # 05AB1E, 20 16 bytes 3λè®Nm4*N·2Ý-P/+ Outputs the 1-based $$\n^{th}\$$ value (by starting with $$\a(0)=3\$$ and where $$\a(1)\$$ is calculated as $$\3\$$ as well). Explanation: λ # Start a recursive environment, è # to calculate a(input) # (which is output implicitly afterwards) 3 # Start with a(0)=3 # Where every following a(n) is calculated by: # (implicitly push the previous term a(n-1)) ®Nm # Push (-1) to the power n 4* # Multiply it by 4 N· # Push 2n 2Ý # Push list [0,1,2] - # Subtract each from the 2n: [2n,2n-1,2n-2] P # Take the product of this triplet: 2n*(2n-1)*(2n-2) / # Divide the earlier 4*(-1)**n by this (2n*(2n-1)*(2n-2)) + # Add it to the previous term a(n-1) # C (gcc), 80 $$\\cdots\$$ 71 70 bytes i;float s,m;float f(n){for(m=4,s=i=3;i++<2*n;s-=m/i/~-i/(i++-2))m=-m;} Try it online! Saved 5 6 bytes thanks to Kevin Cruijssen!!! Saved 4 bytes thanks to Arnauld!!! • 66 bytes Sep 17 at 8:54 # Desmos, 36 bytes f(x)=3+∑_{n=2}^x(-1)^n/n(2nn-3n+1) Try it on Desmos! Near direct copy of the definition, simplified and rearranged only a little bit. 1 indexed. Breakdown: f(x)=3+∑_{n=2}^x(-1)^n/n(2nn-3n+1) full function f(x)= function definition (not sure if required) 3+ 3 plus ∑ The sum from _{n=2} n=2 ^x to n=x (note that this defaults to 0 if x is less than 2) of (-1)^n -1 when n is odd, 1 if n is even / divided by n(2nn-3n+1) 2n^3-3n^2+n # Vyxalḋ, 16 bytes 4ÞNÞ∞d2ʀv+vΠ/3p¦ Try it Online! Outputs as an infinite list. Because Lyxal's sus this code only works for <v2.13.3 and hopefully future versions. The link contains a fix that's a byte longer. 4 # 4 ÞN # [4, -4, 4, -4...] / # Each divided by... Þ∞ # [1, 2, 3...] d # doubled [2, 4, 6...] 2ʀ # [0, 1, 2] v+ # Add to each [[2, 3, 4], [4, 5, 6]...] 
vΠ # Take the product of each 3p # Prepend a 3 ¦ # Take the cumulative sums # Python 3, 49 bytes f=lambda n:n-1and(-1)**n/(2*n*n-n)/~-n+f(n-1)or 3 Try it online! -5 bytes thanks to Mukundan314 • this fails for me Sep 16 at 10:28 • @py3programmer It seems to work on TiO. Could you be more specific? Sep 16 at 12:35 • I think it's because of 1and in the code. Sep 16 at 12:36 • Can you send the link on which it works on? Sep 16 at 12:37 • @py3programmer The link is in the post, if you'd like to try for yourself. As far as I am aware, 1and is legal syntax. Sep 16 at 12:37 # Wolfram Language (Mathematica), 36 35 bytes 3.-Sum[(-1)^n/(2n^3+3n^2+n),{n,#}]& Try it online! Alternatively, and more interestingly, we can express the partial sums in terms of a Lerch transcendent: 36 bytes Pi+(-1)^#(1/#-2LerchPhi[-1,1,#+.5])& Try it online! # Fig, $$\18\log_{256}(96)\approx\$$ 14.816 bytes +3S\n2@Nere+r3hax4 Try it online! Port of Vyxal. Beats both that and osabie, 1.8 bytes longer than Jelly. 0-indexed. +3S\n2@Nere+r3hax4 ax # Range [1, n] h # Double r3 # [0, 1, 2] e+ # Add ^ to every element of ^^ er # Product of each element @N # Negate n2 # Every other element \ 4 # Four divided by ^ S # Sum +3 # Add 3 # Charcoal, 22 21 bytes Try it online! Link is to verbose version of code. Explanation: N Input n as a number E Map over implicit range ι Current value ⎇ If nonzero then ι Current value ﹪ Modulo ² Literal integer 2 ⊗ Doubled ⊖ Decremented ∕ Divided by ι Current value × Multiplied by ι Current value ⊕ Incremented × Multiplied by ι Current value ⊗ Doubled ⊕ Incremented ³ Otherwise literal integer 3 Σ Take the sum I Cast to string Implicitly print Edit: Saved 1 byte by adpating @pajonk's simplification. # x86 32-bit machine code, 37 33 bytes D9 E8 D8 C0 D8 C0 D9 E8 F9 D9 E8 D8 F1 D9 E8 DE C2 D8 F1 D8 C0 F5 72 F5 DE E2 E2 ED DD D8 D9 E1 C3 Try it online! Following the fastcall calling convention, this takes a number i in ECX and returns the sum of the first i terms on the FPU register stack. In assembly: f: fld1 fadd st(0), st(0) fadd st(0), st(0) # Example execution for 2nd iteration fld1 # FPU register stack (left is bottom): stc # -3 2 (before) r: fld1 # -3 2 1 fdiv st(0), st(1) # -3 2 1/2 ir: fld1 # -3 2 1/2 1 faddp st(2), st(0) # -3 3 1/2 fdiv st(0), st(1) # -3 3 1/(2*3) fadd st(0), st(0) # -3 3 2/(2*3) cmc #[Repeat the 4 instructions above, using an inner loop: jc ir # -3 3 2/(2*3) 1 # -3 4 2/(2*3) # -3 4 2/(2*3*4) # -3 4 4/(2*3*4) fsubrp st(2), st(0) # 3+4/(2*3*4) 4 loop r fstp st(0) fabs ret Each iteration of the outer loop computes one term $$\\frac4{2n(2n+1)(2n+2)}\$$ and combines it in using the reverse-subtract instruction; as in my answer to a related problem, this handles the alternating signs, but leaves the result with the wrong sign for even i, which is corrected for by taking the absolute value at the end (because all the correct results are positive). The first iteration, if left the same as later iterations, would contain a division by 0. This is prevented by setting CF to 1 at the beginning, so that the inner loop executes once instead of twice during the first iteration, and it computes $$\\frac2{1\cdot2}\$$ instead. # Raku, 36 bytes [\+] 3,{-4*($*=-1)/[*] ^3+2*++$}...* Try it online! This is an expression for the infinite sequence of partial sums. 3 is the first term of the sequence, and the curly braces enclose a generating expression for the subsequent terms. • The dividend is -4 * ($*= -1). The $ here is an anonymous state variable. 
The *= -1 causes it to alternate between -1 and 1. (The first time the expression is evaluated, it's undefined, but since it's being multiplied, it defaults to the multiplicative identity element 1.) Multiplying that by -4 produces the sequence of dividends 4, -4, 4, -4, .... • The divisor is [*] ^3 + 2 * ++$. $ here is another anonymous state variable which the preincrement operator ++ causes to take on the values 1, 2, 3, ..., as it's evaluated for each term of the sequence. Multiplying that by 2 produces 2, 4, 6, .... Those even numbers are added to the range ^3, which means the integers from 0 to 2, producing a sequence of ranges 2, 3, 4, 4, 5, 6, 6, 7, 8, .... Then [*] multiplies those numbers together. [\+] produces the sequence of partial sums of the terms.
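Not an entry, just a reference point: an ungolfed Python version of the series exactly as defined in the question (1-indexed), which the golfed answers above compress in various ways.

    def nilakantha(i):
        """Approximate pi with the first i terms of the Nilakantha series (1-indexed)."""
        total = 3.0                    # S_1 = 3
        sign = 1                       # S_2 is added, S_3 subtracted, ...
        for n in range(1, i):          # produces terms S_2 .. S_i
            k = 2 * n
            total += sign * 4 / (k * (k + 1) * (k + 2))
            sign = -sign
        return total

    print([nilakantha(i) for i in range(1, 5)])
    # [3.0, 3.1666666666666665, 3.1333333333333333, 3.145238095238095]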
2022-09-30 15:53:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47036808729171753, "perplexity": 5754.561299461788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00651.warc.gz"}
https://astarmathsandphysics.com/university-maths-notes/abstract-algebra-and-group-theory/1684-automorphisms.html?tmpl=component&print=1&page=
## Automorphisms

An automorphism is an isomorphism from a group $G$ onto itself.

Example: If $\phi(a+bi) = a-bi$ then $\phi$ is an automorphism of the group of complex numbers under addition. We test the requirements one by one.

1. With $z_1 = a+bi$ and $z_2 = c+di$, $\phi(z_1 + z_2) = (a+c)-(b+d)i = \phi(z_1) + \phi(z_2)$.

2. If $\phi(z_1) = \phi(z_2)$ then $\overline{z_1} = \overline{z_2}$ and so $z_1 = z_2$, so $\phi$ is one to one.

3. $\phi$ is onto since if $z = a+bi$ then $\phi(a-bi) = a+bi$.

4. The mappings $z \mapsto -z$ and $z \mapsto iz$ are similarly automorphisms.

All these automorphisms are length preserving.

A very important automorphism is the inner automorphism $\phi_a(x) = axa^{-1}$, where $a$ is some element of $G$. This is called the automorphism of $G$ induced by $a$.

The inner automorphism of $D_4$ induced by $R_{90}$ (rotation by $90^\circ$) is shown below.

The set of inner automorphisms is a group, as is the set of automorphisms.
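For example (an added illustration, under the usual $D_4$ conventions where $H$ and $V$ denote the reflections of the square in the horizontal and vertical axes): conjugating a reflection by a rotation rotates its axis, so

$$\phi_{R_{90}}(H) = R_{90}\, H\, R_{90}^{-1} = V, \qquad \phi_{R_{90}}(V) = R_{90}\, V\, R_{90}^{-1} = H.$$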
2017-12-15 04:36:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9861227869987488, "perplexity": 2819.1318326868795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948563629.48/warc/CC-MAIN-20171215040629-20171215060629-00380.warc.gz"}
https://www.debugpointer.com/regex/regex-for-uuid
# Regex for UUID validation

UUID (Universally Unique Identifier) is a 128-bit number used to identify information in computer systems. It is typically represented as a string of 32 hexadecimal digits, divided into 5 hyphen-separated groups. UUIDs are used to uniquely identify objects and records, such as files, events, and logins. They are also used for various other purposes, such as for creating random passwords and generating session keys. In this article let's understand how we can create a regex for UUID and how regex can be matched for a valid UUID.

Regex (short for regular expression) is a powerful tool used for searching and manipulating text. It is composed of a sequence of characters that define a search pattern. Regex can be used to find patterns in large amounts of text, validate user input, and manipulate strings. It is widely used in programming languages, text editors, and command line tools.

# Structure of a UUID

A UUID should have the following criteria and structure-

• It should be a 128-bit number.
• It should be 36 characters (32 hexadecimal characters and 4 hyphens) long.
• It should be displayed in five groups separated by hyphens (-).

# Regex for checking if UUID is valid or not

We will have to consider UUIDs for all 5 versions that are actively used in today's context. v1, v2, etc. may be old, but they are still being used globally at massive scale in systems and processes.

Regular Expression for UUID validation for all versions (v1-v5)-

/^[0-9A-F]{8}-[0-9A-F]{4}-[1-5][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/gmi

If you are looking for version-specific regex, you can use the below regex for each version-

## Regex for UUID v1

/^[0-9A-F]{8}-[0-9A-F]{4}-[1][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/gmi

## Regex for UUID v2

/^[0-9A-F]{8}-[0-9A-F]{4}-[2][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/gmi

## Regex for UUID v3

/^[0-9A-F]{8}-[0-9A-F]{4}-[3][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/gmi

## Regex for UUID v4

/^[0-9A-F]{8}-[0-9A-F]{4}-[4][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/gmi

## Regex for UUID v5

/^[0-9A-F]{8}-[0-9A-F]{4}-[5][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/gmi

Test string examples for the above regex-

Input String | Match Output
asd-asd-asd-asd-asd-asd | does not match

Breakdown of /^[0-9A-F]{8}-[0-9A-F]{4}-[1-5][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/gmi-

• ^ asserts position at start of a line
• [0-9A-F]{8} matches exactly 8 characters from the list: 0-9 matches a single character in the range between 0 (index 48) and 9 (index 57), and A-F matches a single character in the range between A (index 65) and F (index 70) (case insensitive)
• - matches the character - with index 45 (2D in hex, 55 in octal) literally
• [0-9A-F]{4} matches exactly 4 hexadecimal characters, as above
• [1-5] matches a single character in the range between 1 (index 49) and 5 (index 53): the version digit
• [0-9A-F]{3} matches exactly 3 hexadecimal characters
• - matches the character - literally
• [89AB] matches a single character in the list 89AB (case insensitive): the variant character
• [0-9A-F]{3} matches exactly 3 hexadecimal characters
• - matches the character - literally
• [0-9A-F]{12} matches exactly 12 hexadecimal characters
• $ asserts position at the end of a line
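As a usage sketch (my own illustration, using Python's re module rather than the JavaScript-style /.../gmi literal shown above):

    import re

    # Mirrors the article's v1-v5 regex; re.IGNORECASE plays the role of the `i` flag.
    UUID_RE = re.compile(
        r'^[0-9A-F]{8}-[0-9A-F]{4}-[1-5][0-9A-F]{3}'
        r'-[89AB][0-9A-F]{3}-[0-9A-F]{12}$',
        re.IGNORECASE,
    )

    print(bool(UUID_RE.match('123e4567-e89b-12d3-a456-426614174000')))  # True
    print(bool(UUID_RE.match('asd-asd-asd-asd-asd-asd')))               # False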
2023-02-03 20:40:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2239060252904892, "perplexity": 1899.514712426791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00680.warc.gz"}
https://www.propbay.co.za/for-sale-property/residential/za/gauteng/sebokeng/760004
# Find a Residential property for sale in Sebokeng

## View all property for sale in Sebokeng

View Sebokeng suburbs starting with:

#### boitumelo

boitumelo has approximately 2 properties for sale. The suburb has a total area of approximately 1.916579 km2.

#### sebokeng hostels

sebokeng hostels has approximately 1 properties for sale. The suburb has a total area of approximately 0.7852595 km2.

#### sebokeng unit 10

sebokeng unit 10 has approximately 3 properties for sale. The suburb has a total area of approximately 5.416581 km2.

#### sebokeng unit 12

sebokeng unit 12 has approximately 3 properties for sale. The suburb has a total area of approximately 3.22591 km2.

#### sebokeng unit 14

sebokeng unit 14 has approximately 3 properties for sale. The suburb has a total area of approximately 2.154297 km2.

#### sebokeng unit 17

sebokeng unit 17 has approximately 1 properties for sale. The suburb has a total area of approximately 2.116715 km2.

#### sebokeng unit 6

sebokeng unit 6 has approximately 2 properties for sale. The suburb has a total area of approximately 1.488925 km2.

#### sobokeng unit 3

sobokeng unit 3 has approximately 3 properties for sale. The suburb has a total area of approximately 3.271026 km2.
2021-01-23 16:56:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8150665163993835, "perplexity": 9054.784666027947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538226.66/warc/CC-MAIN-20210123160717-20210123190717-00757.warc.gz"}
http://www.j.sinap.ac.cn/nst/EN/abstract/abstract2038.shtml
# Nuclear Science and Techniques

《核技术》(英文版) ISSN 1001-8042 CN 31-1559/TL     2019 Impact factor 1.556

Nuclear Science and Techniques ›› 2016, Vol. 27 ›› Issue (2): 34

• NUCLEAR CHEMISTRY, RADIOCHEMISTRY, RADIOPHARMACEUTICALS AND NUCLEAR MEDICINE •

### Synthesis, radiolabeling and biological evaluation of butene amine oxime containing nitrotriazole as a tumor hypoxia marker

Qiang Zhang, Tai-Wei Chu

1. Beijing National Laboratory for Molecular Sciences, Radiochemistry and Radiation Chemistry Key Laboratory of Fundamental Science, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871, China
• Contact: Tai-Wei Chu E-mail: [email protected]

Qiang Zhang, Tai-Wei Chu. Synthesis, radiolabeling and biological evaluation of butene amine oxime containing nitrotriazole as a tumor hypoxia marker. Nuclear Science and Techniques, 2016, 27(2): 34

Abstract: 99mTc-BnAO, as a nonnitroaromatic hypoxia marker, is the subject of intensive research in recent years. In this study, a butene amine oxime–nitrotriazole (BnAO–NT) was synthesized and radiolabeled with 99mTc in high yield. Cellular uptakes of 99mTc-BnAO–NT and 99mTc-BnAO were tested using murine sarcoma S180 and hepatoma H22 cell lines. The highest hypoxic cellular uptake of 99mTc-BnAO–NT was 27.11 ± 0.73 and 14.85 ± 0.83 % for the S180 and H22 cell lines, respectively, whereas the normoxic cellular uptake of the complex was about 4–8 % for both cell lines. For 99mTc-BnAO, the highest hypoxic cellular uptake was 30.79 ± 0.44 and 9.66 ± 1.20 % for the S180 and H22 cell lines, respectively, while the normoxic cellular uptake was about 5 % for both cell lines. Both 99mTc-BnAO–NT and 99mTc-BnAO complexes showed hypoxic/normoxic differentials in the two cell lines, but the results were more significant for the S180 cell line. The in vitro results suggested that S180 may be better than H22 cell line in hypoxic biological evaluation of BnAO complexes. The biodistribution study was tested using a S180 tumor model. The complex 99mTc-BnAO–NT showed a selective enrichment in tumor tissues: At 4 h, the tumor-to-muscle ratio was 3.79 ± 0.98 and the tumor-to-blood ratio was 2.31 ± 0.34. Compared with the results of 99mTc-BnAO, the latter was at the same level. In vitro and in vivo studies demonstrated that 99mTc-BnAO–NT could be a hypoxia-sensitive radiotracer for monitoring hypoxic regions in a sarcoma S180 tumor.
2020-08-04 05:21:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2667994201183319, "perplexity": 14663.38973633516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735860.28/warc/CC-MAIN-20200804043709-20200804073709-00447.warc.gz"}
https://fenicsproject.discourse.group/t/default-absolute-tolerance-and-relative-tolerance/3829/4
# Default absolute tolerance and relative tolerance

Dear All, I have a quick question about the absolute and relative tolerance of the Newton solver in FEniCS: does anybody know what the default values for these two tolerances are in FEniCS? Thanks!

See here.

Hello Nate, thank you so much for pointing me to this, it's very helpful! I guess these are the recommended values for absolute and relative tolerance in general, right? Are there any criteria for choosing these tolerances; in other words, is there an upper bound for the acceptable tolerance values? I guess for different problems in reality, the convergence behaviors will be quite different.

I write the following in the context of FE modelling rather than numerical analysis. I.e. this should be interpreted as tips and tricks. Perhaps someone else can contribute something more rigorous, or their own experience.

#### Do you know what to expect the residual of your system to be to machine precision?

• A Poisson problem with a diffusion coefficient of 1 may yield a residual to machine precision around 10^{-12}.
• For linear elasticity with a Young's modulus of 10^9, you may see convergence to machine precision with a residual of around 10^{-3}.

#### Do you have no a priori information about the magnitude of the residual to machine precision?

Set the relative error tolerance accordingly, e.g. a relative tolerance of 10^{-12} is very precise.

#### Do you only care about solving your system in a loose approximate sense?

Use a larger relative tolerance. This number should still be chosen based on your remaining knowledge of the numerical model.

#### Is your problem linear?

Make sure it converges in one iteration. If it does not, your system is not well defined. Newton's method applied to a linear problem is equivalent to solving that linear problem.

#### Is your problem nonlinear, well posed, and the solution is smooth?

Ensure you have at least quadratic convergence of the residual between iterations. Otherwise you have a malformed Jacobian.

#### Does your Newton solver fail on the first step producing NaNs?

You likely have an initial guess leading to singularities in the Jacobian/residual. E.g., for a solution variable u:

• Zero initial guess with coefficients of the type 1/u or \sqrt{u}
• Piecewise constant initial guess when invoking \nabla u^{-1} or \sqrt{\varepsilon_{\mathrm{II}}(u)}

#### Does the Newton solver converge initially, then diverge and blow up?

Try easing the relaxation parameter, or design a more sophisticated relaxation parameter, e.g. here. This should encourage convergence for "highly nonlinear" problems.

#### Does the Newton solver slowly diverge or get stuck?

Ensure your initial guess is sufficiently in an attractor region.

Hi Nate, thank you so much for the kind help!
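For reference, a sketch of how these tolerances are typically set in legacy FEniCS/DOLFIN (parameter names as in the legacy Newton solver; F, u, and bcs stand for your own variational problem, and the values shown are just examples, with the defaults as given in the source linked above):

    from dolfin import *

    # Sketch: Newton solver tolerances in legacy DOLFIN.
    # F == 0 is the nonlinear variational problem for the unknown u, with bcs its BCs.
    solve(F == 0, u, bcs, solver_parameters={
        "newton_solver": {
            "absolute_tolerance": 1e-10,  # legacy default
            "relative_tolerance": 1e-9,   # legacy default
            "maximum_iterations": 25,
            "relaxation_parameter": 1.0,
        }
    })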
2022-10-05 09:58:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.750399649143219, "perplexity": 769.3311477109753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00019.warc.gz"}
https://docs.pgrouting.org/3.1/en/allpairs-family.html
# All Pairs - Family of Functions

The following functions work on all vertex pair combinations.

## Performance

The following tests were run on:

• a non-server computer
• with an AMD 64 CPU
• 4 GB memory
• Ubuntu trusty
• PostgreSQL version 9.3

### Data

The following data was used:

BBOX="-122.8,45.4,-122.5,45.6"
wget --progress=dot:mega -O "sampledata.osm" "https://www.overpass-api.de/api/xapi?*[bbox=${BBOX}][@meta]"

Data processing was done with osm2pgrouting-alpha:

createdb portland
psql -c "create extension postgis" portland
psql -c "create extension pgrouting" portland
osm2pgrouting -f sampledata.osm -d portland -s 0

### Results

Test One

This test is not with a bounding box. The density of the passed graph is extremely low. For each <SIZE>, 30 tests were executed to get the average.

The tested queries are:

SELECT count(*) FROM pgr_floydWarshall(
    'SELECT gid as id, source, target, cost, reverse_cost FROM ways where id <= <SIZE>');

SELECT count(*) FROM pgr_johnson(
    'SELECT gid as id, source, target, cost, reverse_cost FROM ways where id <= <SIZE>');

The results of these tests are presented as:

• SIZE is the number of edges given as input.
• EDGES is the total number of records in the query.
• DENSITY is the density of the data $$\dfrac{E}{V \times (V-1)}$$.
• OUT ROWS is the number of records returned by the queries.
• Floyd-Warshall is the average execution time in seconds of pgr_floydWarshall.
• Johnson is the average execution time in seconds of pgr_johnson.

| SIZE | EDGES | DENSITY | OUT ROWS | Floyd-Warshall | Johnson |
|------|-------|---------|----------|----------------|---------|
| 500 | 500 | 0.18E-7 | 1346 | 0.14 | 0.13 |
| 1000 | 1000 | 0.36E-7 | 2655 | 0.23 | 0.18 |
| 1500 | 1500 | 0.55E-7 | 4110 | 0.37 | 0.34 |
| 2000 | 2000 | 0.73E-7 | 5676 | 0.56 | 0.37 |
| 2500 | 2500 | 0.89E-7 | 7177 | 0.84 | 0.51 |
| 3000 | 3000 | 1.07E-7 | 8778 | 1.28 | 0.68 |
| 3500 | 3500 | 1.24E-7 | 10526 | 2.08 | 0.95 |
| 4000 | 4000 | 1.41E-7 | 12484 | 3.16 | 1.24 |
| 4500 | 4500 | 1.58E-7 | 14354 | 4.49 | 1.47 |
| 5000 | 5000 | 1.76E-7 | 16503 | 6.05 | 1.78 |
| 5500 | 5500 | 1.93E-7 | 18623 | 7.53 | 2.03 |
| 6000 | 6000 | 2.11E-7 | 20710 | 8.47 | 2.37 |
| 6500 | 6500 | 2.28E-7 | 22752 | 9.99 | 2.68 |
| 7000 | 7000 | 2.46E-7 | 24687 | 11.82 | 3.12 |
| 7500 | 7500 | 2.64E-7 | 26861 | 13.94 | 3.60 |
| 8000 | 8000 | 2.83E-7 | 29050 | 15.61 | 4.09 |
| 8500 | 8500 | 3.01E-7 | 31693 | 17.43 | 4.63 |
| 9000 | 9000 | 3.17E-7 | 33879 | 19.19 | 5.34 |
| 9500 | 9500 | 3.35E-7 | 36287 | 20.77 | 6.24 |
| 10000 | 10000 | 3.52E-7 | 38491 | 23.26 | 6.51 |

Test Two

This test is with a bounding box. The density of the passed graph is higher than in Test One. For each <SIZE>, 30 tests were executed to get the average.

The tested edge query is:

WITH
buffer AS (SELECT ST_Buffer(ST_Centroid(ST_Extent(the_geom)), SIZE) AS geom FROM ways),
bbox AS (SELECT ST_Envelope(ST_Extent(geom)) as box from buffer)
SELECT gid as id, source, target, cost, reverse_cost
FROM ways where the_geom && (SELECT box from bbox);

The tested queries are:

SELECT count(*) FROM pgr_floydWarshall(<edge query>);
SELECT count(*) FROM pgr_johnson(<edge query>);

The results of these tests are presented as:

• SIZE is the size of the bounding box.
• EDGES is the total number of records in the query.
• DENSITY is the density of the data $$\dfrac{E}{V \times (V-1)}$$.
• OUT ROWS is the number of records returned by the queries.
• Floyd-Warshall is the average execution time in seconds of pgr_floydWarshall.
• Johnson is the average execution time in seconds of pgr_johnson.
| SIZE | EDGES | DENSITY | OUT ROWS | Floyd-Warshall | Johnson |
|------|-------|---------|----------|----------------|---------|
| 0.001 | 44 | 0.0608 | 1197 | 0.10 | 0.10 |
| 0.002 | 99 | 0.0251 | 4330 | 0.10 | 0.10 |
| 0.003 | 223 | 0.0122 | 18849 | 0.12 | 0.12 |
| 0.004 | 358 | 0.0085 | 71834 | 0.16 | 0.16 |
| 0.005 | 470 | 0.0070 | 116290 | 0.22 | 0.19 |
| 0.006 | 639 | 0.0055 | 207030 | 0.37 | 0.27 |
| 0.007 | 843 | 0.0043 | 346930 | 0.64 | 0.38 |
| 0.008 | 996 | 0.0037 | 469936 | 0.90 | 0.49 |
| 0.009 | 1146 | 0.0032 | 613135 | 1.26 | 0.62 |
| 0.010 | 1360 | 0.0027 | 849304 | 1.87 | 0.82 |
| 0.011 | 1573 | 0.0024 | 1147101 | 2.65 | 1.04 |
| 0.012 | 1789 | 0.0021 | 1483629 | 3.72 | 1.35 |
| 0.013 | 1975 | 0.0019 | 1846897 | 4.86 | 1.68 |
| 0.014 | 2281 | 0.0017 | 2438298 | 7.08 | 2.28 |
| 0.015 | 2588 | 0.0015 | 3156007 | 10.28 | 2.80 |
| 0.016 | 2958 | 0.0013 | 4090618 | 14.67 | 3.76 |
| 0.017 | 3247 | 0.0012 | 4868919 | 18.12 | 4.48 |
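The DENSITY column is simply the formula above applied to the edge and vertex counts of the passed graph. A minimal sketch of that computation (the vertex counts are not reported in these tables, so the numbers below are hypothetical):

```python
def graph_density(num_edges: int, num_vertices: int) -> float:
    """Density of a directed graph: E / (V * (V - 1))."""
    return num_edges / (num_vertices * (num_vertices - 1))

# Hypothetical example: 500 edges over 100 vertices.
print(graph_density(500, 100))  # 0.050505...
```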
2021-03-03 20:26:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1839117705821991, "perplexity": 12832.967597731307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367790.67/warc/CC-MAIN-20210303200206-20210303230206-00128.warc.gz"}
http://learnbayes.org/papers/confidenceIntervalsFallacy/supplement.html
The source code for this document, and the main manuscript, is available at https://github.com/richarddmorey/ConfidenceIntervalsFallacy.

#### The lost submarine: details

We presented a situation where $$N=2$$ observations were distributed uniformly:

$\begin{eqnarray*} y_i &\stackrel{iid}{\sim}& \mbox{Uniform}(\theta-5,\theta+5),\,i=1,\ldots,N \end{eqnarray*}$

and the goal is to estimate $$\theta$$, the location of the submarine hatch. Without loss of generality we denote $$x_1$$ as the smaller of the two observations. In the text, we considered five 50% confidence procedures; in this section, we give the details of the sampling distribution procedure and the Bayes procedure that were omitted from the text.

##### Sampling distribution procedure

Consider the sample mean, $$\bar{y} = (y_1 + y_2)/2$$. As the sum of two uniform deviates, it is a well-known fact that $$\bar{y}$$ has a triangular distribution with location $$\theta$$ and minimum and maximum $$\theta-5$$ and $$\theta+5$$, respectively. This distribution is shown in Figure 1.

Figure 1: The sampling distribution of the mean $$\bar{y}$$ in the submarine scenario. The shaded region represents the central 50% of the area. The unshaded triangle marked 'a' has area .25, and the standard deviation of this sampling distribution is about 2.04.

It is desired to find the width of the base of the shaded region in Figure 1 such that it has an area of .5. To do this we first find the width of the base of the unshaded triangular area marked 'a' in Figure 1 such that the area of the triangle is .25. The corresponding unshaded triangle on the left side will also have area .25, which means that, since the figure is a density, the shaded region must have the remaining area of .5. Elementary geometry shows that the width of the base of triangle 'a' is $$5/\sqrt{2}$$: the density at distance $$d$$ from $$\theta$$ is $$(5-d)/25$$, so a corner triangle with base $$b$$ has area $$b^2/50$$, and setting $$b^2/50 = 1/4$$ gives $$b = 5/\sqrt{2}$$. The distance between $$\theta$$ and the altitude of triangle 'a' is therefore $$5 - 5/\sqrt{2}$$, or about 1.46 m. We can thus say that

$Pr(- (5 - 5/\sqrt{2}) < \bar{y} - \theta < 5 - 5/\sqrt{2} ) = .5$

which implies that, in repeated sampling,

$Pr(\bar{y} - (5 - 5/\sqrt{2}) < \theta < \bar{y} + (5 - 5/\sqrt{2}) ) = .5$

which defines the sampling distribution confidence procedure. This is an example of using $$\bar{y} - \theta$$ as a pivotal quantity (Casella & Berger, 2002).

We can also derive the standard deviation of the sampling distribution of $$\bar{y}$$, also called the standard error. It is defined as:

$SE(\bar{y}) = \sqrt{V(\bar{y})} = \sqrt{\int_{-5}^{5}z^2p(z)\,dz}$

where $$p(z)$$ is the triangular sampling distribution in Figure 1 centered around $$\theta=0$$. Solving the integral yields

$SE(\bar{y}) = \frac{5}{\sqrt{6}} \approx 2.04.$

##### Bayesian procedure

The posterior distribution is proportional to the likelihood times the prior. The likelihood is

$p(y_1,y_2\mid\theta) \propto \prod_{i=1}^2 {\cal I}(\theta-5 < y_i < \theta+5);$

where $$\cal I$$ is an indicator function. Since this is the product of two indicator functions, it can only be nonzero when both indicator functions' conditions are met; that is, when $$y_1+5$$ and $$y_2+5$$ are both greater than $$\theta$$, and $$y_1-5$$ and $$y_2-5$$ are both less than $$\theta$$. If the minimum of $$y_1+5$$ and $$y_2+5$$ is greater than $$\theta$$, then so too must be the maximum. The likelihood thus can be rewritten

$p(x_1,x_2\mid\theta) \propto {\cal I}(x_2 - 5 <\theta< x_1+5);$

where $$x_1$$ and $$x_2$$ are the minimum and maximum observations, respectively.
If the prior for $$\theta$$ is proportional to a constant, then the posterior is

$p(\theta\mid x_1,x_2) \propto {\cal I}(x_2 - 5 <\theta< x_1+5).$

This posterior is a uniform distribution over all possible values of $$\theta$$ (that is, all $$\theta$$ values within 5 meters of all observations). It has width

$10 - (x_{2} - x_{1}),$

and is centered around $$\bar{x}$$. Because the posterior comprises all values of $$\theta$$ the data have not ruled out (and is essentially just the classical likelihood), the width of this posterior can be taken as an indicator of the precision of the estimate of $$\theta$$. The middle 50% of the likelihood can be taken as a 50% objective Bayesian credible interval. Proof that this Bayesian procedure is also a confidence procedure is trivial and can be found in Welch (1939).

##### BUGS implementation

The submersible example was selected in part because it is so trivial; the confidence intervals and Bayesian credible intervals can be derived with very little effort. However, for more complicated problems, credible intervals can be more challenging to derive. Thankfully, modern Bayesian software tools make estimation of credible intervals in many problems as trivial as stating the problem along with priors on the parameters.

BUGS is a special language that allows users to define a model and prior. Using software that interprets the BUGS language, such as JAGS (Plummer, 2003) or WinBUGS (Lunn, Thomas, Best, & Spiegelhalter, 2000), the model and prior are then combined with the data. The software then outputs samples from the posterior distribution for all the parameters, which can be used to create credible intervals.

A full explanation of how to use the BUGS language is beyond the scope of this supplement. Readers can find more information about using BUGS in Ntzoufras (2009) and Lee and Wagenmakers (2013), and many tutorials are available on the world wide web. Here, we show how to obtain a credible interval for the submersible Example 1 using JAGS; in a later section we show how to obtain a credible interval for Example 2, $$\omega^2$$ in ANOVA designs.

We first define the model and prior in R using the BUGS language. Notice that this simply states the distributions of the data points, along with a prior for $$\theta$$.

BUGS_model = "
model{
  y1 ~ dunif(theta - 5, theta + 5)
  y2 ~ dunif(theta - 5, theta + 5)
  theta ~ dnorm( theta_mean, theta_precision)
}
"

We now define a list of values that will get passed to JAGS. y1 and y2 are the observed data values from the manuscript's Figure 1A, and the prior we choose is an informative prior for demonstration.

for_JAGS = list( y1 = -4.5,
                 y2 = 4.5,
                 theta_mean = -2.5,
                 theta_precision = 1/10^2 )

Since precision is the reciprocal of variance, the prior on $$\theta$$ corresponds to a Normal$$(\mu=-2.5, \sigma=10)$$ prior. All that remains is to load JAGS and combine the model information in BUGS_model with the data in for_JAGS, then to obtain samples from the posterior distribution.

# Load the rjags package to interface with JAGS
require( rjags )

## Loaded modules: basemod,bugs

# Set initial value for the sampler
initial.values = list(theta = 0)

# Combine the model with the data
compiled_model = jags.model( file = textConnection(BUGS_model),
                             data = for_JAGS,
                             inits = initial.values,
                             quiet = TRUE )

# Sample from the posterior distribution
posterior_samples = coda.samples( model = compiled_model,
                                  variable.names = c("theta"),
                                  n.iter = 100000 )

We can now plot the samples we obtained using the hist function in R.
theta_samples = posterior_samples[[ 1 ]][ , "theta" ]
hist( theta_samples, breaks = 20, freq = FALSE )

Note the resemblance to Figure 5, bottom panel, in the manuscript. We use the summary function on the samples to obtain a point estimate as well as quantiles of the posterior distribution, which can be used to form credible intervals. The 50% central credible interval is the interval between the 25th and 75th percentiles.

summary(theta_samples)

##
## Iterations = 1001:101000
## Thinning interval = 1
## Number of chains = 1
## Sample size per chain = 1e+05
##
## 1. Empirical mean and standard deviation for each variable,
##    plus standard error of the mean:
##
##       Mean        SD  Naive SE Time-series SE
## -0.0011830 0.2881537 0.0009112 0.0010825
##
## 2. Quantiles for each variable:
##
##      2.5%       25%       50%      75%    97.5%
## -0.474832 -0.250590 -0.001707 0.248218 0.474519

#### Credible interval for $$\omega^2$$: details

In the manuscript, we compare Steiger's (2004) confidence intervals for $$\omega^2$$ to Bayesian highest posterior density (HPD) credible intervals. In this section we describe how the Bayesian HPD intervals were computed.

Consider a one-way design with $$J$$ groups and $$N$$ observations in each group. Let $$y_{ij}$$ be the $$i$$th observation in the $$j$$th group. Also suppose that

$y_{ij} \stackrel{indep.}{\sim} \mbox{Normal}(\mu_j, \sigma^2)$

where $$\mu_j$$ is the population mean of the $$j$$th group and $$\sigma^2$$ is the error variance. We assume a "non-informative" prior on parameters $$\boldsymbol\mu,\sigma^2$$:

$p(\mu_1,\ldots,\mu_J,\sigma^2) \propto (\sigma^2)^{-1}.$

This prior is flat on $$(\mu_1,\ldots,\mu_J, \log\sigma^2)$$. In application, it would be wiser to assume an informative prior on these parameters, in particular assuming a population over the $$\mu$$ parameters or even the possibility that $$\mu_1 = \ldots = \mu_J = 0$$ (Rouder, Morey, Speckman, & Province, 2012). However, for this manuscript we compare against a "non-informative" prior in order to show the differences between the confidence interval and the Bayesian result with "objective" priors.

Assuming the prior above, an elementary Bayesian calculation (Gelman, Carlin, Stern, & Rubin, 2004) reveals that

$\sigma^2\mid\boldsymbol y \sim \mbox{Inverse Gamma}(J(N-1)/2, S/2)$

where $$S$$ is the error sum-of-squares from the corresponding one-way ANOVA, and

$\mu_j\mid\sigma^2, \boldsymbol y \stackrel{indep.}{\sim} \mbox{Normal}(\bar{x}_j, \sigma^2/N)$

where $$\mu_j$$ and $$\bar{x}_j$$ are the true and observed means for the $$j$$th group. Following Steiger (2004) we can define

$\alpha_j = \mu_j - \frac{1}{J}\sum_{j=1}^J\mu_j$

as the deviation from the grand mean of the $$j$$th group, and

$\begin{eqnarray*} \lambda &=& N\sum_{j=1}^J \left(\frac{\alpha_j}{\sigma}\right)^2\\ \omega^2 &=& \frac{\lambda}{\lambda + NJ}. \end{eqnarray*}$

It is now straightforward to set up an MCMC sampler for $$\omega^2$$. Let $$M$$ be the number of MCMC iterations desired. We first draw $$M$$ samples from the marginal posterior distribution of $$\sigma^2$$, then sample the group means from the conditional posterior distribution for $$\mu_1,\ldots,\mu_J$$. From these posterior samples, $$M$$ posterior samples for $$\lambda$$ and $$\omega^2$$ can be computed.
The following R function will sample from the marginal posterior distribution of $$\omega^2$$:

## Assumes that data.frame y has two columns:
##   $y   is the dependent variable
##   $grp is the grouping variable, as a factor
Bayes.posterior.omega2
## function (y, conf.level = 0.95, iterations = 10000)
## {
##     J = nlevels(y$grp)
##     N = nrow(y)/J
##     aov.results = summary(aov(y ~ grp, data = y))
##     SSE = aov.results[[1]][2, 2]
##     sig2 = 1/rgamma(iterations, J * (N - 1)/2, SSE/2)
##     lambda = matrix(NA, iterations)
##     group.means = tapply(y$y, y$grp, mean)
##     for (m in 1:iterations) {
##         mu = rnorm(J, group.means, sqrt(sig2[m]/N))
##         lambda[m] = N * sum((mu - mean(mu))^2/sig2[m])
##     }
##     mcmc(lambda/(lambda + N * J))
## }

The Bayes.posterior.omega2 function can be used to compute the posterior and HPD interval for the first example in the manuscript. The fake.data.F function, defined in the R language in the file steiger.utility.R (available with the manuscript source code at https://github.com/richarddmorey/ConfidenceIntervalsFallacy), generates a data set with a specified $$F$$ statistic.

cl = .683 ## Confidence level corresponding to standard error
J = 3     ## Number of groups
N = 10    ## observations in a group
df1 = J - 1
df2 = J * (N - 1)

## F statistic from manuscript
Fstat = 0.1748638

set.seed(1)
y = fake.data.F(Fstat, df1, df2)

## Steiger confidence interval
steigerCI = steigerCI.omega2(Fstat, df1, df2, conf.level = cl)

samples.omega2 = Bayes.posterior.omega2(y, cl, 100000)

We can compute the Bayesian HPD interval with the HPDinterval function in the R package coda:

library(coda)
HPDinterval( samples.omega2, prob = cl )

##             lower      upper
## var1 5.219606e-06 0.08299102
## attr(,"Probability")
## [1] 0.683

##### BUGS implementation

Although the code above can be used to quickly sample $$\omega^2$$ for any one-way design, it is not particularly generalizable for typical users. We can use the BUGS language for Bayesian modeling to create credible intervals in a way that is more accessible to the general user.

BUGS_model = "
model{
  for( i in 1:NJ ){ # iterate over participants
    # Error model for this observation
    y[ i ] ~ dnorm( mu[ group[i] ], precision )
  }
  for( j in 1:J ){ # iterate over groups
    # prior for group mean
    mu[ j ] ~ dnorm( mean_mu, precision_mu )
    # group mean's standardized squared deviation
    # from overall mean
    mu_dev_sq[ j ] <- pow( mu[ j ] - mean( mu ), 2 ) / variance
  }

  # BUGS uses the inverse of the variance (precision)
  # instead of the variance
  precision <- 1 / variance

  # prior on error variance
  variance ~ dgamma( a_variance, b_variance )

  # Define our quantities of interest
  lambda <- N * sum( mu_dev_sq )
  omega2 <- lambda / ( lambda + N * J )
}
"

In the R code below, we define all the constants and the data needed for the analysis, including the prior parameters. These prior parameters were chosen to approximate the "non-informative" prior we used in the previous analysis. As we mentioned in the manuscript, we do not generally advise the use of such non-informative priors; these values are merely chosen for demonstration. In practice, reasonable values would be chosen to inform the analysis.

for_JAGS = list( y = y$y,
                 group = y$grp,
                 N = N,
                 J = J,
                 NJ = N*J,
                 mean_mu = 0,
                 precision_mu = 1e-6,
                 a_variance = 1e-6,
                 b_variance = 1e-6 )

The following code joins the model (BUGS_model) with the data and defined constants (for_JAGS), then draws 10,000 samples from the posterior distribution, outputting the samples of omega2, the parameter of interest.
# Load the rjags package to interface with JAGS
require( rjags )

# Combine the model with the data
compiled_model = jags.model( file = textConnection(BUGS_model),
                             data = for_JAGS,
                             quiet = TRUE )

# Sample from the posterior distribution
posterior_samples = coda.samples( model = compiled_model,
                                  variable.names = c("omega2"),
                                  n.iter = 10000 )

The object posterior_samples now contains all posterior samples of the parameter $$\omega^2$$. We can plot their histogram:

omega2_samples = posterior_samples[[ 1 ]][ , "omega2" ]
hist( omega2_samples, breaks = 20, freq = FALSE )

Note the close similarity between Figure 3 and Figure 4. We can do whatever we like with these samples; of particular interest would be a point estimate and credible interval. For the point estimate, we might select the posterior mean; for the credible interval, we can compute a highest-density region:

# Compute the posterior mean
mean( omega2_samples )

## [1] 0.06745069

# Compute the HDR
HPDinterval( omega2_samples , prob = cl )

##             lower      upper
## var1 2.832317e-05 0.08108649
## attr(,"Probability")
## [1] 0.683

Further useful information about the posterior can be obtained using the summary function.

summary( omega2_samples )

##
## Iterations = 1001:11000
## Thinning interval = 1
## Number of chains = 1
## Sample size per chain = 10000
##
## 1. Empirical mean and standard deviation for each variable,
##    plus standard error of the mean:
##
##      Mean        SD  Naive SE Time-series SE
## 0.0674507 0.0594478 0.0005945 0.0005739
##
## 2. Quantiles for each variable:
##
##     2.5%      25%      50%      75%    97.5%
## 0.002015 0.021750 0.051531 0.095971 0.223484

### References

Casella, G., & Berger, R. L. (2002). Statistical inference. Pacific Grove, CA: Duxbury.

Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian data analysis (2nd ed.). London: Chapman & Hall.

Lee, M. D., & Wagenmakers, E.-J. (2013). Bayesian modeling for cognitive science: A practical course. Cambridge University Press.

Lunn, D., Thomas, A., Best, N., & Spiegelhalter, D. (2000). WinBUGS – a Bayesian modelling framework: Concepts, structure, and extensibility. Statistics and Computing, 10, 325–337.

Ntzoufras, I. (2009). Bayesian modeling using WinBUGS. Hoboken, NJ: Wiley.

Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing.

Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56, 356–374. http://dx.doi.org/10.1016/j.jmp.2012.08.001

Steiger, J. H. (2004). Beyond the $$F$$ test: Effect size confidence intervals and tests of close fit in the analysis of variance and contrast analysis. Psychological Methods, 9(2), 164–182.

Welch, B. L. (1939). On confidence limits and sufficiency, with particular reference to parameters of location. The Annals of Mathematical Statistics, 10(1), 58–69.
2018-04-25 10:08:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8310311436653137, "perplexity": 990.2758009306286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947795.46/warc/CC-MAIN-20180425100306-20180425120306-00557.warc.gz"}
http://cpr-mathph.blogspot.com/2013/02/10051500-matej-pavsic.html
## Space Inversion of Spinors Revisited: A Possible Explanation of Chiral Behavior in Weak Interactions    [PDF] Matej Pavsic We investigate a model in which spinors are considered as being embedded within the Clifford algebra that operates on them. In Minkowski space $M_{1,3}$, we have four independent 4-component spinors, each living in a different minimal left ideal of $Cl(1,3)$. We show that under space inversion, a spinor of one left ideal transforms into a spinor of another left ideal. This brings novel insight to the role of chirality in weak interactions. We demonstrate the latter role by considering an action for a generalized spinor field $\psi^{\alpha i}$ that has not only a spinor index $\alpha$ but also an extra index $i$ running over four ideals. The covariant derivative of $\psi^{\alpha i}$ contains the generalized spin connection, the extra components of which are interpreted as the SU(2) gauge fields of weak interactions and their generalization. We thus arrive at a system that is left-right symmetric due to the presence of a "parallel sector", postulated a long time ago, that contains mirror particles coupled to mirror SU(2) gauge fields. View original: http://arxiv.org/abs/1005.1500
2020-07-13 11:11:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7997949719429016, "perplexity": 515.3807380651857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00237.warc.gz"}
https://stats.stackexchange.com/questions/33098/multiple-imputation-questions-for-multiple-regression-in-spss
# Multiple imputation questions for multiple regression in SPSS

I am currently running a multiple regression model using imputed data and have a few questions.

Background: Using SPSS 18. My data appear to be MAR. Listwise deletion of cases leaves me with only 92 cases; multiple imputation leaves 153 cases for analysis. All assumptions are met; one variable is log transformed. 9 IVs: 5 categorical, 3 scale, 1 interval. DV: scale. Using the enter method of standard multiple regression.

• My DV is the difference between scores on a pre-measure and a post-measure, and both of these variables are missing a number of cases. Should I impute missing values for each of these and then work out the difference between them to calculate my DV (and how do I go about doing this), or can I just impute data for my DV directly? Which is the more appropriate approach?
• Should I run imputations on the transformed data or on the skewed, untransformed data?
• Should I enter all variables into the imputation process, even those not missing data, or should I impute data only for the variables missing more than 10% of cases?

I have run the regression on the listwise-deleted cases, and my IVs account for very little of the variance in my DV. I have subsequently run the regression on a complete file following multiple imputation. The results are very similar, in that my 9 IVs still predict only approximately 12% of the variance in my DV; however, one of my IVs now makes a significant contribution (this happens to be a log-transformed variable)...

• Should I report the original data if there is little difference between my conclusions (i.e., my IVs poorly predict the DV), or report the complete data?

• What does "scale" mean for SPSS, does it refer to ordinal data? – gung - Reinstate Monica Jul 26 '12 at 15:43
• Scale in SPSS formats typically means "interval/ratio" measures; see the VARIABLE LEVEL command. But that then leaves the question of what the distinction between the 3 scale and the 1 interval variable is. That being said, this should be enough information to effectively address your question. – Andy W Jul 26 '12 at 16:58
• The only advice I could give is that predicting change scores tends to be much harder than predicting levels (so it is not surprising that a low R^2 occurs in many situations). See some nice discussion of pre-post designs here. Although that still totally does not answer your question! – Andy W Jul 26 '12 at 17:02

1. Whether you should impute both the pre- and post-scores, or the difference score, depends on how you analyze the pre-post difference. You should be aware there are legitimate limitations to analyses of difference scores (see Edwards, 1994, for a nice review), and a regression approach in which you analyze the residual for post-scores after controlling for pre-scores might be better. In that case, you would want to impute pre- and post-scores, since those are the variables that will be in your analytic model. However, if you're intent on analyzing difference scores, impute the difference scores, since it's unlikely you will want to manually compute difference scores across all your imputed data sets. In other words, whatever variable(s) you are using in your actual analytic model is/are the variable(s) that you should use in your imputation model.

2. Again, I would impute with the transformed variable, since that is what is used in your analytic model.
3. Adding variables to the imputation model will increase the computational demands of the imputation process, but, if you have the time, more information is always better. Variables with complete data could potentially be very useful auxiliary variables for explaining MAR missingness. If using all your variables results in too demanding an imputation model (i.e., if you have a big data set), create dummy variables for each case's missingness on each variable, and see which complete variables predict those missingness variables in logistic models; then include those particular complete variables in your imputation model.

4. I wouldn't report the original (i.e., listwise-deleted) analyses. If your missingness mechanism is MAR, then MI is not only going to give you increased power, but it will also give you more accurate estimates (Enders, 2010). Thus, the significant effect with MI might be non-significant with listwise deletion because that analysis is underpowered, biased, or both.

References

Edwards, J. R. (1994). Regression analysis as an alternative to difference scores. Journal of Management, 20, 683-689.

Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.

In my experience, SPSS's imputation function is easy to use, both in creating data sets and in analyzing and pooling the resulting imputation data sets. However, its ease of use is also its downfall. If you look at a similar imputation function in the R statistical software (see, for example, the mice package), you will see far more options. See Stef van Buuren's website for an excellent explanation of multiple imputation in general (with or without the mice package). It is very important to note that these additional options are not 'luxury' choices for advanced users only. Some are essential for attaining proper congeniality; others provide specific models for specific missing variables, specific predictors for specific missing variables, imputation diagnostics, and more, none of which are available in the SPSS imputation function.
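For readers who want to try the same impute-then-pool workflow outside SPSS, here is a minimal sketch using the MICE implementation in Python's statsmodels (the R mice package mentioned above plays the same role); the data frame and variable names are hypothetical stand-ins for the regression described in the question.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Hypothetical data with missingness in the DV.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dv": rng.normal(size=200),
    "iv1": rng.normal(size=200),
    "iv2": rng.normal(size=200),
})
df.loc[rng.random(200) < 0.2, "dv"] = np.nan  # inject ~20% missingness

imp = mice.MICEData(df)                        # chained-equations imputation
model = mice.MICE("dv ~ iv1 + iv2", sm.OLS, imp)
results = model.fit(n_burnin=10, n_imputations=20)
print(results.summary())                       # estimates pooled across imputations
```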
2020-06-01 16:26:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45701414346694946, "perplexity": 1507.1253914597323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00042.warc.gz"}
https://nips.cc/Conferences/2020/ScheduleMultitrack?event=18194
Poster

Why are Adaptive Methods Good for Attention Models?

Jingzhao Zhang · Sai Praneeth Karimireddy · Andreas Veit · Seungyeon Kim · Sashank Reddi · Sanjiv Kumar · Suvrit Sra

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #761

#### Author Information

##### Sai Praneeth Karimireddy (EPFL)

I am a second-year PhD student working on convex and non-convex optimization with Prof. Martin Jaggi. My focus is on designing faster and more scalable optimization algorithms for machine learning. Some of my preliminary results and problems I am currently working on:

1. Robust accelerated algorithms: Nesterov acceleration modified to be robust to noise.
2. Faster algorithms which take second-order information about the function into account.
3. A $O(1/t^2)$ rate *affine invariant* algorithm for constrained optimization.
4. The Frank-Wolfe algorithm for non-smooth functions using 'noisy smoothing'.
2022-07-03 18:47:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6304346323013306, "perplexity": 7581.581907267173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104248623.69/warc/CC-MAIN-20220703164826-20220703194826-00504.warc.gz"}
https://bouzebda.github.io/publication/2012-01-01-Bouzebda-Cherfi
# General bootstrap for dual phi-divergence estimates

Published in Journal of Probability and Statistics, 2012

### S. Bouzebda and M. Cherfi

A general notion of bootstrapped $\phi$-divergence estimates, constructed by exchangeably weighting the sample, is introduced. Asymptotic properties of these generalized bootstrapped $\phi$-divergence estimates are obtained by means of empirical process theory, and are applied to construct bootstrap confidence sets with asymptotically correct coverage probability. Some practical problems are discussed, including, in particular, the choice of the escort parameter, and several examples of divergences are investigated. Simulation results are provided to illustrate the finite-sample performance of the proposed estimators.
2022-01-21 02:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5605967044830322, "perplexity": 929.7603498357704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302715.38/warc/CC-MAIN-20220121010736-20220121040736-00331.warc.gz"}
https://calconcalculator.com/math/comparing-fractions-calculator/
Our comparing fractions calculator will give you the answer and show the work needed to compare fractions. This can be helpful if you want to know how to compare fractions or just need a little help with your math homework.

## What is a fraction?

When you look at a fraction, it's easy to see that the top number is called the numerator and the bottom number is called the denominator. Fractions that represent the same value are called equivalent fractions; for example, 1/2 and 2/4 are equivalent fractions. When you write out a fraction in word form, you can say "four fifths" instead of "4/5" or "one third" instead of "1/3".

## How to compare fractions

To compare fractions means to look at two fractions and find out which one is larger than the other. To be able to do this, you need to know some rules.

## Comparing fractions with different denominators

Let me show you with an example.

\frac {1}{3} \text { and } \frac {1}{2}

To find out which one is larger, both fractions need the same denominator. To do that, we find their common denominator and expand them. Usually, the simplest way to find a common denominator is to multiply the two denominators together; in this case, the result is 6. Then, we expand the first fraction: divide the common denominator by its denominator (6 ÷ 3 = 2), then multiply its numerator by 2 as well, giving 2/6. We then do the same with the second fraction, giving 3/6. Now that they have the same denominator, the one with the larger numerator is larger.

\frac {2}{6} < \frac {3}{6}

## Comparing fractions with the same numerators

When two fractions share the same numerator, the one with the smaller denominator is the larger fraction; the common-denominator process from the previous example also works here.

## Comparing fractions with the same denominators

If they have the same denominator, the larger one is the one with the larger numerator.

## Comparing improper and mixed fractions

Improper fractions are fractions in which the numerator is larger than the denominator. They can be written as mixed fractions, which combine a whole number and a fraction, for example:

\frac {5}{4}= 1 \frac {1}{4}

Essentially, the process is the same as before. If you have a mixed fraction, you turn it into an improper fraction and repeat the earlier steps. You turn a mixed fraction into an improper fraction by multiplying the whole number by the denominator and adding the result to the numerator.

## How to use the comparing fractions calculator

Of course, the simplest way to compare fractions is to use our calculator. All you need to do is enter both fractions into the calculator, and it will tell you which one is larger, or whether they are equal. It will also show you the steps it used, as well as the expanded versions of the fractions.

## FAQ

### How do you compare fractions?

You compare them by finding a common denominator and seeing which fraction has the larger numerator.

### How do you compare mixed fractions?

First, turn them into improper fractions, and then repeat the process mentioned above.

### Is 1/3 bigger than 1/4?

Yes, one third is bigger than one quarter.
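To see the comparison rule in code (a minimal sketch, not the calculator's actual implementation), Python's standard library compares fractions exactly, and the common-denominator rule reduces to cross-multiplication:

```python
from fractions import Fraction

# The standard library compares fractions exactly.
print(Fraction(1, 3) < Fraction(1, 2))  # True

# The same rule by hand: a/b < c/d exactly when a*d < c*b
# (for positive denominators) -- the common-denominator
# comparison with the multiplication written out.
def compare(a, b, c, d):
    lhs, rhs = a * d, c * b
    return "<" if lhs < rhs else (">" if lhs > rhs else "=")

print(compare(1, 3, 1, 2))  # '<'
print(compare(2, 6, 3, 6))  # '<'
```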
2023-04-01 01:20:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8762187957763672, "perplexity": 304.4251730520103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00469.warc.gz"}
https://zbmath.org/serials/?q=se%3A2060
## Random Operators and Stochastic Equations Short Title: Random Oper. Stoch. Equ. Publisher: De Gruyter, Berlin ISSN: 0926-6364; 1569-397X/e Online: http://www.degruyter.com/view/j/rose Comments: Indexed cover-to-cover Documents Indexed: 658 Publications (since 1993) References Indexed: 289 Publications with 4,729 References. all top 5 ### Latest Issues 30, No. 2 (2022) 30, No. 1 (2022) 29, No. 4 (2021) 29, No. 3 (2021) 29, No. 2 (2021) 29, No. 1 (2021) 28, No. 4 (2020) 28, No. 3 (2020) 28, No. 2 (2020) 28, No. 1 (2020) 27, No. 4 (2019) 27, No. 3 (2019) 27, No. 2 (2019) 27, No. 1 (2019) 26, No. 4 (2018) 26, No. 3 (2018) 26, No. 2 (2018) 26, No. 1 (2018) 25, No. 4 (2017) 25, No. 3 (2017) 25, No. 2 (2017) 25, No. 1 (2017) 24, No. 4 (2016) 24, No. 3 (2016) 24, No. 2 (2016) 24, No. 1 (2016) 23, No. 4 (2015) 23, No. 3 (2015) 23, No. 2 (2015) 23, No. 1 (2015) 22, No. 4 (2014) 22, No. 3 (2014) 22, No. 2 (2014) 22, No. 1 (2014) 21, No. 4 (2013) 21, No. 3 (2013) 21, No. 2 (2013) 21, No. 1 (2012) 20, No. 4 (2012) 20, No. 3 (2012) 20, No. 2 (2012) 20, No. 1 (2012) 19, No. 4 (2011) 19, No. 3 (2011) 19, No. 2 (2011) 19, No. 1 (2011) 18, No. 4 (2010) 18, No. 3 (2010) 18, No. 2 (2010) 18, No. 1 (2010) 17, No. 4 (2009) 17, No. 3 (2009) 17, No. 2 (2009) 17, No. 1 (2009) 16, No. 4 (2008) 16, No. 3 (2008) 16, No. 2 (2008) 16, No. 1 (2008) 15, No. 4 (2007) 15, No. 3 (2007) 15, No. 2 (2007) 15, No. 1 (2007) 14, No. 4 (2006) 14, No. 3 (2006) 14, No. 2 (2006) 14, No. 1 (2006) 13, No. 4 (2005) 13, No. 3 (2005) 13, No. 2 (2005) 13, No. 1 (2005) 12, No. 4 (2004) 12, No. 3 (2004) 12, No. 2 (2004) 12, No. 1 (2004) 11, No. 4 (2003) 11, No. 3 (2003) 11, No. 2 (2003) 11, No. 1 (2003) 10, No. 4 (2002) 10, No. 3 (2002) 10, No. 2 (2002) 10, No. 1 (2002) 9, No. 4 (2001) 9, No. 3 (2001) 9, No. 2 (2001) 9, No. 1 (2001) 8, No. 4 (2000) 8, No. 3 (2000) 8, No. 2 (2000) 8, No. 1 (2000) 7, No. 4 (1999) 7, No. 3 (1999) 7, No. 2 (1999) 7, No. 1 (1999) 6, No. 4 (1998) 6, No. 3 (1998) 6, No. 2 (1998) 6, No. 1 (1998) 5, No. 4 (1997) 5, No. 3 (1997) ...and 15 more Volumes all top 5 ### Authors 75 Girko, Vyacheslav Leonidovich 27 Gupta, Arjun Kumar 17 Kozachenko, Yuriĭ Vasyl’ovych 16 Leonenko, Nikolai N. 16 N’Zi, Modeste 15 Ouknine, Youssef 11 Molchanov, Stanislav Alekseevich 11 Pogorui, Anatoliy A. 10 Prakasa Rao, B. L. S. 9 Albeverio, Sergio A. 9 Rodríguez-Dagnino, Ramón Martín 8 Bondarev, Borys Volodymyrovych 8 Kirsch, Werner 8 Shatashvili, Albert D. 8 Skorokhod, Anatoliĭ Volodymyrovych 7 Chala, Adel 7 El Otmani, Mohamed 7 Erraoui, Mohamed 7 Mezerdi, Brahim 7 Nadarajah, Saralees 7 Vladimirova, Anna I. 6 Al-Hussein, Abdulrahman 6 Bahlali, Seid 6 Moklyachuk, Mykhaĭlo Pavlovych 6 Tudor, Ciprian A. 5 Accardi, Luigi 5 Aman, Auguste 5 Bahlali, Khaled 5 Botelho, Luiz C. L. 5 Eddahbi, M’hamed 5 Gherbal, Boulakhras 5 Kadankov, Viktor F. 5 Khorunzhy, Alexei M. 5 Kondrat’yev, Yuriĭ Grygorovych 5 Kumam, Poom 5 Lakhel, El Hassan 5 Mishura, Yuliya Stepanivna 5 Nguyen Minh Chuong 5 Ouerdiane, Habib 5 Portenko, Mykola Ivanovych 5 Rempała, Grzegorz A. 5 Sghir, Aissa 5 Shevchuk, Larissa D. 5 Thuan, Nguyen Xuan 4 Aidara, Sadibou 4 Ait Ouahra, Mohamed 4 Bamber, Donald 4 Boufoussi, Brahim 4 Choudhury, Binayak Samadder 4 Goodman, Irwin R. 4 Katafygiotis, Lambros S. 4 Knopova, Victoria Pavlovna 4 Kulik, Alexey M. 4 Nagar, Daya K. 4 Nguyen, Hung Trung 4 Ning, Wei 4 Salehi, Habib 4 Tsarkov, Yevgeny 3 Anh, Vo V. 3 Barhoumi, Abdessatar 3 Boudref, Mohamed-Ahmed 3 Boutet de Monvel, Anne Marie 3 Carmona, René A. 
3 Casati, Giulio 3 Ezzinbi, Khalil 3 Fomin-Shatashvili, Andrey A. 3 Haidar, Nassar H. S. 3 Kadankova, Tat’yana V. 3 Koralov, Leonid B. 3 Kulinich, Grygoriĭ L. 3 Li, Zhi 3 Lu, Yun Gang 3 Makhno, Sergeĭ Yakovlevich 3 Nashine, Hemant Kumar 3 Nieto Roig, Juan Jose 3 Orsingher, Enzo 3 Ouahab, Abdelghani 3 Owo, Jean-Marc 3 Polshkov, Yulian Nikolaevich 3 Pratsiovytyi, Mykola V. 3 Rodrigues, Waldyr Alves jun. 3 Ruiz-Medina, María Dolores 3 Sbi, A. 3 Sen, Pranab Kumar 3 Shaikhet, Leonid Efimovich 3 Shevchenko, Georgiy M. 3 Skorokhod, D. A. 3 Swishchuk, Anatoliy 3 Thang, Dang Hung 3 Veretennikov, Alexander Yu. 3 Viens, Frederi G. 3 Wang, Tonghui 3 Yode, Armel Fabrice 3 Yurachkivsky, Andriĭ P. 2 Akdim, Khadija 2 Arnold, Ludwig 2 Ayed, Wided 2 Berboucha, Ahmed 2 Bishwal, Jaya P. N. 2 Blouhi, Tayeb ...and 505 more Authors all top 5 ### Fields 527 Probability theory and stochastic processes (60-XX) 119 Statistics (62-XX) 71 Linear and multilinear algebra; matrix theory (15-XX) 60 Partial differential equations (35-XX) 47 Operator theory (47-XX) 42 Systems theory; control (93-XX) 31 Ordinary differential equations (34-XX) 29 Numerical analysis (65-XX) 19 Measure and integration (28-XX) 19 Quantum theory (81-XX) 18 Functional analysis (46-XX) 18 Calculus of variations and optimal control; optimization (49-XX) 16 Dynamical systems and ergodic theory (37-XX) 15 Statistical mechanics, structure of matter (82-XX) 15 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 10 Biology and other natural sciences (92-XX) 9 Global analysis, analysis on manifolds (58-XX) 8 Number theory (11-XX) 8 General topology (54-XX) 8 Fluid mechanics (76-XX) 7 Information and communication theory, circuits (94-XX) 6 Special functions (33-XX) 6 Computer science (68-XX) 5 Real functions (26-XX) 4 Harmonic analysis on Euclidean spaces (42-XX) 4 Integral equations (45-XX) 3 History and biography (01-XX) 3 Approximations and expansions (41-XX) 3 Operations research, mathematical programming (90-XX) 2 Combinatorics (05-XX) 2 Topological groups, Lie groups (22-XX) 2 Difference and functional equations (39-XX) 2 Convex and discrete geometry (52-XX) 1 Algebraic geometry (14-XX) 1 Nonassociative rings and algebras (17-XX) 1 Functions of a complex variable (30-XX) 1 Potential theory (31-XX) 1 Sequences, series, summability (40-XX) 1 Mechanics of particles and systems (70-XX) 1 Optics, electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Relativity and gravitational theory (83-XX) ### Citations contained in zbMATH Open 336 Publications have been cited 1,325 times in 1,078 Documents Cited by Year Some skew-symmetric models. Zbl 1118.60300 Gupta, A. K.; Chang, F. C.; Huang, W. J. 2002 Localization for random perturbations of periodic Schrödinger operators. Zbl 0927.60067 Kirsch, Werner; Stollmann, Peter; Stolz, Günter 1998 Existence and uniqueness of solutions of stochastic functional differential equations. Zbl 1248.60067 von Renesse, Max-K.; Scheutzow, Michael 2010 Asymptotic distribution of smoothed eigenvalue density. II. Wigner random matrices. Zbl 0952.60064 Boutet de Monvel, A.; Khorunzhy, A. 1999 Asymptotic distribution of smoothed eigenvalue density. I. Gaussian random matrices. Zbl 0936.15021 Boutet de Monvel, A.; Khorunzhy, A. 1999 Stochastic partial differential equations driven by Lévy space-time white noise. Zbl 0972.60048 Applebaum, David; Wu, Jiang-Lun 2000 Motions with reflecting and absorbing barriers driven by the telegraph equation. Zbl 0822.60073 Orsingher, E. 
1995 Parametric estimation for linear stochastic differential equations driven by fractional Brownian motion. Zbl 1053.62089 Prakasa Rao, B. L. S. 2003 Some SDEs with distributional drift. II: Lyons-Zheng structure, Itô’s formula and semimartingale characterization. Zbl 1088.60058 Flandoli, Franco; Russo, Francesco; Wolf, Jochen 2004 The stochastic maximum principle in optimal control of singular diffusions with nonlinear coefficients. Zbl 1101.93080 Bahlali, Seid; Chala, Adel 2005 Asymptotic approximations for EPMC’s of the linear and the quadratic discriminant functions when the sample sizes and the dimension are large. Zbl 0924.62070 Fujikoshi, Yasunori; Seo, Takashi 1998 Some perturbations of drift-type for symmetric stable processes. Zbl 0839.60056 Portenko, N. I. 1994 Asymptotics for the almost sure Lyapunov exponent for the solution of the parabolic Anderson problem. Zbl 0972.60050 Carmona, Rene; Koralov, Leonid; Molchanov, Stanislav 2001 The strong circular law. Twenty years later. II. Zbl 1065.60026 Girko, V. L. 2004 Almost sure exponential stability of the Euler-Maruyama approximations for stochastic functional differential equations. Zbl 1276.60063 Wu, Fuke; Mao, Xuerong; Kloeden, Peter E. 2011 The circular law. Thirty years later. Zbl 1278.15041 Girko, Vyacheslav L. 2012 One-dimensional semi-Markov evolutions with general Erlang sojourn times. Zbl 1116.60051 Pogorui, A. A.; Rodríguez-Dagnino, Ramón M. 2005 Intrinsic small time estimates for distribution densities of Lévy processes. Zbl 1291.60095 Knopova, Victoria; Kulik, Alexei 2013 The circular law. Twenty years later. III. Zbl 1088.60020 Girko, V. L. 2005 An asymptotic expansion for the density of states of a random Schrödinger operator with Bernoulli disorder. Zbl 0841.60047 Klopp, Frédéric 1995 Simulation of Gaussian stochastic processes. Zbl 1051.60040 Kozachenko, Yuri; Rozora, Iryna 2003 Random fixed point theorem in generalized Banach space and applications. Zbl 1380.47043 Sinacer, Moulay Larbi; Nieto, Juan Jose; Ouahab, Abdelghani 2016 Stochastic non-resistive magnetohydrodynamic system with Lévy noise. Zbl 1372.76030 Manna, Utpal; Mohan, Manil T.; Sritharan, Sivaguru S. 2017 On multidimensional stable processes with locally unbounded drift. Zbl 0832.60066 Podolynny, S. I.; Portenko, N. I. 1995 Strong uniform consistency of a nonparametric estimator of a conditional quantile for censored dependent data and functional regressors. Zbl 1348.62169 Horrigue, Walid; Ould Saïd, Elias 2011 $$L^p$$-solutions of the stochastic transport equation. Zbl 1270.60070 Catuogno, Pedro; Olivera, Christian 2013 Backward doubly stochastic differential equations with non-Lipschitz coefficients. Zbl 1199.60058 N’zi, Modeste; Owo, Jean-Marc 2008 On an expansion of random processes in series. Zbl 1142.60349 Kozachenko, Yu. V.; Rozora, I. V.; Turchyn, Ye. V. 2007 Jordan normal form for linear cocycles. Zbl 0971.37002 Arnold, Ludwig; Nguyen Dinh Cong; Oseledets, Valery 1999 Sharp upper bound on the almost-sure exponential behavior of a stochastic parabolic partial differential equation. Zbl 0849.60062 Carmona, René; Viens, Frederi. G.; Molchanov, S. A. 1996 A solvable model for homopolymers and self-similarity near the critical point. Zbl 1224.60247 Cranston, M.; Koralov, L.; Molchanov, S.; Vainberg, B. 2010 Equilibrium Glauber dynamics of continuous particle systems as a scaling limit of Kawasaki dynamics. Zbl 1199.60342 Finkelshtein, Dmitri L.; Kondratiev, Yuri G.; Lytvynov, Eugene W. 
2007 Random fixed points of multivalued random operators with property (D). Zbl 1199.47238 Kumam, Wiyada; Kumam, Poom 2007 Upper estimate of overrunning by $$\text{Sub}_{\varphi}(\Omega)$$ random process the level specified by continuous function. Zbl 1118.60025 Vasylyk, Olga; Kozachenko, Yuriy; Yamnenko, Rostyslav 2005 Random block matrix density and ss-law. Zbl 0959.60054 Girko, V. L. 2000 Backward parabolic Ito equations and the second fundamental inequality. Zbl 1273.60075 Dokuchaev, Nikolai 2012 Strong, mild and weak solutions of backward stochastic evolution equations. Zbl 1119.60042 Al-Hussein, Abdul Rahman 2005 Optimality conditions of controlled backward doubly stochastic differential equations. Zbl 1226.93136 Bahlali, Seid; Gherbal, Boulakhras 2010 BSDEs with right upper-semicontinuous reflecting obstacle and stochastic Lipschitz coefficient. Zbl 1412.60096 Marzougue, Mohamed; El Otmani, Mohamed 2019 Convex-valued random dynamical systems: A variational principle for equilibrium states. Zbl 0933.93063 Arnold, L.; Evstigneev, I. V.; Gundlach, V. M. 1999 Equivalence of gradients on configuration spaces. Zbl 0955.60068 Privault, Nicolas 1999 Wick calculus for regular generalized stochastic functionals. Zbl 0955.60066 Grothaus, M.; Kondratiev, Yu. G.; Us, G. F. 1999 Markovian random evolution in $$R^{n}$$. Zbl 0976.60074 Samoilenko, I. V. 2001 The circular law: Ten years later. Zbl 0839.60038 Girko, V. L. 1994 $$L^p$$-solution of reflected generalized BSDEs with non-Lipschitz coefficients. Zbl 1224.60063 Aman, Auguste 2009 Common random fixed points for multivalued random operators without $$S$$- and $$T$$-weakly commuting random operators. Zbl 1221.47105 Sintunavarat, W.; Kumam, P.; Patthanangkoor, P. 2009 Two-dimensional stochastic Navier-Stokes equations with fractional Brownian noise. Zbl 1271.60072 Fang, Liqun; Sundar, P.; Viens, Frederi G. 2013 Representations of multidimensional linear process bridges. Zbl 1391.60179 Barczy, Mátyás; Kern, Peter 2013 Exponential stability for stochastic neutral functional differential equations driven by Rosenblatt process with delay and Poisson jumps. Zbl 1338.60159 Lakhel, El Hassan 2016 Random fixed point theorem for multivalued nonexpansive operators in uniformly nonsquare Banach spaces. Zbl 1119.47061 Kumam, Poom; Plubtieng, Somyot 2006 Fully coupled forward backward stochastic differential equations driven by Lévy processes and application to differential games. Zbl 1300.60073 Baghery, Fouzia; Khelfallah, Nabil; Mezerdi, Brahim; Turpin, Isabelle 2014 Asymptotic behavior of $$M$$-estimators in continuous time non-linear regression with long-range dependent errors. Zbl 1010.62022 Ivanov, A. V.; Leonenko, N. N. 2002 The strong circular law. Twenty years later. I. Zbl 1088.60019 Girko, V. L. 2004 Asymptotic behavior of spectral functions of empirical covariance matrices. Zbl 0827.62050 Girko, V. L.; Gupta, A. K. 1994 Matrix variate Kummer-gamma distribution. Zbl 0980.15019 Nagar, Daya K.; Gardeño, Liliam 2001 Likelihood procedure for testing change point hypothesis for multivariate Gaussian model. Zbl 0831.62020 Chen, Jie; Gupta, A. K. 1995 Wigner’s semicircle law for band random matrices. Zbl 0839.60027 Casati, G.; Girko, V. 1993 Asymptotic properties of the LSE in a regression model with long-memory Gaussian and non-Gaussian stationary errors. Zbl 0849.62049 Leonenko, N. N.; Šilac-Benšić, M. 1996 On smoothed density of states for Wigner random matrices. Zbl 0929.60045 Khorunzhy, A. 1997 35 years of the inverse tangent law. 
Zbl 1268.60005 Girko, Vyacheslav 2011 Reflected backward doubly stochastic differential equations with discontinuous generator. Zbl 1284.60113 Aman, Auguste; Owo, Jean Marc 2012 Numerical study of stochastic Volterra-Fredholm integral equations by using second kind Chebyshev wavelets. Zbl 1338.60173 2016 Weak solutions and a Yamada-Watanabe theorem for FBSDEs. Zbl 1199.60199 Bahlali, K.; Mezerdi, B.; N’zi, M.; Ouknine, Y. 2007 On the distribution of the moment of the first exit time from an interval and value of overjump through borders interval for the processes with independent increments and random walk. Zbl 1123.60064 2005 Quantum white noise convolution operators with application to differential equations. Zbl 1329.60241 Barhoumi, Abdessatar; Lanconelli, Alberto; Rguigui, Hafedh 2014 Reflected backward stochastic differential equation with jumps and locally Lipschitz coefficient. Zbl 1004.60059 Bahlali, Khaled; Essaky, El Hassan; Ouknine, Youssef 2002 On random equations and applications to random fixed point theorems. Zbl 1226.60094 Thang, Dang H.; Anh, Ta N. 2010 Existence and exponential stability for some stochastic neutral partial functional integrodifferential equations. Zbl 1293.35371 Diop, Mamadou Abdou; Ezzinbi, Khalil; Lo, Modou 2014 From the first rigorous proof of the circular law in 1984 to the circular law for block random matrices under the generalized Lindeberg condition. Zbl 1388.15032 Girko, Vyacheslav L. 2018 Gevrey regularity of random attractors for stochastic reaction-diffusion equations. Zbl 0959.60045 Chueshov, Igor D. 2000 Duality and semi-group property for backward parabolic Itô equations. Zbl 1224.60192 Dokuchaev, Nikolai 2010 High moments of large Wigner random matrices and asymptotic properties of the spectral norm. Zbl 1270.15025 Khorunzhiy, Oleksiy 2012 The relaxed optimal control problem of forward-backward stochastic doubly systems with Poisson jumps and its application to LQ problem. Zbl 1263.93235 2012 Matrix variate extended skew normal distributions. Zbl 1349.62183 Ning, Wei; Gupta, Arjun K. 2012 30 years of general statistical analysis and canonical equation $$K_{60}$$ for Hermitian matrices $$(A+BUC)(A+BUC)^*$$, where $$U$$ is a random unitary matrix. Zbl 1329.15069 Girko, Vyacheslav L. 2015 Berry-Esseen bounds and almost sure CLT for the quadratic variation of the bifractional Brownian motion. Zbl 1335.60017 Aazizi, Soufiane; Es-Sebaiy, Khalifa 2016 A note on the Itô formula of stochastic integrals in Banach spaces. Zbl 1118.60050 Hausenblas, Krika 2006 White noise approach to stochastic integration. Zbl 1119.60038 Accardi, Luigi; Ayed, Wided; Ouerdiane, Habib 2005 Existence and optimality conditions in stochastic control of linear BSDEs. Zbl 1226.49016 Bahlali, Khaled; Gherbal, Boulakhrass; Mezerdi, Brahim 2010 Parametric estimation for linear stochastic differential equations driven by sub-fractional Brownian motion. Zbl 06815262 Prakasa Rao, B. L. S. 2017 Existence theory of fractional coupled differential equations via $$\Psi$$-Hilfer fractional derivative. Zbl 1442.34014 Harikrishnan, Sugumaran; Shah, Kamal; Kanagarajan, Kuppusamy 2019 Backward stochastic differential equations with jumps involving a subdifferential operator. Zbl 0973.60070 N’zi, M.; Ouknine, Y. 2000 Boundary-value problems for equations of mathematical physics with strictly Orlicz random initial conditions. Zbl 0836.35165 de la Krus, E. Barrasa; Kozachenko, Yu. V. 1995 Moments of the number of solutions of a system of random Boolean equations. Zbl 0842.60009 Masol, V. I. 
1993 Spectral properties of the scaling limit solutions of the Burgers’ equation with singular data. Zbl 0866.35147 Leonenko, N. N.; Parkhomenko, V. N.; Woyczynski, W. A. 1996 Martingale problem approach to the representations of the Navier-Stokes equations on smooth-boundary manifolds and semispace. Zbl 1051.60045 Rapoport, Diego L. 2003 Semicircle law for random matrices of long-range percolation model. Zbl 1224.15069 2009 The stochastic maximum principle in optimal control of degenerate diffusions with non-smooth coefficients. Zbl 1224.93131 Chighoub, Farid; Djehiche, Boualem; Mezerdi, Brahim 2009 Sum of the sample autocorrelation function. Zbl 1224.62056 Hassani, Hossein 2009 Some procedures for extending random operators. Zbl 1224.60166 Thang, Dang Hung; Cuong, Tran Manh 2009 BSDEs driven by infinite dimensional martingales and their applications to stochastic optimal control. Zbl 1266.60099 Al-Hussein, AbdulRahman 2011 Almost sure asymptotic stability and convergence of stochastic theta methods applied to systems of linear SDEs in $$\mathbb R^d$$. Zbl 1268.65009 Schurz, Henri 2011 Semi-linear diffusion in $$\mathbb R^D$$ and in Hilbert spaces, a Feynman-Wiener path integral study. Zbl 1302.35421 Botelho, Luiz C. L. 2011 Rate of convergence of Euler approximations of solution to mixed stochastic differential equation involving Brownian motion and fractional Brownian motion. Zbl 1290.60069 Mishura, Yuliya S.; Shevchenko, Georgiy M. 2011 Random fixed points of completely random operators. Zbl 1270.60072 Thang, Dang Hung; Anh, Pham The 2012 The generalized circular law. Zbl 1278.15040 Girko, Vyacheslav 2012 The generalized elliptic law. Zbl 1276.15018 Girko, Vyacheslav 2013 The $$V$$-density of eigenvalues of non symmetric random matrices and rigorous proof of the strong circular law. Zbl 0894.15011 Girko, V. L. 1997 Convergence rate of the expected spectral functions of symmetric random matrices is equal to $$O(n^{-1/2})$$. Zbl 0912.60004 Girko, V. L. 1998 On estimation of regression coefficients of long memory random fields observed on the arrays. Zbl 0921.62124 Leonenko, N. N.; Benšić, M. 1998 Existence results for a class of random delay integrodifferential equations. Zbl 1470.45019 Diop, Amadou; Diop, Mamadou Abdul; Ezzinbi, K. 2021 VICTORIA transform, RESPECT and REFORM methods for the proof of the $$G$$-permanent pencil law under $$G$$-Lindeberg condition for some random matrices from $$G$$-elliptic ensemble. Zbl 1477.60019 Girko, Vyacheslav L. 2021 About classical solutions of the path-dependent heat equation. Zbl 1457.60106 Di Girolami, Cristina; Russo, Francesco 2020 BSDE with rcll reflecting barrier driven by a Lévy process. Zbl 1457.60089 El Jamali, Mohamed; El Otmani, Mohamed 2020 Predictable solution for reflected BSDEs when the obstacle is not right-continuous. Zbl 1457.60090 Marzougue, Mohamed; El Otmani, Mohamed 2020 Nonparametric estimation of trend for stochastic differential equations driven by sub-fractional Brownian motion. Zbl 1443.62237 Prakasa Rao, B. L. S. 2020 An optimal control of a risk-sensitive problem for backward doubly stochastic differential equations with applications. Zbl 1433.93155 Hafayed, Dahbia; Chala, Adel 2020 Lebesgue structure of asymmetric Bernoulli convolution based on Jacobsthal-Lucas sequence. 
Zbl 1447.60074 Pratsiovytyi, Mykola; Makarchuk, Oleg; Karvatsky, Dmytro 2020 VICTORIA transform, RESPECT and REFORM methods for the proof of the $$G$$-elliptic law under $$G$$-Lindeberg condition and twice stochastic condition for the variances and covariances of the entries of some random matrices. Zbl 1447.60020 Girko, Vyacheslav L. 2020 BSDEs with right upper-semicontinuous reflecting obstacle and stochastic Lipschitz coefficient. Zbl 1412.60096 Marzougue, Mohamed; El Otmani, Mohamed 2019 Existence theory of fractional coupled differential equations via $$\Psi$$-Hilfer fractional derivative. Zbl 1442.34014 Harikrishnan, Sugumaran; Shah, Kamal; Kanagarajan, Kuppusamy 2019 Random Schrödinger operators with a background potential. Zbl 1439.35146 Asatryan, Hayk; Kirsch, Werner 2019 Continuous distributions whose functions preserve tails of an $$A$$-continued fraction representation of numbers. Zbl 1442.11114 Pratsiovytyi, Mykola; Chuikov, Artem 2019 RAP-method (random perturbation method) for finding $$S$$-minimax control vectors and parameter estimates for some linear systems with random coefficients. Zbl 1433.65059 Vladimirova, A. I.; Girko, Vyacheslav L.; Shevchuk, L. D. 2019 Wegner estimate for discrete Schrödinger operators with Gaussian random potentials. Zbl 1416.82019 Tautenhahn, Martin 2019 A general maximum principle for mean-field forward-backward doubly stochastic differential equations with jumps processes. Zbl 1414.93202 Hafayed, Dahbia; Chala, Adel 2019 Inverting weak random operators. Zbl 1412.60136 Gutierrez-Pavón, Jonathan; Pacheco, Carlos G. 2019 The limit $$G$$-law for the solutions of systems of linear algebraic equations with independent random coefficients under the $$G$$-Lindeberg condition. Zbl 1423.15003 Girko, Vyacheslav L. 2019 Sampling distributions of skew normal populations associated with closed skew normal distributions. Zbl 1427.62045 Zhu, Xiaonan; Li, Baokun; Wang, Tonghui; Gupta, Arjun K. 2019 On the limiting spectral density of random matrices filled with stochastic processes. Zbl 1447.60026 Löwe, Matthias; Schubert, Kristina 2019 From the first rigorous proof of the circular law in 1984 to the circular law for block random matrices under the generalized Lindeberg condition. Zbl 1388.15032 Girko, Vyacheslav L. 2018 A multi-class extension of the mean field Bolker-Pacala population model. Zbl 1397.92560 Bessonov, Mariya; Molchanov, Stanislav; Whitmeyer, Joseph 2018 The method of perpendiculars of finding estimates from below for minimal singular eigenvalues of random matrices. Zbl 1390.15029 Girko, Vyacheslav L. 2018 The inverse tangent law for the solutions of systems of linear algebraic equations with independent random coefficients is proven under Linderberg’s condition. Zbl 1393.15005 Girko, Vyacheslav L.; Shevchuk, Larissa D. 2018 Probabilistic $$p$$-cyclic contractions using different types of $$t$$-norms. Zbl 1486.54054 Choudhury, Binayak S.; Bhandari, Samir Kumar; Saha, Parbati 2018 Some existence results and stability concepts for partial fractional random integral equations with multiple delay. Zbl 1384.34006 Abbas, Saïd; Benchohra, Mouffak; Darwish, Mohamed Abdalla 2018 Fractional anticipated BSDEs with stochastic Lipschitz coefficients. Zbl 1401.60105 Sow, Ahmadou Bamba; Diouf, Bassirou Kor 2018 Backward doubly SDEs with continuous and stochastic linear growth coefficients. Zbl 1401.60116 Owo, Jean Marc 2018 The stationary regions for the parameter space of unilateral second-order spatial AR model. 
Zbl 1401.62187 Mojiri, A.; Waghei, Y.; Nili Sani, H. R.; Mohtashami Borzadaran, G. R. 2018 Distribution of values of classic singular Cantor function of random argument. Zbl 1440.11147 Pratsiovytyi, Mykola; Lysenko, Iryna; Voitovska, Oksana 2018 Interaction of particles governed by generalized integrated telegraph processes. Zbl 1404.60153 Pogorui, A. A.; Rodríguez-Dagnino, R. M. 2018 Some deterministic and random fixed point theorems on a graph. Zbl 07000597 Nieto, Juan J.; Ouahab, Abdelghani; Rodríguez-López, Rosana 2018 Stochastic non-resistive magnetohydrodynamic system with Lévy noise. Zbl 1372.76030 Manna, Utpal; Mohan, Manil T.; Sritharan, Sivaguru S. 2017 Parametric estimation for linear stochastic differential equations driven by sub-fractional Brownian motion. Zbl 06815262 Prakasa Rao, B. L. S. 2017 Itô formula for mild solutions of SPDEs with Gaussian and non-Gaussian noise and applications to stability properties. Zbl 1370.60101 Albeverio, Sergio; Gawarecki, Leszek; Mandrekar, Vidyadhar; Rüdiger, Barbara; Sarkar, Barun 2017 Smoothness of Malliavin derivatives and dissipativity of solutions to two-dimensional micropolar fluid system. Zbl 1375.35412 Yamazaki, Kazuo 2017 Wave equation with a coloured stable noise. Zbl 1386.60224 Pryhara, Larysa; Shevchenko, Georgiy 2017 New exact solutions for the Wick-type stochastic Zakharov-Kuznetsov equation for modelling waves on shallow water surfaces. Zbl 1365.60061 Saha Ray, S.; Singh, S. 2017 On the stability of solutions to stochastic 2D $$g$$-Navier-Stokes equations with finite delays. Zbl 1379.35237 Cung The Anh; Nguyen Van Thanh; Nguyen Viet Tuan 2017 Goodness-of-fit tests for random sequences incorporating several components. Zbl 1360.62440 Ianevych, Tetiana O.; Kozachenko, Yuriy V.; Troshki, Viktor B. 2017 Deterministic and stochastic stability of an SIRS epidemic model with a saturated incidence rate. Zbl 1358.92093 N’zi, Modeste; Tano, Jacques 2017 $$\gamma$$-product of white noise space and applications. Zbl 1391.46049 Horrigue, Samah 2017 Random fixed point theorem in generalized Banach space and applications. Zbl 1380.47043 Sinacer, Moulay Larbi; Nieto, Juan Jose; Ouahab, Abdelghani 2016 Exponential stability for stochastic neutral functional differential equations driven by Rosenblatt process with delay and Poisson jumps. Zbl 1338.60159 Lakhel, El Hassan 2016 Numerical study of stochastic Volterra-Fredholm integral equations by using second kind Chebyshev wavelets. Zbl 1338.60173 2016 Berry-Esseen bounds and almost sure CLT for the quadratic variation of the bifractional Brownian motion. Zbl 1335.60017 Aazizi, Soufiane; Es-Sebaiy, Khalifa 2016 Quadratic forms of refined skew normal models based on stochastic representation. Zbl 1375.60051 Tian, Weizhong; Wang, Tonghui 2016 On the functional Hodrick-Prescott filter with non-compact operators. Zbl 1332.62114 Djehiche, Boualem; Hilbert, Astrid; Nassar, Hiba 2016 Canonical equations $$K_{62}$$, $$K_{63}$$, $$K_{64}$$ and $$K_{65}$$ for random non-Hermitian matrices $$A+B(U+\gamma H)C$$, the upturned stools law, the upturned stool without seat law and doughnut law density. Zbl 1334.15095 Girko, Vyacheslav L. 2016 Global analysis of a deterministic and stochastic nonlinear SIRS epidemic model with saturated incidence rate. Zbl 1360.92112 N’zi, Modeste; Kanga, Gérard 2016 Time varying axially symmetric vector random fields on the sphere. Zbl 1353.60048 Ma, Chunsheng 2016 Stability of fractional neutral stochastic partial integro-differential equations.
Zbl 1351.60076 Xu, Liping; Li, Zhi 2016 Tightness in Besov-Orlicz spaces: characterizations and applications. Zbl 1355.46029 Aissa, Sghir 2016 One-dimensional stochastic equations in layered media with semi-permeable barriers. Zbl 1347.60071 Makhno, Sergei Y. 2016 30 years of general statistical analysis and canonical equation $$K_{60}$$ for Hermitian matrices $$(A+BUC)(A+BUC)^*$$, where $$U$$ is a random unitary matrix. Zbl 1329.15069 Girko, Vyacheslav L. 2015 On the law of the solution to a stochastic heat equation with fractional noise in time. Zbl 1327.60123 Bourguin, Solesne; Tudor, Ciprian A. 2015 A decomposition approach for the discrete-time approximation of FBSDEs with a jump. Zbl 1318.65005 Kharroubi, Idris; Lim, Thomas 2015 Parametrix construction for certain Lévy-type processes. Zbl 1321.60150 Knopova, Victoria; Kulik, Alexei 2015 Large deviations for random evolutions with independent increments in the scheme of Lévy approximation with split and double merging. Zbl 1321.60052 Samoilenko, Igor V. 2015 Asymptotic behavior of the Bernoulli type Galton-Watson branching process with immigration. Zbl 1327.60169 Uchimura, Yoshinori; Saitô, Kimiaki 2015 Optimal control for stochastic differential delay equations with Poisson jumps and applications. Zbl 1307.93465 Shi, Jingtao 2015 New inequalities of Gronwall type for the stochastic differential equations. Zbl 1327.60116 Boudref, Mohamed-Ahmed; Berboucha, Ahmed 2015 On distribution of the norm of deviation of a sub-Gaussian random process in Orlicz spaces. Zbl 1327.60092 Yamnenko, Rostyslav E. 2015 On the Lie structure of zero row sum and related matrices. Zbl 1329.15067 Boukas, Andreas; Feinsilver, Philip; Fellouris, Anargyros 2015 Canonical equation $$K_{61}$$ for random non-Hermitian matrices $$A+B(U+\gamma H)C$$. Zbl 1329.15070 Girko, Vyacheslav L. 2015 Evaluation of a generalized Selberg integral. Zbl 1309.33007 Nagar, Daya K.; Naranjo-Ríos, Sandra Milena; Gupta, Arjun K. 2015 Probabilistic representations of matrix variate skew normal models. Zbl 1310.62065 Ning, Wei 2015 On the rigorous ergodic theorem for a class of nonlinear Klein-Gordon wave propagations. Zbl 1307.35154 Botelho, Luiz C. L. 2015 The distribution of random motion with Erlang-3 sojourn times. Zbl 1328.60228 Kolomiiets, Tamila; Pogorui, Anatoliy A.; Rodríguez-Dagnino, Ramón M. 2015 Fully coupled forward backward stochastic differential equations driven by Lévy processes and application to differential games. Zbl 1300.60073 Baghery, Fouzia; Khelfallah, Nabil; Mezerdi, Brahim; Turpin, Isabelle 2014 Quantum white noise convolution operators with application to differential equations. Zbl 1329.60241 Barhoumi, Abdessatar; Lanconelli, Alberto; Rguigui, Hafedh 2014 Existence and exponential stability for some stochastic neutral partial functional integrodifferential equations. Zbl 1293.35371 Diop, Mamadou Abdou; Ezzinbi, Khalil; Lo, Modou 2014 Mixed sub-fractional Brownian motion. Zbl 1296.60095 Zili, Mounir 2014 Gegenbauer random fields. Zbl 1354.60041 Espejo, Rosa M.; Leonenko, Nikolai N.; Ruiz-Medina, María D. 2014 Existence and stability of square-mean almost periodic solutions to a spatially extended neural network with impulsive noise. Zbl 1330.37072 Bonaccorsi, Stefano; Ziglio, Giacomo 2014 The Cauchy problem for the heat equation with a random right side. Zbl 1284.35205 Kozachenko, Yuriy V.; Slyvka-Tylyshchak, Anna I. 2014 Optimal control problems for linear backward doubly stochastic differential equations. 
Zbl 1320.49010 Gherbal, Boulakhras 2014 Stochastic controls of relaxed-singular problems. Zbl 1292.93146 Chala, Adel; Bahlali, Seid 2014 On fractional derivatives of the local time of a symmetric stable process as a doubly indexed process. Zbl 1297.60055 Ait Ouahra, Mohamed; Kissami, Abdelghani; Ouahhabi, Hanae 2014 Large deviation for multivalued backward stochastic differential equations. Zbl 1295.60035 N’Zi, Modeste; Dakaou, Ibrahim 2014 Intrinsic small time estimates for distribution densities of Lévy processes. Zbl 1291.60095 Knopova, Victoria; Kulik, Alexei 2013 $$L^p$$-solutions of the stochastic transport equation. Zbl 1270.60070 Catuogno, Pedro; Olivera, Christian 2013 Two-dimensional stochastic Navier-Stokes equations with fractional Brownian noise. Zbl 1271.60072 Fang, Liqun; Sundar, P.; Viens, Frederi G. 2013 Representations of multidimensional linear process bridges. Zbl 1391.60179 Barczy, Mátyás; Kern, Peter 2013 The generalized elliptic law. Zbl 1276.15018 Girko, Vyacheslav 2013 A note on Feynman-Kac path integral representations for scalar wave motions. Zbl 1276.35113 Botelho, Luiz C. L. 2013 Topological and metric properties of distributions of random variables represented by the alternating Lüroth series with independent elements. Zbl 1362.60005 Pratsiovytyi, Mykola; Khvorostina, Yuriy 2013 Parameter estimation of one-dimensional diffusion process by minimum Hellinger distance method. Zbl 1281.62177 N’drin, Julien Apala; Hili, Ouagnina 2013 The circular law. Thirty years later. Zbl 1278.15041 Girko, Vyacheslav L. 2012 Backward parabolic Ito equations and the second fundamental inequality. Zbl 1273.60075 Dokuchaev, Nikolai 2012 Reflected backward doubly stochastic differential equations with discontinuous generator. Zbl 1284.60113 Aman, Auguste; Owo, Jean Marc 2012 High moments of large Wigner random matrices and asymptotic properties of the spectral norm. Zbl 1270.15025 Khorunzhiy, Oleksiy 2012 The relaxed optimal control problem of forward-backward stochastic doubly systems with Poisson jumps and its application to LQ problem. Zbl 1263.93235 2012 Matrix variate extended skew normal distributions. Zbl 1349.62183 Ning, Wei; Gupta, Arjun K. 2012 Random fixed points of completely random operators. Zbl 1270.60072 Thang, Dang Hung; Anh, Pham The 2012 The generalized circular law. Zbl 1278.15040 Girko, Vyacheslav 2012 The elliptic law. Thirty years later. Zbl 1298.60015 Girko, Vyacheslav 2012 Random fixed point theorems for a finite family of asymptotically quasi-nonexpansive in the intermediate sense random operators. Zbl 1270.47051 Saluja, Gurucharan Singh; Nashine, Hemant Kumar 2012 Almost sure exponential stability of the Euler-Maruyama approximations for stochastic functional differential equations. Zbl 1276.60063 Wu, Fuke; Mao, Xuerong; Kloeden, Peter E. 2011 Strong uniform consistency of a nonparametric estimator of a conditional quantile for censored dependent data and functional regressors. Zbl 1348.62169 Horrigue, Walid; Ould Saïd, Elias 2011 ...and 236 more Documents all top 5 ### Cited by 1,299 Authors 24 Gupta, Arjun Kumar 22 Leonenko, Nikolai N. 20 Nadarajah, Saralees 19 Girko, Vyacheslav Leonidovich 18 Kozachenko, Yuriĭ Vasyl’ovych 14 Kondrat’yev, Yuriĭ Grygorovych 14 Prakasa Rao, B. L. S. 13 Veselić, Ivan 12 Albeverio, Sergio A. 12 Rodríguez-Dagnino, Ramón Martín 11 Pogorui, Anatoliy A. 11 Russo, Francesco 11 Tudor, Ciprian A.
10 Accardi, Luigi 10 Bahlali, Khaled 10 Kumam, Poom 9 El Otmani, Mohamed 9 Finkelshteĭn, Dmitriĭ Leonidovich 9 Klopp, Frédéric 9 Kulik, Alexey M. 9 Mezerdi, Brahim 9 Orsingher, Enzo 9 Sakhno, Lyudmyla Mykhaĭlivna 8 Chala, Adel 8 Daletskii, Alexei 8 Marzougue, Mohamed 8 Yuan, Chenggui 7 Bao, Jianhai 7 Evstigneev, Igor V. 7 Fujikoshi, Yasunori 7 Kutoviy, Oleksandr V. 7 Ratanov, Nikita 7 Rguigui, Hafedh 7 Rozora, Iryna V. 7 Scheutzow, Michael K. R. 6 Anh, Vo V. 6 De Gregorio, Alessandro 6 Djehiche, Boualem 6 Dokuchaev, Nikolai G. 6 He, Yukun 6 Martinucci, Barbara 6 Molchanov, Stanislav Alekseevich 6 Olenko, Andriy Yakovych 6 Olivera, Christian 6 Ouahab, Abdelghani 6 Pratsiovytyi, Mykola 6 Saha Ray, Santanu 6 Tikhomirov, Alexander Nikolaevich 6 Viens, Frederi G. 6 Withers, Christopher Stroude 5 Anguraj, Annamalai 5 Chafaï, Djalil 5 Cranston, Michael Craig 5 Dshalalow, Jewgeni H. 5 Götze, Friedrich 5 Grzywny, Tomasz 5 Hausenblas, Erika 5 Khelfallah, Nabil 5 Knopova, Victoria Pavlovna 5 Lytvynov, Eugene W. 5 Manna, Utpal 5 Mohan, Manil Thankamani 5 Mukherjee, Debopriya 5 Nieto Roig, Juan Jose 5 O’Rourke, Sean D. 5 Privault, Nicolas 5 Ravikumar, Kasinathan 5 Shevchenko, Georgiy M. 5 Shutoh, Nobumichi 5 Tao, Terence 5 Tindel, Samy 5 Vu, Van H. 5 Wang, Feng-Yu 5 Wang, Tonghui 5 Wang, Xiangrong 5 Yamnenko, Rostyslav E. 5 Zhang, Xicheng 4 Al-Hussein, Abdulrahman 4 Barhoumi, Abdessatar 4 Bishwal, Jaya P. N. 4 Bodnar, Taras 4 Bordenave, Charles 4 Borisov, Denis Ivanovich 4 Chen, Zhen-Qing 4 Di Crescenzo, Antonio 4 El-Bassiouny, Ahmed H. 4 Erdős, László 4 Hafayed, Mokhtar 4 Huang, Hong 4 Ivanov, Aleksandr Vladimirovich 4 Kadankov, Viktor F. 4 Kadankova, Tat’yana V. 4 Kirsch, Werner 4 Knowles, Antti 4 Kotz, Samuel 4 Kumam, Wiyada 4 Li, Zhi 4 Mao, Xuerong 4 Mishura, Yuliya Stepanivna 4 Owo, Jean-Marc ...and 1,199 more Authors all top 5 ### Cited in 286 Journals 73 Random Operators and Stochastic Equations 41 Stochastic Processes and their Applications 30 Statistics & Probability Letters 30 Theory of Probability and Mathematical Statistics 29 Stochastic Analysis and Applications 27 Communications in Statistics. Theory and Methods 23 Journal of Theoretical Probability 21 Journal of Statistical Physics 19 Journal of Mathematical Analysis and Applications 19 Journal of Multivariate Analysis 19 Probability Theory and Related Fields 17 The Annals of Probability 16 Journal of Functional Analysis 15 Journal of Statistical Planning and Inference 15 Stochastics and Dynamics 14 Stochastics 13 Communications in Mathematical Physics 13 Applied Mathematics and Computation 12 Journal of Differential Equations 12 Statistics 12 Modern Stochastics. Theory and Applications 10 The Annals of Applied Probability 9 Bernoulli 8 Journal of Mathematical Physics 8 Annales de l’Institut Henri Poincaré. Probabilités et Statistiques 8 Journal of Mathematical Sciences (New York) 8 Monte Carlo Methods and Applications 8 Discrete and Continuous Dynamical Systems 8 Infinite Dimensional Analysis, Quantum Probability and Related Topics 8 Methodology and Computing in Applied Probability 7 Lithuanian Mathematical Journal 7 Ukrainian Mathematical Journal 7 Journal of Applied Probability 7 Journal of Computational and Applied Mathematics 7 Transactions of the American Mathematical Society 7 Cybernetics and Systems Analysis 7 Electronic Journal of Probability 7 Brazilian Journal of Probability and Statistics 6 Applied Mathematics and Optimization 6 Acta Applicandae Mathematicae 6 Communications in Statistics. 
Simulation and Computation 6 Potential Analysis 5 Reports on Mathematical Physics 5 Chaos, Solitons and Fractals 5 Abstract and Applied Analysis 5 Acta Mathematica Sinica. English Series 5 Stochastic Models 5 Advances in Difference Equations 5 Afrika Matematika 5 Stochastic and Partial Differential Equations. Analysis and Computations 4 Applicable Analysis 4 Computers & Mathematics with Applications 4 Letters in Mathematical Physics 4 Physica A 4 Advances in Mathematics 4 Duke Mathematical Journal 4 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 4 Physica D 4 Sequential Analysis 4 Statistical Papers 4 NoDEA. Nonlinear Differential Equations and Applications 4 Mathematical Problems in Engineering 4 Mathematical Physics, Analysis and Geometry 4 Annales Henri Poincaré 4 Dynamical Systems 4 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 4 Fixed Point Theory and Applications 4 ALEA. Latin American Journal of Probability and Mathematical Statistics 4 International Journal of Stochastic Analysis 4 Random Matrices: Theory and Applications 4 Communications in Mathematics and Statistics 4 Carpathian Mathematical Publications 3 Advances in Applied Probability 3 International Journal of Control 3 Reviews in Mathematical Physics 3 Journal of Geometry and Physics 3 Annals of the Institute of Statistical Mathematics 3 Proceedings of the American Mathematical Society 3 Applied Mathematics Letters 3 Mathematical and Computer Modelling 3 Numerical Algorithms 3 Journal of Statistical Computation and Simulation 3 Linear Algebra and its Applications 3 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 3 Annales Mathématiques Blaise Pascal 3 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 3 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 3 Matematychni Studiï 3 International Journal of Theoretical and Applied Finance 3 Statistical Inference for Stochastic Processes 3 Journal of Systems Science and Complexity 3 Central European Journal of Mathematics 3 Foundations of Physics 3 Journal of Statistical Theory and Practice 3 Nonlinear Analysis. Hybrid Systems 3 Sankhyā. 
Series A 3 Evolution Equations and Control Theory 3 AIMS Mathematics 2 International Journal of Theoretical Physics 2 Mathematical Notes ...and 186 more Journals all top 5 ### Cited in 50 Fields 780 Probability theory and stochastic processes (60-XX) 222 Statistics (62-XX) 167 Partial differential equations (35-XX) 103 Operator theory (47-XX) 90 Linear and multilinear algebra; matrix theory (15-XX) 87 Statistical mechanics, structure of matter (82-XX) 79 Systems theory; control (93-XX) 64 Numerical analysis (65-XX) 58 Ordinary differential equations (34-XX) 48 Calculus of variations and optimal control; optimization (49-XX) 41 Quantum theory (81-XX) 40 Dynamical systems and ergodic theory (37-XX) 40 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 32 Fluid mechanics (76-XX) 31 Functional analysis (46-XX) 22 Real functions (26-XX) 21 Global analysis, analysis on manifolds (58-XX) 19 Special functions (33-XX) 18 General topology (54-XX) 16 Measure and integration (28-XX) 16 Integral equations (45-XX) 13 Biology and other natural sciences (92-XX) 11 Harmonic analysis on Euclidean spaces (42-XX) 11 Operations research, mathematical programming (90-XX) 9 Number theory (11-XX) 9 Approximations and expansions (41-XX) 8 Combinatorics (05-XX) 8 Information and communication theory, circuits (94-XX) 7 Potential theory (31-XX) 6 Computer science (68-XX) 4 Functions of a complex variable (30-XX) 4 Mechanics of particles and systems (70-XX) 4 Relativity and gravitational theory (83-XX) 3 History and biography (01-XX) 3 Differential geometry (53-XX) 3 Geophysics (86-XX) 2 Topological groups, Lie groups (22-XX) 2 Several complex variables and analytic spaces (32-XX) 2 Difference and functional equations (39-XX) 2 Sequences, series, summability (40-XX) 2 Integral transforms, operational calculus (44-XX) 2 Mechanics of deformable solids (74-XX) 2 Optics, electromagnetic theory (78-XX) 2 Astronomy and astrophysics (85-XX) 1 General and overarching topics; collections (00-XX) 1 Algebraic geometry (14-XX) 1 Associative rings and algebras (16-XX) 1 Group theory and generalizations (20-XX) 1 Abstract harmonic analysis (43-XX) 1 Classical thermodynamics, heat transfer (80-XX)
2022-10-05 06:40:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.530569314956665, "perplexity": 5431.701915209705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00727.warc.gz"}
https://homework.cpm.org/category/CC/textbook/cc3/chapter/8/lesson/8.2.4/problem/8-109
### Problem 8-109

8-109. Compute each product or quotient. Convert the final answer to scientific notation if necessary.

a. $\left(3 × 10^{2}\right)\left(2 × 10^{3}\right)$

   You can also look at this problem as: $(3 \cdot 2)\left(10^{2} \cdot 10^{3}\right) = 6 \cdot 10^{5}$

   Is this in scientific notation?

b. $\left(2.75 × 10^{−2}\right)\left(2.5 × 10^{8}\right)$

   See part (a).

c. $\frac { 8 \times 10 ^ { 12 } } { 4 \times 10 ^ { 7 } }$

   You can also look at this problem as: $\left(\frac{8}{4}\right)\left(\frac{10^{12}}{10^{7}}\right) = 2 \cdot 10^{5}$

   Is this in scientific notation?
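For part (b), the same pattern as part (a) gives the following worked step (this computation is not part of the original hint, but the arithmetic follows directly):

$\left(2.75 × 10^{−2}\right)\left(2.5 × 10^{8}\right) = (2.75 \cdot 2.5)\left(10^{-2} \cdot 10^{8}\right) = 6.875 \cdot 10^{6}$

Since $1 \leq 6.875 < 10$, this result is already in scientific notation.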
2020-05-25 19:27:37
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.694218099117279, "perplexity": 9640.178666997417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389355.2/warc/CC-MAIN-20200525192537-20200525222537-00186.warc.gz"}
https://www.springerprofessional.de/unbearable-cost/7204450
About this book This work contains James K. Galbraith’s most influential recent writings on current affairs along with new commentary, and explores both the descent to disaster in Iraq and the ongoing transformation of the American economy under the steerage of Alan Greenspan. Table of contents Introduction These columns span a minor writing career: from the stolen election of 2000, through September 11, 2001 and onward to the war in Iraq. They end with the consolidation of Republican power in the flawed — not to say rigged — election of 2004. Thus they chronicle the decline of American democracy in the first term of George W. Bush. James K. Galbraith Corporate Democracy; Civic Disrespect With the events of late in the year 2000, the United States left behind constitutional republicanism, and turned to a different form of government. It is not, however, a new form. It is, rather, a transplant, highly familiar from a different arena of advanced capitalism. James K. Galbraith Lies, Dumb Lies, and Sample Statistics The press has welcomed George W. Bush and well they might. Bush has freed them, at last, from the immense frustration of dealing with that compulsive liar, Bill Clinton. James K. Galbraith Defending Democrats … and Democracy Forgive me if I do not join the applause for Michael Moore and his they-all-do-it defense of George Bush, prominently excerpted on these pages on April 12. James K. Galbraith Tracking Down the Corporate Crooks President George W. Bush has reassured us that ‘From the antitrust laws of the 19th century to the S&L reforms of recent times, America has tackled financial problems when they appeared.’ But the Savings & Loan reforms came seven years and 150 billion taxpayer dollars late. Nor did that problem merely ‘appear’. It was created, by a deregulation bill in 1982, overseen at that time by the Vice President, the elder George Bush. Bill Black, James Galbraith The Realities of Resistance So George W. Bush has won a national election. He did it by an astounding mixture of war fever, money, and media manipulation. But that is beside the point. From now on, his presidency will carry weight that for many of us it did not carry before. James K. Galbraith Why Bush Likes a Bad Economy Almost nine million people are unemployed. Many millions more are underemployed, and most of all, underpaid. Millions more lack health insurance. States are cutting basic public services everywhere, while the taxes — property and sales, mainly — to pay for those that remain are rising. And the gates of opportunity — for instance, to attend college — are closing on millions more. James K. Galbraith The No-Jobs President The transcendent economic issue isn’t the growth rate. It isn’t the stock market. It isn’t the budget deficit. And it isn’t even the rate of unemployment. It’s the number of people in this country who have decent work. James K. Galbraith Bush’s Hail Mary George W. Bush has held office for 36 months. How is he doing politically, and what can we learn from a detached look at the record? James K. Galbraith The Plutocrats Go Wild Next year’s economic difficulties are already on the horizon. Growth is slowing, as the housing market cools and consumers rein in their spending. Inflation is rising a bit, driven mainly by oil prices, health care costs, and corporate price increases fueling a spectacular recent profit surge. Job creation is weak and wages are flat.
This is the New Stagflation — an unpleasant reminder of the economic cost of unilateral war. James K. Galbraith Dissecting Cheney The final verdict of history may dismiss George W. Bush as a front man who was not quite up to his job. But nothing like that will be said of Dick Cheney. Cheney is undeniably intelligent, powerful, and shrewd — a force to be reckoned with, even though operating mainly in the shadows. James K. Galbraith Waiting to Vote COLUMBUS, Ohio — The real scandal of this election became clear to me at 6.30 p.m. on election day as I drove a young African-American voter, a charming business student, seven months pregnant, to her polling place at Finland Elementary School in south Columbus. We arrived in a squalling rain to find voters lined up outside for about a hundred yards. Later the line moved indoors. We were told that the wait had averaged two hours for the entire day. By the time the doors closed at 7.30 p.m., it was considerably longer. James K. Galbraith Abolish Election Day The Internet is alive with furious messages from my frustrated friends, fanning the flames of Florida 2000. Many have zeroed in on the discrepancy between the exit polls and the final results. How, they demand to know, could the leaks that so strongly favored John Kerry early in the evening have been so far wrong? James K. Galbraith Democracy Inaction The election was stolen. That’s not in doubt. Secretary of State Colin Powell admitted it. The National Democratic Institute and the International Republican Institute both admitted it. Senator Richard Lugar of Indiana — a Republican — was emphatic: there had been ‘a concerted and forceful program of Election Day fraud and abuse’; he ‘had heard’ of employers telling their workers how to vote; yet he had also seen the fire of the resisting young, ‘not prepared to be intimidated’. James K. Galbraith The Floodgates Have Opened Hurricane Katrina and the death of New Orleans have changed everything, exposing the rot in government and the failures of the free-market worldview that has dominated our politics and economic policy for more than 30 years. Once again, the country must take stock of a terrible failure; once again we must change direction. James K. Galbraith, Michael D. Intriligator National Defense A few observations on the events of September 11 and their aftermath. First, the horror did not make the American people lose their minds. James K. Galbraith The Future Oil War Let me state in passing my view that UT’s President Faulkner should not have attacked my colleague Bob Jensen in the early days following September 11, for Jensen’s hot objections to the use of US military force in Afghanistan (and elsewhere). True, Faulkner’s words didn’t bother Jensen much, and no concrete steps were taken against him. But, given Faulkner’s position, he may have sent a signal to others on campus, less secure in their jobs than Bob is, and this is something that a university president should always avoid. James K. Galbraith The Cheney Doctrine Mr Cheney’s speech of August 26th provides us with the fullest statement we are likely to get in justification of an attack on Iraq. Whether it represents the final word in Bush Administration policy is anyone’s guess; press reports afterward suggested that the speech was neither fully cleared with the White House nor fact-checked with the CIA. Still, it seems unlikely that Mr Bush will be able to make a stronger case. Let us, therefore, work with what we’ve got, in the short time that apparently remains. James K.
Galbraith The Unbearable Costs of Empire Talk in Washington these days is of Rome. But George W. Bush is no Caesar, and France under Napoleon may be the better precedent. Like Bush, Napoleon came to power in a coup. Like Bush, he fought off a foreign threat, then took advantage to convert the Republic to Empire. Like Bush, he built an immense army. Like Bush, he could not resist the temptation to use it. But unlike Caesar’s, Napoleon’s imperial pretensions did not last. James K. Galbraith The Paramilitary Mind In 1976 at the height of the Irish troubles I called on Conor Cruise O’Brien, then serving as Minister of Posts in the Irish Republic; his offices were in the Dublin Post Office of 1916 fame. I was on my way to Belfast, for no very good reason. In our brief meeting, O’Brien reflected bleakly on the fragility of peace efforts. It was so easy, he said, to bomb the negotiating table. James K. Galbraith What Economic Price This War? Recently as we debated the war now underway in Iraq, seven Nobel laureates joined 150 other US economists (including myself) to call for careful consideration of the costs of war in Iraq. When economists talk about costs, what do we mean? First, we mean budget costs — for gasoline, equipment, and explosives — that begin at about $100 billion. This figure is based on an assumption that the war goes well. If the assumption is wrong, the numbers will go up fast. The history of warfare — from Europe in 1914 to Vietnam in the 1960s — is littered with gross underestimates of budget costs. James K. Galbraith Still Wrong: Why Liberals Should Keep Opposing the War In a recent column, TAP online Editor Richard Just and tompaine.com Executive Editor Nick Penniman prescribed ‘the only moral and practical option’ for liberals quavering over the war. It is, they wrote, ‘to begin immediately campaigning for a more ambitious, comprehensive and compassionate reconstruction of Iraq … while supporting the war effort that will lay the groundwork for such plans to be enacted’. James K. Galbraith Don’t Blame Rumsfeld, Blame Bush As the reality of this war sets in, the hunt for scapegoats is starting. Donald Rumsfeld finds himself described, by military and intelligence officers, as a ‘businessman’ whose ‘micromanagement’ has produced a ‘stalemate’, with the possibility of ‘a political and military disaster’. For a war only ten days old, the back-biting is astounding. Still, Rumsfeld is the wrong man to blame. James K. Galbraith The Iraqi Quagmire Sergio Vieira de Mello was the real thing. I met him in East Timor in 2001, at the US mission on the evening of July 4, 2001. He told my brother (his colleague in the transition cabinet) that he would not attend a dinner for the Australian foreign minister that night: ‘because I dislike him intensely’. Two days later I saw him again, as we joined the new East Timor self-defense forces for the last leg of a march to a new training ground. On that day, surrounded by guerrillas, their UN officers and the civilian staff, out on the road in the bright tropical sunshine, he was clearly having a good time. James K. Galbraith War and Economy Don’t Wear Well On both jobs and Iraq, the good news Bush tells us is contradicted by the bad news that we feel in our bones. James K. Galbraith How You Will Pay for the War Well, it may be that the laws of economics remain in force. And one of them says: war causes inflation. James K.
Galbraith The Economics of the Oil War Underlying the talk of weapons of mass destruction, democracy, human rights, and George W. Bush’s supposed quest for personal vengeance against Saddam Hussein (‘He tried to kill my Dad’), there has always lurked the suspicion that it was about oil. That maps of the Iraqi oil fields made their way to the Cheney energy task force back in 2001 doesn’t exactly ease this suspicion. Neither does Bush’s determination to stay in Iraq, now that vengeance is his — but the occupation is plainly failing to achieve any of its other objectives. James K. Galbraith Boom Times for War Inc. On September 21, 2001, the American Stock Exchange created the Amex Defense Index, a measure of the stock prices of fifteen corporations who together account for about 80 percent of procurement and research contracting by the Department of Defense. The index of course includes the five largest contractors: Lockheed Martin, Boeing, Raytheon, Northrop Grumman and General Dynamics. James K. Galbraith The Gambler’s Fallacy You can’t win and you can’t break even; you can’t get out of the game. James K. Galbraith Withdrawal Symptoms In November 2004, Lt General Ricardo Sanchez came to a luncheon at my professional home, the LBJ School of Public Affairs. I attended and asked some inconvenient questions. It was an inconsequential exchange, but two weeks later I received a surprising invitation. Would I fly to Germany in February and speak to the leadership of the Army V Corps about the operational conditions of Iraq? I have no military experience, and have never been to Iraq, while many in my audience — mostly generals and colonels — had spent over a year there. But of course I went. My unstated assignment was to say some inconvenient things, which may have otherwise gone unsaid. James K. Galbraith About Greenspan Frontmatter Back to the Cross of Gold One thing my old Yale classmate Jaime Serra Puche and I have in common, besides our names, is that we’re both entitled to be sore at Alan Greenspan. James K. Galbraith Greenspan’s Error The Fed’s reduction of one set of interest rates on Friday marked the beginning of the end of Alan Greenspan’s war on inflation. The war began in February 1994 with the Fed’s quarter point increase in the rates that banks charge one another for overnight loans and led to a doubling of those rates, to 6 percent from 3 percent. The financial markets expect further cuts. James K. Galbraith The Free Ride of Mr Greenspan President Clinton is coming up on the most important appointment of his second Presidential term. Sometime soon, the term of the Chairman of the Board of Governors of the Federal Reserve System will expire. And the President must choose whether to reappoint Alan Greenspan for four more years, or else to name a replacement. Most observers assume that Greenspan will get it, and a strange public quiet has settled over this issue. James K. Galbraith There’s Some Good News That’s Bad News Now why (I hear you ask) is good news for the economy bad news for the markets? Why, in particular, did the stock market drop a hundred points on March 9, the very day that unemployment was reported to have fallen to its lowest level in years? James K. Galbraith Greenspan’s Whim Why did Alan Greenspan raise interest rates? Because he could. Because two of the more independent members of his Board, the pro-growth conservative Lawrence Lindsey and the moderate Janet Yellen, have departed.
Because he has been under pressure from banks and bond traders for two years; why not throw them a bone? Because he knows the Clinton Administration will say and do nothing in protest. And, especially, because he can get away with it. James K. Galbraith Greenspan’s Glasnost There they were, the high priests of the American Economics Association, a score of old men lined up at the head table like the Politburo atop Lenin’s tomb, with the same bad haircuts, black horn-rims and blank expressions. Speaking before this geriatric welcoming committee last week, Alan Greenspan managed to imitate the early Mikhail Gorbachev with surprising flair. I listened and was amazed. James K. Galbraith The Butterfly Effect Small actions can have large consequences. The mathematics of chaos teaches that a butterfly, flapping its wings in Brazil, can set off a chain of events leading to a hurricane at Cape Fear. They call this the ‘butterfly effect’. James K. Galbraith, George Purcell The Credit, Where Credit is Due Phil Gramm said it precisely, ‘If you were forced to narrow down the credit for the golden age that we find ourselves living in, I think there are many people who would be due credit, and many more who would claim credit.’ James K. Galbraith Stop the Sabotage Coming from the Fed The scene for November is nearly set. South Carolina did for George W. Bush what it did for Bob Dole: it sent him onwards, born again at Bob Jones University and wrapped in the Confederate flag. But the New Hampshire verdict, repeated at Michigan, will remain the authentic one. Like Dole, Bush is a weak candidate who will have to be rescued again and again from a popular insurgent by party leaders and the God squad. James K. Galbraith The Charge of the Fed Brigade Where, oh where, is Alfred Lord Tennyson when we need him now? NASDAQ to the right of them OPEC to the left of them. Volleyed and thundered … James K. Galbraith We Cannot Have Discipline So We Must Have Pain I have at least two friends — divorced women in their fifties, intellectuals, with literary and cultural interests — who put their nest eggs into NASDAQ stocks earlier this year. Ouch! Neither of them was rich, nor ever destined to be. Now they will be somewhat less so. James K. Galbraith The Swiss Guard Have you noticed all the establishment liberals defending the Fed? They have enlisted as Greenspan’s Swiss Guard, brass helmets shining, pikes at the ready. To bash those who would bash the Fed — this has become their holy task. James K. Galbraith Watching Greenspan Grow For those seeking a personal portrait of America’s maximum economic policymaker, Justin Martin’s biography will be hard to improve on. Informed and sympathetic, Martin traces the webs of Alan Greenspan’s personal and professional lives: his early days in jazz and Objectivism, his roots as an economist in the Conference Board and old-style business cycle studies of Arthur Burns, his ties to five Presidents and his liaisons and enduring friendships with interesting, intelligent, attractive and loyal women. James K. Galbraith The Man Who Stayed Too Long Alan Greenspan plays the role of economist, on TV especially, better than any public official who ever lived. But that doesn’t mean he is one. James K. Galbraith Bernankenstein’s Monster From Alan Greenspan to Benjamin Bernanke. The transition at the Federal Reserve is from insider to academic, from man of action to man of ideas. Greenspan’s PhD was awarded by New York University in 1977 as a decoration; he didn’t do any work for it. 
Bernanke, on the other hand, has stellar credentials: summa cum laude from Harvard, PhD from MIT, professorship at Princeton. But apart from a few months at the Council of Economic Advisers, Bernanke has never run anything larger than an economics department. Greenspan ran the Fed for 19 years without ever losing a vote. James K. Galbraith About the Economy Frontmatter I Don’t Want To Talk About It ‘Americans Discuss Social Security’. That’s the name of a forum being held around the country and in Austin on April 18. Co-sponsored by the Pew Charitable Trusts and locally by my own LBJ School of Public Affairs, the event is part of a national campaign to air options for Social Security ‘reform’ before local opinion-makers and media markets. If you live and breathe in this country, you’ve seen the advertising. I won’t be in town, so here’s what I have to say. James K. Galbraith The Sorcerer’s Apprentices Once upon a time two professors — we can call them Scholes and Merton — got bored with their jobs. They were economists, theorists of finance. Brilliant and accomplished, they were celebrated, even renowned; each would win a Nobel Prize. But it wasn’t enough. Scholes and Merton wanted to be very, very rich. James K. Galbraith Is the New Economy Rewriting the Rules? Mr President, the question before this panel is, in effect, ‘Can Full Employment Without Inflation Endure?’ According to the ‘old rules’ and those who believe them, the expansion will not last. Growth is too rapid, unemployment too low, stocks too high. There are deep and mysterious reasons why wage inflation is sure to explode, someday soon. Even more mysteriously, some have even suggested that the rate of productivity growth is too fast. James K. Galbraith So Long, Wealth Effect Friday’s crack in the NASDAQ, a 10 percent drop on top of 20 and no end in sight, should put an end to talk about an inflationary wealth effect. No wealth. No effect. James K. Galbraith Incurable Optimists In the status hierarchy of my profession, the Wall Street economist holds a strangely prominent role. Typically, though not always, he lacks academic standing, analytical achievement, significant publication. Research is foreign to him; independent thought unknown. His job is mainly to get his name into the papers. At this he works exceptionally hard. And the financial pages, which in their turn exist mainly to celebrate the great financial houses, oblige. Hence the Wall Street economist has the luxury of seeing his thoughts in print, without the burden of actually … well, of actually thinking. James K. Galbraith Enron and the Next Revolution Nowadays there are three classes in America: working people at the bottom, professionals above them, a tiny elite at the top. Democrats represent the professionals, Republicans represent the CEOs. No one, much, speaks for working people, who must rely on the occasional sympathy of leading Democrats for most of the little they get. James K. Galbraith Share Revenue, Save Jobs To the economy, September 11 now appears as a transient shock. Sales, confidence and the stock market plunged, but then returned. The dead cat bounced; optimists declared recovery to be near. The so-called stimulus package died. And so we now face a classic test of the predominant economics. Either recovery will happen, or it won’t. James K. Galbraith Hangover in America? It’s the new morning in America, you might say. The President has declared that the 1990s were ‘a binge’, from which we are suffering ‘a hangover’. James K. 
Galbraith The Big Fix: the Case for Public Spending The economy is in trouble. Investment, far below what it was two years ago, shows little sign of revival. Consumer spending, having held up remarkably during the same time, is now more likely to tail off than to accelerate. And while the Federal Government could soon be spending an extra one or two hundred billion dollars on war, otherwise its spending is also in decline. James K. Galbraith Bush’s Tax Package and Economic Reality This President gets good press. ‘Bush Offers a Cure’ was the banner on the Chicago Sun-Times on January 8 — to take just one example. Commentators everywhere are calling his ‘bold’ and ‘daring’ tax-cut proposals a ‘stimulus package’ and a ‘recovery program’. Even the big number plays well. Six hundred and seventy-four billion! Imagine that. James K. Galbraith Cashing Out This is a book written entirely in the first person singular voice of Robert Rubin, but to what extent by him remains unclear. Jacob Weisberg, editor of Slate, is the co-author. But he is not a player in the story; nor is he referenced in the dedication, author’s note, or except fleetingly in the acknowledgements. James K. Galbraith Bankers Versus Base There may come a day, in January 2005, when the Democrats will come back to power. Can we perhaps divert ourselves from the campaign long enough to ask, what then? The Democrats have a problem. Their base wants jobs and security. Their financial leadership wants a return to the Clinton formula of deficit reduction, leaving low interest rates to generate economic growth and jobs. John Kerry’s emerging economic platform pays heavy homage to this formula, but it is unlikely to work out. A return to Bill Clinton’s policies will not reproduce Bill Clinton’s results. There are at least six reasons why this is so. James K. Galbraith Keeping It Real for the Voters AUSTIN, Texas — Surprised though you may be to hear this, the Presidential campaign is just getting started. James K. Galbraith Dazzle Them with Demographics Laurence Kotlikoff came to Texas in May, to speak to the new graduates in economics and to give a seminar at the department. After hearing him grimly forecast the impending bankruptcy of our government, I asked him why the financial markets hadn’t noticed. Uncle Sam is still able to borrow for 20 years at a bit less than 5 percent. How come? Kotlikoff replied that the markets were crazy. James K. Galbraith Social Security Scare Campaign Will Bush this week once again put Social Security privatization — or something close to it — into the headlines? Maybe he will. Maybe he won’t. But one doesn’t have to read tea leaves to know it’s on his agenda. James K. Galbraith Apocalypse Not Yet With the euro touching $1.33 and the pound so high I couldn’t bear to look at the rate, thought on a flight home from across the pond turned painfully to the decay of the once-almighty dollar, and to the cries of fear emanating these days from Wall Street. The current jitters are no surprise; the few Keynesians left in the economics profession have long thought them overdue. Here are the most important reasons why this is so: • We have over many years worn down our trade position in the world economy, from overpowering supremacy sixty years ago, to the point where high employment in the United States generates current account deficits well over half a trillion dollars per year.
We have become dependent for our living standard on the willingness of the rest of the world to accept dollar assets - stocks, bonds, and cash - in return for real goods and services, the product of hard labor by people much poorer than ourselves in return for chits that require no effort to produce. For decades the Western world tolerated the ‘exorbitant privilege’ of a dollar-reserve economy because the United States was the indispensable power, providing reliable security against communism and insurrection without intolerable violence or oppression, conditions under which many countries on this side of the Iron Curtain grew and prospered. Those rationales evaporated fifteen years ago, and the ‘Global War on Terror’ is not a persuasive replacement. Thus, what was once a grudging bargain with the world’s stabilizing hegemon country is now widely seen as a lingering subsidy for a predator state. James K. Galbraith Taming Predatory Capitalism In 1899 Thorstein Veblen described predation as a phase in the evolution of culture, ‘attained only when the predatory attitude has become the habitual and accredited spiritual attitude … when the fight has become the dominant note in the current theory of life’. After an entire century’s struggle to escape from this phase, we’ve suffered a relapse. The predators are everywhere unleashed; the institutions built to contain them, from the UN to the AFL-CIO to the SEC, are everywhere under siege. Predation has become again the defining feature of economic life; our first problem is to grasp this reality in full. James K. Galbraith Backmatter
2019-06-17 04:44:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18929696083068848, "perplexity": 6372.119158340647}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998376.42/warc/CC-MAIN-20190617043021-20190617065021-00220.warc.gz"}
http://www.ck12.org/chemistry/Melting-Point/lesson/Melting-Point-CHEM/
# Melting Point

## The temperature at which a solid changes into a liquid

[Figure: a drop of water. Credit: Jonas Bergsten (Wikimedia: Bergsten); Source: http://commons.wikimedia.org/wiki/File:Drop_of_water_2003_05.jpg; License: CC BY-NC 3.0]

#### Have you ever gone ice skating?

In the winter, many people find the snow and ice beautiful. They enjoy getting out to ski or ice-skate. Others don't find that time of year to be so much fun. When the snow melts, the roads get very sloppy and messy. Those people look forward to spring when all the ice and snow are gone and the weather is warmer.

### Melting Point

Solids are similar to liquids in that both are condensed states, with particles that are far closer together than those of a gas. However, while liquids are fluid, solids are not. The particles of most solids are packed tightly together in an orderly arrangement. The motion of individual atoms, ions, or molecules in a solid is restricted to vibrational motion about a fixed point. Solids are almost completely incompressible and are the densest of the three states of matter.

As a solid is heated, its particles vibrate more rapidly as the solid absorbs kinetic energy. Eventually, the organization of the particles within the solid structure begins to break down and the solid starts to melt. The melting point is the temperature at which a solid changes into a liquid. At its melting point, the disruptive vibrations of the particles of the solid overcome the attractive forces operating within the solid. As with boiling points, the melting point of a solid is dependent on the strength of those attractive forces. Sodium chloride (NaCl) is an ionic compound that consists of a multitude of strong ionic bonds. Sodium chloride melts at 801°C. Ice (solid H2O) is a molecular compound whose molecules are held together by hydrogen bonds. Though hydrogen bonds are the strongest of the intermolecular forces, the strength of hydrogen bonds is much less than that of ionic bonds. The melting point of ice is 0°C.

The melting point of a solid is the same as the freezing point of the liquid. At that temperature, the solid and liquid states of the substance are in equilibrium. For water, this equilibrium occurs at 0°C.

\begin{align*}\text{H}_2\text{O}\text{(s)} \rightleftarrows \text{H}_2\text{O}\text{(l)}\end{align*}

We tend to think of solids as those materials that are solid at room temperature. However, all materials have melting points of some sort. Gases become solids at extremely low temperatures, and liquids will also become solid if the temperature is low enough. The table below gives the melting points of some common materials.

| Material | Melting Point (°C) |
| --- | --- |
| hydrogen | -259 |
| oxygen | -219 |
| diethyl ether | -116 |
| ethanol | -114 |
| water | 0 |
| pure silver | 961 |
| pure gold | 1063 |
| iron | 1538 |

### Summary

• The melting point is the temperature at which a solid changes into a liquid.
• Intermolecular forces have a strong influence on melting point.

### Review

1. Define melting point.
2. What happens when a material melts?
3. Would you expect ethane (C2H6) to have a higher or lower melting point than water? Explain your answer.
2016-12-04 16:58:19
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42929890751838684, "perplexity": 2097.3624876240815}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541324.73/warc/CC-MAIN-20161202170901-00352-ip-10-31-129-80.ec2.internal.warc.gz"}
https://answers.opencv.org/questions/231180/revisions/
### Results for matchTemplate are too high

Hi, I'm having some issues with MatchTemplate. When extracting the best position value with MinMaxLoc, the value is always very high (0.99 something), even when trying to match on an image where the pattern is absent. I think I have never seen it lower than 0.96. This is a bit painful when trying to filter false positives, because you end up setting the threshold to 0.997 because there were still false positives at 0.996... and you can never be sure that will be enough.

Is this expected behavior? I read somewhere that this value should not be used as a score, but it didn't explain why, or how to get a proper score instead. I'm thinking about doing something like this:

    if ((MatchingType == Emgu.CV.CvEnum.TemplateMatchingType.SqdiffNormed) ||
        (MatchingType == Emgu.CV.CvEnum.TemplateMatchingType.Sqdiff))
    {
        // For square-difference methods, the best match is the minimum
        MatchLoc = MinLoc;
        Results.Confidence = (1 - MinVal) * 100;
    }
    else
    {
        // For correlation-based methods, the best match is the maximum
        MatchLoc = MaxLoc;
        Results.Confidence = MaxVal * 100;
    }
    // Rescale the 90..100 range into 0..100
    Results.Confidence = Math.Max(0, Results.Confidence - 90) * 10;
    ((MatchTemplateResult)Results.Details).Center = new System.Windows.Point(
        MatchLoc.X + (ModelMatrix.Cols / 2),
        MatchLoc.Y + (ModelMatrix.Rows / 2));

But I'm not sure this is the right way or a good practice.
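For reference, the same min-vs-max selection logic can be sketched with OpenCV's Python bindings (the Emgu enum values correspond to OpenCV's TM_* constants; this is an illustrative sketch, not part of the original question, and the 1 - min_val mapping only makes sense for the normalized square-difference method):

    import cv2

    def best_match(image, template, method=cv2.TM_CCOEFF_NORMED):
        """Return (location, raw score) of the best template match."""
        result = cv2.matchTemplate(image, template, method)
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
        # Square-difference methods: the *minimum* is the best match;
        # correlation-based methods: the *maximum* is the best match.
        if method in (cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED):
            return min_loc, 1.0 - min_val
        return max_loc, max_val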
2021-10-16 11:08:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33438920974731445, "perplexity": 1531.8200605518252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584567.81/warc/CC-MAIN-20211016105157-20211016135157-00504.warc.gz"}
https://motls.blogspot.com/2007/10/richard-dedekind-176th-birthday.html?m=0
## Saturday, October 06, 2007

### Richard Dedekind: 176th birthday

On October 6th, 1831, Richard Dedekind was born. Yes, this stamp is from East Germany. Click to learn more about him.

Dedekind was one of the mathematically inclined 19th century string theorists. In mathematics, Dedekind studied the relations between rational and irrational numbers (recall the Dedekind cut). He was one of the first people who appreciated the concept of groups in algebra and arithmetic. He investigated number fields, ideals, and many other aspects of number theory. Dirichlet, who essentially discovered D-branes 150 years before Polchinski, was his close friend, and Riemann was another contemporary.

Nevertheless, 21st century theoretical physicists surely remember Dedekind for his eta function, essentially the inverse partition sum of a boson in string theory. The picture above shows the real part of the modular discriminant defined by Weierstrass as a function of "q = exp(2 pi i tau)". The modular discriminant is the 24th power of the Dedekind eta function, up to a power of 2 pi - exactly the inverse of what you get in the light-cone bosonic string theory partition sum.
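For reference (not part of the original post), the standard definitions being alluded to are:

    % Dedekind eta function and the Weierstrass modular discriminant
    \eta(\tau) = q^{1/24} \prod_{n=1}^{\infty} \left(1 - q^{n}\right),
    \qquad q = e^{2\pi i \tau},
    \qquad \Delta(\tau) = (2\pi)^{12}\,\eta(\tau)^{24}.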
2022-01-24 11:28:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7104469537734985, "perplexity": 2714.3523906927844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00236.warc.gz"}
https://chemistry.stackexchange.com/questions/24106/slater-determinant-as-an-unperturbed-atomic-wave-function
# Slater determinant as an unperturbed atomic wave function

I've deduced the following postulates from studying my chemistry books.

1) Slater determinants are eigenfunctions of an unperturbed atomic Hamiltonian, which contains only the kinetic energy and central potential energy parts for each electron, since the spin orbitals constituting the determinants originate from eigenfunctions of one-electron Hamiltonians. Here, atomic Hamiltonian = (kinetic part of each electron) + (central potentials between atomic nucleus and electrons) + (interelectronic potentials). Spin-orbit interaction and the other effects are neglected.

2) Slater determinants or their linear combinations are simultaneously eigenfunctions of the total spin angular momentum and total orbital angular momentum operators ($S^2$ and $L^2$).

In addition, I would refer to information from Quantum Chemistry, 6th Ed., written by I. N. Levine. (p. 312)

Since $S^2$ and $L^2$ commute with the atomic Hamiltonian and with the exchange operator, the zeroth order functions should be eigenfunctions of $S^2$ and $L^2$.

(The zeroth order functions Levine mentions above are the single Slater determinants, or linear combinations of Slater determinants within the same configuration.)

Well, I can accept the fact that the atomic Hamiltonian, $S^2$ and $L^2$ commute. However, if the single Slater determinants or linear combinations of Slater determinants within the same configuration are the zeroth order functions (eigenfunctions of the unperturbed atomic Hamiltonian), how does the fact that $S^2$ and $L^2$ commute with the atomic Hamiltonian make the Slater determinants eigenfunctions of the atomic Hamiltonian?

For example, a Slater determinant corresponding to one of the helium first excited states, $|1s\alpha~2s\beta|$, is an eigenfunction of the unperturbed atomic Hamiltonian (this determinant is a zeroth order function), of $S^2$ and of $L^2$, but not of the atomic Hamiltonian.

> However, if the single Slater determinants or linear combinations of Slater determinants within the same configuration are the zeroth order functions (eigenfunctions of the unperturbed atomic Hamiltonian), how does the fact that S2 and L2 commute with the atomic Hamiltonian make the Slater determinants eigenfunctions of the atomic Hamiltonian?

Actually, you got it right, but let us spell this out step by step. First, forget about angular momentum operators, since you do not need them to understand that a single Slater determinant is indeed not an eigenfunction of the atomic Hamiltonian (as you called it). That is it, you are right. A Slater determinant is an eigenfunction of the unperturbed Hamiltonian which describes a system of independent electrons, but not of the exact one.

Nevertheless, a Slater determinant can be used as a trial wave function in a variational procedure. That is the whole point of the Hartree-Fock method: a Slater determinant $\Phi$ is not an eigenfunction of the atomic Hamiltonian, but we could evaluate its energy $\left\langle \Phi \mid H \mid \Phi \right\rangle$ using the Slater rules, and consequently, we could minimize it to find an upper bound to the ground state energy. The resulting Slater determinant obtained by minimizing the energy is indeed only an approximation to the ground state wave function. Note also that the exact wave function (which is an eigenfunction of the atomic Hamiltonian) can be expressed as a linear combination of Slater determinants for the various possible electronic configurations, not just a single Slater determinant. And this is the theoretical basis for the configuration interaction (CI) method.
Now back to all this business with angular momentum operators. A single Slater determinant is not necessarily an eigenfunction of either $\hat{L}^2$ or $\hat{S}^2$. However, as you mentioned, the Hamiltonian indeed commutes with all these operators (in the non-relativistic approximation), and thus, the exact wave function is also an eigenfunction of $\hat{L}^2$ and $\hat{S}^2$ (as well as $\hat{L}_z$ and $\hat{S}_z$). To be more precise, the exact wave function can be chosen to be a simultaneous eigenfunction of all these commuting operators $\hat{H}, \hat{L}^2, \hat{S}^2, \hat{L}_z, \hat{S}_z$. And since this is really desirable, we require the same from our trial wave function: we would like it to be an eigenfunction of these angular momentum operators.

So we want to construct our trial wave function to be an eigenfunction of the angular momentum operators $\hat{L}^2, \hat{S}^2, \hat{L}_z, \hat{S}_z$, but a single Slater determinant would not necessarily do it, so we form what is called a spin-adapted configuration state function (CSF) - a linear combination of Slater determinants which is an eigenfunction of the angular momentum operators. Note, however, that this is not always possible. For instance, for the case of unrestricted determinants, using a linear combination would not help in making the trial wave function an eigenfunction of $\hat{S}^{2}$.
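To make the CSF construction concrete with the helium example from the question: neither $|1s\alpha~2s\beta|$ nor $|1s\beta~2s\alpha|$ alone is an eigenfunction of $\hat{S}^2$, but (with the usual phase conventions) their normalized combinations are:

    % Spin-adapted CSFs built from the two 1s2s determinants:
    % the combination with the relative minus sign is the singlet (S = 0),
    % the one with the plus sign is the M_S = 0 component of the triplet (S = 1).
    \Phi_{S=0} = \tfrac{1}{\sqrt{2}}\left(|1s\alpha\,2s\beta| - |1s\beta\,2s\alpha|\right), \qquad
    \Phi_{S=1,\,M_S=0} = \tfrac{1}{\sqrt{2}}\left(|1s\alpha\,2s\beta| + |1s\beta\,2s\alpha|\right)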
2019-08-26 06:30:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8985784649848938, "perplexity": 305.8084121085461}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330968.54/warc/CC-MAIN-20190826042816-20190826064816-00493.warc.gz"}
http://cowlark.com/calculon/usage.html
# Introduction

The Calculon library is extremely simple. It is supplied as a set of headers; to use the library, create a type instance of the compiler, create the global symbol table, and then you can instantiate functions. If compiling a script fails, an exception is thrown. If that succeeds, you have an object which you can just call.

Here is an annotated example.

    /* Create the type instance, and pull some useful types out of it */
    typedef Calculon::Instance<Calculon::RealIsDouble> Compiler;
    typedef Compiler::Real Real;
    typedef Compiler::Vector<3> Vector3;

    /* Create the symbol table */
    Compiler::StandardSymbolTable symbols;

    /* Open the script file */
    std::ifstream scriptSource("script.cal");

    /* Prototype the function that the script represents, and compile the
     * script. We pass in the Calculon type signature of the function so the
     * compiler knows what parameters we're using */
    typedef void ScriptFunction(Real x, Real y, Real* result);
    Compiler::Program<ScriptFunction> function(symbols, code,
        "(x:real, y:real): (result:real)");

    /* Call the function */
    function(1, -1, &result);

...and that's pretty much it. function() may be called as many times as you like and is thread-safe. When it goes out of scope it will be destroyed.

# Configurations

Right now there are two forms of the Calculon compiler you can use. This:

    typedef Calculon::Instance<Calculon::RealIsDouble> Compiler;

...and this:

    typedef Calculon::Instance<Calculon::RealIsFloat> Compiler;

These control whether Calculon's real type is single-precision or double-precision. There may be more configurations in the future. You can use both at the same time, if you wish.

# Vectors

The Compiler::Vector<> template refers to a vector with the specified number of elements. It ensures that the correct alignment is used (which is very important, or else your script will run very badly or produce the wrong results). It has an m[] member which provides access to the members of the vector. Dereference them at your will. Sorry, they can't be initialised via initialisation lists: you have to assign to their members. (If anyone knows how to force C++ to allow this, please get in touch.)

Vectors which are 1, 2, 3 or 4 elements long may also be dereferenced via x, y, z and w, as appropriate.

# Type aliases

When compiling your Calculon script, you may also provide an optional extra parameter which provides a map of type aliases. These allow you to use application-specific names for certain types in the script. The main use for this is if the application requires a certain size of vector and you don't want to bake knowledge of the size of vector into the script.

    map<string, string> typeAliases;
    #if defined NINEBYNINE
        typeAliases["matrix"] = "vector*9";
    #else
        typeAliases["matrix"] = "vector*4";
    #endif
    Compiler::Program<ScriptFunction> function(symbols, code,
        "(x:matrix, y:matrix): (result:real)", typeAliases);

...allows this script to work with either configuration:

    (x*y).sum

# Calling conventions

Alas, the mapping between C++ parameters and Calculon parameters is not quite obvious. Calculon reals are available as Compiler::Real, and Calculon vectors as the appropriate kind of Compiler::Vector. Reals are passed in the obvious way, as in the example above. However, vectors are passed by pointer. Return parameters are passed last, and always by pointer.
For example:

    /* f1(x: real, y: real, v: vector*3): (result: real) */
    typedef void f1(Real x, Real y, Compiler::Vector<3>* v, Real* result);

    /* f2(x: real, y: real, v: vector*3): (result: vector*2) */
    typedef void f2(Real x, Real y, Compiler::Vector<3>* v, Compiler::Vector<2>* result);

To call such a function, create some Vector objects somewhere and pass their pointers in. The stack will do fine.

    Compiler::Vector<2> result;
    Compiler::Vector<3> v;
    v.x = 0;
    v.y = 1;
    v.z = 2;
    f2(7, 8, &v, &result);

# Registering functions

Functions may be trivially added to the symbol table. (You may create as many symbol tables as you wish; the symbol table is only ever used during the compilation process. Symbol tables are not thread safe!)

    extern "C" double perlin(Compiler::Vector<3>* v)
    {
        return ...
    }

    Compiler::StandardSymbolTable symbols;

The first parameter specifies the name; the second parameter gives the type signature of the function; the third parameter is the function to call. The function does not need to be extern "C" but it's a good idea to avoid edge cases.

The type signature uses a similar syntax to Calculon type signatures, but without function names. real must be explicitly specified. float and double are valid here, and cause Calculon to automatically convert between the representation of real that it is using internally and your platform's float or double types. (If you specify real, ensure that your external function uses Compiler::Real as its parameter type.)

If you make a mistake here, really bad things happen. There is no way for Calculon to detect whether the function signature is correct or not, and it just trusts you. If you get it wrong, you may get garbage data or crashes.

Only one value may be returned. Reals (and doubles and floats) are returned inline --- this is different from the calling convention used for Calculon scripts. Vectors are returned by pointer via an extra parameter which appears last.

Note carefully! Calculon is designed for scripts that have no side effects. If you call out to a function which does something... well, it'll work, but the Calculon compiler is allowed to assume that if it calls the function once with a set of parameters, it can call it again with the same parameters and get the same result. Be careful. (And don't forget that rand() has side effects.)

# Registering values

You may also register real and vector global variables in the symbol table. This is useful for simple constants that won't change between runs of the Calculon script (such as: size of the output image, position of various objects in 3D space, etc).

    Compiler::StandardSymbolTable symbols;
    vector<double> v(4);
    v[0] = 1;
    v[1] = 2;
    v[2] = 3;
    v[3] = 4;

These values are compiled as literals into the output machine code, which means they are fast. However, once the script has been compiled, they cannot be changed. Use input parameters if you need values which change.

# Dependencies

The Calculon library uses the STL and iostreams. It does use some Boost, but does not use C++11, for maximum compatibility. Fairly obviously, it uses LLVM 3.3. It may or may not work with other versions of LLVM. (If you need it to work on another version, please get in touch.)

The supplied sample Makefile shows the recommended way to build programs that use Calculon, but the short summary is:

    g++ -I$(shell llvm-config-3.3 --includedir) -lLLVM-3.3 program.cc

Calculon works with both gcc and clang++. I haven't tried with Visual Studio (I'd appreciate any reports of workingness or otherwise).
2018-09-20 18:08:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.375064879655838, "perplexity": 3373.202695736672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156554.12/warc/CC-MAIN-20180920175529-20180920195929-00216.warc.gz"}
https://kluedo.ub.uni-kl.de/frontdoor/index/index/year/2003/docId/1399
## Regularization of Inverse Problems in Satellite Geodesy by Wavelet Methods with Orbital Data Given on Closed Surfaces

• Satellite-to-satellite tracking (SST) and satellite gravity gradiometry (SGG), respectively, are two measurement principles in modern satellite geodesy which yield knowledge of the first and second order radial derivative of the earth's gravitational potential at satellite altitude, respectively. A numerical method to compute the gravitational potential on the earth's surface from those observations should be capable of processing huge amounts of observational data. Moreover, it should yield a reconstruction of the gravitational potential at different levels of detail, and it should be possible to reconstruct the gravitational potential from only locally given data. SST and SGG are modeled as ill-posed linear pseudodifferential operator equations with an injective but non-surjective compact operator, which operates between Sobolev spaces of harmonic functions and such ones consisting of their first and second order radial derivatives, respectively. An immediate discretization of the operator equation is obtained by replacing the signal on its right-hand side either by an interpolating or a smoothing spline which approximates the observational data. Here the noise level and the spatial distribution of the data determine whether spline-interpolation or spline-smoothing is appropriate. The large full linear equation system with positive definite matrix which occurs in the spline-interpolation and spline-smoothing problem, respectively, is efficiently solved with the help of the Schwarz alternating algorithm, a domain decomposition method which makes it possible to split the large linear equation system into several smaller ones which are then solved alternately in an iterative procedure. Strongly space-localizing regularization scaling functions and wavelets are used to obtain a multiscale reconstruction of the gravitational potential on the earth's surface. In a numerical experiment the advocated method is successfully applied to reconstruct the earth's gravitational potential from simulated 'exact' and 'error-affected' SGG data on a spherical orbit, using Tikhonov regularization. The applicability of the numerical method is, however, not restricted to data given on a closed orbit but it can also cope with realistic satellite data.
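Since the abstract names Tikhonov regularization as the regularization scheme, here is a minimal, self-contained numpy sketch of that idea for a generic discretized ill-posed system Ax = b. The operator A, the data b, and the regularization parameter lam below are all illustrative assumptions, not taken from the thesis:

    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Minimize ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
        n = A.shape[1]
        # (A^T A + lam I) x = A^T b; lam > 0 stabilizes the near-singular system
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Illustrative ill-conditioned problem: a discretized smoothing operator
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 80)
    A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)   # near-singular kernel
    x_true = np.sin(2 * np.pi * t)
    b = A @ x_true + 1e-3 * rng.standard_normal(t.size)  # noisy data

    # Naive inversion (tiny jitter only to avoid a LinAlgError) vs. Tikhonov
    x_naive = np.linalg.solve(A + 1e-12 * np.eye(t.size), b)  # wildly unstable
    x_reg = tikhonov_solve(A, b, lam=1e-4)                    # stable
    print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))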
### Record details

Author: Petra Baumann
URN: urn:nbn:de:hbz:386-kluedo-12517
Document type: Diploma thesis (Diplomarbeit)
Language: English
Year of completion: 2001
Year of publication: 2001
Publishing institution: Technische Universität Kaiserslautern
Granting institution: Technische Universität Kaiserslautern
Release date: 02.06.2003
Tags: Inverse Problems; Multiplicative Schwarz Algorithm; Regularization Wavelets; Splines
Faculty: Fachbereich Mathematik (Department of Mathematics)
DDC classification: 5 Natural sciences and mathematics / 51 Mathematics / 510 Mathematics
MSC classification:
31-XX POTENTIAL THEORY (For probabilistic potential theory, see 60J45) / 31Bxx Higher-dimensional theory / 31B05 Harmonic, subharmonic, superharmonic functions
42-XX FOURIER ANALYSIS / 42Cxx Nontrigonometric harmonic analysis / 42C15 General harmonic expansions, frames
42-XX FOURIER ANALYSIS / 42Cxx Nontrigonometric harmonic analysis / 42C40 Wavelets and other special systems
47-XX OPERATOR THEORY / 47Axx General theory of linear operators / 47A52 Ill-posed problems, regularization [See also 35R25, 47J06, 65F22, 65J20, 65L08, 65M30, 65R30]
49-XX CALCULUS OF VARIATIONS AND OPTIMAL CONTROL; OPTIMIZATION [See also 34H05, 34K35, 65Kxx, 90Cxx, 93-XX] / 49Mxx Numerical methods [See also 90Cxx, 65Kxx] / 49M27 Decomposition methods
65-XX NUMERICAL ANALYSIS / 65Dxx Numerical approximation and computational geometry (primarily algorithms) (For theory, see 41-XX and 68Uxx) / 65D07 Splines
65-XX NUMERICAL ANALYSIS / 65Jxx Numerical analysis in abstract spaces / 65J22 Inverse problems
Licence: Standard according to KLUEDO guidelines before 27.05.2011
2016-12-06 10:22:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35483282804489136, "perplexity": 2390.8734366847543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541896.91/warc/CC-MAIN-20161202170901-00425-ip-10-31-129-80.ec2.internal.warc.gz"}
https://blender.stackexchange.com/questions/42477/modifier-add-does-nothing
# modifier_add does nothing

I have a set of meshes loaded in the scene. When I execute the following script I wrote, the modifiers collection of the obj always stays empty. Where is my error?

    import bpy

    for obj in bpy.data.objects:
        print(obj)
        bpy.context.scene.objects.active = obj
        print(bpy.context.scene.objects.active)
        bpy.ops.object.modifier_apply(modifier='DECIMATE')
        print(len(obj.modifiers))

Note that I check whether the active object was set, and the first two prints always print exactly the same. But the print(len(...)) always prints a 0, and in the UI the modifier does not show up. All solutions for similar problems I found only suggest setting the active object, which I did.

• You're using the wrong operator on line 6 (bpy.ops.object.modifier_apply). You should be using, as you point out in the title of this question, bpy.ops.object.modifier_add. Dec 3 '15 at 9:41
• I'll be damned... I've read the script many times and always missed that. Thanks, now it works. Dec 3 '15 at 9:42
• Alternatively use mod = obj.modifiers.new("Decimod", 'DECIMATE') and set modifier props mod.ratio = 0.8 Dec 3 '15 at 11:12
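Putting the comments together, a corrected version of the script could look like the sketch below (assuming, as in the question, that the scene contains only meshes; the name "Decimod" and the 0.8 ratio are taken from the last comment):

    import bpy

    for obj in bpy.data.objects:
        bpy.context.scene.objects.active = obj
        # Option 1: the operator that was actually intended
        # bpy.ops.object.modifier_add(type='DECIMATE')
        # Option 2 (from the comments): create the modifier directly on the
        # object, which also lets you set its properties in the same step
        mod = obj.modifiers.new("Decimod", 'DECIMATE')
        mod.ratio = 0.8
        print(len(obj.modifiers))  # now prints 1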
2021-09-25 22:31:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2935890555381775, "perplexity": 3037.188717914356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057775.50/warc/CC-MAIN-20210925202717-20210925232717-00495.warc.gz"}
http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.11-K-Means.ipynb
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!

# In Depth: k-Means Clustering

In the previous few sections, we have explored one category of unsupervised machine learning models: dimensionality reduction. Here we will move on to another class of unsupervised machine learning models: clustering algorithms. Clustering algorithms seek to learn, from the properties of the data, an optimal division or discrete labeling of groups of points.

Many clustering algorithms are available in Scikit-Learn and elsewhere, but perhaps the simplest to understand is an algorithm known as k-means clustering, which is implemented in sklearn.cluster.KMeans. We begin with the standard imports:

In [1]:
    %matplotlib inline
    import matplotlib.pyplot as plt
    import seaborn as sns; sns.set()  # for plot styling
    import numpy as np

## Introducing k-Means

The k-means algorithm searches for a pre-determined number of clusters within an unlabeled multidimensional dataset. It accomplishes this using a simple conception of what the optimal clustering looks like:

• The "cluster center" is the arithmetic mean of all the points belonging to the cluster.
• Each point is closer to its own cluster center than to other cluster centers.

Those two assumptions are the basis of the k-means model. We will soon dive into exactly how the algorithm reaches this solution, but for now let's take a look at a simple dataset and see the k-means result.

First, let's generate a two-dimensional dataset containing four distinct blobs. To emphasize that this is an unsupervised algorithm, we will leave the labels out of the visualization:

In [2]:
    from sklearn.datasets.samples_generator import make_blobs
    X, y_true = make_blobs(n_samples=300, centers=4,
                           cluster_std=0.60, random_state=0)
    plt.scatter(X[:, 0], X[:, 1], s=50);

By eye, it is relatively easy to pick out the four clusters. The k-means algorithm does this automatically, and in Scikit-Learn uses the typical estimator API:

In [3]:
    from sklearn.cluster import KMeans
    kmeans = KMeans(n_clusters=4)
    kmeans.fit(X)
    y_kmeans = kmeans.predict(X)

Let's visualize the results by plotting the data colored by these labels. We will also plot the cluster centers as determined by the k-means estimator:

In [4]:
    plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')
    centers = kmeans.cluster_centers_
    plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5);

The good news is that the k-means algorithm (at least in this simple case) assigns the points to clusters very similarly to how we might assign them by eye. But you might wonder how this algorithm finds these clusters so quickly! After all, the number of possible combinations of cluster assignments is exponential in the number of data points—an exhaustive search would be very, very costly. Fortunately for us, such an exhaustive search is not necessary: instead, the typical approach to k-means involves an intuitive iterative approach known as expectation–maximization.

## k-Means Algorithm: Expectation–Maximization

Expectation–maximization (E–M) is a powerful algorithm that comes up in a variety of contexts within data science. k-means is a particularly simple and easy-to-understand application of the algorithm, and we will walk through it briefly here.
In short, the expectation–maximization approach here consists of the following procedure:

1. Guess some cluster centers
2. Repeat until converged:
   1. E-Step: assign points to the nearest cluster center
   2. M-Step: set the cluster centers to the mean

Here the "E-step" or "Expectation step" is so-named because it involves updating our expectation of which cluster each point belongs to. The "M-step" or "Maximization step" is so-named because it involves maximizing some fitness function that defines the location of the cluster centers—in this case, that maximization is accomplished by taking a simple mean of the data in each cluster.

The literature about this algorithm is vast, but can be summarized as follows: under typical circumstances, each repetition of the E-step and M-step will always result in a better estimate of the cluster characteristics.

We can visualize the algorithm as shown in the following figure. For the particular initialization shown here, the clusters converge in just three iterations. For an interactive version of this figure, refer to the code in the Appendix.

The k-Means algorithm is simple enough that we can write it in a few lines of code. The following is a very basic implementation:

In [5]:
    from sklearn.metrics import pairwise_distances_argmin

    def find_clusters(X, n_clusters, rseed=2):
        # 1. Randomly choose clusters
        rng = np.random.RandomState(rseed)
        i = rng.permutation(X.shape[0])[:n_clusters]
        centers = X[i]

        while True:
            # 2a. Assign labels based on closest center
            labels = pairwise_distances_argmin(X, centers)

            # 2b. Find new centers from means of points
            new_centers = np.array([X[labels == i].mean(0)
                                    for i in range(n_clusters)])

            # 2c. Check for convergence
            if np.all(centers == new_centers):
                break
            centers = new_centers

        return centers, labels

    centers, labels = find_clusters(X, 4)
    plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis');

Most well-tested implementations will do a bit more than this under the hood, but the preceding function gives the gist of the expectation–maximization approach.

### Caveats of expectation–maximization

There are a few issues to be aware of when using the expectation–maximization algorithm.

#### The globally optimal result may not be achieved

First, although the E–M procedure is guaranteed to improve the result in each step, there is no assurance that it will lead to the global best solution. For example, if we use a different random seed in our simple procedure, the particular starting guesses lead to poor results:

In [6]:
    centers, labels = find_clusters(X, 4, rseed=0)
    plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis');

Here the E–M approach has converged, but has not converged to a globally optimal configuration. For this reason, it is common for the algorithm to be run for multiple starting guesses, as indeed Scikit-Learn does by default (set by the n_init parameter, which defaults to 10).

#### The number of clusters must be selected beforehand

Another common challenge with k-means is that you must tell it how many clusters you expect: it cannot learn the number of clusters from the data. For example, if we ask the algorithm to identify six clusters, it will happily proceed and find the best six clusters:

In [7]:
    labels = KMeans(6, random_state=0).fit_predict(X)
    plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis');

Whether the result is meaningful is a question that is difficult to answer definitively; one approach that is rather intuitive, but that we won't discuss further here, is called silhouette analysis.
Alternatively, you might use a more complicated clustering algorithm which has a better quantitative measure of the fitness per number of clusters (e.g., Gaussian mixture models; see In Depth: Gaussian Mixture Models) or which can choose a suitable number of clusters (e.g., DBSCAN, mean-shift, or affinity propagation, all available in the sklearn.cluster submodule).

#### k-means is limited to linear cluster boundaries

The fundamental model assumption of k-means (that points will be closer to their own cluster center than to others) means that the algorithm will often be ineffective if the clusters have complicated geometries. In particular, the boundaries between k-means clusters will always be linear, which means that it will fail for more complicated boundaries. Consider the following data, along with the cluster labels found by the typical k-means approach:

In [8]:
    from sklearn.datasets import make_moons
    X, y = make_moons(200, noise=.05, random_state=0)

In [9]:
    labels = KMeans(2, random_state=0).fit_predict(X)
    plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis');

This situation is reminiscent of the discussion in In-Depth: Support Vector Machines, where we used a kernel transformation to project the data into a higher dimension where a linear separation is possible. We might imagine using the same trick to allow k-means to discover non-linear boundaries.

One version of this kernelized k-means is implemented in Scikit-Learn within the SpectralClustering estimator. It uses the graph of nearest neighbors to compute a higher-dimensional representation of the data, and then assigns labels using a k-means algorithm:

In [10]:
    from sklearn.cluster import SpectralClustering
    model = SpectralClustering(n_clusters=2, affinity='nearest_neighbors',
                               assign_labels='kmeans')
    labels = model.fit_predict(X)
    plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis');

We see that with this kernel transform approach, the kernelized k-means is able to find the more complicated nonlinear boundaries between clusters.

#### k-means can be slow for large numbers of samples

Because each iteration of k-means must access every point in the dataset, the algorithm can be relatively slow as the number of samples grows. You might wonder if this requirement to use all data at each iteration can be relaxed; for example, you might just use a subset of the data to update the cluster centers at each step. This is the idea behind batch-based k-means algorithms, one form of which is implemented in sklearn.cluster.MiniBatchKMeans. The interface for this is the same as for standard KMeans; we will see an example of its use as we continue our discussion.

## Examples

Being careful about these limitations of the algorithm, we can use k-means to our advantage in a wide variety of situations. We'll now take a look at a couple examples.

### Example 1: k-means on digits

To start, let's take a look at applying k-means on the same simple digits data that we saw in In-Depth: Decision Trees and Random Forests and In Depth: Principal Component Analysis. Here we will attempt to use k-means to try to identify similar digits without using the original label information; this might be similar to a first step in extracting meaning from a new dataset about which you don't have any a priori label information.

We will start by loading the digits and then finding the KMeans clusters.
Recall that the digits consist of 1,797 samples with 64 features, where each of the 64 features is the brightness of one pixel in an 8×8 image:

In [11]:
    from sklearn.datasets import load_digits
    digits = load_digits()
    digits.data.shape

Out[11]: (1797, 64)

The clustering can be performed as we did before:

In [12]:
    kmeans = KMeans(n_clusters=10, random_state=0)
    clusters = kmeans.fit_predict(digits.data)
    kmeans.cluster_centers_.shape

Out[12]: (10, 64)

The result is 10 clusters in 64 dimensions. Notice that the cluster centers themselves are 64-dimensional points, and can themselves be interpreted as the "typical" digit within the cluster. Let's see what these cluster centers look like:

In [13]:
    fig, ax = plt.subplots(2, 5, figsize=(8, 3))
    centers = kmeans.cluster_centers_.reshape(10, 8, 8)
    for axi, center in zip(ax.flat, centers):
        axi.set(xticks=[], yticks=[])
        axi.imshow(center, interpolation='nearest', cmap=plt.cm.binary)

We see that even without the labels, KMeans is able to find clusters whose centers are recognizable digits, with perhaps the exception of 1 and 8.

Because k-means knows nothing about the identity of the cluster, the 0–9 labels may be permuted. We can fix this by matching each learned cluster label with the true labels found in them:

In [14]:
    from scipy.stats import mode

    labels = np.zeros_like(clusters)
    for i in range(10):
        # match each cluster to the most common true label it contains
        mask = (clusters == i)
        labels[mask] = mode(digits.target[mask])[0]

Now we can check how accurate our unsupervised clustering was in finding similar digits within the data:

In [15]:
    from sklearn.metrics import accuracy_score
    accuracy_score(digits.target, labels)

Out[15]: 0.79354479688369506

With just a simple k-means algorithm, we discovered the correct grouping for 80% of the input digits! Let's check the confusion matrix for this:

In [16]:
    from sklearn.metrics import confusion_matrix
    mat = confusion_matrix(digits.target, labels)
    sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
                xticklabels=digits.target_names,
                yticklabels=digits.target_names)
    plt.xlabel('true label')
    plt.ylabel('predicted label');

As we might expect from the cluster centers we visualized before, the main point of confusion is between the eights and ones. But this still shows that using k-means, we can essentially build a digit classifier without reference to any known labels!

Just for fun, let's try to push this even farther. We can use the t-distributed stochastic neighbor embedding (t-SNE) algorithm (mentioned in In-Depth: Manifold Learning) to pre-process the data before performing k-means. t-SNE is a nonlinear embedding algorithm that is particularly adept at preserving points within clusters. Let's see how it does:

In [17]:
    from sklearn.manifold import TSNE

    # Project the data: this step will take several seconds
    tsne = TSNE(n_components=2, init='random', random_state=0)
    digits_proj = tsne.fit_transform(digits.data)

    # Compute the clusters
    kmeans = KMeans(n_clusters=10, random_state=0)
    clusters = kmeans.fit_predict(digits_proj)

    # Permute the labels
    labels = np.zeros_like(clusters)
    for i in range(10):
        mask = (clusters == i)
        labels[mask] = mode(digits.target[mask])[0]

    # Compute the accuracy
    accuracy_score(digits.target, labels)

Out[17]: 0.91930996104618812

That's nearly 92% classification accuracy without using the labels. This is the power of unsupervised learning when used carefully: it can extract information from the dataset that it might be difficult to do by hand or by eye.

### Example 2: k-means for color compression

One interesting application of clustering is in color compression within images. For example, imagine you have an image with millions of colors.
In most images, a large number of the colors will be unused, and many of the pixels in the image will have similar or even identical colors. For example, consider the image shown in the following figure, which is from the Scikit-Learn datasets module (for this to work, you'll have to have the pillow Python package installed).

In [18]:
    # Note: this requires the pillow package to be installed
    from sklearn.datasets import load_sample_image
    china = load_sample_image("china.jpg")
    ax = plt.axes(xticks=[], yticks=[])
    ax.imshow(china);

The image itself is stored in a three-dimensional array of size (height, width, RGB), containing red/blue/green contributions as integers from 0 to 255:

In [19]:
    china.shape

Out[19]: (427, 640, 3)

One way we can view this set of pixels is as a cloud of points in a three-dimensional color space. We will reshape the data to [n_samples x n_features], and rescale the colors so that they lie between 0 and 1:

In [20]:
    data = china / 255.0  # use 0...1 scale
    data = data.reshape(427 * 640, 3)
    data.shape

Out[20]: (273280, 3)

We can visualize these pixels in this color space, using a subset of 10,000 pixels for efficiency:

In [21]:
    def plot_pixels(data, title, colors=None, N=10000):
        if colors is None:
            colors = data

        # choose a random subset
        rng = np.random.RandomState(0)
        i = rng.permutation(data.shape[0])[:N]
        colors = colors[i]
        R, G, B = data[i].T

        fig, ax = plt.subplots(1, 2, figsize=(16, 6))
        ax[0].scatter(R, G, color=colors, marker='.')
        ax[0].set(xlabel='Red', ylabel='Green', xlim=(0, 1), ylim=(0, 1))

        ax[1].scatter(R, B, color=colors, marker='.')
        ax[1].set(xlabel='Red', ylabel='Blue', xlim=(0, 1), ylim=(0, 1))

        fig.suptitle(title, size=20);

In [22]:
    plot_pixels(data, title='Input color space: 16 million possible colors')
2017-12-11 19:04:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4609566628932953, "perplexity": 1398.3708489522423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513866.9/warc/CC-MAIN-20171211183649-20171211203649-00686.warc.gz"}
https://stats.stackexchange.com/questions/467779/covariance-matrix-of-the-residuals-in-the-linear-regression-model
# Covariance matrix of the residuals in the linear regression model

I estimate the linear regression model:

$$Y = X\beta + \varepsilon$$

where $$Y$$ is an ($$n \times 1$$) dependent variable vector, $$X$$ is an ($$n \times p$$) matrix of independent variables, $$\beta$$ is a ($$p \times 1$$) vector of the regression coefficients, and $$\varepsilon$$ is an ($$n \times 1$$) vector of random errors.

I want to estimate the covariance matrix of the residuals. To do so I use the following formula:

$$Cov(\hat{\varepsilon}) = \sigma^2 (I-H)$$

where $$\hat{\varepsilon}=Y-X\hat{\beta}$$, $$\sigma^2$$ is estimated by $$\hat{\sigma}^2 = \frac{\hat{\varepsilon}'\hat{\varepsilon}}{n-p}$$, $$I$$ is an identity matrix, and $$H = X(X'X)^{-1}X'$$ is the hat matrix.

However, in some sources I have seen the covariance matrix of the residuals estimated in another way. The residuals are assumed to follow an $$AR(1)$$ process:

$$\varepsilon_t = \rho \varepsilon_{t-1} + \eta_t$$

where $$E(\eta) = 0$$ and $$Var({\eta}) = \sigma^2_{0}I$$. The covariance matrix is estimated as follows:

$$Cov(\varepsilon) = \sigma^2 \begin{bmatrix} 1 & \rho & \rho^2 & ... & \rho^{n-1}\\ \rho & 1 & \rho & ... & \rho^{n-2} \\ ... & ... & ... & ... & ... \\ \rho^{n-1} & \rho^{n-2} & ... & ... & 1 \end{bmatrix}$$

where $$\sigma^2 = \frac{1}{1-\rho^2}\sigma^2_0$$.

My question is: are these two different specifications of the covariance matrix of the residuals, or are they somehow connected with each other?

• Connected is a pretty vague term but I would claim that they're dis-connected. The first method assumes that the disturbances are iid and normally distributed. The second method assumes that they are autocorrelated and follow an AR(1) process. How you estimate the regression model depends on which model you assume for the disturbances, so they are connected only in that sense. – mlofton May 24 at 1:35
• The first one is suitable for independent observations while the second one for serial observations on the same sampling unit. – papgeo May 24 at 9:04
• @mlofton: does the first method really assume independence of the disturbances? If it were the case, an estimator of the covariance matrix of errors should just be $\hat{\sigma}^2 I$, but here, the matrix $I - H$ is not necessarily diagonal.. – Pohoua May 25 at 11:18
• @CherryGarcia: what assumptions are you willing to make on your errors $\varepsilon$? Maybe looking at Feasible Generalized Least Squares methods could help.. – Pohoua May 25 at 11:20
• @Pohoua: Definitely the assumption is independence, but the use of $(I-H)$ has something to do (waving my hands here because I forget, so hopefully someone else can explain more clearly) with the fact that the estimates are not independent, in that the sum of the residual estimates has to equal zero because of the nature of OLS. That still doesn't explain why you get a non-diagonal estimate, but it's related to that. I'd love to know the answer myself, so hopefully someone else can explain. Thanks for a great question. – mlofton May 25 at 16:05

After some investigation, I think I found a small (but crucial!) imprecision in your post. The first formula you wrote:

$$var(\varepsilon) = \sigma^2 (I - H)$$

is actually not totally exact. The formula should be

$$var(\hat \varepsilon) = \sigma ^2 (I - H)$$

where $$\hat\varepsilon = Y - X\hat\beta$$, considering the OLS estimator $$\hat\beta = (X^TX)^{-1}X^TY$$. Thus $$\hat\sigma^2 (I - H)$$ is an estimator of the variance of the estimated residuals associated with the OLS estimator.
This formula does not suppose independence of the $$\varepsilon_i$$, just that they all have the same variance $$\sigma^2$$. But this is not what you want! You want an estimate of the variance of the true residuals, not the estimated residuals under OLS estimation. The OLS estimator corresponds to the maximum likelihood estimator under the hypothesis that the residuals are i.i.d. and normal. The estimated residuals can thus be very poor estimates of the true residuals if these hypotheses are not met, and their covariance matrix can be very different from the covariance of the true residuals.

The second formula you wrote does correspond to the covariance matrix of the $$\varepsilon_i$$ under the hypothesis that they follow an AR(1) process.

Estimating the covariance matrix of the residuals of a linear regression without any assumption cannot easily be done: you would have more unknowns than data points... So you need to specify some form for the covariance matrix of the residuals. Supposing that they follow an AR(1) process (if this is relevant) is a way of doing so. You can also assume that they have a stationary parametrized autocorrelation function, whose parameters you can estimate and use to deduce the covariance matrix.

• @Pohoua: Thanks for explaining the $(I-H)$ issue. Now I get it that you don't estimate the covariance matrix of the residuals. The $(I-H)$ matrix is estimating something totally different. Cherry Garcia: The estimates of the residuals in the first case (where you assume they are iid) are just the $\hat{\epsilon}$ that come out of the regression. In the second case, you're specifying a model for the residuals, namely an AR(1). So, the estimation method needs to know that, and that is why you use feasible generalized least squares. There are other methods also besides FGLS. – mlofton May 26 at 3:54
• @Pohoua: Thank you very much for spotting the mistake; I have adjusted my question accordingly. I think that I understand what you mean, but to be sure, let me summarize: 1) Under Gauss-Markov assumptions the covariance matrix of the residuals is $Cov(\varepsilon) = \sigma^2I$ 2) Assuming the residuals follow an AR(1) process, the covariance matrix of the residuals is the second case I described in the question. 3) The covariance matrix of OLS residuals is: $Cov(\hat{\varepsilon})=\hat{\sigma}^2(I-H)$ – CherryGarcia May 27 at 19:01
• @CherryGarcia: Yes, exactly. – Pohoua May 27 at 21:20
• Don't mistake residuals for errors. Residuals $r$ (what you call estimates of the residuals) are the difference between the real and the fitted y values; you don't have to estimate them, you have them. The errors $\epsilon$ instead are the random part of the data generating process. Those are assumed to be IID, not the residuals. A good estimate of the errors is given by the studentized residuals (simple residuals are heteroscedastic and hence biased estimators). – carlo May 28 at 11:30

In basic OLS you don't estimate the covariance matrix of the residuals. You assume that the errors (not the residuals) are spherical, meaning that they're not correlated with each other. Residuals will come out of OLS uncorrelated.

What you described as a second method rests on a different assumption. When applying basic OLS to time series, you run into the issue that its assumptions are not practical: in time series the residuals are often correlated. So, you could assume that they're an AR(1) process, and that is what that method does: it estimates the model assuming the errors are AR(1).

This is called feasible generalized least squares.
2020-09-29 07:50:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8617943525314331, "perplexity": 416.03402789117393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401632671.79/warc/CC-MAIN-20200929060555-20200929090555-00669.warc.gz"}
http://de.vroniplag.wikia.com/wiki/Nm/097
# Nm/097

Investigative Data Mining: Mathematical Models for Analyzing, Visualizing and Destabilizing Terrorist Networks, by Nasrullah Memon

## Fragment: page 97, lines 1-8

Type: Verschleierung (disguised plagiarism) · Editor: Hindemith · Reviewed
Source: Borgatti 2002, page 2, lines 15ff

Examined work (page 97, lines 1-8):

[The natural graphical representation of an adjacency matrix is a] table, such as shown in Figure 3.2. [TABLE, same as in source but extended by one row and one column] Figure 3.2. Adjacency matrix for graph in Figure 3.1. Examining either Figure 3.1 or Figure 3.2, we can see that not every vertex is adjacent to every other. A graph in which all vertices are adjacent to all others is said to be complete. The extent to which a graph is complete is indicated by its density, which is defined as the number of edges divided by the number possible. If self-loops are excluded, then the number possible is n(n-1)/2. Hence the density of the graph in Figure 3.1 is 7/21 = 0.33.

Source (Borgatti 2002, page 2):

The natural graphical representation of an adjacency matrix is a table, such as shown in Figure 2. [TABLE] Figure 2. Adjacency matrix for graph in Figure 1. Examining either Figure 1 or Figure 2, we can see that not every vertex is adjacent to every other. A graph in which all vertices are adjacent to all others is said to be complete. The extent to which a graph is complete is indicated by its density, which is defined as the number of edges divided by the number possible. If self-loops are excluded, then the number possible is n(n-1)/2. [...] Hence the density of the graph in Figure 1 is 6/15 = 0.40.

Remarks: The source is not given anywhere in the thesis.
Reviewers: (Hindemith), Bummelchen

## Fragment: page 97, lines 9-19

Type: Verschleierung (disguised plagiarism) · Editor: Hindemith · Reviewed
Source: Brandes/Erlebach 2005, pages 7-8, lines: p. 7: 30ff; p. 8: 1ff

Examined work (page 97, lines 9-19):

Graphs can be undirected or directed. The adjacency matrix of an undirected graph (as shown in Figure 3.2) is symmetric. An undirected edge joining vertices $u, v \in V$ is denoted by $\{u, v\}$. In directed graphs, each directed edge (arc) has an origin (tail) and a destination (head). An edge with origin $u \in V$ is represented by an order pair $(u, v)$. As a shorthand notation, an edge $\{u, v\}$ can also be denoted by $uv$. It is to note that, in a directed graph, $uv$ is short for $(u, v)$, while in an undirected graph, $uv$ and $vu$ are the same and both stands for $\{u, v\}$. Graphs that can have directed as well undirected edges are called mixed graphs, but such graphs are encountered rarely.

Source (Brandes/Erlebach 2005, pages 7-8):

Graphs can be undirected or directed. In undirected graphs, the order of the endvertices of an edge is immaterial. An undirected edge joining vertices $u, v \in V$ is denoted by $\{u, v\}$. In directed graphs, each directed edge (arc) has an origin (tail) and a destination (head). An edge with origin $u \in V$ and destination $v \in V$ is represented by an ordered pair $(u, v)$. As a shorthand notation, an edge $\{u, v\}$ or $(u, v)$ can also be denoted by $uv$.
In a directed graph, $uv$ is short for $(u, v)$, while in an undirected graph, $uv$ and $vu$ are the same and both stand for $\{u, v\}$. [...]. Graphs that can have directed edges as well as undirected edges are called mixed graphs, but such graphs are encountered rarely [...]

Remarks: The source is not mentioned anywhere in the thesis. The definitions given here are certainly standard and don't need to be quoted. However, Nm uses the same wording as the source for several passages. Note also that "An edge with origin $u \in V$ is represented by an order pair $(u, v)$" is a curious abbreviation of the statement "An edge with origin $u \in V$ and destination $v \in V$ is represented by an ordered pair $(u, v)$" in the source.

Reviewers: (Hindemith), WiseWoman
2017-02-25 02:17:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7554634809494019, "perplexity": 1217.1380867077671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00204-ip-10-171-10-108.ec2.internal.warc.gz"}
https://space.stackexchange.com/questions/51040/how-did-the-spacex-come-up-with-such-a-disruptive-design-in-the-falcon-9?noredirect=1
# How did SpaceX come up with such a disruptive design in the Falcon 9?

Because of the answers to Why has a rocket system like Starship never been proposed before?, I now need to ask something I'd always taken for granted. If Falcon 9 wasn't revolutionary for technical reasons, surely it still has to be more than a matter of the money being there when needed. I mean, we were all shocked, weren't we? I cried the first time they stuck the landing of the first stage. No really, I did, kinda like a baby. I know a lot of drama surrounds SpaceX, and it's not that I'd like to add to that. I think the use of the term 'revolution' for the effect of the Falcon 9 on the industry is easily deserved, and I'd really like to know what causes something like that to happen when it wasn't because of a major technical achievement. There's a rather long list out there of companies that tried and failed to do what they did. The Space Shuttle fell far short of the dream of cheap, quick space access it also chased. There have already been a lot of insightful comments and a good answer here. I'm editing now to get this reopened (hopefully), and because it just would really be great to get more perspective on this. I already know there were a bunch of factors, money being an important one, but an achievement so disruptive has to come from a lot more than that. How did they succeed?

• Btw I tried to search for a duplicate. It isn't easy to figure out how best to do that. I don't recall this being asked before, and the searches I could think of went nowhere. – kim holder Mar 25 at 19:54
• I would argue that one key revolution is vertical landing of the booster. Back in 2012, someone at a conference said it was the main driver for cheap re-usability. He and I discussed our disagreement at the time, and I'm glad to say that I've been proven wrong, and him right! – ChrisR Mar 25 at 20:10
• @ChrisR Vertical landing isn't revolutionary in 2012. It was in '96 when the DC-X demonstrated it. As I said, Falcon 9 is more evolutionary than revolutionary. – Polygnome Mar 25 at 20:40
• Comments are not for extended discussion; this conversation has been moved to chat. – called2voyage Mar 29 at 13:17
Can we make money by reusing rockets? Yes. And the rest is history. • This is really insightful and probably the right answer; going back to first principles backed (in the beginning at least) by existing own money in pocket. – uhoh Mar 25 at 22:55 • "making rockets reusable was a bit of an anathema for manufacturers up till then. NASA had tried it with the shuttle, but it wasn’t fully reusable and it cost a fortune." Falcon 9 isn't fully reusable either. The cost per flight for shuttle was about US$525M in 2021 money. If you don't count the orbiter and crew as payload, it works out to \$20M/ton of payload to LEO. If you do count the orbiter, it's \$5.5M/ton. F9 gets it down to \$3.3M/ton to LEO. That's cheaper, but not revolutionarily cheaper IMO. – Russell Borogove Mar 25 at 23:02 • Is that price or cost? – Slarty Mar 25 at 23:13 • Price (i.e. cost to a regular commercial customer) for Falcon -- NASA might get a contract discount, but I didn't bother to look it up since I'm just running an order-of-magnitude comparison here. For shuttle, the accounting is complicated; the only numbers we have to work with are the costs of the entire shuttle program -- training, logistics, paperwork, facilities, etc. -- amortized over the number of launches. So the costs are actually a lot closer than my figures would suggest. – Russell Borogove Mar 26 at 23:47 • Yes it is very difficult to get any accurate cost figures. However SpaceX seems to be doing very nicely. – Slarty Mar 27 at 9:00
2021-06-22 11:49:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34309515357017517, "perplexity": 1369.3777672769404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517048.78/warc/CC-MAIN-20210622093910-20210622123910-00178.warc.gz"}
https://gmatclub.com/forum/scope-of-gmat-in-geometry-130502.html
# Scope of GMAT in geometry

Intern (Joined: 07 Mar 2012, Posts: 30) | 10 Apr 2012, 05:22

Hi there! I just would like to ask a question about the scope of the GMAT in geometry. How far does the GMAT go in geometry? Do I need to know anything about spheres, cones and pyramids, or should I just worry about circles and cylinders? Thanks!

Manager (Joined: 19 Mar 2012, Posts: 152, GMAT 1: 750 Q50 V42) | 10 Apr 2012, 07:59

I don't recall ever seeing any questions on spheres, but that could just be me. Most of the geometry deals with triangles, cubes, quadrilaterals, and coordinate geometry.

Intern (balkan) | 10 Apr 2012, 14:29

Swoosh617 wrote: [...]

Thank you!

Manager (Joined: 06 Feb 2012, Posts: 88) | 11 Apr 2012, 10:54

Hi Balkan,

As far as what topics of Mathematics are covered in the GMAT, your best bet is to open the Official Guide (11th, 12th or 13th Edition) and go thru Chapter 4.0 Math Review. For geometry, reference section 4.3 Geometry, pages 107, 127 thru 139 (OG 12th Edition):

'Geometry is limited primarily to measurement and intuitive geometry or spatial visualization. Extensive knowledge of theorems and the ability to construct proofs, skills that are usually developed in a formal geometry course, are not tested. The topics included in this section are the following: 1. Lines 2. Intersecting Lines and Angles 3. Perpendicular Lines 4. Parallel Lines 5. Polygons (Convex) 6. Triangles 8. Circles 9. Rectangular Solids and Cylinders 10.
Coordinate Geometry'

I have not yet seen an 'official' GMAT problem with spheres, cones or pyramids that could not be 'reduced' to a 2D problem with circle(s), square(s) or triangle(s). For example:

Now... it would not hurt your general knowledge and/or personal confidence to know the following formulas:

Sphere: $$Surface (Sphere) = 4*pi*R^2$$ and $$Volume (Sphere) = \frac{4}{3}*pi*R^3$$ (http://en.wikipedia.org/wiki/Sphere; volume-of-a-sphere-84970.html, sphere-inside-cube-35637.html, method-to-solve-3-spheres-of-dough-problem-107119.html, ps-crystal-spheres-34671.html)

Cone: $$Surface (Cone) = pi*R*(R+L)$$ where $$R$$ is the radius of the circle at the bottom of the cone and $$L$$ is the lateral height of the cone, given by the Pythagorean theorem $$L=\sqrt{R^2 + h^2}$$ where $$h$$ is the height of the cone. $$Volume (Cone) = \frac{1}{3}*B*h$$ where $$B$$ is the area of the base and $$h$$ the height (the perpendicular distance from the base to the apex). (http://en.wikipedia.org/wiki/Cone_(geometry)#Geometry; cones-and-spheres-on-gmat-96751.html)

Pyramid: $$Volume (Pyramid) = \frac{1}{3}*B*h$$ (same formula as for the cone) and $$Surface (Pyramid) = B + \frac{P*L}{2}$$ where $$B$$ is the base area, $$P$$ is the base perimeter and $$L$$ is the slant height, $$L=\sqrt{R^2 + h^2}$$, where $$h$$ is the pyramid altitude and $$R$$ is the inradius of the base. (http://en.wikipedia.org/wiki/Pyramid_(geometry)#Volume; find-the-volume-of-a-pyramid-13904.html)
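To see the formulas above in action, here is a quick numeric sketch (my own illustration, not GMAT material; the function names and test values are arbitrary):

```python
from math import pi, sqrt

def sphere(R):
    """Volume and surface area of a sphere of radius R."""
    return (4 / 3) * pi * R**3, 4 * pi * R**2

def cone(R, h):
    """Volume and surface area of a right circular cone."""
    L = sqrt(R**2 + h**2)        # lateral (slant) height
    return (1 / 3) * pi * R**2 * h, pi * R * (R + L)

def square_pyramid(s, h):
    """Volume and surface area of a right pyramid with square base of side s."""
    B, P = s**2, 4 * s           # base area and perimeter
    L = sqrt((s / 2)**2 + h**2)  # slant height; the inradius of the base is s/2
    return (1 / 3) * B * h, B + P * L / 2

print(cone(3, 4))  # L = 5, so volume 12*pi ~ 37.70 and surface 24*pi ~ 75.40
```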
Intern (balkan) | 11 Apr 2012, 11:19

GreginChicago wrote: [...]

Thank you!

Math Expert (Joined: 02 Sep 2009, Posts: 51281) | 11 Apr 2012, 11:30

Check the Math Book geometry chapters. Triangles: math-triangles-87197.html; Polygons: math-polygons-87336.html; Coordinate Geometry: math-coordinate-geometry-87652.html; Circles: math-circles-87957.html; 3-D Geometries: math-3-d-geometries-102044.html#p792331

For practice, check our question banks. PS questions on geometry: search.php?search_id=tag&tag_id=53; DS questions on geometry: search.php?search_id=tag&tag_id=32; PS questions on coordinate geometry: search.php?search_id=tag&tag_id=62; DS questions on coordinate geometry: search.php?search_id=tag&tag_id=41

Hope it helps.

Intern (balkan) | 11 Apr 2012, 21:51

Bunuel wrote: [...]

Thanks a lot!
2018-12-18 11:02:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5262399911880493, "perplexity": 4623.657376227932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829140.81/warc/CC-MAIN-20181218102019-20181218124019-00027.warc.gz"}
https://stats.stackexchange.com/questions/191548/how-to-visualise-queue-wait-time
# How to visualise queue wait time

Suppose I have a process that works as follows:

• collect all outstanding requests
• process all of them, which takes time
• commit all results in one transaction
• repeat

I get a data point (start, duration) from each cycle: 00:00: 1h, 01:00: 1h, 02:00: 2h30m, 04:30: 2h, 06:30: 1h, 07:30: 1h

Below is the best I've come up with so far, boxes ;-) It shows a "slow" batch @2:00 where:

• all users had to wait 2h30m for their results
• this "slowness" lasted just as long, 2h30m

Is this visualisation good? Is there a canonical visualisation for time spent in a queue? How can I show multiple similar processes on the same plot, e.g. requests from Germany (own process), France (-"-), UK (-"-)?

• Could you provide a - possibly made-up - data sample of such data that can be used as an example? – Tim Jan 20 '16 at 11:31
• Edited, included data for the plot. – Dima Tisnek Jan 20 '16 at 11:37
• Are the "wait time" and elapsed "wall time" always the same? If so, it seems odd to put totally redundant information on two axes. Also, is the wait time at 00:59 really 1 hour? At that point, you only need to wait 1 minute until the next phase. – Nuclear Wang Jan 20 '16 at 11:43
• They are the same for this process (unless it fails, or is not run at all); they would be different for e.g. a pipeline. The reason for the axes is so the viewer can grasp that at 5AM the queue was 2 hours long. – Dima Tisnek Jan 20 '16 at 11:47
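A minimal sketch of the "boxes" idea with matplotlib (my own reconstruction from the description, not the asker's actual plot; the data tuples are the ones listed in the question): each cycle is drawn as a box whose width is its wall time and whose height is its wait time.

```python
import matplotlib.pyplot as plt

# (start hour, duration in hours) for each batch cycle, from the question
cycles = [(0, 1), (1, 1), (2, 2.5), (4.5, 2), (6.5, 1), (7.5, 1)]

fig, ax = plt.subplots()
for start, dur in cycles:
    # width = elapsed wall time, height = wait time; equal here, as the
    # comments note, but they would differ for a pipelined process
    ax.broken_barh([(start, dur)], (0, dur), facecolor="none", edgecolor="black")

ax.set_xlabel("time of day (h)")
ax.set_ylabel("wait time (h)")
plt.show()
```

Multiple regions (Germany, France, UK) could then be overlaid on the same axes with one colour per process.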
2019-05-21 17:24:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2972681522369385, "perplexity": 3871.257866902791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256494.24/warc/CC-MAIN-20190521162634-20190521184634-00127.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/1/lesson/1.2.4/problem/1-71
### Home > CALC > Chapter 1 > Lesson 1.2.4 > Problem 1-71

1-71. Find the domain of each of the following functions.

1. f(x) = log (x − 4)

$\text{If the domain of the parent graph, } y=\sqrt{x}, \text{ is } x \geq 0,\text{ what is the domain of }f(x)?$

$\text{If the domain of the parent graph, } y=\frac{1}{x}, \text{ is } x \neq 0,\text{ what is the domain of }f(x)?$

$\text{If the domain of the parent graph, } y=\log{x}, \text{ is } x > 0,\text{ what is the domain of }f(x)?$
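As a worked instance of the third hint (my own step, not from the hints page): since $\log x$ is defined only for $x > 0$, the shifted function $f(x) = \log(x - 4)$ requires $x - 4 > 0$, so its domain is $x > 4$.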
2020-01-17 22:23:57
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6602946519851685, "perplexity": 6384.429875431311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591234.15/warc/CC-MAIN-20200117205732-20200117233732-00450.warc.gz"}
https://tangentiaventures.com/promise-at-xzd/77b08c-tangent-formula-graph
# tangent formula graph

In a right triangle, tan theta, the tangent of an angle, is the ratio of the length of the opposite side to the length of the adjacent side; for a general angle it is given by the trigonometric formula tan x = sin x / cos x. Placing these values against the angle produces the tan graph. For example, in a 30° triangle with hypotenuse 2, opposite side 1 and adjacent side √3, tan 30° = 1/√3; in a 35° triangle with opposite side 2.8 and adjacent side 4, tan 35° = 2.8/4 = 0.7; in a 45° triangle with opposite and adjacent sides both 1, tan 45° = 1; and measuring a triangle with opposite side 8.2 and adjacent side 6.5 gives tan(52°) ≈ 8.2/6.5 ≈ 1.26.

Because tan x = sin x / cos x, the value of tan x is undefined wherever cos x = 0, that is at x = pi/2 + k pi, where k is an integer. These are the vertical asymptotes of the graph of y = tan x (drawn as red dotted lines in the figures), so the domain of the tangent function is all real numbers except pi/2 + k pi. Between consecutive asymptotes, for instance on the branch from 0.5π + kπ to 1.5π + kπ radians, the function increases and takes values from -∞ to ∞. The range of tangent therefore has no restrictions: you aren't stuck between 1 and -1 as with sine and cosine, and the graph has no maximum or minimum points. The curve repeats every π radians (180°), so the period of the tangent graph is π, which is different from that of sine and cosine, and the graph looks like a discontinuous curve because the function diverges to infinity at every odd multiple of 90°. A cycle of the tangent function has two asymptotes and a zero point halfway in between.

y = tan x is an odd function and its graph exhibits symmetry about the origin, just as y = sin x does, whereas y = cos x is an even function, symmetric about the y-axis. The sign of tan x follows the signs of sine and cosine: the graph of f(x) = tan x is negative for angles in Quadrants II and IV (where sine and cosine have opposite signs) and positive for angles in Quadrant III (where both are negative). The inverse function, arctan, returns the angle with a given tangent; for example, arctan 1 = tan-1 1 = π/4 rad = 45°.

The general form of a tangent function is y = A tan(B(x - C)) + D, where A, B, C, and D are constants; as there is a phase shift in the sine and cosine graphs, in the same way the C term produces a phase shift in the tangent graph, moving it left or right. Trigonometric graphs like this turn up in applications ranging from representing sound waves in computer music to crystallography (the study of atom arrangements in a crystalline solid) and seismology (the study of earthquakes).

A tangent line, by contrast, is a line that touches the graph of a function at one point and is the best straight-line approximation for the graph at that spot; having a graph is helpful when trying to visualize it. The slope of the tangent line reveals how steep the graph is rising or falling at that point, and taking the derivative of the function gives that slope. When a problem asks you to find the equation of the tangent line, evaluate at the point where the line touches the graph and use the slope-intercept formula y = mx + b, where m is the slope and b is the y-intercept. For example, for the line tangent to f(x) = x^2 at x = 2, the slope is f'(2) = 4 and the point is (2, 4), so 4 = 4*2 + b gives b = -4 and the tangent line is y = 4x - 4.
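A short plotting sketch of y = tan x over three periods (my own illustration; the clipping threshold of 10 is arbitrary, chosen only to keep the jumps at the asymptotes from being drawn as vertical lines):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1.5 * np.pi, 1.5 * np.pi, 2000)
y = np.tan(x)
y[np.abs(y) > 10] = np.nan  # mask values near the asymptotes

plt.plot(x, y)
for k in (-1, 0):  # vertical asymptotes x = pi/2 + k*pi inside the view
    plt.axvline(np.pi / 2 + k * np.pi, color="red", linestyle=":")
plt.ylim(-10, 10)
plt.xlabel("x (radians)")
plt.ylabel("tan(x)")
plt.show()
```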
2021-04-10 11:34:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7265101075172424, "perplexity": 782.0455731930881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00625.warc.gz"}
http://mathhelpforum.com/algebra/159301-perfect-square-root-quadratic.html
# Thread: Perfect square root of a quadratic

1. ## Perfect square root of a quadratic

Is it possible to find analytically, in all cases, the function that forms the perfect square root of a quadratic? For example, with the quadratic x^2 + 10x + 25 you can find by factoring the perfect root x + 5. But how do you find the square root when a quadratic isn't expressed in the form of a completed square? With the quadratic x^2 + 10x + 20, it seems the simplest form you can get the root into is sqrt((x+5)^2 - 5). Is that the best that can be done? Is there some other way to express this root?

2. I really cannot make heads or tails out of what you are saying. Either a quadratic is a perfect square or it is not. You cannot change a quadratic that is not a perfect square to a different "form" in which it is a perfect square. And I have no idea why you would prefer $\sqrt{(x+5)^2 - 5}$ to simply $\sqrt{x^2 + 10x + 20}$. They both are "the function that forms the perfect square root of a quadratic".

3. Originally Posted by HallsofIvy [...] I see. I ask such a strange question because I have a calculation in which a square-rooted quadratic results at each step and x is unknown at the time of the calculation. It seems I will have to store the complete series, e.g. $\sqrt{5.123x^2 + 10.235x + 20.8} + \sqrt{2x^2 + 4.234x + 13.35} + ...$, whereas I was mistakenly thinking I would be able to simplify terms together. Thanks for your reply.
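A quick programmatic restatement of the "perfect square" condition (my own sketch, not from the thread): a quadratic $ax^2 + bx + c$ is the square of a linear polynomial exactly when its discriminant $b^2 - 4ac$ is zero.

```python
def is_perfect_square_quadratic(a, b, c):
    """True if a*x^2 + b*x + c equals (p*x + q)^2 for some real p, q."""
    return a > 0 and b * b - 4 * a * c == 0

print(is_perfect_square_quadratic(1, 10, 25))  # True:  x^2+10x+25 = (x+5)^2
print(is_perfect_square_quadratic(1, 10, 20))  # False: best form is (x+5)^2 - 5
```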
2016-09-29 22:06:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.625904381275177, "perplexity": 172.65622331165423}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661953.95/warc/CC-MAIN-20160924173741-00210-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.clutchprep.com/chemistry/practice-problems/99498/the-mass-of-a-sample-of-gas-is-827-mg-its-volume-is-0-270-l-at-a-temperature-of-
# Problem: The mass of a sample of gas is 827 mg. Its volume is 0.270 L at a temperature of 88°C and a pressure of 975 mmHg. Find its molar mass.

###### Expert Solution

Molar mass: first, we have to calculate the amount of gas in moles using the ideal gas equation:

$PV = nRT$

where P = pressure (atm), V = volume (L), n = moles (mol), R = gas constant = 0.08206 (L·atm)/(mol·K), and T = temperature (K).

Isolating n (the number of moles of gas):

$n = \frac{PV}{RT}$

Given: P = 975 mmHg = 975/760 atm = 1.2829 atm, V = 0.270 L, T = 88°C + 273.15 = 361.15 K, R = 0.08206 (L·atm)/(mol·K).

Calculate n:
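Completing the truncated arithmetic (my own working; every value follows from the setup above):

$n = \frac{(1.2829)(0.270)}{(0.08206)(361.15)} \approx 0.01169 \text{ mol}$

The molar mass is then the mass divided by the moles:

$M = \frac{m}{n} = \frac{0.827 \text{ g}}{0.01169 \text{ mol}} \approx 70.8 \text{ g/mol}$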
2020-07-13 14:41:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6435914039611816, "perplexity": 2512.9274782580273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00347.warc.gz"}
http://icpc.njust.edu.cn/Problem/Zju/2850/
Time Limit: Java: 2000 ms / Others: 2000 ms Memory Limit: Java: 65536 KB / Others: 65536 KB

## Description

Tom has a meadow in his garden. He divides it into N * M squares. Initially all the squares were covered with grass. He mowed down the grass on some of the squares, and he thinks the meadow is beautiful if and only if:

• Not all squares are covered with grass.
• No two mowed squares are adjacent.

Two squares are adjacent if they share an edge.

Here comes the problem: is Tom's meadow beautiful now?

## Input

The input contains multiple test cases! Each test case starts with a line containing two integers N, M (1 <= N, M <= 10) separated by a space. There follows the description of Tom's meadow: N lines, each consisting of M integers separated by a space. 0 (zero) means the corresponding square of the meadow is mowed and 1 (one) means the square is covered by grass. A line with N = 0 and M = 0 signals the end of the input and should not be processed.

## Output

One line for each test case. Output "Yes" (without quotations) if the meadow is beautiful, otherwise "No" (without quotations).

## Sample Input

2 2 1 0 0 1 2 2 1 1 0 0 2 3 1 1 1 1 1 1 0 0

## Sample Output

Yes No No

## Source

Zhejiang Provincial Programming Contest 2007
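A straightforward solution sketch (my own, not an official reference solution): report "Yes" exactly when at least one square is mowed and no mowed square has a mowed neighbour to its right or below; checking only right and down covers every adjacent pair once.

```python
import sys

def beautiful(grid):
    n, m = len(grid), len(grid[0])
    any_mowed = any(cell == 0 for row in grid for cell in row)
    for i in range(n):
        for j in range(m):
            if grid[i][j] == 0:
                # a mowed square must not touch another mowed square
                if j + 1 < m and grid[i][j + 1] == 0:
                    return False
                if i + 1 < n and grid[i + 1][j] == 0:
                    return False
    return any_mowed

tokens = sys.stdin.read().split()
pos = 0
while True:
    n, m = int(tokens[pos]), int(tokens[pos + 1])
    pos += 2
    if n == 0 and m == 0:
        break
    grid = [[int(t) for t in tokens[pos + i * m : pos + (i + 1) * m]]
            for i in range(n)]
    pos += n * m
    print("Yes" if beautiful(grid) else "No")
```

On the sample input this prints Yes, No, No, matching the sample output.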
2020-08-10 11:52:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24554955959320068, "perplexity": 2811.8564670963597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738674.42/warc/CC-MAIN-20200810102345-20200810132345-00252.warc.gz"}
https://www.assignmentexpert.com/homework-answers/economics/microeconomics/question-49285
# Answer to Question #49285 in Microeconomics for Bradly

Question #49285

1) Suppose that at 100 units of output a monopolist is producing such that marginal revenue is equal to marginal cost. The firm is selling its output at a price of $8 per unit and is incurring average variable costs of $5 per unit and average fixed costs of $4 per unit. On the basis of this information we can conclude that the firm is:
- operating at maximum profit by producing the 100 units of output
- operating at a loss that could be reduced by shutting down
- operating at a profit that could be increased by producing more output
- operating at a loss that is less than the loss incurred by shutting down

2) Suppose a firm has monopoly power in the production of a particular good. If it finds that revenue and cost conditions are such that at all levels of output the price it can charge in order to sell all of the units is less than the average variable costs, then it is in the firm's best interest to:
- close down because its operating losses will exceed its shut-down losses at all levels of output
- maximize profits by producing where MR = MC
- close down because its total operating cost will exceed its total revenue
- minimize losses by producing where MR = MC

Expert's answer

1) d) operating at a loss that is less than the loss incurred by shutting down

2) a) close down because its operating losses will exceed its shut-down losses at all levels of output
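The reasoning behind the answers, as a brief check (my own working, not part of the expert's answer): in question 1, average total cost is $5 + $4 = $9, so at a price of $8 the firm loses $1 per unit, or $100 on 100 units; if it shut down it would still incur its fixed costs of $4 × 100 = $400. Because price covers average variable cost, operating gives the smaller loss. In question 2, price below average variable cost at every output means each unit sold adds to the loss on top of fixed costs, so shutting down minimizes losses.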
2018-12-15 14:59:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1960303783416748, "perplexity": 1286.3718088165851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.91/warc/CC-MAIN-20181215131038-20181215153038-00590.warc.gz"}
https://simple.m.wikipedia.org/wiki/Moment_of_inertia
# Moment of inertia

scalar measure of the rotational inertia with respect to a fixed axis of rotation

Moment of inertia (${\displaystyle I}$), also called "angular mass" (kg·m²),[1] is the inertia of a rotating body with respect to its rotation. The angular momentum of the figure skater is conserved—as she decreases her radius by retracting her arms and legs, her moment of inertia decreases, but her angular velocity increases to compensate. It is a rotating body's resistance to angular acceleration or deceleration, equal to the product of the mass and the square of its radius measured perpendicularly to the axis of rotation.

## Moments of inertia for a few objects

The moment of inertia I = ∫r²dm of a hoop, disk, cylinder, box, plate, rod, and spherical shell or solid can be found from this figure.

## References

1. Atkinson, P. (2012). Feedback Control Theory for Engineers. Springer Science & Business Media. p. 50. ISBN 978-1-4684-7453-4. "The student is advised to regard moment of inertia as being equivalent to 'angular mass'; equations in rotational mechanics are generally analogous to those in translational mechanics. Wherever an equation occurs in translational mechanics involving mass m, there is an equivalent equation in rotational mechanics involving moment of inertia J. The units of moment of inertia are kilogram metres² (abbreviation kg m²)."
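As a hedged illustration (my addition, not from the article), the defining integral I = ∫r²dm can be checked numerically against the textbook result I = ½MR² for a uniform solid disk about its central axis:

```python
import numpy as np

# Sum thin concentric rings: a ring of radius r and width dr has mass
# dm = sigma * 2*pi*r*dr and contributes r^2 * dm to the moment of inertia.
M, R = 2.0, 0.5                      # example mass (kg) and radius (m)
N = 100_000
dr = R / N
r = (np.arange(N) + 0.5) * dr        # ring midpoints
sigma = M / (np.pi * R**2)           # uniform surface mass density
dm = sigma * 2 * np.pi * r * dr

I_numeric = np.sum(r**2 * dm)
I_formula = 0.5 * M * R**2
print(I_numeric, I_formula)          # both ~0.25 kg*m^2
```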
2023-01-27 18:20:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8569271564483643, "perplexity": 1103.4338542524085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00369.warc.gz"}
https://www.physicsforums.com/threads/gcse-additional-physics-need-help-d.239642/
# GCSE Additional physics - NEED HELP =D

Tags:

1. Jun 10, 2008 ### olliebellamy
1. The problem statement, all variables and given/known data
A beam of electrons leaves an electron gun. The current carried by the beam is 4mA.
a) How many coulombs of charge pass through a certain point in the beam per second?
b) How many electrons pass this point per second?
2. Relevant equations
KE(j) = Charge of electrons (e) X Accelerating voltage (V)
3. The attempt at a solution
I need to find coulombs, and when only given '4mA' i do not see how i can work this out. I have my exam tomorrow afternoon and need to try to get this cleared up, please help =S
Last edited: Jun 10, 2008

2. Jun 10, 2008 ### Ed Aboud
Hey olliebellamy, welcome to PF. First off, I think you are over-complicating things.

3. Jun 10, 2008 ### Kurdt Staff Emeritus
How is the ampere defined?

4. Jun 10, 2008 ### olliebellamy
Blooming forums went down, sorry for not replying. ok, i've re-looked at the question. i have no idea how to work out the answer. where do i start =P

5. Jun 10, 2008 ### Kurdt Staff Emeritus
The forum is undergoing a software upgrade so it's the same for everyone at the minute. Like I mentioned before, what is the definition of an ampere, or equally as good the definition of a coulomb? If you find that out it will help you with the question in hand.

6. Jun 10, 2008 ### olliebellamy
in the revision guide i am using, there isn't any reference to the ampere in the Electron Beam section. Just this question with no other information, except that: the charge on an electron is -1.6x10^-19 C. the book i am using is appalling -.-

7. Jun 10, 2008 ### Kurdt Staff Emeritus
OK, well 1 coulomb of charge is the amount of charge that passes a point in a second when a current of 1 ampere is present. So if a current of 4mA is present, how much charge passes a point in a second?

8. Jun 10, 2008 ### olliebellamy
4000? or 0.004?

9. Jun 10, 2008 ### Kurdt Staff Emeritus
Check what milli means again.

10. Jun 10, 2008 ### olliebellamy
milli is 1/1000 - still i'm confused =[

11. Jun 10, 2008 ### lukas86
If the current is smaller (4mA < 1A), what does that say about the charge passing? Would it be larger or smaller?

12. Jun 10, 2008 ### Kurdt Staff Emeritus
Sorry, I posted before you edited. The latter is correct, and remember your units if this is for an exam.

13. Jun 10, 2008 ### olliebellamy
OK - 4mA would be smaller than 1A. In that case i guess the charge would be greater

14. Jun 10, 2008 ### olliebellamy
can i not just have an answer to this question from somebody? possibly with an explanation of how they came up with this answer?

15. Jun 10, 2008 ### lukas86
50/50 chance there, try again :P. If you have a smaller current flowing, less electrons (therefore: charge) would be flowing. Making sense... somewhat?

16. Jun 10, 2008 ### olliebellamy
no sense whatsoever....i'm gonna skip this section right now and move onto Work, power and energy......at least i can follow some simple formulae for this subject. i'll just hope that i don't have to work out a coulomb in the exam....cos i really can't do it

17. Jun 10, 2008 ### Kurdt Staff Emeritus
We don't give out answers on this forum, as you'll have read when you agreed to the rules when signing up. You had the answer before: 0.004 Coulombs. The formula you were using effectively was $Q = I t$.

18. Jun 10, 2008 ### olliebellamy
thanks so much, if that formula could have just been displayed for my knowledge in my revision guide, i would have had nothing to worry about. now i know the formula, i shan't forget it,
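For the record, the whole thread boils down to Q = I t and N = Q/e; here is a minimal worked check (my own summary, not part of the thread):

```python
# a) charge past a point per second; b) electrons past that point per second.
I = 4e-3        # beam current in amperes (4 mA)
t = 1.0         # one second
e = 1.6e-19     # magnitude of the electron charge in coulombs

Q = I * t               # 0.004 C  (part a)
n_electrons = Q / e     # 2.5e16 electrons per second (part b)

print(Q, n_electrons)
```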
2016-12-11 02:51:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6293019652366638, "perplexity": 2934.01557311198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543782.28/warc/CC-MAIN-20161202170903-00407-ip-10-31-129-80.ec2.internal.warc.gz"}
https://wealth365.com/shrewsbury-results-ovtnur/hybridization-of-iodine-in-if7-58acc7
Draw the structure of IF7. Write its geometry and the type of hybridization. What is the molecular shape of IF7? How many lone pairs of electrons, if any, would there be on the central iodine atom? Does IF7 exist? Is IF7 ionic or covalent?

Iodine heptafluoride, also known as iodine(VII) fluoride or iodine fluoride, is an interhalogen compound with the chemical formula IF7. Iodine belongs to the halogen family: it is in the seventh group of the periodic table and has seven valence electrons in its outer orbit. Only iodine forms the heptafluoride IF7, but chlorine and bromine give pentafluorides. (Why do ICl7 and IBr7 not exist, whereas IF7 does? We know that in the periodic table fluorine [F = 3.98] is the most electronegative element.) I think IF7 exists, and it looks like an octahedron, except it has one more bond.

The Lewis structure is the representation of the electrons of the molecule. Answer: IF7 has seven bond pairs and zero lone pairs of electrons. The central iodine atom undergoes sp3d3 hybridisation, which results in pentagonal bipyramidal geometry; hence, 7 orbitals are involved in the hybridization, which is sp3d3.

The pentagonal bipyramid is a case where bond angles surrounding an atom are not identical (see also trigonal bipyramidal molecular geometry). In chemistry, a pentagonal bipyramid is a molecular geometry with one atom at the centre and seven ligands at the corners of a pentagonal bipyramid. A perfect pentagonal bipyramid belongs to the molecular point group D5h. Structure of iodine heptafluoride: an example of a molecule with the pentagonal-bipyramidal coordination geometry.

sp3d3 hybridization - example: 1) Iodine heptafluoride (IF7). The electronic configuration of the iodine atom in the ground state (writing only the valence orbitals) is [Kr]4d10 5s2 5p5. The electronic configuration of iodine in the third excited state can be written as [Kr]4d10 5s1 5p3 5d3. In this state, the sp3d3 hybridization of the iodine atom gives 7 half-filled sp3d3 hybrid orbitals in pentagonal bi-pyramidal symmetry. These form 7 σ (sp3d3-p) bonds with the fluorine atoms.

What are the types of hybridisation of iodine in the interhalogen compounds IF3, IF5 and IF7, respectively? What is the hybridization of iodine in IF3 and IF5? Fill in the blanks: the hybridization of iodine in IF3 and IF5 are ____ and ____ respectively; in SO3 the S-atom undergoes ………… hybridisation. In ICl3 you have a single bond between I and each Cl and two lone pairs of electrons on the I. Trigonal bipyramidal: five electron groups involved, resulting in sp3d hybridization; the angles between the orbitals are 90° and 120°. sp3d hybridization involves the mixing of one s, three p and one d orbital to form 5 sp3d hybridized orbitals of equal energy.

Among sp, sp2 and sp3, which hybrid orbital is more electronegative? Explain why, at room temperature, fluorine and chlorine are gases, bromine is a liquid, and iodine is a solid. Problem 109: What is the oxidation state of the halogen in each of the following?
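All of the hybridisations quoted above follow from one count, steric number = sigma bonds + lone pairs on the central atom. Here is a small illustrative sketch of that rule; the mapping table is my own standard VSEPR bookkeeping rather than anything taken from the page:

```python
# Steric number -> hybridization, applied to the interhalogens discussed above.
HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3", 5: "sp3d", 6: "sp3d2", 7: "sp3d3"}

def hybridization(sigma_bonds, lone_pairs):
    return HYBRIDIZATION[sigma_bonds + lone_pairs]

# Iodine has 7 valence electrons; each I-F bond uses one, the rest pair up.
print(hybridization(7, 0))   # IF7 -> sp3d3 (pentagonal bipyramidal)
print(hybridization(5, 1))   # IF5 -> sp3d2
print(hybridization(3, 2))   # IF3 -> sp3d
```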
2021-03-09 11:09:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5065383315086365, "perplexity": 4245.2482387572145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00552.warc.gz"}
https://mathoverflow.net/questions/380801/the-cauchy-transform-and-the-convergence-of-the-fourier-stieltjes-transforms-of
# The Cauchy Transform, and the convergence of the Fourier-Stieltjes transforms of a sequence of measures

Let $$C\left(\mathbb{R}/\mathbb{Z}\right)$$ denote the Banach space of continuous, $$1$$-periodic complex-valued functions on the unit interval, let $$M\left(\mathbb{R}/\mathbb{Z}\right)$$ denote its dual, the space of finite complex Borel measures on $$\mathbb{R}/\mathbb{Z}$$, and let $$\mathcal{A}\left(\mathbb{D}\right)$$ denote the vector space of all holomorphic functions $$f:\mathbb{D}\rightarrow\mathbb{C}$$ on the open unit disk. We equip $$\mathcal{A}\left(\mathbb{D}\right)$$ with the topology of compact convergence on $$\mathbb{D}$$—that is, uniform convergence on every compact subset of $$\mathbb{D}$$.

Next, letting $$\omega$$ be a positive real number, define the order $$\omega$$ Cauchy transform $$\mathscr{C}_{\omega}$$ as the linear operator $$\mathscr{C}_{\omega}:M\left(\mathbb{R}/\mathbb{Z}\right)\rightarrow\mathcal{A}\left(\mathbb{D}\right)$$ given by: $$\mathscr{C}_{\omega}\left\{ d\mu\right\} \left(z\right)\overset{\textrm{def}}{=}\int_{0}^{1}\frac{d\mu\left(t\right)}{\left(1-e^{-2\pi it}z\right)^{\omega}},\textrm{ }\forall\left|z\right|<1,\textrm{ }\forall\mu\in M\left(\mathbb{R}/\mathbb{Z}\right)$$

Finally, for any $$\mu\in M\left(\mathbb{R}/\mathbb{Z}\right)$$, let $$\hat{\mu}:\mathbb{Z}\rightarrow\mathbb{C}$$ denote the Fourier coefficients of $$\mu$$ (the Fourier-Stieltjes transform of $$\mu$$): $$\hat{\mu}\left(n\right)\overset{\textrm{def}}{=}\int_{0}^{1}e^{-2\pi int}d\mu\left(t\right),\textrm{ }\forall n\in\mathbb{Z}$$

I've been doing quite a bit of reading about Cauchy transforms (fractional or otherwise), but I haven't been able to find much regarding the behavior of the transform with respect to the Fourier coefficients of sequences of elements in $$M\left(\mathbb{R}/\mathbb{Z}\right)$$. Specifically, let $$\left\{ \mu_{m}\right\} _{m\geq1}$$ be a sequence in $$M\left(\mathbb{R}/\mathbb{Z}\right)$$, and let: $$f_{m}\left(z\right)\overset{\textrm{def}}{=}\mathscr{C}_{\omega}\left\{ d\mu_{m}\right\} \left(z\right),\textrm{ }\forall m\geq1$$

Now, suppose that:

I. As $$m\rightarrow\infty$$, the $$f_{m}$$s converge compactly over $$\mathbb{D}$$ to a limit $$f\in\mathcal{A}\left(\mathbb{D}\right)$$.

II. There is a function $$c:\mathbb{Z}\rightarrow\mathbb{C}$$ so that: $$\lim_{m\rightarrow\infty}\sup_{n\in\mathbb{Z}}\left|c\left(n\right)-\hat{\mu}_{m}\left(n\right)\right|=0$$

With these hypotheses, does it then follow that there is a measure $$d\mu$$ so that both:

i. $$c=\hat{\mu}$$

ii. $$f=\mathscr{C}_{\omega}\left\{ d\mu\right\}$$

That is to say: is the pointwise limit of the Fourier coefficients of the $$\mu_{m}$$s itself the sequence of Fourier coefficients of a measure $$\mu$$, and is $$f$$ the Cauchy transform of this $$\mu$$? Additionally, would it make any difference if it was known that the $$f_{m}$$s also converged compactly outside of the closed unit disk? Finally, if, for each $$m$$, $$d\mu_{m}\left(t\right)=\phi_{m}\left(t\right)dt$$, where there is a $$p\in\left(1,\infty\right)$$ so that $$\phi_{m}\in L^{p}\left(\mathbb{R}/\mathbb{Z}\right)$$ for all $$m$$, is $$d\mu$$ of the form $$d\mu\left(t\right)=\phi\left(t\right)dt$$ for some $$\phi\in L^{p}\left(\mathbb{R}/\mathbb{Z}\right)$$, where $$\hat{\mu}=c$$, with $$c$$ as described above?

• Is this not related to fractional calculus? Jan 12 at 0:07
• No. This is a question of the representability of holomorphic functions in terms of their boundary behavior, and the stability of this representability with respect to sequences convergent in various topologies. – MCS Jan 12 at 21:31
• Then not related to the Euler integral for the beta function, various methods of analytic continuation for it, Mellin transform extension, Pochhammer contour integral rep., convolution rep. Jan 12 at 22:19
• Mmm... when you put it that way, I'd say that it is related—in that the formulas involved are in an open neighborhood about those topics. The sequences of measures in question are, in fact, the (partial) time averages of a Dirac delta under the adjoint of a linear operator on functions on the disk, one which admits a representation as a contour integral against a kernel—a rational function of two complex variables. I could write a small paper on the questions I currently have. The present question is my effort to reduce it to its essentials, to increase my chance of getting a response. – MCS Jan 13 at 0:33
• A multi-dimensional neighborhood--an extension/interpolation of the Hilbert transform, the Stieltjes-Cauchy transform in free probability theory, even perhaps the Todd operator--so a number of possible approaches. Unfortunately, I have little time to spend on it, but I'm certainly interested in any results. Jan 13 at 16:18
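One connection worth recording here (my own remark, not from the thread): for $$\omega=1$$ the kernel expands as a geometric series, $$\frac{1}{1-e^{-2\pi it}z}=\sum_{k\geq0}e^{-2\pi ikt}z^{k}$$ for $$\left|z\right|<1$$, so $$\mathscr{C}_{1}\left\{ d\mu\right\} \left(z\right)=\sum_{k\geq0}\hat{\mu}\left(k\right)z^{k}$$, which is exactly how the transform sees the Fourier-Stieltjes data. A quick numeric sanity check of that identity:

```python
import numpy as np

# Take d mu = e^{2 pi i 2 t} dt, whose only nonzero Fourier coefficient is
# mu_hat(2) = 1; the order-1 Cauchy transform should then be exactly z^2.
N = 200_000
t = np.arange(N) / N                      # uniform grid on [0, 1)
phi = np.exp(2j * np.pi * 2 * t)

z = 0.3 + 0.4j                            # any point with |z| < 1
C1 = np.sum(phi / (1.0 - np.exp(-2j * np.pi * t) * z)) / N

print(C1)      # ~ -0.07 + 0.24j
print(z**2)    # -0.07 + 0.24j
```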
2021-09-26 08:48:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 48, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9609717726707458, "perplexity": 154.06741891944057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057857.27/warc/CC-MAIN-20210926083818-20210926113818-00574.warc.gz"}
https://cuhkmath.wordpress.com/2011/10/03/a-condition-for-a-riemannian-manifold-to-be-isometric-to-a-sphere-ii/
## A condition for a Riemannian manifold to be isometric to a sphere II

[Updated on 5/10/2011: John Ma discovered that the result (re)discovered by him was proved by Lichnerowicz a long time ago. $\underset{\sim}{>\;<}$]

This is a sequel to the previous post on a condition for a Riemannian manifold to be isometric to a sphere. In that post, we proved that ${M}$ is a sphere if there is a nontrivial function ${\phi}$ such that $\displaystyle \nabla ^2 \phi=-\phi g.$ It is natural to ask if we can weaken the assumption to the existence of a nontrivial ${\phi}$ such that $\displaystyle \Delta \phi =-n \phi.$ As it turns out, if we impose a further condition on ${(M,g)}$, then it is true. More precisely, we can prove that

Theorem 1 (Obata) For a compact Einstein manifold ${(M^n,g)}$ with positive scalar curvature ${n(n-1)}$, if there is a nonconstant function ${\phi}$ on ${M}$ such that $\displaystyle -\Delta \phi=n \phi, \ \ \ \ \ (1)$ then ${(M,g )}$ is isometric to the standard unit sphere.

Recall that ${M}$ is Einstein if its Ricci curvature satisfies ${Ric=c g}$ for some constant ${c}$; in this case we can let ${c=(n-1)k}$, and then the scalar curvature ${R=tr_g(Ric)}$ is ${n(n-1)k}$. Actually, Theorem 1 is a corollary of the following

Theorem 2 (Obata, first eigenvalue estimate for the Laplacian) Let ${(M,g)}$ be a connected compact Riemannian manifold. If ${-\Delta \phi=n\lambda \phi}$ for some non-constant ${\phi}$ and nonzero ${\lambda}$, then $\displaystyle \lambda \geq \frac{(Rc(\nabla \phi), \nabla \phi)}{ (n-1) (\nabla \phi, \nabla \phi)}.$ The equality holds if and only if ${(M,g)}$ is isometric to the standard sphere of radius ${1/\sqrt{\lambda}}$ in the Euclidean space. In particular, if ${(M,g)}$ is Einstein with ${Ric = (n-1)g}$ and ${\lambda =1}$, i.e. there is ${\phi}$ with ${-\Delta \phi=n\phi}$, then ${(M,g)}$ is isometric to the standard unit sphere.

1. For an orientable compact ${M}$ and two tensors ${u,v}$ of the same type, we can define ${\langle u,v\rangle }$ at each point ${p\in M}$, and we can then define $\displaystyle (u,v)=\int_M \langle u,v \rangle dV.$ If ${M}$ is non-orientable, we take its orientable double cover ${\widetilde M}$ and lift ${u,v}$ to ${\widetilde M}$ (with the same notations ${u,v}$); we then define (naturally all tensors, and in particular ${g}$ and ${Rm}$ etc., can be lifted to ${\widetilde M}$) $\displaystyle (u,v)=\int_{\widetilde M}\langle u ,v\rangle d V.$

2. The $(0,2)$ tensor ${Ric=R_{ij}}$ can be raised to a $(1,1)$ tensor ${R_i^j}$ and thus can be treated as a linear map ${v^i\mapsto R_j^i v^j}$; we denote the latter as ${Rc(v)\in TM}$. If ${M}$ is Einstein, i.e. ${Ric=(n-1)kg}$, then ${Rc}$ is just ${(n-1)k \;id}$.

3. We define ${\delta}$ as the differential operator which takes ${p}$-tensors to ${(p-1)}$-tensors by $\displaystyle \delta u =-\nabla _i u^i \,_{i_{p-1}\cdots\, i_1}$ for ${u=u_{i_p \cdots i_1}}$. By Stokes' theorem, ${\nabla }$ and ${\delta}$ are dual to each other: $\displaystyle (\nabla u, v)=(u, \delta v)$ for a ${(p-1)}$-tensor ${u}$ and a ${p}$-tensor ${v}$. We define ${\Delta \phi=\nabla _i \nabla ^i \phi}$; in other words, $\displaystyle \Delta =- \delta \nabla .$ Note that in this notation, ${-\Delta}$ is non-negative definite (see Lemma 3 (2)).
With these notations in mind, we have

Lemma 3 Suppose ${-\Delta \phi=n\lambda \phi}$; then we have

$\displaystyle (\nabla \phi, \nabla \phi)= -(\Delta \phi, \phi)= n\lambda(\phi,\phi).$

$\displaystyle (\nabla ^2 \phi, \phi g)= (\Delta \phi , \phi)= -(\nabla \phi,\nabla \phi).\ \ \ \ \ (2)$

$\displaystyle \delta \nabla ^2 \phi=n\lambda \nabla \phi -Rc(\nabla \phi).\ \ \ \ \ (3)$

$\displaystyle (\nabla ^2 \phi, \nabla ^2 \phi)= n\lambda (\nabla\phi , \nabla \phi)-(Rc(\nabla \phi), \nabla \phi). \ \ \ \ \ (4)$

For ${v= \nabla ^2 \phi +\lambda \phi g}$, $\displaystyle (v,v) =(n-1) \lambda (\nabla \phi,\nabla \phi) -(Rc(\nabla \phi),\nabla \phi).\ \ \ \ \ (5)$

Proof: The first two are easy. For (3), consider $\displaystyle \begin{array}{rcl} -g^{ik}\nabla _i \nabla _k \nabla _j \phi&=&-g^{ik}\nabla _i \nabla _j \nabla _k \phi\quad (\nabla _i \nabla _j f= \nabla _j \nabla _i f)\\ &=&- g^{ik} \nabla _j \nabla_i \nabla _k \phi -R_j ^i \nabla _i \phi\quad (\text{Ricci identity})\\ &=& \nabla _j (-g^{ik} \nabla _i \nabla _k \phi)-R_j^i \nabla _i \phi\\&=& (n\lambda \nabla \phi-Rc(\nabla \phi))_j.\end{array}$ For (4), ${ (\nabla ^2 \phi,\nabla ^2\phi)= (\nabla \phi,\delta \nabla ^2 \phi)=(\nabla \phi, n\lambda \nabla \phi-Rc(\nabla \phi))}$. The last formula is a consequence of the above. $\Box$

As a corollary, since ${(v,v)\geq 0}$ and ${(\nabla \phi,\nabla \phi)>0}$, we have $\displaystyle \lambda \geq \frac{ (Rc(\nabla \phi),\nabla \phi)}{(n-1) (\nabla \phi,\nabla \phi)}.$ This is the eigenvalue estimate of Theorem 2. In the case where ${M}$ is Einstein with ${Ric=(n-1)kg}$, the condition reduces to $\displaystyle \lambda \geq k.$

Also, note that the equality holds if and only if ${\nabla ^2 \phi=- \lambda \phi g}$; thus by Obata's theorem, ${(M,g)}$ is isometric to a standard sphere of radius ${1/\sqrt{\lambda}}$ in the Euclidean space if ${M}$ is orientable. Actually ${M}$ can't be non-orientable: otherwise, since ${\nabla ^2 \phi=- \lambda \phi g}$ on ${\widetilde M}$, we also have ${\nabla ^2 \phi=- \lambda \phi g}$ on ${M}$, and thus both are isometric to a standard sphere of the same radius, which is impossible (as one is orientable and the other is not; note that there is no orientation assumption in Obata's theorem).

As remarked by John Ma, we can actually weaken the assumption of being Einstein in Theorem 1 by imposing some conditions on the curvature [Updated: this result was discovered by Lichnerowicz and rediscovered by John Ma]:

Theorem 4 (Obata-Lichnerowicz-John Ma's theorem, 2011) For a compact ${(M^n,g)}$ with a lower bound ${1}$ on its (normalized) Ricci curvature in the sense that $\displaystyle Ric\geq (n-1)g,$ if there is a nonconstant function ${\phi}$ on ${M}$ such that $\displaystyle -\Delta \phi=n \phi, \ \ \ \ \ (6)$ then ${(M,g )}$ is isometric to the standard unit sphere.

Proof: Indeed, the above proof shows that $\displaystyle \lambda \geq \frac{ (Rc(\nabla \phi),\nabla \phi)}{(n-1) (\nabla \phi,\nabla \phi)}\geq \frac{ (n-1)(\nabla \phi,\nabla \phi)}{(n-1) (\nabla \phi,\nabla \phi)}=1.$ Since the equality is attained, ${v=\nabla^2\phi+\phi g=0}$ and we can apply Obata's theorem. $\Box$
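A tiny symbolic check of the equality case (my addition, assuming the round metric on the unit $S^2$, where the Laplacian of a function of $\theta$ alone is $\frac{1}{\sin\theta}\partial_\theta(\sin\theta\,\partial_\theta\phi)$): the coordinate function $\phi=\cos\theta$ realizes $-\Delta\phi=n\phi$ with $n=2$, exactly as Theorem 1 requires on the standard sphere.

```python
import sympy as sp

theta = sp.symbols('theta')
phi = sp.cos(theta)

# Laplace-Beltrami operator on S^2 applied to a function of theta only.
lap = sp.simplify(sp.diff(sp.sin(theta) * sp.diff(phi, theta), theta) / sp.sin(theta))
print(lap)   # -2*cos(theta), i.e. -Delta phi = 2 phi, the n = 2 eigenfunction
```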
2017-08-22 12:46:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 106, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9894205927848816, "perplexity": 112.58854899693378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110774.86/warc/CC-MAIN-20170822123737-20170822143737-00114.warc.gz"}
https://mathematica.stackexchange.com/questions/linked/134609?sort=active&page=4
686 views
### How to write if statement and for loop? [closed]
I am trying to write a function that will find the sum of all diagonal elements of a matrix. I don't know how to do it correctly. This is how I tried to do it. ...

78 views
### How to display a True/False table [closed]
Below is my written program ...

272 views
### Chi square minimisation wrt variables within an integration?
I'm trying to fit a model curve to some data by performing a chi square minimisation wrt three parameters $a,b$ and $NN$. The trouble I am having is that the variables with which I want to minimise ...

45 views
### An elementary question about For loop [duplicate]
Assume we have this For loop: For[i=1,i<10,i++, i^2]. How can we put the squared values ...

147 views
...

21 views
### How can I create a variable named from a concatenation of strings and assign it a value created similarly? [duplicate]
I have got a list of strings, a = {"first","second","third"}. For each of those strings I would like to create a variable called "The" followed by its content, so ...

95 views
### How can I get an array of numbers from the user and show the numbers with formatting?
Suppose that I have to print the following lines: Number 1 is 5 Number 2 is 6 Number 3 is 7 Number 4 is 8 Number 5 is 9. Now, if I were programming in C, I would have run the following code: ...

251 views
### Multiple file export
Here my code is running fine. In this code I put x=1, but I want to run x from 1 to 100, and for each value of x I want to export the Excel file with a name according to the x value, like for x=1 it exports by ...

657 views
### Optimizing Mathematica Code
I have implemented the whole Baum-Welch Algorithm for training Hidden Markov Models as described in Rabiner's paper. Here is my code that calculates the Forward Probabilities ...

104 views
### FindInstance nonconstant parameter
I am trying to solve multiple instances of the same problem using FindInstance. ...

198 views
### How can I change some numbers in a list?
I do not want to change all of them, just some of them by a particular rule. For example, the list is given: a={1,2,3,4,5,6,7,8,9....,100}. If the number is bigger than ...

536 views
### Finding C[i]'s expression in DSolve solution
An example: the command DSolve[y''[x] - 4 y[x] == 1, y[x], x] gives solution ...

72 views
### Incrementally building a list without memory overhead
I have a task where I have to combine pairs with a function, but the number of pairs I actually need is much smaller than the number of all pairs. The condition to keep the element is some function ...
2020-04-04 16:15:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7587343454360962, "perplexity": 821.8627950734965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00256.warc.gz"}
https://nz.assignmentfirst.org/shi-yong-tong-ji-gong-ju/
# Using Statistical Tools

4.5.1.1 Statistical Tools Used

Since this report has been structured around ordinal data and not cardinal data, traditional statistical measures associated with cardinal data, such as the mean and standard deviation, could not be calculated. The following tools have been used for the purposes of analysis:

§ Hypothesis testing (using t-tests due to the small sample size),
§ Co-efficient of Rank Correlation as per Spearman's method,
§ Co-efficient of Determination.

Correlation is a measure of the degree of interdependence and association which exists between two variables (Arlene, 1995). Correlation co-efficients can be computed in various ways, but for analyzing ranked data in this research, Charles Edward Spearman's co-efficient of rank correlation as well as Kendall's co-efficient of rank correlation (Dixon, 1992) may be used. The rank correlation coefficient devised by Spearman has been widely used for analyzing ordinal data (Bryan, 1994). As per Vaughan (2003), it can be regarded as the non-parametric counterpart of Pearson's correlation coefficient. Its advantage stems from the fact that it does not require any assumption about the data's distribution pattern. The formula for calculating the same is:

Correlation Co-efficient = 1 − [6 × ∑Diff²] / [No × (No² − 1)]

where Diff is the difference between the two ranks of each observation and No is the number of observations.
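As an illustrative check (my own example data, not from the report), the quoted formula can be compared directly against a library implementation; the two agree whenever there are no tied ranks:

```python
import numpy as np
from scipy import stats

x = np.array([3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.6])
y = np.array([2.0, 1.0, 5.0, 3.0, 4.0, 8.0, 2.5])

d = stats.rankdata(x) - stats.rankdata(y)    # Diff: per-observation rank gap
No = len(x)                                  # No: number of observations
rho_formula = 1 - 6 * np.sum(d**2) / (No * (No**2 - 1))

rho_scipy, _ = stats.spearmanr(x, y)
print(rho_formula, rho_scipy)                # identical in the no-ties case
```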
2022-10-04 03:02:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8472673296928406, "perplexity": 1298.5221041454633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00714.warc.gz"}
https://math.meta.stackexchange.com/questions/10708/competitions-on-mse?noredirect=1
# Competitions on MSE

I've always wanted math competitions on MSE ever since I joined. These could be either user-held or officially held, whichever seems better. User-held competitions would run as follows. A user starts a competition with a specified level, with original problems that he/she writes. People sign up or just jump in, whichever. The user posts the problems at a certain time and submissions go to a particular e-mail address, since there is no private messaging system on SE yet. Then, the top scorer wins. Examples of such tournaments can be found on the Art of Problem Solving website. This might be a nice turnaround on SE. Anybody agree?

• Once a question is posted on Math.SE, anyone can post an answer to it. So, a question meant for competition would have to be locked by a moderator to prevent this. Additional work for moderators aside, this means that the site's main functionality would not be used. The question could just as well be posted on any website that does not have answering and commenting features. E.g., in a blog post with locked comments. – user90090 Aug 20 '13 at 19:27
• Thanks for your comments! On the AoPS website, people can post in-thread but hardly is there any cheating. Even if there is, it's almost immediately removed by moderation. Was this your main concern? – Ahaan S. Rungta Aug 20 '13 at 19:31
• But the AoPS culture and community rules are different. Here, a moderator deleting an on-topic answer would be something out of the ordinary. – user90090 Aug 20 '13 at 19:46
• True, but it can still be done. Maybe a whole new site altogether for this? Who knows? Just an idea! – Ahaan S. Rungta Aug 20 '13 at 19:48
• You could always just bounty a good question that you ask and select the best answer. – Alexander Gruber Aug 20 '13 at 23:35
• @AlexanderGruber The idea is resolved, but that's not what I meant anyways. – Ahaan S. Rungta Aug 20 '13 at 23:52
• +1 I like this idea. Not sure how feasible it would be to implement it, but I like the idea nonetheless. – Ataraxia Aug 21 '13 at 0:15
• Thanks! Although, I'm afraid quid's correct. This probably can't be implemented. – Ahaan S. Rungta Aug 21 '13 at 0:55

The idea of competitions of this form on this site seems to me completely against the intent of the site, which is to provide answers to questions (that somebody actually has or might have need for). If one thinks the SE infrastructure is a good fit for some type of math-related competitions, one could entertain the idea of posting a proposal for a new site at http://area51.stackexchange.com For a certain type of programming-related competitions there is in fact such an SE site: https://codegolf.stackexchange.com/

• Thanks for the response! I'll take a look at the given links. :) – Ahaan S. Rungta Aug 20 '13 at 20:02

This is only marginally related to the topic of the original post, but I thought it worth pointing out that the tag was created recently. At the moment it contains only one question. The tag-wiki is empty at the moment. The tag-excerpt is: For the question that is intended as a challenge problem. Use this tag to invite other users to compete or take part, especially in trying to answer a question whose answer is already known.

• I wasn't sure whether to post a separate question about this tag. (To discuss whether it should be kept.) But I guess posting an answer here is probably enough to make users aware of the new tag. – Martin Sleziak Nov 9 '14 at 7:44
• I have taken the liberty of removing that tag from that question in the hopes of having the tag deleted within 24hrs. It's a pure meta-tag, and shouldn't have been created in the first place. – user642796 Nov 9 '14 at 8:47
• @Arthur Fischer: I have removed it one more time from math.stackexchange.com/q/1006127/630 , and left a comment pointing at this thread. – Carl Mummert Nov 9 '14 at 17:30
• Mr. Martin Sleziak But WHY??? I have the privilege for doing so. Lots of questions on Math SE can be covered by using this tag. I think it is about time we have this tag on Math SE. Just please don't delete the tag. It's really helpful. Mr. @ArthurFischer and Prof. Carl Mummert Why don't you guys just leave me alone? Please don't bother my posts again. They're all legit. I didn't do something wrong here – Anastasiya-Romanova 秀 Nov 9 '14 at 17:51
• @Anastasiya-Romanova: Part of the SE model is that any post can be edited by others to better fit the standard of the site. As this very thread indicates, the community largely feels that your "challenge problems" do not really fit the philosophy of the site, regardless of whether you create a tag for them. (And your tag is of the kind strongly discouraged.) – user642796 Nov 10 '14 at 5:50

I only noticed this post later and previously made a new question http://meta.math.stackexchange.com/questions/13855/are-small-competitions-allowed where the comments were quite the opposite of the answer here.

• I do think competitions should be allowed, but very restricted:
• the questioner must provide the bounty.
• the question needs to be very specific
• the question should be marked competition
• answers should be in public (just as normal questions)
• more than one answer per participant is allowed
• the rules of the competition should be clearly stated: what answer will win, references to books, methods and the like. (maybe we should make a template for this)
• the question should be for an alternative answer, a more beautiful answer or something like that.
• the questioner must provide an answer to beat, as an example of an answer and to show that the question is for a (better) alternative answer and that the questioner already knows an answer

Those are the conditions I came up with when I started my competition at **Ended Competition:** What is the shortest proof of $\exists x \forall y (D(x) \to D(y))$?
2021-05-17 18:26:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34354305267333984, "perplexity": 1302.9033401263207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992440.69/warc/CC-MAIN-20210517180757-20210517210757-00603.warc.gz"}
https://www.authorea.com/users/3234/articles/3788/_show_article
# Introduction

Stars with masses less than 0.6 M$$_\odot$$ are the most numerous in our Galaxy. These are intrinsically cool and faint stars, with complex spectra characterised by molecular absorption of TiO, CaH and VO in the optical, and FeH and H$$_2$$O in the near infrared. Some of them are known to be quite active, with flares larger than the ones produced by the Sun. A few of them are the hosts of the rocky planets closest to the Earth, and overall, they should be the most likely hosts of Earth-like planets in the galaxy. The study of M dwarfs has greatly benefited from surveys covering different regions of the galaxy. We present colour-selected M dwarfs in the b201 tile of the VISTA Variables in the Vía Láctea (VVV) survey. In section 2, we give the description of the survey and of the tile b201. In section 3, we present our M dwarf selection method based on 6 colour selection cuts obtained from SDSS spectroscopically observed M dwarfs. A spectral subtype calibration based on $$(Y-J)$$, $$(Y-K_s)$$, and $$(H-K_s)$$ is given in section 4. In section 5, we show interesting objects. We discuss our results and conclusions in section 6.

# Data

VISTA Variables in the Vía Láctea (VVV) is a public ESO near-infrared (near-IR) variability survey aimed at scanning the Milky Way Bulge and an adjacent section of the mid-plane. The VVV survey gives near-IR multi-colour information in five passbands: $$Z$$ (0.87 $$\mu m$$), $$Y$$ (1.02 $$\mu m$$), $$J$$ (1.25 $$\mu m$$), $$H$$ (1.64 $$\mu m$$), and $$K_s$$ (2.14 $$\mu m$$), which complements surveys such as 2MASS, DENIS, GLIMPSE-II, VPHAS+, MACHO, OGLE, EROS, MOA, and GAIA (citation not found: 2012A&A...537A.107S). The survey covers a 562 square degree area in the Galactic bulge and the southern disk which contains ~$$10^{9}$$ point sources. Each unit of VISTA observations is called a (filled) tile, consisting of six individual (unfilled) pointings (or pawprints), and covers a 1.64 $$deg^{2}$$ field of view. To fill up the VVV area, a total of 348 tiles are used, with 196 tiles covering the bulge (a 14 × 14 grid) and 152 for the Galactic plane (a 4 × 38 grid) (citation not found: 2012A&A...544A.147S).

We selected one specific tile from the bulge, called "b201", to characterise M-dwarf stars; its centre's Galactic coordinates are $$l$$=350.74816 and $$b$$=-9.68974. This tile is located at the border of the bulge, where star density is lower and extinction is small, allowing good photometry. Photometric catalogues for the VVV images are provided by the Cambridge Astronomical Survey Unit (CASU). The catalogues contain the positions, fluxes, and some shape measurements obtained from different apertures, with a flag indicating the most probable morphological classification. In particular, we note that -1 is used to denote the best-quality photometry of stellar objects (citation not found: 2012A&A...544A.147S). Some other flags are -2 (borderline stellar), 0 (noise), (sources containing bad pixels), and -9 (saturated sources).

1. http://apm49.ast.cam.ac.uk/

# Selection Method

In order to identify potential M dwarfs in the VVV tile "b201", we performed several colour selection cuts using the VVV passbands, as described in the subsections below. Before performing those cuts, we did a pre-selection of the objects in the tile "b201" to ensure that the objects have the best-quality photometry. The pre-selection consisted of including only objects that had photometry in all five passbands and that were classified as "stellar" in each passband.
A total of 142,321 objects in the tile "b201" satisfied these conditions.

## Color Selection Cuts from SDSS-UKIDSS M dwarfs

The color selection cuts were defined by selecting spectroscopically identified M dwarfs with UKIRT Infrared Deep Sky Survey (UKIDSS) photometry. We used the Sloan Digital Sky Survey DR7 Spectroscopic M dwarf catalog by West et al. (2011) as the comparative M dwarf sample. The 70,841 M dwarf stars in this catalog had their optical spectra visually inspected, and spectral types were assigned by comparing them to spectral templates. Their spectral types range from M0 to M9, with no half subtypes. This catalog also provides values for the CaH2, CaH3 and TiO5 indices, which measure the strength of the CaH and TiO molecular features present in the optical spectra of M dwarfs.

We performed a cone search, with a radius of 0.5, of these SDSS M dwarf stars in the UKIDSS-DR8 survey (Lawrence et al., 2012). The UKIDSS survey is carried out using the Wide Field Camera (WFCAM), with a $$Y$$ (1.0 µm), $$J$$ (1.2 µm), $$H$$ (1.6 µm) and $$K$$ (2.2 µm) filter set. There were UKIDSS-DR8 matches for almost half of the SDSS M dwarf sample (34,416 matches). Next, we only kept the UKIDSS counterparts consistent with being stellar objects (pStar > 0.9), with measured magnitudes in all WFCAM $$YJHK$$ filters, and with CaH and TiO indices compatible with average M dwarf stars. The final SDSS-UKIDSS comparative M dwarf sample consists of 17,774 objects.

To convert the WFCAM $$YJHK$$ magnitudes of the SDSS-UKIDSS M dwarf sample to VISTA $$YJHK_s$$ magnitudes, we used the conversions provided by the CASU, derived from regions observed with both VISTA and WFCAM. The mean and standard deviation of all of the colors from VISTA $$YJHK_s$$ photometry per M spectral subtype, as well as the number of stars considered in their computation, are shown in Table \ref{spec_color}.

| Sp.T. | $$\overline{Y-J}$$ | $$\sigma(Y-J)$$ | $$\overline{Y-H}$$ | $$\sigma(Y-H)$$ | $$\overline{Y-K_s}$$ | $$\sigma(Y-K_s)$$ | $$\overline{J-H}$$ | $$\sigma(J-H)$$ | $$\overline{J-K_s}$$ | $$\sigma(J-K_s)$$ | $$\overline{H-K_s}$$ | $$\sigma(H-K_s)$$ | # stars |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|---------|
| M0 | 0.428 | 0.092 | 1.039 | 0.087 | 1.163 | 0.063 | 0.611 | 0.116 | 0.734 | 0.092 | 0.124 | 0.079 | 1946 |
| M1 | 0.449 | 0.077 | 1.047 | 0.061 | 1.200 | 0.064 | 0.598 | 0.086 | 0.751 | 0.081 | 0.153 | 0.046 | 2520 |
| M2 | 0.467 | 0.061 | 1.042 | 0.073 | 1.219 | 0.058 | 0.575 | 0.088 | 0.752 | 0.071 | 0.177 | 0.058 | 3043 |
| M3 | 0.487 | 0.081 | 1.043 | 0.062 | 1.241 | 0.064 | 0.556 | 0.089 | 0.754 | 0.083 | 0.198 | 0.038 | 3293 |
| M4 | 0.515 | 0.083 | 1.057 | 0.090 | 1.278 | 0.068 | 0.542 | 0.110 | 0.762 | 0.085 | 0.220 | 0.075 | 2872 |
| M5 | 0.555 | 0.096 | 1.092 | 0.069 | 1.340 | 0.082 | 0.538 | 0.103 | 0.786 | 0.099 | 0.248 | 0.044 | 1264 |
| M6 | 0.619 | 0.082 | 1.150 | 0.067 | 1.442 | 0.076 | 0.531 | 0.087 | 0.823 | 0.084 | 0.292 | 0.033 | 1224 |
| M7 | 0.664 | 0.117 | 1.198 | 0.126 | 1.513 | 0.136 | 0.533 | 0.064 | 0.849 | 0.068 | 0.315 | 0.037 | 1141 |
| M8 | 0.758 | 0.070 | 1.304 | 0.102 | 1.662 | 0.122 | 0.546 | 0.052 | 0.904 | 0.067 | 0.358 | 0.033 | 320 |
| M9 | 0.850 | 0.079 | 1.429 | 0.114 | 1.830 | 0.139 | 0.579 | 0.054 | 0.980 | 0.071 | 0.401 | 0.038 | 151 |

\label{spec_color}

We have defined our limits in each magnitude difference as the mean value of spectral type M0 minus its standard deviation for the lower cut, and the mean value of spectral type M9 plus its standard deviation for the higher cut.
The resulting limits are:

0.336 < $$(Y-J)_{VISTA}$$ < 0.929
0.952 < $$(Y-H)_{VISTA}$$ < 1.544
1.100 < $$(Y-K_s)_{VISTA}$$ < 1.969
0.432 < $$(J-H)_{VISTA}$$ < 0.727
0.642 < $$(J-K_s)_{VISTA}$$ < 1.051
0.045 < $$(H-K_s)_{VISTA}$$ < 0.438

From our preselection of 142,321 objects, only 23,345 objects have colours that are consistent with M dwarf stars, according to the color cuts shown above. Some 40$$\%$$ of these objects have magnitudes 12 < $$K_s$$ < 16, and therefore have reliable magnitudes for variability and are the best M dwarf candidates in which to detect any possible transits (9,232 objects).

## Spectral Types and Distances for VVV M dwarfs

By inspecting the mean colors per spectral type in Table \ref{spec_color}, it is noticeable that spectral type is a monotonically increasing function of the following colors: $$Y-J$$, $$Y-K_s$$, and $$H-K_s$$. We conducted multivariate regressions on the $$Y-J$$, $$Y-K_s$$, and $$H-K_s$$ colors for the 17,774 stars in the SDSS-UKIDSS comparative M dwarf sample to identify the best-fit relationship to predict each star's spectral type. The resulting subtype calibration is

$$\text{M subtype} = 5.394\,(Y-J) + 4.370\,(Y-J)^{2} + 24.325\,(Y-K_s) - 7.614\,(Y-K_s)^{2} + 7.063\,(H-K_s) - 20.779, \qquad RMSE_V = 1.109$$

with $$RMSE_V$$ being the root-mean-square error of validation, a sensible estimate of average prediction error (see APPENDIX in Rojas-Ayala et al., 2012). Spectral types for all the M dwarf candidates are given in Table XX.

We looked for the location of M dwarfs at different distances in the Colour-Magnitude Diagram (CMD) using the nearby M dwarfs with $$M_{K_s}$$ and spectral type estimates in Rojas-Ayala et al. (2012). Using colour transformations from WFCAM to the VISTA system, we estimated the apparent $$K_s$$ magnitudes at different distances per spectral type (Table \ref{absolutemag}). The locations of the M dwarf sequence at 60 pc, 300 pc and 1000 pc coincide with the location of the colour-based selection of M dwarfs described above, as shown in the CMD of Figure XX. Based on t
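The published cuts and the subtype regression translate directly into code; the following sketch is my own restatement (the coefficients are copied from the text above, while the function and variable names are mine):

```python
def passes_mdwarf_cuts(YJ, YH, YKs, JH, JKs, HKs):
    """Apply the six VISTA color cuts listed above."""
    return (0.336 < YJ < 0.929 and 0.952 < YH < 1.544 and
            1.100 < YKs < 1.969 and 0.432 < JH < 0.727 and
            0.642 < JKs < 1.051 and 0.045 < HKs < 0.438)

def m_subtype(YJ, YKs, HKs):
    """Spectral subtype from the multivariate regression (RMSE_V ~ 1.1)."""
    return (5.394 * YJ + 4.370 * YJ**2
            + 24.325 * YKs - 7.614 * YKs**2
            + 7.063 * HKs - 20.779)

# Feeding in the mean M9 colors from the table gives a subtype near 9.
print(round(m_subtype(0.850, 1.830, 0.401), 1))   # ~8.8
```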
2017-08-22 17:14:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6893306970596313, "perplexity": 2139.4332690858923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112533.84/warc/CC-MAIN-20170822162608-20170822182608-00352.warc.gz"}
http://tex.stackexchange.com/questions/135758/siunitx-error-invalid-number-from-r
# siunitx error: "invalid-number" from R

I have a document in .Rnw which I use to produce a .tex document with knitr. I have a problem with the formula \num{\Sexpr{max(degree)}}, which in the R console produces 12598 but after running knitr becomes \num{\ensuremath{1.2598\times 10^{4}}} in the .tex document, which creates the following error in LaTeX:

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!
! siunitx error: "invalid-number"
!
! Invalid numerical input '\protect $\relax 1.2598\times 10^{4}$'.
!
! See the siunitx documentation for further information.
!
! For immediate help type H <return>.
!...............................................
l.466 ... & \num{\ensuremath{1.2598\times 10^{4}}}

I am a bit puzzled, since in the document I have bigger numbers which do not create any problem when compiling. These are my siunitx options:

%Separate digits with comma (e.g. 1,000,000)
\usepackage[group-separator={,}]{siunitx}
\sisetup{ detect-all, detect-inline-family=math, detect-inline-weight=math, detect-display-math=true}

- Any chance of a MWE? – moewe Sep 30 '13 at 12:03
- \ensuremath is problematic, \times probably too. Can't you configure R/knitr to output \num{1.2598E4} (or just 1.2598E4 for an S column)? – Qrrbrbirlbel Sep 30 '13 at 12:06
- Nope, no idea. Maybe it is easier to work on R to make sure the input is what I expect. – Francesco Sep 30 '13 at 12:11

## 1 Answer

It seems you want to format the numbers using siunitx, while knitr also tries to do that automatically. Only one of them should be used, so either you do not use \num{}, or tell knitr to output the numbers without scientific notation (this is perhaps what you prefer), e.g. \Sexpr{as.character(max(degree))}, or use other functions like format(), sprintf(), and so on, to turn the numbers into character strings, so that knitr will no longer use scientific notation.

- I don't mind letting knitr manage the formatting of numbers. I just don't know how to set it to give me the comma (e.g. 10,000 instead of 10000) – Francesco Sep 30 '13 at 22:30
- I'd recommend you to use siunitx instead; it seems to be more powerful. – Yihui Oct 1 '13 at 0:25
- @Francesco I strongly recommend you let siunitx do all the formatting and try to get the most raw output from knitr you can get. siunitx has excellent capabilities in typesetting numbers and units and is very easy to configure (e.g. to get 10,000 instead of 10 000, just ask for group-separator={,}). – moewe Oct 1 '13 at 19:49
- I think the \num{\Sexpr{as.character(...)}} approach is the most practical one. That is, to keep using siunitx while getting raw output from knitr. – Francesco Oct 2 '13 at 1:17
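For what it is worth, a minimal standalone LaTeX check (my own, not posted in the thread) shows that siunitx handles both the raw integer and E-notation input happily once knitr's \ensuremath wrapper is out of the way:

```latex
\documentclass{article}
\usepackage[group-separator={,}]{siunitx}
\begin{document}
\num{12598}    % typesets as 12,598
\num{1.2598E4} % valid \num input; typeset as 1.2598 x 10^4 by default
\end{document}
```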
2014-07-28 06:16:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139481782913208, "perplexity": 3298.4476307213763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256757.9/warc/CC-MAIN-20140728011736-00164-ip-10-146-231-18.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-solve-using-the-quadratic-formula-x-2-5x-14-0
# How do you solve using the quadratic formula x^2-5x-14=0?

May 16, 2015

Solve $y = {x}^{2} - 5 x - 14 = 0$.

$D = {d}^{2} = {b}^{2} - 4 a c = 25 + 56 = 81 \rightarrow d = \pm 9$

x = 5/2 + 9/2 = 14/2 = 7
x = 5/2 - 9/2 = -4/2 = -2

There is another way that is faster (the new AC method). Find 2 numbers knowing their product (-14) and sum (5). Compose factor pairs of (-14): (-1, 14), (-2, 7). This last sum is 5 = -b. Then the 2 real roots are: -2 and 7.
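A quick numeric confirmation of both methods (my own addition):

```python
import math

a, b, c = 1, -5, -14
d = math.sqrt(b**2 - 4*a*c)               # sqrt(81) = 9
roots = ((-b + d) / (2*a), (-b - d) / (2*a))
print(roots)                               # (7.0, -2.0)

# AC-method check: the roots multiply to c/a = -14 and sum to -b/a = 5.
print(roots[0] * roots[1], roots[0] + roots[1])
```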
2020-09-28 16:22:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7992181181907654, "perplexity": 1173.1823299688165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00401.warc.gz"}
https://cstheory.stackexchange.com/questions/41055/what-is-best-known-space-requrement-for-solving-satisfiability-problem-in-exp-ti
# What is the best known space requirement for solving the SATISFIABILITY problem in exponential time

I have searched a lot for the best space requirement of an algorithm for the SATISFIABILITY problem, but I didn't find anything better than brute force, which is in DSPACE(n). Does there exist a better bound, and what is the best known bound?

• If it were solvable in space $o(n)$, it would be solvable in time $2^{o(n)}$, contradicting the exponential-time hypothesis. – Emil Jeřábek Jun 22 '18 at 13:35
• From the opposite direction, any $\omega(\log n)$ lower bound would imply $\bf{L} \neq \bf{NP}$, which is itself an open problem. A $\log n$ lower bound is trivial. – Yonatan N Jun 22 '18 at 19:05
• @MohsenGhorbani, when you write $n$, do you mean the number of variables or the number of bits of the input? There is perhaps a small difference here. – usul Jun 24 '18 at 21:39
• @usul n is the length of the Boolean formula (SAT), not the length of the input in bits. – Mohsen Ghorbani Jun 25 '18 at 10:47
• @EmilJeřábek I think you can copy your comment into an answer to this question. Thank you again. – Mohsen Ghorbani Jun 25 '18 at 10:49
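To make the question's premise concrete, here is an illustrative brute-force sketch (my own, not from the thread): it decides CNF satisfiability in exponential time while reusing working space linear in the formula, which is the DSPACE(n)-style baseline the question refers to.

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """clauses: list of clauses, each a list of nonzero ints in DIMACS style
    (literal -2 means NOT x2). Tries all 2^n assignments, one at a time."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))  # (False, False, True)
```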
2021-06-21 22:07:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6598958969116211, "perplexity": 542.6788903219756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00635.warc.gz"}
https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/bti056
Abstract Motivation: Signaling events that direct mouse embryonic stem (ES) cell self-renewal and differentiation are complex and accordingly difficult to understand in an integrated manner. We address this problem by adapting a Bayesian network learning algorithm to model proteomic signaling data for ES cell fate responses to external cues. Using this model we were able to characterize the signaling pathway influences as quantitative, logic-circuit type interactions. Our experimental dataset includes measurements for 28 signaling protein phosphorylation states across 16 different factorial combinations of cytokine and matrix stimuli as reported previously. Results: The Bayesian network modeling approach allows us to uncover previously reported signaling activities related to mouse ES cell self-renewal, such as the roles of LIF and STAT3 in maintaining undifferentiated ES cell populations. Furthermore, the network predicts novel influences such as between ERK phosphorylation and differentiation, or RAF phosphorylation and differentiated cell proliferation. Visualization of the influences detected by the Bayesian network provides intuition about the underlying physiology of the signaling pathways. We demonstrate that the Bayesian networks can capture the linear, nonlinear and multistate logic interactions that connect extracellular cues, intracellular signals and consequent cell functional responses. Availability: Datasets and software are available online from http://sysbio.engin.umich.edu/~pwoolf/mouseES/ Contact: [email protected] Supplementary information: http://sysbio.engin.umich.edu/~pwoolf/mouseES/ INTRODUCTION Large datasets for biological systems are becoming increasingly available due to ongoing improvements in high-throughput measurement methods. However, similarly improved computational methods are needed if we are to gain useful insights from these datasets. A prominent example of this current challenge in biology is understanding how extracellular cues influence highly interconnected and complex cell signaling pathways to yield cell behavioral responses (Kitano, 2002). Current high-throughput experimental protocols allow us to measure an increasingly large number of variables describing cells or cellular populations. For example, large-scale phosphorylation screens (Lund-Johansen et al., 2000), protein–protein binding screens (Cagney et al., 2000) and migration assays (Maliakal, 2002) have demonstrated that it is possible to simultaneously measure the activity of tens to thousands of variables in a biological system. Future advances in microfluidics (Burns et al., 1998) and high-throughput mass spectrometry (Morand et al., 2001), for example, promise that even more measurements will be possible at a fraction of their current cost. Analysis of these data has generally focused on identifying a small number of pathway components involved in governing a particular cell behavior. Growing interest is focusing on how to use these multivariate data to determine integrated pathways in, for instance, signal transduction (Gardner et al., 2003) and metabolic pathways (Price et al., 2003). Early examples of these computational algorithms, however, were often hampered by unrealistic assumptions about the underlying biological system, e.g. linear interactions and processes. In this work, we have chosen to use a more flexible modeling tool called a Bayesian network (BN) to reverse engineer protein-level signal transduction cascades.
BNs can be broadly described as probabilistic models that depict causal relationships between variables (Jensen, 1998; Pearl, 1988; Pearl, 2000; Neapolitan, 2003). These networks are able to capture both linear and nonlinear interactions between groups of variables, while greatly reducing the number of parameters required to describe the model in comparison with other modeling approaches. The reason for this reduction in parameters is the Markov condition. The Markov condition states that in order to predict the value of some variable, X, we only need to know the values of the variables that directly influence X. Other variables that are non-descendants of X are then conditionally independent of X and as such can be treated separately. An additional advantage of BNs is that they are probabilistic. Probabilistic models treat measured quantities as uncertain estimates, and as such are able to tolerate measurement noise in an automated and systematic way (Neapolitan, 2003). Here, BN network structures representing cellular processes were learnt from experimental data. In general, learning a network structure from data has been proven to be an NP-complete problem, meaning that computation time increases exponentially with problem size (Chickering, 1996). The result of learning a BN from the data is a directed graph that is often directly interpretable and can be used to make quantitative predictions of outcomes and error estimates. General reviews of BN analysis can be found elsewhere (Jensen, 1998; Heckerman, 1999). In other fields of science, technology and business, BNs have demonstrated their ability to uncover useful causal relationships in spite of large and noisy datasets (Heckerman, 1999). For analysis of biological signaling data, however, the BN-based modeling approach is relatively new (Hartemink et al., 2002; Friedman, 2003; Beer and Tavazoie, 2004). Earlier efforts have been hampered by insufficient data, difficulties in data preprocessing and large computational demands. Here, we partially overcome the first two issues by presenting a novel resampling/discretization scheme that allows us to integrate data from many experiments into a single, coherent dataset. We address computational speed issues via parallelization. Recent proof-of-principle studies applying BNs to the analysis of protein signaling pathways suggested that BNs may be capable of addressing this challenging class of reverse engineering problems (Sachs et al., 2002). Accordingly, in this work we show how BNs can be gainfully used to analyze a proteomic dataset characterizing signaling events that affect stem cell differentiation and proliferation responses to extracellular stimuli. Our problem offers an excellent test for this approach, since some but not all of the key signaling pathway influences are known. Therefore, this dataset simultaneously provides us with a way to validate our BN predictions and also produces novel hypotheses for future experiments. As an experimental system, we focus here on mouse embryonic stem (ES) cell self-renewal and differentiation. ES cells are pluripotent, meaning that they are able to differentiate into any cell type in the adult body. However, there is currently little understanding of how to control ES cell fate decisions. A small number of important factors have been identified, but an integrated picture of how different factors work together to govern fate decisions has not emerged.
Mouse ES cells can be maintained undifferentiated indefinitely in culture by adding leukemia inhibitory factor (LIF) to the growth media. LIF binds to the heterodimer of LIFR and gp130 to activate both the JAK/signal transducer and activator of transcription (STAT) and MAPK pathways (Ernst et al., 1996). Although the roles of these signaling pathways are not fully understood, it has been demonstrated that STAT3 is required for ES cell self-renewal (Niwa et al., 1998; Matsuda et al., 1999; Raz et al., 1999). Upon removal of LIF, these cells rapidly differentiate, implying that LIF is sufficient to maintain these cells in an undifferentiated state. In contrast, LIF does not maintain human ES cells in an undifferentiated state. Clearly, it is thus necessary to go beyond identifying single cytokines and/or matrix cues that are effective under some conditions and not others, and pursue an objective of identifying signaling pathways that proximally govern cell behavioral responses. To facilitate our identification of which network interactions are responsible for ES cell fate decisions, we previously generated an experimental dataset that includes 49 measurements of the phosphorylation states of 28 cellular signaling proteins, along with cell proliferation and differentiation responses, across a landscape of stimulatory extracellular matrix and cytokine cues (Prudhomme et al., 2004a). In addition to the effects of LIF, this dataset includes accompanying effects of fibroblast growth factor-4 (FGF4), fibronectin (FN) and/or laminin (LAM). This multivariate ‘cue-signal-response’ approach is well suited for applying a BN methodology toward the goal of more comprehensively understanding which network pathway interactions and influences are involved in maintaining mouse ES cells in an undifferentiated state. SYSTEMS AND METHODS Experimental procedures A primary objective of this work is to determine the signaling pathway influences that are responsible for mouse ES cell self-renewal versus differentiation fate responses to extracellular stimuli. Toward this end, a factorial screen was conducted to test the effects of LIF, FN, LAM and FGF4 on ES cell growth and differentiation. For each condition, differentiation was measured using Oct4 as a differentiation marker, and proliferation was measured by seeding single cells and counting total cell numbers. Measurements of most signaling protein phosphorylation states were performed using western blot analysis from the KPSS 1.1 screen from Kinexus (Vancouver, Canada). These data have been reported previously (Prudhomme et al., 2004a). We also undertook additional western blot measurement of STAT3 phosphorylation on tyrosine 705 by standard methods (Prudhomme, 2003). Complete datasets are available online at http://sysbio.engin.umich.edu/pwoolf/mouseES/data. Using the model of Prudhomme et al. (2004b), we determined rate constants for differentiated and undifferentiated cell growth and for differentiation by fitting our data to the following two-state model, in which U is the undifferentiated cell population, D is the differentiated cell population, $k_U$ and $k_D$ are proliferation rate constants for undifferentiated and differentiated cells, and $k_C$ is the rate constant governing differentiation. The model is expressed as the following system of coupled ordinary differential equations: (1) $$\frac{d[\mathrm{U}]}{dt} = k_U[\mathrm{U}] - k_C[\mathrm{U}], \qquad \frac{d[\mathrm{D}]}{dt} = k_C[\mathrm{U}] + k_D[\mathrm{D}]$$
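For reference, a sketch of the analytical solution used in the fitting below, under assumptions we are supplying here (initial populations $U_0$ and $D_0$ at $t = 0$, and $k_U - k_C \neq k_D$; the paper does not spell these out):

$$[\mathrm{U}](t) = U_0\, e^{(k_U - k_C)t}, \qquad [\mathrm{D}](t) = D_0\, e^{k_D t} + \frac{k_C U_0}{k_U - k_C - k_D}\left(e^{(k_U - k_C)t} - e^{k_D t}\right)$$

The first equation integrates directly; substituting it into the equation for $[\mathrm{D}]$ and applying the integrating factor $e^{-k_D t}$ yields the second.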
Using an analytical solution to this model, we derived maximum-likelihood values for each parameter ($k_U$, $k_C$ and $k_D$) and calculated the error associated with that value via nonlinear regression. Error estimates were later used to define a sampling distribution for each parameter. Overall, the differential equation model fit the experimental data with an average $r^2$ value of 0.61. These parameter values in turn were then used to derive instantaneous rates of undifferentiated cell growth, differentiated cell growth and differentiation ($R_U$, $R_D$ and $R_C$, respectively) for each time point and condition. These instantaneous rates were then used as the primary readouts of cellular behavior. Other readouts, such as the total number of cells ($N_f$) and the fraction of differentiated cells (D), were used as secondary measures of cellular behavior. The dataset is summarized in Table 1 and in further detail in Supplementary Table 1. Data preprocessing To make subsequent analysis computationally tractable, the raw data were converted from continuous to discrete values by using an approach similar to resampling. In this approach, each measurement is first converted from its putative ‘precise’ central value to a more representative probability distribution that also describes the uncertainty in the value. The data were found to be approximately normally distributed via Lilliefors' test, thereby allowing us to model the error distributions using only the mean and standard deviation of a measurement. Because the protein phosphorylation experiments were not run in replicates, we take the standard deviation of the measurement, according to Kinexus specifications, to be 25% of the mean value. After converting each measurement into a distribution, we then populate the dataset by sampling 1000 points from each distribution. In this way, we convert a single observation into 1000 individually sampled points that reflect the information contained in the original distribution. We propose that this approach reliably represents the data and its associated errors, and is accordingly most useful for making the relative comparisons required for the BN analysis. Finally, the expanded dataset is discretized by distributing all of the data into 25 bins of the same size. Because we have no information a priori about the locations or sizes of the bins, we set them all to be of the same size. Computationally, same-sized bins have the advantage that they protect against data sparsity, and thereby prevent spurious correlations (Kozlov and Koller, 1997). In addition, discretizing into same-sized bins reduces the influence of outliers, resulting in less sensitivity to noise. The sampling rate and number of bins used in this dataset were chosen empirically. In the limit of an infinite sampling rate and infinite number of bins, this discretization scheme would reproduce the given probability density exactly. However, increasing the sampling rate or increasing the number of bins increases the time and memory required to analyze the dataset.
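A minimal sketch of this resampling/discretization scheme (our own illustration: we read ‘bins of the same size’ as equal-occupancy quantile bins, and the function name and arguments are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_discretize(values, cv=0.25, n_samples=1000, n_bins=25):
    """Expand each measurement into n_samples draws from a normal error
    model (sd = cv * mean), then map every draw to one of n_bins
    equal-occupancy bins defined by quantile edges."""
    values = np.asarray(values, dtype=float)
    samples = rng.normal(loc=values[:, None],
                         scale=cv * np.abs(values[:, None]),
                         size=(values.size, n_samples)).ravel()
    edges = np.quantile(samples, np.linspace(0.0, 1.0, n_bins + 1))
    states = np.searchsorted(edges, samples, side='right') - 1
    return np.clip(states, 0, n_bins - 1).reshape(values.size, n_samples)

bins = resample_discretize([1.0, 2.5, 4.0])
print(bins.shape, bins.min(), bins.max())  # (3, 1000) with states in 0..24
```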
By varying both parameters, we found that a sampling rate of 1000 points per measurement and 25 bins allowed us to find the same high-scoring BN (described below) as those found using datasets with larger sampling rates and more bins. In contrast, data discretized with less than 10 bins produced an unsatisfactorily coarse representation of the raw data. Although the datasets do have a time component (e.g. measurements taken at 0, 2, 3 and 5 days), we chose instead to model the dataset as an unordered series of steady states, and thereby omit the time measure. We chose not to include time as a variable for two reasons. First, many of the known biochemical processes underlying the measured quantities take place at a timescale of minutes or hours, thus we do not expect kinetic connections between measurements made on the scale of days to be meaningful. Second, the available dataset is not sufficiently large or structured to model as an unrolled temporal BN. To be modeled as a kinetic BN, the dataset would have to have many more measurements as a function of time, and ideally measurements would be made at evenly spaced time intervals (Takikawa et al., 2002). Because our dataset satisfies neither of these criteria, we opted to use a steady-state model. Bayesian network analysis A goal of this project is to assess the suitability of BNs for identifying signal transduction pathways. For this purpose, we constructed a search algorithm that attempts to find a network structure that satisfies the Markov condition and fits the data as well as possible. The Markov condition states that the value of a variable is conditionally independent of its non-descendants given its parents (Neapolitan, 2003). By way of an example of the Markov condition, consider a set of three random variables, A, B and C. The joint probability distribution of these variables can be written in general as: (2) $P(A, B, C)$. If we know or predict that C alone determines or influences the state of both A and B, then the joint probability distribution simplifies to (3) $P(A, B, C) = P(A \mid C)\, P(B \mid C)\, P(C)$. By making this simplification, we state that the value of A is conditionally independent of B (a non-descendant of A) if we know the value of the parent, C. This reduced version of the joint probability distribution can then be represented using a directed acyclic graph (DAG) with two arrows pointing from C to A and B. The Markov condition is at the heart of a BN analysis, for it allows us to reduce a complex joint distribution down to a smaller, and more tractable, set of conditional probability distributions. This smaller set of conditional probability distributions is desirable because it means that we need fewer parameters to describe our model, and hence need less data to characterize those parameters. Applications of the Markov condition are commonly encountered in other fields of biology. For example, in genetics the Markov condition says that the genetic state of an individual is independent of the genetic state of the individual's grandparents if the genetic state of the parents is known. For the signal transduction pathways studied in this work, the Markov condition also holds. In signal transduction, the phosphorylation state of a protein, for example, is directly governed by a finite set of other proteins. Thus, knowing the activity states of these other proteins should be sufficient to make predictions.
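A small numeric illustration of the factorization in Equation (3), using hypothetical conditional probability tables of our own choosing; note that the factored form needs only 5 free parameters (1 for $P(C)$ plus 2 each for $P(A \mid C)$ and $P(B \mid C)$) rather than the 7 of an unconstrained joint over three binary variables:

```python
import numpy as np

p_c = np.array([0.7, 0.3])                  # P(C)
p_ac = np.array([[0.9, 0.1], [0.2, 0.8]])   # P(A | C); rows index C
p_bc = np.array([[0.6, 0.4], [0.1, 0.9]])   # P(B | C); rows index C

# Build the joint P(A, B, C) = P(A|C) P(B|C) P(C) from Equation (3)
joint = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        for c in range(2):
            joint[a, b, c] = p_ac[c, a] * p_bc[c, b] * p_c[c]

print(joint.sum())  # 1.0: a valid probability distribution
# A and B are conditionally independent given C:
cond = joint[:, :, 0] / joint[:, :, 0].sum()  # P(A, B | C=0)
print(np.allclose(cond, np.outer(p_ac[0], p_bc[0])))  # True
```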
To find a BN model that fits the data as well as possible, we need to calculate the probability of a model given data. In Bayesian terms, the probability that a model is correct given data can be written as (4) $P(\mathrm{Model} \mid \mathrm{Data}) = \frac{P(\mathrm{Data} \mid \mathrm{Model})\, P(\mathrm{Model})}{P(\mathrm{Data})}$. Thus, if we limit our model space to DAGs (i.e. BNs), then the search can be restated as finding a model that maximizes $P(\mathrm{Model} \mid \mathrm{Data})$. The term $P(\mathrm{Model})$ is the prior probability of the model, $P(\mathrm{Data})$ is the prior probability of the data and $P(\mathrm{Data} \mid \mathrm{Model})$ is the probability of the data given a model. Each of these terms is discussed below. To explain how $P(\mathrm{Data} \mid \mathrm{Model})$ is calculated, let us first introduce some notation. Assume we have a dataset, D, with m entries. Each entry describes a state of n discrete-valued variables. For each variable, i, let $r_i$ equal the number of states (i.e. arity) of the variable. Given a network model, each node represents a variable, and arrows represent apparently causal connections between nodes. Thus, each node, i, will have a list of parents called $\pi_i$, which can take on a total of $q_i$ possible combinations of values. Finally, let $N_{ijk}$ equal the total number of cases in the dataset where variable i is in state k and the parent nodes are in state j. Similarly, let $N_{ij}$ equal the number of cases in the dataset where parents of the variable i are in state j. Given these terms, we can write a closed-form expression for $P(\mathrm{Data} \mid \mathrm{Model})$ as was first derived elsewhere (Cooper and Herskovits, 1992): (5) $P(\mathrm{Data} \mid \mathrm{Model}) = \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!} \prod_{k=1}^{r_i} N_{ijk}!$ Intuitively, this expression describes the product of the probability of observing child nodes, i, in a particular state, k, given parents in some state, j. Combinations of parents that are more informative, or predictive, of the child values will have a higher probability. A more detailed discussion of Equation (5) can be found elsewhere (Cooper and Herskovits, 1992). Biologically, each node in the network is a value that we can measure, such as the phosphorylation level of a particular protein or the rate of cell differentiation. Arrows between nodes indicate apparent causal relationships uncovered within the data. It is important to note that arrows between nodes do not differentiate among positive, negative or more complicated interactions, but only indicate a directional relationship between variables. We have defined the prior probability of a model, $P(\mathrm{Model})$, such that it can take on values of either 1 or 0, signifying that the network is or is not allowed. For a model to be allowed [$P(\mathrm{Model}) = 1$], it must satisfy the following four conditions: 1. The graph is acyclic. 2. No node has more than three parents and/or children. 3. Nodes designated as cues have no parents. 4. Nodes designated as responses have no children. Condition 1 is required for a BN to maintain logical consistency. Violation of this constraint would lead to circular reasoning (e.g. A causes B, B causes C and C causes A), which is not meaningful in a causal network. Condition 2 limits the number of parents and children and serves two purposes. First, by limiting the number of connections a node may have, we limit the search space, thereby speeding up the network optimization algorithm. Second, networks with fewer strong connections are more interpretable than a strictly more accurate network with more connections.
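As a concrete illustration of Equation (5), a minimal log-space sketch of the scoring function (our own code, not the authors' BNJ implementation; working with log-Gamma avoids overflowing the factorials):

```python
import numpy as np
from math import lgamma
from itertools import product

def log_marginal_likelihood(data, parents, arity):
    """Log of Equation (5) for a discrete dataset.
    data: (m, n) integer array with column i taking values 0..arity[i]-1
    parents: dict mapping node i to a list of parent node indices
    Uses lgamma: (r-1)! = Gamma(r) and N! = Gamma(N+1)."""
    m, n = data.shape
    total = 0.0
    for i in range(n):
        r_i = arity[i]
        pa = parents.get(i, [])
        # iterate over every parent configuration j (one empty config if no parents)
        for combo in product(*[range(arity[p]) for p in pa]):
            mask = np.ones(m, dtype=bool)
            for p, v in zip(pa, combo):
                mask &= data[:, p] == v
            N_ij = int(mask.sum())
            total += lgamma(r_i) - lgamma(N_ij + r_i)
            for k in range(r_i):
                N_ijk = int(np.count_nonzero(data[mask, i] == k))
                total += lgamma(N_ijk + 1)
    return total

# toy check: 20 samples of a 2-node network where node 1 copies node 0
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=20)
toy = np.column_stack([x, x])
print(log_marginal_likelihood(toy, {1: [0]}, [2, 2]) >
      log_marginal_likelihood(toy, {}, [2, 2]))  # True: the arc 0 -> 1 scores higher
```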
A motivation for constructing a BN representation of an experimental dataset is to produce a human-interpretable result. By limiting the number of connections, we select for the strongest connections between nodes, as these configurations describe the data with the highest probability. Such capping conditions are commonly employed for a BN analysis, as described elsewhere (Heckerman, 1999; Friedman, 2003). Conditions 3 and 4 further limit the network search space by imposing structural limitations on certain nodes based on physical and biological insight. Every node in the network is assigned as a cue, signal or response. Cue nodes are nodes that are controlled externally, and as such cannot be altered by the system via parent connections. Cue nodes include the experimental addition or removal of growth factors or changes to the matrix onto which the cells are plated. Signal nodes are defined as nodes that can influence or be influenced by other nodes. An example of a signal node is the concentration of a phosphorylated protein inside a cell. Response nodes are high-level, phenomenological outputs of a system. We assume that these outputs do not cause signals, but are instead the outcome of signals. Examples of response nodes include the rates of cell proliferation and differentiation. A list of the nodes used in this work and their associated structural limitations is provided in Supplementary Table 1. The procedure for searching for high-scoring networks is shown conceptually in Figure 1a. To generate the networks in this work, we randomly initialized approximately $10^9$ different seed networks. This initial Monte Carlo sampling step provides a wide coverage of the network space, and thereby increases the chances of finding a globally optimal network structure. Each of these networks was then subjected to $10^3$ rounds of local optimization. Of the resulting set of networks, the top 100 networks were selected and subjected to additional local optimization runs until no further changes could be made to increase the probability of the network. From this resulting list, the top-scoring network was analyzed. While this approach does not guarantee that we will find the network with the global maximum score, it will result in a network that is likely, and as such captures many, if not all, of the relationships present in the original dataset. A sample analysis with a small network is shown in Figure 1b. The software program used for searching and scoring the network is a heavily modified version of the open source BNJ libraries (Hsu et al., 2003, http://bndev.sourceforge.net) and is available from the authors upon request. Network models were visualized using the program dot (Gansner and North, 2003, http://www.research.att.com/sw/tools/graphviz/). Model searching and scoring took approximately two weeks of computing time on a 64-processor IBM pSeries 655 server. Model validation via data shuffling To assess the confidence level of our model in predicting our data, we generated $10^4$ randomly shuffled datasets and calculated the $P(\mathrm{Model} \mid \mathrm{Data})$ for each set. By shuffling our data for each variable, we destroy correlations between variables, while retaining characteristics of the original set (e.g. ranges and histogram distribution of values).
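A minimal sketch of this column-wise shuffling (our own illustration; score() stands in for the Equation (5) scoring routine):

```python
import numpy as np

rng = np.random.default_rng(1)

def shuffled_copy(data):
    """Permute each column independently: inter-variable correlations are
    destroyed while every column keeps its own range and histogram."""
    out = data.copy()
    for j in range(out.shape[1]):
        rng.shuffle(out[:, j])
    return out

# Empirical null distribution of scores for the fixed network structure:
# null_scores = [score(model, shuffled_copy(data)) for _ in range(10_000)]
# A real score far above this null indicates signal well above noise.
```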
In this way, we predict the probability that our model structure would be chosen from a similar, yet randomly structured dataset. If this resulting score difference is small, this indicates that the underlying relationships in the data are weak or that the model does not discriminate signal from noise. In contrast, large score differences between shuffled and unshuffled datasets indicate that the model structure is optimized for a particular data configuration and can make reliable predictions. Similar data shuffling algorithms have been used elsewhere for model validation (Mavroudi et al., 2002; Janes et al., 2004). Model visualization An advantage of using a BN to represent data is that it can capture linear, nonlinear and multistate interactions between variables. However, a directed graph representation of a BN gives no indication of the underlying interactions between variables, thus we have developed additional visualization tools to describe these interactions. In the simplest case of a node with only one parent, we visualize the interaction between the child and parent node by plotting their probability densities on the x and y coordinates, respectively. In this way, high-probability parent/child configurations can be easily distinguished from regions of low probability. For a child node with two parents, we visualize the interaction by plotting the maximum likelihood of the child node as a function of the two parent nodes on the x and y coordinates. Although this method does not present the full distribution of values of the child node, it does provide a fairly accurate representation of the underlying distribution. Children with three parents were not visualized. Model predictions Given a BN, it is possible to perform ‘what if’ experiments to make quantitative predictions about conditions not tested in the original dataset. For example, given a network and a dataset, we can predict how setting or observing one part of the network would influence our beliefs about the rest of the network. We make this prediction by evaluating the likelihood of various instantiations of the unobserved entries using a Gibbs sampling approach. Briefly, Gibbs sampling is a Markov chain Monte Carlo method, and has been used extensively for evaluating missing or unobserved entries in BNs (Heckerman, 1999; Ghahramani, 2001). Using a Gibbs sampling model, all entries with unobserved values are initially instantiated at random. Next, each unobserved entry is taken in turn and its value is updated by sampling from the conditional probability distribution dictated by its parents. This process is repeated for 10 000 iterations as a preliminary burn-in period, after which approximately 400 samples of the states of the unobserved entry are taken at 1000-iteration intervals for each condition. Averages of these observations are then taken as expectation values for the entry given the available data. A schematic representation of this algorithm is shown in Figure 2. Note that we differentiate between data entries that are observed and entries that are set. Entries that are observed (e.g. by a biological assay) can be scored normally, as our observation does not change the system's ability to influence the entry. In contrast, if a data entry is known because we have intervened and set it (e.g. by adding an inhibitor or knocking out a gene), then that entry must be scored as if its associated node had no parents, for the state of the system has no influence over the value of the entry.
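A toy illustration of this Gibbs prediction scheme on a hypothetical three-node chain C → S → R with made-up conditional probability tables (burn-in and thinning mirror the numbers in the text; all names and values here are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

p_c = np.array([0.5, 0.5])                   # P(C)
p_sc = np.array([[0.8, 0.2], [0.3, 0.7]])    # P(S | C); rows index C
p_rs = np.array([[0.9, 0.1], [0.25, 0.75]])  # P(R | S); rows index S

def gibbs_expect_s(r_obs, burn_in=10_000, n_samples=400, thin=1_000):
    """Estimate E[S | R = r_obs] by Gibbs sampling the unobserved C and S."""
    c, s = rng.integers(2), rng.integers(2)   # random initial instantiation
    draws = []
    for it in range(burn_in + n_samples * thin):
        # resample C from P(C | S = s): proportional to P(C) P(s | C)
        w = p_c * p_sc[:, s]
        c = rng.choice(2, p=w / w.sum())
        # resample S from P(S | C = c, R = r_obs): prop. to P(S | c) P(r_obs | S)
        w = p_sc[c] * p_rs[:, r_obs]
        s = rng.choice(2, p=w / w.sum())
        if it >= burn_in and (it - burn_in) % thin == 0:
            draws.append(s)
    return float(np.mean(draws))

print(gibbs_expect_s(r_obs=1))  # posterior mean of S after observing R = 1
```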
This approach for handling observational versus interventional data in BNs has been well studied and is elaborated elsewhere (Cooper and Yoo, 1999; Tong and Koller, 2001). To test the accuracy of our model, we predicted the effects of inhibiting the phosphorylation of PKCε and RAF1 (S259). The readout of this in silico experiment was the rate of differentiated cell proliferation, $R_D$. Simulations were run under conditions where FN was present, where LAM was present, and where both LAM and FN were absent (e.g. gelatin only). RESULTS Best fit network model The output of the BN learning procedure is a single network that has the highest observed probability of describing the data, as shown in Figure 3. When this top-scoring network is compared to the second highest scoring network, we find that they differ by only one connection, between RAF (S259) and RAF (S338), which is present only in the top-scoring network. Resampling the data and repeating the network learning procedure resulted in identical highest and second highest scoring networks, indicating that the network structure learning algorithm is robust to variations in sampling. In addition, enumeration of all permitted arc removals, additions or reversals on the highest scoring network failed to produce a score improvement, demonstrating that this network structure is at least a local maximum. When the data were shuffled, the new $P(\mathrm{Model} \mid \mathrm{Data})$ value for the model in Figure 3 changed significantly in comparison to the probability score for unshuffled data (P ≪ 0.001). This difference in probability between shuffled and unshuffled data indicates that the BN in Figure 3 does detect signals within the data well above the noise level. This data shuffling approach also implies that our model can effectively predict features within the dataset. Visualization of one- and two-parent interactions A key advantage of a BN is that it can detect complicated relationships between variables. To illustrate the types of possible interactions that are detected by this algorithm, we have plotted all of the predicted one- and two-parent interactions for our dataset as described in the Systems and methods section. For one-parent interactions (Fig. 4a–n), we have plotted the probability density for the phosphorylation state of each of the variables. Such plots demonstrate that the BN can detect a variety of linear, nonlinear and multistate interactions. Examples of near-linear relationships include the connections between ERK1 and ERK2 (Fig. 4k), and between GSK3β and MEK1/2 (Fig. 4n). Nonlinear relationships are also detected, such as the biphasic relationship between STAT3 (S727) and MEK3/6 (Fig. 4c), or the relationship between AKT1 (T308) and p38MAPK (Fig. 4a). Multistate interactions represent a particularly difficult area for other modeling paradigms; however, the BN is able to uncover a number of possible two-state (Fig. 4e and j) and three-state (Fig. 4d and g) interactions. Interestingly, a few of the connections appear to describe faint, noisy relationships, such as Figure 4l. If we compare the relative contribution of such noisy relationships to the total network, we find that their presence only slightly improves the probability of the network (data not shown). This connection between the strength of the pattern in the interaction plot and relative contribution to the probability of the network highlights how such a visualization tool can help to improve our intuition about connections in the network. For two-parent interactions (Fig. 4aa–nn),
we instead plot the maximum-likelihood phosphorylation value of the child node as a function of the two parents. These plots also reveal linear, nonlinear and multistate behavior. For example, Figure 4hh shows a near-linear increase in the expected rate of undifferentiated cell proliferation ($R_U$) as a function of p38α MAPK, but only in the presence of LIF. Note that the linear relationship does not hold in the absence of LIF (bottom half of Fig. 4hh). Similar near-linear behavior is observed for the phosphorylation level of PKCα as a function of PKCε, but only in the absence of FN (Fig. 4jj). Consistent nonlinear relationships between the phosphorylation data of various proteins are also uncovered, such as the effects of RAF1 (S338) and GSK3β on RAF1 (S259) (Fig. 4nn), or PKCε and RAF1 (S259) on AKT1 (S472) (Fig. 4kk). Multistate behaviors are also detected, such as the two states of SRC (Y529) phosphorylation as a function of AKT1 (S473) phosphorylation in the presence of FN (Fig. 4dd). Similarly, Figure 4ll and mm show almost three possible states of $N_f$ and $R_C$, respectively. Comparison to known signaling pathways In the following sections, we highlight two signal transduction pathways and compare their known biochemical topologies to the BN topologies predicted from data alone. These comparisons serve as an external validation of the BN model, but also identify some important differences between biochemical and BN representations of signal transduction. Comparison to known signaling pathways: LIF signaling via STAT When LIF is added to the growth medium of mouse ES cells, the cells can be grown in an undifferentiated state indefinitely (Smith et al., 1988; Williams et al., 1988). Based on this information, we hypothesized that the signal transduction pathways that are activated by LIF should also be implicated in either undifferentiated stem cell growth or inhibition of differentiation. Two primary routes of LIF signal transduction are via the protein STAT3 and the AKT pathway. Earlier work suggested that STAT3 phosphorylation is both necessary and sufficient for mouse ES cell self-renewal (Matsuda et al., 1999), implying that LIF's primary effect on ES cells is via STAT3. A schematic representation of LIF's effect on STAT3 and AKT is shown in Figure 5a and is compared to the Bayesian network prediction in Figure 5b. Because the experimental dataset did not contain all members of the signaling pathway, we only compared the observed species. Briefly, LIF is thought to bind to a heterodimer of gp130 and LIFR, which in turn induces the phosphorylation of JAK. In its phosphorylated form, JAK then phosphorylates the tyrosine 705 residue of STAT3, which induces STAT3 dimerization and translocation to the nucleus. Once in the nucleus, STAT3 is phosphorylated at the serine 727 residue (Decker and Kovarik, 2000), possibly by kinases in the MAPK pathway. This serine-phosphorylated form of STAT3 is then active and able to regulate gene transcription. In a parallel pathway, JAK is also thought to activate AKT. AKT in turn activates p70 S6, which directly regulates gene expression (Schiemann et al., 1995; Oh et al., 1998; Noda et al., 2000). When the literature pathway is compared to the BN, we see good agreement. The BN directly connects LIF to STAT3, although the causal pattern of phosphorylation is reversed. The BN predicts that the presence of LIF determines the phosphorylation state of STAT3 at the S727 site, which in turn determines the phosphorylation state of the Y705 site.
Experimental evidence suggests that STAT3 phosphorylation on the Y705 and S727 sites takes place on the order of tens of minutes after the addition of LIF (data not shown), while the measurements in our datasets were taken on the order of days after the addition of LIF. Given this disparity of time scales, it is not surprising that the BN is unable to resolve the sequence of STAT3 phosphorylation events, as the BN was constructed using data that are essentially at steady state, while the signal transduction pathway describes a transient event. In line with experimental findings, the BN also predicts a relationship between AKT1 (T308) and P70 S6K phosphorylation, the data for which are visualized in Figure 4b. However, the BN finds no direct or indirect connection between LIF and AKT1 phosphorylation. This lack of connection could be because the phosphorylation state of JAK (the signaling intermediate between LIF and AKT1) was not measured. Another possibility for this lack of connection is that other signaling pathways played a more dominant role in determining AKT1 phosphorylation. Comparison to known signaling pathway: MAPK/ERK signaling The MAPK/ERK pathway encompasses a large family of signal transduction pathways that have been implicated in a variety of cellular processes. In Figure 6a and b, we show what is known about the MAPK/ERK signaling proteins in our dataset along with the predictions made by the BN. When we compare the BN and the known biochemical pathway, we find that the sequence of signaling events is preserved. For example, the canonical MAPK/ERK pathway proceeds from RAF to MEK to ERK—an order that the BN also extracts from the proteomic dataset. However, the BN also predicts unusual lateral connections between species, such as a relationship between the phosphorylation states of both sites of RAF1, or the connection between ERK1 and ERK2. Interestingly, the BN shows a connection between MEK3/6 and MEK1/2. Although there is no experimental evidence that MEK3/6 directly affects MEK1/2, there is evidence that both targets are phosphorylated by a common kinase, MEKK1 (Xu et al., 1995; Balasubramanian et al., 2002). Given this common connection, it is not surprising that MEK3/6 and MEK1/2 are connected in the network, as their phosphorylation appears to be co-regulated by an unobserved species. One area of disagreement between the BN and the biochemical network is the upstream parent of ERK1 and ERK2. Biochemical data suggest that ERK1 and ERK2 are both regulated by MEK1/2 (Johnson and Lapadat, 2002), while the BN predicts that ERK1 is regulated by MEK3/6 and ERK2 is regulated by ERK1. The density plot for ERK1 and ERK2 shows that ERK1 and ERK2 are almost linearly related to each other, and often are either both on or both off (Fig. 4k). This finding suggests that ERK1 and ERK2 are nearly interchangeable, and as such explains why the BN connected them. The impact of MEK3/6 on ERK1 is more complicated, as MEK3/6, GSK3α (S21) and RB (S807/S811) are all parents of ERK1. From this list of parent nodes, it appears that ERK1 is an integrating node for many signals—only one of which is a MAPK pathway element. One additional way to judge the agreement between the BN and the experimental conditions is by examining which connections are not present. For example, it is known that MEK6 does not activate ERK1 or ERK2, but instead activates ERK5 or ERK6 (Pearson et al., 2001), neither of which was represented in our original dataset.
Similarly, in our model we do not find any connections from MEK6 to ERK1 or ERK2, which agrees with experimental findings. Primary effectors of cell proliferation and differentiation One goal of this work is to determine what signaling pathways influence ES cell proliferation and differentiation. By creating a BN of this dataset, we can find the most direct effectors of proliferation and differentiation by looking at the direct parent nodes of the undifferentiated cell proliferation rate ($R_U$), differentiated cell proliferation rate ($R_D$) and differentiation rate ($R_C$), as shown in Figure 7. The rate of differentiated cell proliferation, $R_D$, is found to be governed by LAM, and the phosphorylation states of RAF1 and p38αMAPK. LAM and RAF have been shown to influence both proliferation and differentiation in a number of cell types (Grant et al., 1990; Vaudry et al., 2002; Park et al., 2003), making them both good candidates for predictors of $R_D$. In addition, RAF has been suggested to play an anti-apoptotic role in differentiated cells (Cox and Der, 2003)—a role that could also influence the apparent differentiated cell proliferation rate. Similarly, p38αMAPK has been linked to proliferation of a variety of cell types, including B lymphocytes and melanoma cells (Craxton et al., 1998; Recio and Merlino, 2002; Ogasawara et al., 2003). The rate of undifferentiated cell proliferation, $R_U$, is found to be best predicted by LIF and p38αMAPK phosphorylation. This two-parent interaction is visualized in Figure 4hh. For similar reasons as stated above, we were not surprised to find that p38αMAPK is implicated in undifferentiated cell proliferation, as p38αMAPK's role in proliferation in general is well established. However, we were surprised to find a direct connection between LIF and $R_U$. This connection could be interpreted as meaning that LIF activates another, unknown or unmeasured pathway, and in doing so affects the rate of undifferentiated cell growth. Although it has been suggested by other groups that LIF's primary role is to act as an anti-differentiation agent (Zandstra et al., 2000), our analysis suggests that LIF influences undifferentiated cell growth, but in a p38αMAPK-dependent manner. Examination of the raw data (Fig. 4hh) reveals that increasing p38αMAPK phosphorylation has a dose-dependent effect on $R_U$, but only in the presence of LIF. This finding suggests a synergistic interaction between LIF and p38αMAPK that has not been recognized previously. Model predictions of differentiated cell proliferation A BN representation of a signaling network allows us to infer predictions about outcomes that were not observed in the original dataset. As a test, we predicted the rate of differentiated cell proliferation ($R_D$) for a number of different conditions and compared this result to earlier experimental measurements (Prudhomme et al., 2004a). The result of this analysis is shown in Figure 8. Overall, the experimentally observed trends are largely consistent, at least qualitatively, with the a priori model predictions. From Figure 3, RAF1 (S259) is a parent node of $R_D$, and therefore is expected to have a strong influence on the state of $R_D$. Indeed, sampling from the BN predicts that inhibiting RAF1 should lead to a decrease in $R_D$, independent of the growth substrate. This same trend is seen experimentally, although the decrease in $R_D$ is small when cells are grown on FN.
The BN predicts that untreated cells should have the lowest differentiated growth rate when grown on FN, which we also observe experimentally, although the effect is more dramatic in the experimental data. In addition, we predicted the influence of PKCε on $R_D$. In the BN, PKCε is not located near $R_D$, and as such is predicted to act indirectly via other signaling pathways. The BN predicts that when PKCε is inhibited, differentiated cell growth rates will be lowest in the presence of LAM, then FN, and highest when grown on gelatin alone. This same trend is observed experimentally for FN and gelatin; however, no data were available for LAM. When we compare the proliferation rates of cells on a single substrate, such as gelatin, with and without PKCε inhibition, the BN predicts that the rate will not change. However, the experimental results do not confirm this prediction, and instead show that PKCε inhibition results in a slight increase in proliferation on gelatin and a slight decrease in proliferation on FN. DISCUSSION The preceding results have shown that BNs can provide useful insight into complex biological processes. In generating a BN of the signaling processes active in ES cells, we were able to identify which pathways most strongly govern self-renewal and differentiation. Furthermore, we demonstrated that other connections in the learned network map onto known signal transduction pathways. In the following sections we differentiate between BNs and biochemical networks, and discuss how BNs can be further applied to systems biology and drug discovery. Bayesian networks versus biochemical networks Although both BNs and biochemical networks are portrayed as directed graphs, they represent different yet related concepts. As described in earlier sections, connections between nodes in a BN denote apparent causal relationships between variables at a single point in time. As such, a BN answers questions such as ‘What is the probability that a node is in a particular state given observations of other nodes?’ or ‘To predict the state of this node, what other nodes should be observed?’ In contrast, connections between nodes in a biochemical pathway imply a kinetic, or time-varying, process. Thus, biochemical pathways answer the question ‘What happens next?’ From these differences between BNs and biochemical pathways, we expect to find some overlap between the two approaches. In general, when trying to predict the state of a target protein, a reasonable guess would be to look at the states of the proteins nearby on the signaling pathway. However, if the data were taken at a time scale that differed from that of the signaling pathway, we would not expect there to be a close relationship. For the data in this work, measurements were taken over a period of days, while some events, such as LIF-induced STAT3 phosphorylation, take place over a time scale of minutes to hours. Similarly, if our protein of interest were a common element of many signaling events, then the relationship between the phosphorylation state of that protein and that of its immediate neighbors in the signal transduction pathway may be difficult to determine. Biochemical and Bayesian representations do often align in cases such as when a reaction is at or near steady state. For example, take the case of a biochemical reaction between two proteins, A and B, to form a complex, AB.
Before steady state has been reached, the concentration of AB cannot be predicted from the concentrations of A or B, because the concentration of AB is still dominated by its initial value. This kind of unsteady-state relationship would likely not be detected by a BN, even though the underlying biochemical network is simple. At steady state, however, the concentration of AB increases proportionally to the product of A and B. This steady-state relationship would result in reproducible patterns of concentrations, much like those shown in Figure 4, and could be detected using a BN. The result is that at steady state, biochemical pathways and BNs both produce very similar networks. Given these differences, there may be some applications where BNs may be a more useful representation of a cell’s signaling state than a traditional biochemical signaling pathway. Because BNs describe apparent causal relationships between variables, they provide a natural route to suggest control points, or lever points (Holland, 1995), in the signaling pathway. When given a biochemical pathway, it is difficult to know which proteins are the most logical points for intervention because nonlinear effects such as feedback loops and signal amplification can produce a wide variety of behaviors. In contrast, a BN automatically filters the data to find causal relationships between variables, no matter how nonlinear the connection. In addition, BNs can also be used when only steady-state data are available—a common situation where kinetic biochemical networks are less useful. Impact of restricting parent and child connections In choosing to restrict the number of parent and child connections a node can have, we are attempting to balance the accuracy of the BN model with its interpretability. It has been shown that many biological networks are scale-free, meaning that most nodes have few connections while a few nodes have many connections (Jeong and Tombor, 2000). For the BN models presented in this work, this scale-free property means that restricting the number of connections a node can have should yield accurate predictions for most nodes, but will be inaccurate for rare, highly connected nodes. One logical way to increase the accuracy of a BN would be to increase the number of connections allowed for a node, or better yet to remove the restriction altogether. However, a key advantage of representing a biological dataset as a BN is that the resulting network is human interpretable. More empirically based methods such as spline fits, neural networks and principal component analysis are able to numerically fit a given dataset at least as well as a BN. However, these other methods do not easily discriminate between weaker, possibly spurious relationships in the data and stronger, clearer relationships. Also, in fitting the data these algorithms provide no insight into the physical or biochemical relationships that drive the system, but instead provide only a black-box description. By restricting the number of parent and child connections in a BN, we overcome both of these problems of interpretability while maintaining reasonable accuracy. First, when the network search algorithm restricts the number of possible connections for each node, it then selects only the networks that illustrate strong relationships in the data. These strong relationships are selected because networks that contain them have higher probability scores.
Second, restricting the number of allowed connections per node results in a simplified, and therefore more interpretable, network. By way of a counterexample, imagine if a network contained a node with five parents. How can we interpret the contribution of these parents? Admittedly, it is possible that all five parents do play a mechanistic role. However, the mechanistic relationship may be extremely difficult to intuitively understand if the underlying relationship is as complex and nonlinear as we already see for two- and three-parent cases in Figure 4. Another alternative is that only a few of the five parents may have any real mechanistic importance, while the remaining nodes were included because of weak and/or spurious relationships in the data. Restricting the number of connections to and from a node is a simple method to select relatively accurate, yet interpretable, networks. Finding algorithms that balance the accuracy and interpretability of a BN is an active area of research. Other methods, such as network structure averaging (Neapolitan, 2003) and bootstrap confidence estimation (Friedman et al., 1999), have also been used with varying degrees of success. Applications of Bayesian network approach to cell signaling BNs as we have implemented them have a number of key advantages as a way of storing, representing and analyzing large and complex biological networks. Data for these networks are stored as an individual measurement coupled with the error of that measurement. Because errors are included in these measurements, we automatically place more weight on accurate measurements, thereby properly combining data from different sources. In addition to the ability to store errors with data, BNs also allow us to include entries that were not measured (Heckerman, 1999). By representing our data as a BN, we gain flexibility. For example, a goal in this work was to represent only the most important connections between phosphorylation targets, therefore we limited the number of children and parents that can be connected to any given node. If we were to relax this assumption, we could obtain a more complete representation of the data, but at the cost of requiring a more complex model. Because BNs impartially find connections between proteins, the method allows the data to speak for themselves. The model makes no assumptions of linearity or any particular functional form, therefore we are not constrained to describe only relationships that can be easily cast as algebraic or differential equations. The main disadvantage of BNs is that they are computationally expensive to run. Searching for high-scoring networks is an NP-hard problem (Heckerman, 1999), meaning that for large problems there is no way to be sure when the calculation is done. However, with approximations like those used in this work and access to large parallel computing resources, the computing limitation can, in part, be overcome to make BNs practical. We expect that future advances in both algorithms and computational speed will rapidly overcome this limitation, and thereby make BNs more widely accessible. From a practical standpoint, BN-generated hypotheses provide a logical starting point for drug target selection. For example, the BN in this work could be used to predict the effect of various kinase inhibitors on high-level physiological processes such as proliferation and differentiation.
Because the network automatically accounts for noise, the predictions would also be produced as a distribution of possible responses based on the uncertainty of the original data. If an inhibitor were selected and tested, then the results of this test could be incorporated into the original dataset used to generate the BN and the network re-derived to see how the experiment alters our understanding. By cycling from model to experiment and back again, we can then generate a closed-loop system for predicting drug effects and incorporating new findings as they become available. CONCLUSION This work shows that BNs can be productively employed for the analysis of cell signaling problems, in application to multivariate proteomic datasets. Many of the current directions in biology are focused on gathering more measurements of an increasingly complex set of conditions. We have shown that BNs are able to organize these kinds of data at two levels of abstraction. At the first level, the BN itself is a directed graph that reflects many known physiological connections in the original source data. In addition, by visualizing data as a directed graph, large and noisy biological datasets can be summarized in a human-interpretable form. On a second level of abstraction, the connections between nodes can, in some cases, be plotted. These plots reveal linear, nonlinear or multistate relationships, and in doing so allow us to better understand the underlying biochemical mechanisms. Finally, given a network, we have shown that we are able to make credible predictions of new conditions. By using such an inference approach, we can perform experiments in silico first to identify combinations of input parameters likely to yield interesting and useful results. Table 1 Summary of the mouse ES cell data used to generate the BN

Time (days)      0    2    3    5
# Conditions          16   16   16
LIF                   ±    ±    ±
FGF              —    ±    ±    ±
Fibronectin      —    ±    ±    ±
Laminin          —    ±    ±    ±

Cells were cultured under a combination of different cytokine and matrix stimuli, and data were taken at one of four time points. In total, 49 measurements were made. For each measurement, the phosphorylation level of 28 different proteins was measured, along with 8 different metrics of cell proliferation and differentiation. A more detailed description of each measurement can be found in Supplementary Table 1. Fig. 1 Summary of the optimization algorithm used to search for high-scoring networks. (a) A conceptual outline of the procedures used in searching the space of possible networks. (b) A sample calculation with a small network to illustrate how the search process takes place. Fig. 2 Summary of the Gibbs sampling algorithm used to fill in unobserved entries in the dataset. Fig. 3 Most likely BN found to describe the signaling events that led to embryonic stem cell differentiation in mouse. Oval nodes represent the variables listed in Supplementary Table 1. Arrows connecting nodes represent apparent causal relationships.
Fig. 4 Probability density plots for one- and two-parent interactions. Arrows along the axes denote increasing protein phosphorylation in all figures. For one-parent interactions (a–n), the plot is shown as a two-dimensional probability density field, where green and red regions represent high- and low-probability density, respectively. For two-parent interactions (aa–nn), the plot shows the maximum-likelihood estimate for the value taken on by the child node for that given parent configuration. Here, green and red regions represent high- and low-expected phosphorylation levels of the child node. Note that the cue nodes LIF, FGF, LAM and FN were tested only with two states, absent and present. Fig. 5 LIF signal transduction cascade. (a) Signaling pathway derived from the literature. (b) Related signaling pathway that was predicted by the BN in Figure 3. Single arrows represent direct connections between species, while double arrows indicate a connection through other, intermediate nodes or species. Fig. 6 MAPK/ERK signaling. (a) Signaling pathway derived from the literature. (b) Related signaling pathway that was extracted from the BN in Figure 3. Note that gray ovals represent species not present in the experimental dataset. Single arrows represent direct connections between species, while double arrows indicate a connection through other, intermediate nodes or species. Fig. 7 Predicted primary upstream effectors of cell proliferation and differentiation ($R_U$, $R_D$ and $R_C$) based on the BN in Figure 3. Fig. 8 Comparison of predicted and experimentally observed differentiated cell proliferation rates ($R_D$) under drug treatment. Note that the experimentally measured rates are average rates over a five-day time course, while the predicted rates are instantaneous growth rates.
Average rates tend to be significantly lower than instantaneous rates; however, they both exhibit the same trends.

We would like to thank Wing Yung and Christopher Vincent from IBM for their help in parallelizing the BN search code. We would also like to thank Peter Zandstra and Sampsa Hautaniemi for helpful discussions. This work has been supported by the NIGMS Cell Migration Consortium Computational Modeling Initiative and the NSF Biotechnology Process Engineering Center ERC program. Computing resources were provided by the MIT Computational and Systems Biology (CSBi) high-performance computing center.
2017-01-21 11:01:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5889480113983154, "perplexity": 2117.6865618490906}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00159-ip-10-171-10-70.ec2.internal.warc.gz"}
https://experts.mcmaster.ca/display/publication221079
# Anomalous behavior in the magneto-optics of a gapped topological insulator

The Dirac fermions at the surface of a topological insulator can be gapped by introducing magnetic dopants. Alternatively, in an ultra-thin slab with thickness on the order of the extent of the surface states, both the top and bottom surface states acquire a common gap value ($\Delta$) but with opposite sign. In a topological insulator, the dominant piece of the Hamiltonian ($\hat{H}$) is of a relativistic nature. A subdominant non-relativistic piece is also present, and in an external magnetic field ($B$) applied perpendicular to the surface, the $N=0$ Landau level is no longer at zero energy but is shifted to positive energy by the Schrödinger magnetic energy. When a gap is present, it further shifts this level by $-\Delta$: down for positive $\Delta$ and up by $|\Delta|$ for a negative gap. This has important consequences for the magneto-optical properties of such systems. In particular, at charge neutrality, the lowest energy transition displays anomalous non-monotonic behaviour as a function of $B$ in both its position in energy and its optical spectral weight. The gap can also have a profound impact on the spectral weight of the interband lines and on corresponding structures in the real part of the dynamical Hall conductivity. Conversely, the interband background in zero field remains unchanged by the non-relativistic term in $\hat{H}$ (although its onset frequency is modified).
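In compact form, and assuming only the standard definition of the Schrödinger magnetic energy for a band mass $m$ (the abstract itself does not spell out the expression), the statements above can be summarized as:

$$E_{N=0} \;=\; \frac{\hbar e B}{2m} \;-\; \Delta$$

so the $N=0$ level rises linearly with $B$ through the non-relativistic term, and is pulled down by a positive gap and pushed up by a negative one.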
2019-06-26 02:01:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7948275208473206, "perplexity": 565.9915337920935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00416.warc.gz"}
https://dshizuka.github.io/networkanalysis/02_dataformats.html
## 2.1 Basic Data Formats For Networks

There are three basic data formats that can be used to describe networks: adjacency matrix, edge list, and adjacency list. Each format has its pros and cons. There are other variations on these (e.g., a biadjacency matrix for bipartite networks).

### 2.1.1 Adjacency Matrix

An adjacency matrix is a matrix in which the rows and columns represent different nodes. In an unweighted adjacency matrix, the edges (i.e., lines) are represented by 0 or 1, with 1 indicating that these two nodes are connected. If two nodes are connected, they are said to be adjacent (hence the name, adjacency matrix). In a weighted matrix, however, you can have different values, indicating different edge qualities (or tie strengths).

Let’s start by loading the igraph package and setting up a toy network (same as in Lesson 1: Intro)

library(igraph)

## Warning: package 'igraph' was built under R version 3.5.2

g=make_graph(~A-B-C-A, D-E-F-D, A-F)
V(g)$color=c("white", "red", "green", "blue", "orange", "yellow")
E(g)$weight=1:7
E(g)$color=rainbow(7)

We can now extract the adjacency matrix of the network we created, called g:

as_adjacency_matrix(g, sparse=F)

##   A B C D E F
## A 0 1 1 0 0 1
## B 1 0 1 0 0 0
## C 1 1 0 0 0 0
## D 0 0 0 0 1 1
## E 0 0 0 1 0 1
## F 1 0 0 1 1 0

Note the argument sparse=F in the code above. This displays the adjacency matrix with 0s. If sparse=T, the output is a special format of the matrix where the 0s are replaced with a period (this is to make it easier to see very large matrices). Also note that, because the network is undirected and unweighted, the corresponding adjacency matrix is symmetrical (the value for row A, column B is identical to row B, column A) and binary (values are 0 or 1).

### 2.1.2 Edge List

An edge list is a two-column list of the two nodes that are connected in a network. In the case of a directed network, the convention is that the edge goes from the vertex in the first column to the vertex in the second column. In an undirected network, the order of the vertices doesn’t matter. For weighted networks, you may have a third column that indicates the edge weight. You can get the edgelist of any igraph object as well:

as_edgelist(g)

##      [,1] [,2]
## [1,] "A"  "B"
## [2,] "A"  "C"
## [3,] "A"  "F"
## [4,] "B"  "C"
## [5,] "D"  "E"
## [6,] "D"  "F"
## [7,] "E"  "F"

### 2.1.3 Affiliation Matrix (aka individual-by-group matrix)

In many cases, we will construct social networks from co-membership in groups. For example, we would draw edges between individuals based on their patterns of co-occurrence in a flock. Similarly, we could construct networks of species co-occurrences in populations, etc. To do this, we would first need data in a matrix in which rows represent individuals (or species) and columns represent groups (or populations). Note that you could flip the columns and rows–either way is fine. You just need to be aware of how you arranged it. Here’s a toy example in which individuals A through E occur in different combinations in 4 groups.

A=c(1,1,0,0)
B=c(1,0,1,0)
C=c(1,0,1,0)
D=c(0,1,0,1)
E=c(0,0,1,1)
aff=matrix(c(A,B,C,D,E),nrow=5,byrow=TRUE)
dimnames(aff)=list(c("A","B","C","D","E"),c("Group1","Group2","Group3","Group4"))
aff #The individual-by-group matrix

##   Group1 Group2 Group3 Group4
## A      1      1      0      0
## B      1      0      1      0
## C      1      0      1      0
## D      0      1      0      1
## E      0      0      1      1

There are different ways to convert this data into a social network–i.e., a network that describes which individual co-occurs with which individual in groups.
One simple way is to do what is called a one-mode projection of this data by multiplying this matrix with the transpose of itself. Note that matrix multiplication notation is %*% in R.

aff %*% t(aff)

##   A B C D E
## A 2 1 1 1 0
## B 1 2 2 0 1
## C 1 2 2 0 1
## D 1 0 0 2 1
## E 0 1 1 1 2

This resulting matrix is now an adjacency matrix in which the diagonal represents how many groups each individual participated in (2 for all of them), and the off-diagonals represent the number of times a pair of individuals were in the same group. You can use this as the adjacency matrix to convert this into a network:

m2=aff %*% t(aff)
g2=graph_from_adjacency_matrix(m2, "undirected", weighted=T, diag=F)
plot(g2, edge.width=E(g2)$weight)

### 2.1.4 Adjacency List

An adjacency list, also known as a node list, presents the ‘focal’ node on the first column, and then all the other nodes that are connected to it (i.e., adjacent to it) as columns to the right of it. In a spreadsheet, this would be a table whose rows have different numbers of columns, which is often very awkward to deal with, like this:

Focal Node   Neighbor_1   Neighbor_2   Neighbor_3
A            B            C            F
B            A            C
C            A            B
D            E            F
E            D            F
F            A            D            E

In R, you can display an adjacency list as an actual ‘list object’, with each item representing neighbors of each focal node:

as_adj_list(g)

## $A
## + 3/6 vertices, named, from f4bdde8:
## [1] B C F
##
## $B
## + 2/6 vertices, named, from f4bdde8:
## [1] A C
##
## $C
## + 2/6 vertices, named, from f4bdde8:
## [1] A B
##
## $D
## + 2/6 vertices, named, from f4bdde8:
## [1] E F
##
## $E
## + 2/6 vertices, named, from f4bdde8:
## [1] D F
##
## $F
## + 3/6 vertices, named, from f4bdde8:
## [1] A D E

## 2.2 Data formats for directed and weighted networks

Let’s consider some important aspects of data formats that come with networks that are directed or weighted. I will keep this short by listing some important things to consider, and a line of code that will display this.

### 2.2.1 Directed networks

Let’s create an igraph object for a directed network called dir.g. For directed networks, the adjacency matrix is not symmetrical. Rather, the cell value is 1 if the edge goes from the row vertex to the column vertex.

dir.g=make_graph(~A-+B-+C-+A, D-+E-+F-+D, A+-+F)
plot(dir.g)
as_adjacency_matrix(dir.g, sparse=F)

##   A B C D E F
## A 0 1 0 0 0 1
## B 0 0 1 0 0 0
## C 1 0 0 0 0 0
## D 0 0 0 0 1 0
## E 0 0 0 0 0 1
## F 1 0 0 1 0 0

For directed networks with mutual edges (represented by double-edged arrows), the edge list lists both directions separately:

as_edgelist(dir.g)

##      [,1] [,2]
## [1,] "A"  "B"
## [2,] "A"  "F"
## [3,] "B"  "C"
## [4,] "C"  "A"
## [5,] "D"  "E"
## [6,] "E"  "F"
## [7,] "F"  "A"
## [8,] "F"  "D"

You can see that, since the dir.g network object contains one mutual edge (A<->F), the edge list has 8 rows, while the edgelist for the undirected version of the network has 7 rows.

### 2.2.2 Weighted networks

Let’s now consider what the data formats look like for weighted networks. To do this, let’s go back to our original network, g. Let’s say that the edge widths that we added represent edge weights or values. Then, the adjacency matrix for this network can be shown by using the attr= argument within the function to call the adjacency matrix to specify the edge weights:

as_adjacency_matrix(g, sparse=F, attr="weight")

##   A B C D E F
## A 0 1 2 0 0 3
## B 1 0 4 0 0 0
## C 2 4 0 0 0 0
## D 0 0 0 0 5 6
## E 0 0 0 5 0 7
## F 3 0 0 6 7 0

You can display the edge weights as an edgelist as well.
In fact, igraph has a convenient function that will display all of the edge attributes together as a data frame:

as_data_frame(g)

##   from to weight     color
## 1    A  B      1 #FF0000FF
## 2    A  C      2 #FFDB00FF
## 3    A  F      3 #49FF00FF
## 4    B  C      4 #00FF92FF
## 5    D  E      5 #0092FFFF
## 6    D  F      6 #4900FFFF
## 7    E  F      7 #FF00DBFF

Recall that in undirected networks, the “from” and “to” designations are arbitrary (it is simply organized in alphabetical order here). If you want to show an edge list as a three-column matrix with the two nodes and edge weights only, you can just specify which edge attribute you want to use as the edge weight, e.g.:

as_data_frame(g)[,c("from", "to", "weight")]

##   from to weight
## 1    A  B      1
## 2    A  C      2
## 3    A  F      3
## 4    B  C      4
## 5    D  E      5
## 6    D  F      6
## 7    E  F      7

## 2.3 Going from Data to Networks

### 2.3.1 Creating a network from your edge list

Creating a network from an edgelist that you have created is easy. First, import the .csv file called “sample_edgelist.csv”.

edge.dat=read.csv("https://dshizuka.github.io/network2018/NetworkWorkshop_SampleData/sample_edgelist.csv")
edge.dat

##        V1      V2 weight
## 4   Betty Charles      3
## 5   Betty  Daniel      1
## 6   Betty   Frank      2
## 7  Daniel  Esther      1
## 8  Daniel   Frank      1
## 9  Esther   Frank      1
## 10  Frank    Gina      2

So this data frame has three columns: the first two columns are the edge list, and the third column is an edge value we called “weight”. If we have the data organized this way, we can simply use a function called graph_from_data_frame() to create a network we will call eg.

set.seed(2)
eg=graph_from_data_frame(edge.dat, directed=FALSE)
eg

## IGRAPH c85a423 UNW- 7 10 --
## + attr: name (v/c), weight (e/n)
## + edges from c85a423 (vertex names):
## [9] Esther--Frank Frank --Gina
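Going the other way, from an igraph object back out to files, uses the same accessor functions shown above. Here is a minimal sketch (the output file names are made up for illustration; everything else uses only igraph and base-R calls already introduced):

# Export the weighted edge list of eg to a CSV file
el <- as_data_frame(eg)                       # from/to/weight data frame
write.csv(el, "eg_edgelist.csv", row.names=FALSE)

# Export the weighted adjacency matrix as well
adj <- as_adjacency_matrix(eg, sparse=F, attr="weight")
write.csv(adj, "eg_adjacency.csv")

# Round-trip check: rebuild the graph from the saved edge list
eg2 <- graph_from_data_frame(read.csv("eg_edgelist.csv"), directed=FALSE)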
2021-08-03 15:58:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4874187111854553, "perplexity": 2279.639323300154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00388.warc.gz"}
https://testbook.com/objective-questions/mcq-on-random-processes--5eea6a0f39140f30f369e71d
# The random variable $$Y = \int_{-\infty}^{\infty} W(t)\,\phi(t)\,dt$$, where $$\phi(t) = \begin{cases} 1, & 5 \le t \le 7 \\ 0, & \text{otherwise} \end{cases}$$ and W(t) is a real white Gaussian noise process with two-sided power spectral density SW(f) = 3 W/Hz, for all f. The variance of Y is:

## Random Processes MCQ Question 1 Detailed Solution

Concept:

The white Gaussian noise process considered here is zero-mean, so the variance of Y equals its mean square value, $$\text{Var}(Y) = E[Y^2]$$.

Also, the autocorrelation function and the power spectral density form a Fourier transform pair, i.e.

$$R_w(\tau) \;\overset{FT}{\leftrightarrow}\; S_w(f)$$

Application:

Sw(f) = 3 W/Hz. Since the power spectral density is constant, the autocorrelation function is an impulse:

$$R_w(\tau) = 3\,\delta(\tau), \qquad R_w(t_2 - t_1) = 3\,\delta(t_2 - t_1) \quad \ldots (1)$$

First, the mean:

$$E[Y] = E\left[\int_{-\infty}^{\infty} w(t)\,\phi(t)\,dt\right] = \int_{-\infty}^{\infty} E[w(t)]\,\phi(t)\,dt = 0$$

since E[w(t)] = 0 for a white Gaussian noise process. The variance of Y is then:

$$E[Y^2] = E\left[\int_{-\infty}^{\infty} w(t_1)\phi(t_1)\,dt_1 \int_{-\infty}^{\infty} w(t_2)\phi(t_2)\,dt_2\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_w(t_2 - t_1)\,\phi(t_1)\,\phi(t_2)\,dt_1\,dt_2$$

From Equation (1) and the sifting property of the impulse (the double integral collapses onto the line t1 = t2 = t):

$$E[Y^2] = 3\int_{-\infty}^{\infty} \phi^2(t)\,dt = 3\int_{5}^{7} dt = 3 \times 2 = 6$$

# One of the following types of noise assumes importance at high frequencies:

1. Thermal noise
2. Shot noise
3. Transit-time noise
4. Flicker noise

Option 3 : Transit-time noise

## Random Processes MCQ Question 2 Detailed Solution

High-frequency noise:

• These noises are also known as transit-time noise.
• They are observed in semiconductor devices when the transit time of a charge carrier crossing a junction is comparable to the period of the signal.

The various types of internal noise are:

Shot noise:

• These noises generally arise in active devices due to the random behavior of charge carriers.
• In the case of the electron tube, shot noise is produced due to the random emission of electrons from the cathode.

Partition noise:

• When a current divides between two or more paths, the noise generated is known as partition noise.
• It is caused by random fluctuations in the division.

Low-frequency noise:

• Also known as flicker noise.
• These types of noise are generally observed at frequencies below a few kHz.
• The power spectral density of this noise increases as frequency decreases, hence the name low-frequency noise.

Thermal noise:

• Thermal noise is random and often referred to as white noise or Johnson noise.
• Thermal noise is generally observed in resistors or the sensitive resistive components of a complex impedance, due to the random and rapid movement of molecules, atoms, or electrons.

# The response of a Gaussian random process applied to a stable linear system is:

1. A Gaussian random process
2. Not a Gaussian random process
3. Completely specified by its mean and auto-covariance functions

Which of the above statements is/are correct?

1. 1 only
2. 2 only
3. 2 and 3
4. 1 and 3

Option 4 : 1 and 3

## Random Processes MCQ Question 3 Detailed Solution

An important property of Gaussian random processes is that their probability density function is completely determined by their mean and covariance, i.e.

$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

where μ is the mean and σ is the standard deviation.

A linear system forms its output as a weighted superposition of the input; in the frequency domain, if the input X(t) is passed through the system, the output is

Y(ω) = X(ω) × H(ω)

A sinusoidal input is therefore changed only in magnitude and phase, but its nature remains the same.

Observations:

1) The output magnitude and phase are altered, but the nature of the process is preserved.
2) So, if the input is Gaussian, the output is also Gaussian, and being Gaussian it is completely specified by its mean and auto-covariance.

# The noise caused by random variations in the arrival of electrons or holes at the output electrode of an amplifying device

1. White noise
2. Flicker noise
3. Shot noise
4. Transit-time noise

Option 3 : Shot Noise

## Random Processes MCQ Question 4 Detailed Solution

Shot noise:

• It is a type of noise that appears in all amplifying and active devices.
• It is associated with a current flow.
• The noise caused by random variations in the arrival of electrons or holes at the output electrode of an amplifying device is called shot noise.
• In semiconductor devices, shot noise is due to the random variation in the diffusion of minority carriers.
• Current is a flow of carriers, each of which carries a finite amount of charge.
• For diodes, the exact expression for the mean-square shot noise current is $$\overline{I_n^2} = 2 I_{dc}\, q B$$
• At low frequencies, the noise varies approximately as 1/f and is called excess, or flicker, noise. It is caused by the recombination and generation of carriers on the surface of the crystal.
• At intermediate frequencies, the noise is independent of frequency. This white noise is caused by the bulk resistance of the semiconductor material and the statistical variation of the currents (shot noise).
• The third region is characterized by an increase of the noise figure with frequency and is essentially caused by a decrease in power gain with frequency.

# The relation between power spectral density S(f) and the total average power over 1 Ω is:

1. P = S(f) × B/2
2. $$P = \int_{-\infty}^{\infty} S(f)\,df$$
3. $$P = \int_{-\infty}^{\infty} S(f)\exp(j\omega t)\,df$$
4. P = S(f) × B

Option 2 : $$P = \int_{-\infty}^{\infty} S(f)\,df$$

## Random Processes MCQ Question 5 Detailed Solution

Analysis:

The power spectral density (PSD) of a WSS process is defined as the Fourier transform of its autocorrelation function:

$$S_X(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-j2\pi f\tau}\, d\tau$$

Note: The autocorrelation can be obtained from the PSD through the inverse transform,

$$R_x(\tau) = \int_{-\infty}^{\infty} S_X(f)\, e^{j2\pi f\tau}\, df$$

→ RX(0) = ∫ SX(f) df = power of x(t)
→ SX(f) is real and SX(f) ≥ 0
→ SX(f) = SX(−f)

# The autocorrelation function of a band-limited (B) white noise process is given by

1. R(τ) = B × N0 ln(2Bτ)
2. R(τ) = B × N0 × τ × sin(2B)
3. R(τ) = B × N0 log(2Bτ)
4. R(τ) = B × N0 sinc(2Bτ)

Option 4 : R(τ) = B × N0 sinc(2Bτ)

## Random Processes MCQ Question 6 Detailed Solution

Concept:

The PSD is the Fourier transform of the autocorrelation function, i.e.

$$R(\tau) \;\overset{FT}{\leftrightarrow}\; S(f)$$

Application:

Band-limited white noise has a flat spectral density of height N0/2 over |f| ≤ B, i.e. a rectangular spectrum $$\frac{N_0}{2}\,\mathrm{rect}\!\left(\frac{f}{2B}\right)$$.

From the standard Fourier pair (obtained via the duality property),

$$2a\,\mathrm{sinc}(2at) \;\overset{FT}{\leftrightarrow}\; \mathrm{rect}\!\left(\frac{f}{2a}\right)$$

Replacing a = B and multiplying both sides by $$\frac{N_0}{2}$$, we get:

$$N_0 B\,\mathrm{sinc}(2B\tau) \;\overset{FT}{\leftrightarrow}\; \frac{N_0}{2}\,\mathrm{rect}\!\left(\frac{f}{2B}\right)$$

∴ The autocorrelation function is R(τ) = B × N0 sinc(2Bτ)

# If the autocorrelation function of a random process X(t) is R(τ), its power spectral density is:

1. S(f) = ln R(τ)
2. S(f) = exp(R(τ))
3. S(f) = R(τ)*R(τ)
4. $$S(f) = \int_{-\infty}^{\infty} R(\tau)\exp\{-(j2\pi f\tau)\}\,d\tau$$

Option 4 : $$S(f) = \int_{-\infty}^{\infty} R(\tau)\exp\{-(j2\pi f\tau)\}\,d\tau$$

## Random Processes MCQ Question 7 Detailed Solution

Analysis:

The power spectral density (PSD) of a WSS process is defined as the Fourier transform of its autocorrelation function:

$$S_X(f) = \int_{-\infty}^{\infty} R_x(\tau)\,e^{-j2\pi f\tau}\,d\tau$$

Note: The autocorrelation can be obtained from the PSD through the inverse transform:

$$R_x(\tau) = \int_{-\infty}^{\infty} S_X(f)\,e^{j2\pi f\tau}\,df$$

SX(f) is real and SX(f) ≥ 0
SX(f) = SX(−f)
RX(0) = ∫ SX(f) df = power of x(t)

# Spectral density of white noise:

1. is constant
2. varies with frequency
3. varies with bandwidth
4. varies with amplitude of the signal

Option 1 : is constant

## Random Processes MCQ Question 8 Detailed Solution

White noise is a signal whose frequency spectrum is uniform, i.e. it has a flat spectral density. The power spectral density (PSD) of white noise is uniform throughout the frequency spectrum.

# Cross-correlation function provides:

1. Information about the structure of only one signal
2. Information about the behavior of only one signal in the time domain
3. The measure of dissimilarities between two signals
4. The measure of similarities between two signals

Option 4 : The measure of similarities between two signals

## Random Processes MCQ Question 9 Detailed Solution

The correlation function measures the similarity between two random signals, i.e. how often the two random signals take similar values. Depending upon the signals we are correlating, there are two kinds of correlation.

Autocorrelation: a correlation in which the given signal is correlated with itself. Mathematically, this is defined as:

$$R_{xx}(\tau) = \int_{-\infty}^{\infty} x(t)\, x^*(t-\tau)\, dt$$

Cross-correlation: a correlation in which the signal in hand is correlated with another signal to understand how much they resemble each other, i.e. how similar they are. Mathematically, this is defined as:

$$R_{xy}(\tau) = \int_{-\infty}^{\infty} x(t)\, y(t-\tau)\, dt$$

# The autocorrelation of a wide-sense stationary random process is given by e^{−2|τ|}. The peak value of the spectral density is

1. 2
2. 1
3. e^{−1/2}
4. e

Option 2 : 1

## Random Processes MCQ Question 10 Detailed Solution

Concept:

The power spectral density is the Fourier transform of the autocorrelation function of the power signal, i.e.
$$S_x(f) = \mathcal{F}\{R_x(\tau)\}$$

Analysis:

Given the autocorrelation function of the random signal X(t):

$$R_X(\tau) = e^{-2|\tau|}$$

its power spectral density is obtained as:

$$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\,e^{-j2\pi f\tau}\,d\tau = \int_{-\infty}^{0} e^{2\tau}e^{-j2\pi f\tau}\,d\tau + \int_{0}^{\infty} e^{-2\tau}e^{-j2\pi f\tau}\,d\tau = \frac{1}{2(1-j\pi f)} + \frac{1}{2(1+j\pi f)} = \frac{1}{1+\pi^2 f^2}$$

The peak value of the PSD is at f = 0: SX(0) = 1

# Following are a few statements regarding noise in communication systems:

a) Atmospheric noise, shot noise, and solar noise are examples of external noise sources
b) Noise temperature is useful in dealing with UHF noise
c) Thermal agitation is the only source of noise in a receiver

Choose the correct answer:

1. Only b)
2. a) and b)
3. b) and c)
4. a) and c)

Option 1 : Only b)

## Random Processes MCQ Question 11 Detailed Solution

Statement a: Noise in communication systems is generally divided into two categories:
i) External noise, like atmospheric noise, industrial noise, solar noise, and cosmic noise.
ii) Internal noise, like shot noise, thermal noise, and low- and high-frequency noise.
Since shot noise is internal, statement a is false.

Statement b: Noise temperature is one way of expressing the level of available noise power introduced by a component or source. The power spectral density of the noise is expressed in terms of the temperature (in kelvins) that would produce that level of Johnson–Nyquist noise. It is useful in dealing with UHF. (Statement b is true.)

Statement c: Internal and external noises are both sources of noise in the receiver, so thermal agitation is not the only source. (Statement c is false.)

Note on thermal agitation:
• It is also called Johnson–Nyquist noise.
• It is distributed across the entire frequency spectrum, so it is also called white noise.
• Thermal noise is generally observed in passive components like R, L, and C, due to the random movement of charge carriers (electrons).
• The random noise power is Pn = kTB, where k is Boltzmann's constant, T is the temperature, and B is the bandwidth over which the noise is measured.

# A telephone exchange has 9000 subscribers. If the number of calls originating at peak time is 10,000 in one hour, the calling rate is

1. 0.9
2. 10/9
3. 0.81
4. 0.1

Option 2 : 10/9

## Random Processes MCQ Question 12 Detailed Solution

The calling rate is the number of calls per subscriber. Given that the number of subscribers = 9000 and the number of calls = 10,000:

Calling rate = 10,000/9000 = 10/9

# Let X(t) be a wide sense stationary (WSS) random process with power spectral density SX(f). If Y(t) is the process defined as Y(t) = X(2t − 1), the power spectral density SY(f) is

1. $$S_Y(f) = \frac{1}{2}S_X\!\left(\frac{f}{2}\right)e^{-j\pi f}$$
2. $$S_Y(f) = \frac{1}{2}S_X\!\left(\frac{f}{2}\right)e^{-j\pi f/2}$$
3. $$S_Y(f) = \frac{1}{2}S_X\!\left(\frac{f}{2}\right)$$
4. $$S_Y(f) = \frac{1}{2}S_X\!\left(\frac{f}{2}\right)e^{-j2\pi f}$$

Option 3 : $$S_Y(f) = \frac{1}{2}S_X\!\left(\frac{f}{2}\right)$$

## Random Processes MCQ Question 13 Detailed Solution

Concept:

The power spectral density (PSD) is the Fourier transform of the autocorrelation function of the WSS process.
For the Fourier transform pairs:

$$h(t) \;\overset{FT}{\to}\; H(f), \qquad h(at) \;\overset{FT}{\to}\; \frac{1}{a}H\!\left(\frac{f}{a}\right)$$

A shift in the time domain does not change the PSD.

Calculation:

Let the autocorrelation function be denoted as R(τ). Therefore,

$$R_x(\tau) \;\overset{FT}{\to}\; S_X(f)$$

Then,

$$Y(t) = X(2t-1) \;\Rightarrow\; R_y(\tau) = R_x(2\tau)$$

$$R_x(2\tau) \;\overset{FT}{\to}\; \frac{1}{2}S_x\!\left(\frac{f}{2}\right)$$

$$\therefore\; S_y(f) = \frac{1}{2}S_x\!\left(\frac{f}{2}\right)$$

# A random process X(t) is called ‘white noise’ if the power spectral density is equal to

1. $$\frac{\eta}{8}$$
2. $$\frac{\eta}{2}$$
3. $$\frac{3\eta}{4}$$
4. η

Option 2 : $$\frac{\eta}{2}$$

## Random Processes MCQ Question 14 Detailed Solution

Concept:

The power spectral density is the Fourier transform of the autocorrelation function of the power signal, i.e.

$$S_x(f) = F.T.\{R_x(\tau)\}$$

Also, the inverse Fourier transform of a constant function is a unit impulse.

Application:

The power spectral density of white noise is given by $$S_X(f) = \frac{\eta}{2}$$ for all frequencies f.

The autocorrelation is the inverse Fourier transform (IFT) of the power spectral density function; the IFT of this flat power spectrum is an impulse:

$$R_x(\tau) = \frac{\eta}{2}\,\delta(\tau)$$

# Let a random process Y(t) be described as Y(t) = h(t) ∗ X(t) + Z(t), where X(t) is a white noise process with power spectral density SX(f) = 5 W/Hz. The filter h(t) has a magnitude response given by |H(f)| = 0.5 for −5 ≤ f ≤ 5, and zero elsewhere. Z(t) is a stationary random process, uncorrelated with X(t), with power spectral density as shown in the figure. The power in Y(t), in watts, is equal to ________ W (rounded off to two decimal places).

## Random Processes MCQ Question 15 Detailed Solution

Concept:

Average power = the area under the power spectral density versus frequency curve.

Calculation:

Given y(t) = h(t) ∗ x(t) + z(t), and with z(t) uncorrelated with x(t):

Power in y(t) = Power[h(t) ∗ x(t)] + Power[z(t)]

H(f) band-limits x(t), so h(t) ∗ x(t) is band-limited to −5 Hz to +5 Hz.

$$\text{Power}[h(t)*x(t)] = \int_{-\infty}^{\infty} |H(f)|^2\, S_X(f)\, df = \int_{-5}^{5} (0.5)^2 \times 5\, df = \int_{-5}^{5} 1.25\, df = 10 \times 1.25 = 12.5\ \text{W}$$

Power[z(t)] = area under its power spectral density; from the figure this is a triangle of base 10 and height 1:

$$P_z = \frac{1}{2}(10)(1) = 5\ \text{W}$$

Total power = (5 + 12.5) W = 17.5 W

# Consider a random process X(t) = 3V(t) − 8, where V(t) is a zero mean stationary random process with autocorrelation $$R_V(\tau) = 4e^{-5|\tau|}$$. The power in X(t) is ________

## Random Processes MCQ Question 16 Detailed Solution

Concept:

The ACF is defined as: Rx(τ) = E[x(t) x(t + τ)] = E[x(t) x(t − τ)]

Properties of the ACF:

1. Rx(−τ) = Rx(τ)
2. Rx(0) = E[x²(t)] = power of x(t)

Calculation:

Given: x(t) = 3V(t) − 8, RV(τ) = 4e^{−5|τ|}, and E[V(t)] = 0.

Power of x(t) = E[x²(t)] = E[9V²(t) − 48V(t) + 64] = 9E[V²(t)] − 48E[V(t)] + 64

Now, E[V²(t)] = RV(0) = 4, so:

Power of x(t) = (9 × 4) − 0 + 64 = 100

# The Fourier transform of a Gaussian time pulse is

1. Uniform
2. A pair of impulses
3. Gaussian
4. Rayleigh

Option 3 : Gaussian

## Random Processes MCQ Question 17 Detailed Solution

The normalized Gaussian pulse is given as:

$$f(t) = K e^{-a t^2}$$

and its Fourier transform is:

$$F.T.\{f(t)\} = K\sqrt{\frac{\pi}{a}}\; e^{-\frac{\pi^2 f^2}{a}}$$

which is again of Gaussian form, $$B e^{-C f^2}$$, for constants B and C. So the amplitude spectrum has the same Gaussian shape as the time pulse.

# The power spectral density of the stationary noise process N(t), having auto-correlation Ruu(τ) = Ke^{−3|τ|}, is

1. $$\frac{3K}{3+\omega^2}$$
2. $$\frac{3K}{3-\omega^2}$$
3. $$\frac{6K}{9+\omega^2}$$
4. $$\frac{6K}{9-\omega^2}$$

Option 3 : $$\frac{6K}{9+\omega^2}$$

## Random Processes MCQ Question 18 Detailed Solution

Concept:

The Fourier transform of the autocorrelation Ruu(τ) is the spectral density of the process.

Analysis:

Given Ruu(τ) = Ke^{−3|τ|}, the spectral density is:

$$S_u(\omega) = \int_{-\infty}^{\infty} R_{uu}(\tau)\,e^{-j\omega\tau}\,d\tau = K\left[\int_{-\infty}^{0} e^{(3-j\omega)\tau}\,d\tau + \int_{0}^{\infty} e^{-(3+j\omega)\tau}\,d\tau\right] = K\left[\frac{1}{3-j\omega} + \frac{1}{3+j\omega}\right] = \frac{6K}{9+\omega^2}$$

Tips and tricks: remember the following pair to save time in the exam:

$$e^{-a|t|} \;\overset{FT}{\leftrightarrow}\; \frac{2a}{a^2+\omega^2} \quad\Rightarrow\quad Ke^{-3|\tau|} \;\overset{FT}{\leftrightarrow}\; \frac{6K}{9+\omega^2}$$

# If the covariance of two random variables X and Y is μXY, their correlation coefficient ρXY is:

1. ln μXY
2. $$\ln\left(\frac{\mu_{XY}}{\sigma_X \sigma_Y}\right)$$
3. $$\frac{\sigma_X \sigma_Y}{\mu_{XY}}$$
4. $$\frac{\mu_{XY}}{\sigma_X \sigma_Y}$$

Option 4 : $$\frac{\mu_{XY}}{\sigma_X \sigma_Y}$$

## Random Processes MCQ Question 19 Detailed Solution

Concept:

The correlation coefficient ρxy of two random variables X and Y is given as:

$$\rho_{xy} = \frac{\mathrm{COV}[X,Y]}{\sqrt{\mathrm{Var}(x)\,\mathrm{Var}(y)}} \quad \ldots (1)$$

where COV[X, Y] = E[XY] − E[X]E[Y], Var(x) = σx², and Var(y) = σy².

Analysis:

Given COV[X, Y] = μxy, equation (1) gives:

$$\rho_{xy} = \frac{\mu_{xy}}{\sqrt{\sigma_x^2 \sigma_y^2}} = \frac{\mu_{xy}}{\sigma_x \sigma_y}$$

# The power spectral density of a real stationary random process X(t) is given by $$S_X(f) = \begin{cases} \frac{1}{W}, & |f| \le W \\ 0, & |f| > W \end{cases}$$ The value of the expectation $$E\left[\pi X(t) X\!\left(t - \frac{1}{4W}\right)\right]$$ is ______

## Random Processes MCQ Question 20 Detailed Solution

Concept:

The autocorrelation Rx(τ) is defined as Rx(τ) = E[x(t) x(t + τ)] = E[x(t) x(t − τ)], and the Fourier transform of Rx(τ) is the power spectral density: Rx(τ) ↔ Sx(f).

Analysis:

With τ = 1/(4W),

$$E\left[\pi\, x(t)\, x\!\left(t - \frac{1}{4W}\right)\right] = \pi\, R_x\!\left(\frac{1}{4W}\right)$$

Taking the inverse Fourier transform of the rectangular PSD:

$$R_x(\tau) = \frac{1}{W}(2W)\,\mathrm{sinc}(2W\tau) = 2\,\mathrm{sinc}(2W\tau)$$

Hence:

$$\pi\, R_x\!\left(\frac{1}{4W}\right) = 2\pi\,\mathrm{sinc}\!\left(\frac{1}{2}\right) = 2\pi\, \frac{\sin(\pi/2)}{\pi/2} = 4$$
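As a quick numerical sanity check of the Question 10 result (an illustration added alongside the worked solution, not part of it): the peak of the PSD sits at f = 0, where S(0) is just the integral of the autocorrelation, which R's one-dimensional quadrature evaluates directly:

# Peak of the PSD of R(tau) = exp(-2|tau|) occurs at f = 0,
# where S(0) = integral of R(tau) over all tau
S0 <- integrate(function(tau) exp(-2 * abs(tau)), -Inf, Inf)
S0$value   # returns 1, matching Option 2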
2021-11-27 05:14:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7967126369476318, "perplexity": 2116.595964646397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358118.13/warc/CC-MAIN-20211127043716-20211127073716-00038.warc.gz"}
https://docs.dgl.ai/en/latest/guide/graph-basic.html
# 1.1 Some Basic Definitions about Graphs (Graphs 101)

A graph $$G=(V, E)$$ is a structure used to represent entities and their relations. It consists of two sets – the set of nodes $$V$$ (also called vertices) and the set of edges $$E$$ (also called arcs). An edge $$(u, v) \in E$$ connecting a pair of nodes $$u$$ and $$v$$ indicates that there is a relation between them. The relation can either be undirected, e.g., capturing symmetric relations between nodes, or directed, capturing asymmetric relations. For example, if a graph is used to model the friendship relations of people in a social network, then the edges will be undirected as friendship is mutual; however, if the graph is used to model how people follow each other on Twitter, then the edges are directed. Depending on the edges’ directionality, a graph can be directed or undirected.

Graphs can be weighted or unweighted. In a weighted graph, each edge is associated with a scalar weight. For example, such weights might represent lengths or connectivity strengths.

Graphs can also be either homogeneous or heterogeneous. In a homogeneous graph, all the nodes represent instances of the same type and all the edges represent relations of the same type. For instance, a social network is a graph consisting of people and their connections, representing the same entity type. In contrast, in a heterogeneous graph, the nodes and edges can be of different types. For instance, the graph encoding a marketplace will have buyer, seller, and product nodes that are connected via wants-to-buy, has-bought, is-customer-of, and is-selling edges. The bipartite graph is a special, commonly-used type of heterogeneous graph, where edges exist between nodes of two different types. For example, in a recommender system, one can use a bipartite graph to represent the interactions between users and items. For working with heterogeneous graphs in DGL, see 1.5 Heterogeneous Graphs.

Multigraphs are graphs that can have multiple (directed) edges between the same pair of nodes, including self loops. For instance, two authors can coauthor a paper in different years, resulting in edges with different features.
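These distinctions are library-agnostic. As a small, purely illustrative sketch (written with R's igraph rather than DGL's own Python API, so the function names below are igraph's, not DGL's), a directed, weighted multigraph with a self loop can be built in a few lines:

library(igraph)

# Directed edges, including a repeated pair (A->B twice) and a self loop (C->C);
# simplify=FALSE keeps the duplicate edge and the loop
mg <- graph_from_literal(A -+ B, A -+ B, B -+ C, C -+ C, simplify = FALSE)
E(mg)$weight <- c(1.0, 2.5, 0.7, 3.1)   # scalar edge weights

is_directed(mg)   # TRUE
any_multiple(mg)  # TRUE: multiple edges between the same node pair
which_loop(mg)    # flags the C->C self loop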
2021-11-27 01:51:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7788663506507874, "perplexity": 456.49485921460797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00414.warc.gz"}
https://zh-tw.osdn.net/projects/freshmeat_fidocadj/
## Last updated: 2014-04-30 20:29

### Project Description

FidoCadJ is a very easy-to-use editor, with a library of electrical symbols and footprints (through hole and SMD). Drawings can be exported in several graphic formats (PDF, EPS, PGF for LaTeX, SVG, PNG, and JPEG). Although very simple and not relying on any netlist concept, FidoCadJ can be considered a basic electronic design automation program.

FidoCadJ uses a file format containing only UTF-8 text, which is very compact and suited for copying and pasting with newsgroups and forum messages. This determined its success, as it is quite versatile for simple mechanical drawings as well as for electronics.

### Ratings

Average rating: 4.7 out of 5, from 3 ratings (5 stars: 2, 4 stars: 1).
2022-05-25 16:44:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012661933898926, "perplexity": 7846.070761306556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662588661.65/warc/CC-MAIN-20220525151311-20220525181311-00632.warc.gz"}
http://mathhelpforum.com/geometry/8570-help-solving.html
1. ## Help with solving.

What are the steps to solving this problem???

The second problem was written wrong. It should have been: If CD = 15, what is the length of AE?

I now understand how to do the first problem. But how do I apply it to solving the second one? I changed the image to show the correct question.

2. Originally Posted by Ash
What are the steps to solving these problems???

Take the problem on the left. Triangles CBA and CED are similar since angle C is shared, angles CED and CBA are right, and angles EDC and BAC are equal. So

$\frac{BA}{ED} = \frac{BC}{CE}$

$\frac{30}{10} = \frac{CE + 30}{CE}$

$3 CE = CE + 30$

$2 CE = 30$

$CE = 15$

The second problem is supposed to be done using the same principle, but the diagram seems to conflict with the problem? (CD is the whole base, but 10 seems to be only a part of it?)

-Dan

3. Originally Posted by topsquark
Take the problem on the left. Triangles CBA and CED are similar since angle C is shared, angles CED and CBA are right, and angles EDC and BAC are equal. So

$\frac{BA}{ED} = \frac{BC}{CE}$

$\frac{30}{10} = \frac{CE + 30}{CE}$

$3 CE = CE + 30$

$2 CE = 30$

$CE = 15$

The second problem is supposed to be done using the same principle, but the diagram seems to conflict with the problem? (CD is the whole base, but 10 seems to be only a part of it?)

-Dan

Indeed, as with the first I think he meant BA = 30, not BE. In the second, if DE = 10, then CD can't possibly be 10, so something is wrong.

4. Originally Posted by AfterShock
Indeed, as with the first I think he meant BA = 30, not BE. In the second, if DE = 10, then CD can't possibly be 10, so something is wrong.

On the second figure it was written wrong. It's supposed to be: If CD = 15, what is the length of AE?
2018-02-24 09:02:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8258976936340332, "perplexity": 846.7123521185156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00506.warc.gz"}
https://cs.stackexchange.com/questions/75304/combine-the-following-ip-addresses-into-a-single-block
# Combine the following IP Addresses into a single block

I was asked to combine the IP addresses into a single block: 16.27.24.0/26, 16.27.24.64/26, 16.27.24.128/25

I managed to convert the given IP addresses into binary:

00010000.00011011.00011000.00000000 | 00011010
00010000.00011011.00011000.01000000 | 00011010
00010000.00011011.00011000.10000000 | 00011001

I am not quite sure what needs to be done in order to convert it properly. I believe that in this link it is explained what is supposed to be done, and I should do it the way it is explained there. I am not able to access it, but whatever is shown in the preview is probably the way it would be gone about. Please help. Thanks

• Don't convert the netmask to binary as well, that just makes it complicated. – orlp May 13 '17 at 11:59

The address is just a 32-bit binary number. E.g. yours:

00010000.00011011.00011000.00000000
00010000.00011011.00011000.01000000
00010000.00011011.00011000.10000000

However, these addresses are very precise, and without any further information point to exactly one address. But we want to be more general, and be able to talk about a whole group of addresses. That is a block, and what the netmask is for. When I use a netmask of /n, it means 'only look at the first n bits of this address, those are meaningful, the rest can be anything'. Think of it like this: the address we give is full, but then we strike out the most specific parts that can be anything (the struck-out parts are shown in brackets below):

United States, Middleberge, [Otter Lane 5322]

In your case, the first 26 or 25 bits matter, so we strike through the last 6 or 7 bits:

00010000.00011011.00011000.00[000000]
00010000.00011011.00011000.01[000000]
00010000.00011011.00011000.1[0000000]

But we still need to combine the blocks. Note that the top two lines have the exact same start, but then one ends in a 0 followed by anything, and the other in a 1 followed by anything. But since that exhaustively covers everything, we can just extend the 'anything' part one up to combine them:

00010000.00011011.00011000.0[0000000]
00010000.00011011.00011000.1[0000000]

Here we can do the exact same:

00010000.00011011.00011000.[00000000]

We can now convert our IP address back to decimal, giving 16.27.24.0. The netmask is 32 - 8 = 24 (because we strike out 8 binary digits). So the full IP address + netmask is 16.27.24.0/24.
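The same aggregation can be checked mechanically. A quick sketch using Python's standard ipaddress module (not part of the original answer, just a convenient cross-check):

```python
import ipaddress

nets = [ipaddress.ip_network(s) for s in
        ("16.27.24.0/26", "16.27.24.64/26", "16.27.24.128/25")]

# collapse_addresses merges adjacent and overlapping networks into supernets
print(list(ipaddress.collapse_addresses(nets)))
# -> [IPv4Network('16.27.24.0/24')]
```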
2020-01-17 17:41:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31178227066993713, "perplexity": 743.8612859701509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589861.0/warc/CC-MAIN-20200117152059-20200117180059-00557.warc.gz"}
http://aas.org/archives/BAAS/v32n4/aas197/820.htm
AAS 197, January 2001 Session 111. Galaxy Morphology and Structure Display, Thursday, January 11, 2001, 9:30-4:00pm, Exhibit Hall

## [111.11] Chandra Observations of the X-Ray Point Source Population in Centaurus A

J.M. Kregenow (Wittenberg University), R.P. Kraft, C. Jones, W.R. Forman, S.S. Murray (CFA), High Energy Division

We present the results from a study of the X-ray point source population in two Chandra observations of the nearby radio galaxy Centaurus A (NGC 5128). Using a wavelet decomposition detection algorithm, we detect 246 individual point sources above a limiting flux of $1.34\times10^{-15}$ erg/cm$^2$/s, which corresponds to a luminosity of $1.96\times10^{36}$ erg/s. Of the 246 sources detected, 82 are detected in both data sets where the fields of view overlap. We positively identify 8 foreground stars in our observations, and estimate approximately 15% to 20% of the sources to be background AGN not associated with Centaurus A. The remaining ~200 sources, likely associated with the galaxy, are probably X-ray binaries and supernova remnants. We identify 11 with known globular clusters, and 41 as possible transient or variable sources. We find that the population of X-ray point sources in Centaurus A, a merged elliptical and spiral galaxy with an active nucleus, is not significantly different from that of M31 in both spatial distribution and luminosity range. We also detect in one observation a possible super-Eddington X-ray transient previously detected by ROSAT. This project was supported by NASA contract NAS8-38248, and the observations were made as part of the HRC GTO program. The research was conducted, in part, through the Harvard-Smithsonian CfA Undergraduate Summer Intern Program as part of the NSF REU program.
2014-08-27 09:49:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8340167999267578, "perplexity": 4584.0523153103795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500828050.28/warc/CC-MAIN-20140820021348-00173-ip-10-180-136-8.ec2.internal.warc.gz"}
http://anthonypan.com/posts/data-wrangling-strings-r
# Data Wrangling in R: Strings

In which we dive into string manipulation, with a focus on regular expressions.

February 18, 2020 - 8 minute read -

In this post we'll be taking a look at basic string functions, regular expression syntax, and several applications of regular expressions in string manipulation.1 The RStudio String Manipulation Cheat Sheet has great reference material on this topic in a condensed format.

### String Basics

• str_length() returns the number of characters in a string
• str_replace_na() turns missing values (NA) into "NA"
• str_c(..., sep = ",") combines two or more strings with a specified separator
• str_c(..., collapse = ", ") collapses a vector of strings into a single string with a specified separator
• str_sub() extracts or modifies subsets of a string by character position
• str_sort() sorts strings; specify locale if necessary
• str_to_lower(), str_to_upper(), str_to_title() change case; specify locale if necessary

## Regular Expressions - Basic Syntax

#### Anchors

• ^ matches the start of a string
• $ matches the end of a string

#### Repetition

• ? is 0 or 1
• + is 1 or more
• * is 0 or more
• {n} is exactly n
• {n,} is n or more
• {,m} is at most m
• {n,m} is between n and m, inclusive
• \1, \2 backreference previous text in parentheses and search for the same pattern

Note: By default, these are greedy and match the longest string possible; make them lazy by putting a ? after them.

## Applying Regular Expressions in R

It's important to note that in order for your regular expressions to work in R, you must add an additional backslash, \ , to all existing backslashes in your expression. This is because backslashes have their own meaning in R strings. You can also search for patterns using OR logic using the pipe: |.

Let's go through some simple applications of regular expressions using the two libraries below. The two datasets that I will use as examples are words, a character vector of 980 common words, and sentences, a character vector of 720 sentences.

### 1. Detecting Matches: str_detect(), str_subset(), and str_count()

str_detect() searches for a pattern in a string and returns TRUE or FALSE. str_subset() keeps strings matching a pattern. str_count() counts the number of matches there are in a string.

### 2. Extracting Matches: str_extract() and str_extract_all()

str_extract() and str_extract_all() extract the actual text of a match.

### 3. Grouped Matches: str_match() and str_match_all()

str_match() and str_match_all() are very similar to the previous string extracting functions – they extract a matching pattern from a vector, but also return each individual component by returning a matrix with one column for the complete match followed by one column for each group. extract() does the same thing, but is especially useful for tibbles (as opposed to vectors). It will add additional columns to the tibble for each grouped match.

### 4. Replacing Matches: str_replace() and str_replace_all()

str_replace() will replace the first occurrence of a match, while str_replace_all() will replace all occurrences.

### 5. Splitting Matches: str_split()

str_split() will split strings based on a pattern.

### 6. Finding the Positions of Matches: str_locate(), str_locate_all()

str_locate() and str_locate_all() return the starting and ending positions of each match. When none of the other functions do what you want, you may want to locate the positions of the matching patterns, then use str_sub() to extract/modify them.
### …A final note about regular expressions:

In the examples above, the pattern matching string is automatically wrapped into a call to regex(). We can explicitly call the regex() function to change case matching, search over multiple lines, or add comments for readability.

## Applying Pattern Matching Without Regular Expressions

All of the functions that we've looked at apply pattern matching via regular expressions by default. However, it is possible to override the pattern matching type by explicitly specifying one of three functions in place of regex():

1. fixed() matches the exact specified sequence of bytes, ignoring all special regular expression characters. It is much faster than regular expressions, but be careful with non-English data.
2. coll() compares strings using standard collation rules. This is useful for doing case insensitive matching, but is slower than the other functions.
3. boundary() can match boundaries, such as characters, words, or sentences.

1. This post is meant for a person who is looking for a refresher on string manipulation and regular expressions in R. The content in this post is based on chapter fourteen of R for Data Science by Hadley Wickham & Garrett Grolemund, which I would recommend reading for in-depth examples.
2020-04-02 05:22:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3324907422065735, "perplexity": 2832.7007936114155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00559.warc.gz"}
http://energyresources.asmedigitalcollection.asme.org/article.aspx?articleid=1414655
RESEARCH PAPERS

# Combination of a Biomass Fired Updraft Gasifier and a Stirling Engine for Power Production

Author and Article Information: Jeng-Chyan Muti Lin, Research Institute of Information and Electrical Energy, National Chinyi Institute of Technology, [email protected]

J. Energy Resour. Technol 129(1), 66-70 (Jul 20, 2006) (5 pages) doi:10.1115/1.2424963

History: Received November 10, 2005; Revised July 20, 2006

## Abstract

Biomass is the largest renewable energy source used in the world and its importance grows larger in the future energy market. Since most biomass sources are low in energy density and are widespread in space, a small scale biomass conversion system is therefore more competitive than a large stand-alone conversion plant. The current study proposes a small scale solid biomass powering system to explore the viability of direct coupling of an updraft fixed bed gasifier with a Stirling engine. The modified updraft fixed bed gasifier employs an embedded combustor inside the gasifier to fully combust the syngas generated by the gasifier. The flue gas produced by the syngas combustion inside the combustion tube is piped directly to the heater head of the Stirling engine. The engine will then extract and convert the heat contained in the flue gas into electricity automatically. Output depends on heat input and the heat input is proportional to the flow rate and temperature of the flue gas. The preliminary study of the proposed direct coupling of an updraft gasifier with a 25 kW Stirling engine demonstrates that full power output could be produced by the current system. It could be found from the current investigation that very little attention and no assisting fuel are required to operate the current system. The proposed system could be considered as a feasible solid biomass powering technology.

## Figures

Figure 1 The schematic of the direct coupling of the updraft gasifier with a Stirling engine
Figure 2 Snapshot of the modified fixed bed gasifier fed by wood chips with combustion taking place in the embedded combustor
Figure 3 Represented flame temperature history at the exit of the embedded combustor
Figure 4 Snapshot of the direct coupling of an updraft gasifier with a Stirling engine
Figure 5 Snapshot of the heat exchangers on a STM Stirling engine
Figure 6 A PLC based LABVIEW monitoring system to automatically and concurrently acquire and store the system thermal and power data
Figure 7 Represented temperature histories at the entrance and at the exit of the Stirling engine
Figure 8 Power output history from the current study
2018-11-22 11:24:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17984938621520996, "perplexity": 4574.775664769822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746205.96/warc/CC-MAIN-20181122101520-20181122123202-00002.warc.gz"}
https://tohanuqidakas.mcgivesback.com/transmission-lines-and-wave-propagation-book-7615db.php
# Transmission lines and wave propagation

Publisher: CRC Press in Boca Raton, FL. Written in English.

## Subjects:

• Electric lines
• Electromagnetic waves

## Edition Notes

Classifications / The Physical Object: Statement Philip C. Magnusson ... [et al.]. Contributions Magnusson, Philip Cooper. LC Classifications TK3221 .M24 2001. Pagination 519 p. Number of Pages 519. Open Library OL18161511M. ISBN 10 0849302692. LC Control Number 00048642.

TRANSMISSION LINES PART II, DR. FARID FARAHMAND, FALL: Wave Equations for Transmission Line. Impedance and Shunt Admittance of the line. Solution of Wave Equations (cont.). Propagation Constant (function of frequency). Impedance (function of frequency). Lossy or Lossless. File Size: 3MB.

Early chapters cover pulse propagation, sinusoidal waves and coupled lines, all set within the context of a simple lossless equivalent circuit. Later chapters then develop this basic model by demonstrating the derivation of circuit parameters, and the use of Maxwell's equations to extend this theory to major transmission lines. (Cambridge University Press)

Pre-book Pen Drive and G Drive at GATE ACADEMY launches its products for GATE/ESE/UGC-NET aspirants. Postal study course - https://gatea.

Transmission Lines:
• Transmission Lines
• Transmission Line Equations
• Solution to Transmission Line Equations
• Forward Wave
• Forward + Backward Waves
• Power Flow
• Reflections
• Reflection Coefficients
• Driving a line
• Multiple Reflections
• Transmission Line Characteristics
• Summary
(E Analysis of Circuits. File Size: 1MB.)

Introduction to Wave Propagation, Transmission Lines, and Antennas: Leakage Current: LENGTH OF A TRANSMISSION LINE. A transmission line is considered to be electrically short when its physical length is short compared to a quarter-wavelength (λ/4) of the energy it is to carry.

Microwave Devices, Circuits and Subsystems for Communications Engineering provides a detailed treatment of the common microwave elements found in modern microwave communications systems. The treatment is thorough without being unnecessarily mathematical. The emphasis is on acquiring a conceptual understanding of the techniques and technologies discussed and the practical ...

Transmission Lines — a review and explanation. An apology: 1. We must quickly learn some foundational material on transmission lines. It is described in the book and in much of the literature in a highly mathematical way. DON'T GET LOST IN THE MATH! We want to use the Smith chart to cut through the boring math — but must. File Size: KB.

let's consider this image (from AAC book, see here). The author wanted to show that a transmission line propagates not only voltage and current waves, but also an .

Transmission lines with a very small cross section compared to the wavelength, on which the dominant mode of propagation is the transverse electromagnetic mode (TEM). Closed rectangular and cylindrical conducting tubes on which the dominant modes of propagation are the transverse electric mode and transverse magnetic mode (TE and TM modes).

Free 2-day shipping. Buy Transmission Lines and Wave Propagation (Hardcover) at ce: $. If you have any doubts please refer to the JNTU Syllabus Book. Unit: Need for Transmission Lines, Types of Transmission lines, Characterization in terms of primary and secondary constants, Characteristic impedance, General wave equation, Lossless propagation, Propagation constant, Wave reflection at discontinuities, Voltage standing wave ratio.
## Transmission lines and wave propagation

Summary: Transmission Lines and Wave Propagation, Fourth Edition helps readers develop a thorough understanding of transmission line behavior, as well as their advantages and limitations. Developments in research, programs, and concepts since the first edition presented a demand for a version that reflected these advances.

CHAPTER 2. TRANSMISSION LINES. Key concepts developed include: wave propagation, standing waves, and power transfer. Returning to Figure, we note that sinusoidal steady-state is implied as the source voltage is the phasor $V_g$, the source impedance $Z_g$ is.

Communications-Electronics Fundamentals: Wave Propagation, Transmission Lines, and Antennas - Kindle edition by U.S. Department of Defense. Download it once and read it on your Kindle device, PC, phones or tablets. Use features like bookmarks, note taking and highlighting while reading Communications-Electronics Fundamentals: Wave Propagation, Transmission Lines, and Antennas.

TC COMMUNICATIONS-ELECTRONICS FUNDAMENTALS: Wave Propagation, Transmission Lines, and Antennas. JULY. DISTRIBUTION RESTRICTION: Approved for public release; distribution is unlimited. HEADQUARTERS. File Size: 4MB.

Radio and Line Transmission, Volume 2 gives a detailed treatment of the subject as well as an introduction to additional advanced subject matter. Organized into 14 chapters, this book begins by explaining radio wave propagation, signal frequencies, and bandwidth.

Note that $$\alpha=0$$ for a wave that does not diminish in magnitude with increasing distance, in which case the transmission line is said to be lossless. If $$\alpha>0$$ then the line is said to be lossy (or possibly "low loss" if the loss can be neglected), and in this case the rate at.

Teaching transmission lines and wave propagation is a challenging task because it involves quantities not easily observable and also because the underlying mathematical equations—functions of time, distance and using complex numbers—are not prone to an easy physical interpretation in a frequent framework of a superposition of traveling waves in distinct directions. (Susana Mota, Armando Rocha)

Solutions Manual for Transmission Lines and Wave Propagation, Fourth Edition.

Elements of electrostatics, magnetostatics, time-varying electric and magnetic fields, wave propagation through unbounded and bounded mediums and the transmission lines theory are covered concisely to give readers a sound introduction to the subject and its engineering applications.
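Several of the listings above mention the line's primary constants (R, L, G, C) and the secondary constants derived from them, including the propagation constant γ = α + jβ whose α appears in the lossless/lossy note. A minimal sketch of those standard relations in Python; the numeric constants are purely illustrative assumptions, not values from any listing:

```python
import numpy as np

# primary constants per unit length (illustrative values)
R, L, G, C = 0.1, 250e-9, 1e-6, 100e-12   # ohm/m, H/m, S/m, F/m
f = 100e6                                  # operating frequency, Hz
w = 2 * np.pi * f

gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))  # propagation constant
Z0 = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))     # characteristic impedance

alpha, beta = gamma.real, gamma.imag  # attenuation and phase constants
print(alpha, beta, Z0)                # alpha = 0 would mean a lossless line
```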
Developments in research, programs, and concepts since the first edition presented a demand for a version that reflected these advances.

PREFACE. INTRODUCTION. Transmission Systems: General Comments. The Circuit-Theory Approach to Transmission-Line Analysis. Traveling-Wave Fields - Lines and Waveguides. Closure. WAVE PROPAGATION ON AN INFINITE LOSSLESS LINE. Partial Differential Equations of Lossless Line. Traveling-Wave Solutions to the Wave Equation.

Principles of Electrical Transmission Lines in Power and Communication: If the duration of the applied p.d. is short compared with the time of wave propagation over the section of the system under consideration, one is in the realm of travelling-wave theory, or "surge phenomena" as the power engineer calls the subject of this study.

A guide to transmission lines and wave propagation. This revised edition features a new chapter on coupled structures, discussion of insulation materials, and uses the fast Fourier Transform to refine the approximation of transmission line response to step-function excitation.

In radio-frequency engineering, a transmission line is a specialized cable or other structure designed to conduct alternating current of radio frequency, that is, currents with a frequency high enough that their wave nature must be taken into account. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas (they are then called feed lines).

Introduces wave propagation, transmission lines, and antenna theory. Topic 1 discusses wave motion, sound-wave terminology, light waves, properties of electromagnetic waves and the electromagnetic spectrum. Topic 2 discusses radio-wave propagation, including components of radio waves.

Early chapters cover pulse propagation, sinusoidal waves and coupled lines, all set within the context of a simple lossless equivalent circuit. Later chapters then develop this basic model by demonstrating the derivation of circuit parameters, and the use of Maxwell's equations to extend this theory to major transmission lines. (Richard Collier)

Transmission Lines and Wave Propagation, Fourth Edition helps readers develop a thorough understanding of transmission line behavior, as well as their advantages and limitations. Developments in research, programs, and concepts since the first edition presented a demand for a version that reflected these advances. Extensively revised, the fourth edition of this bestselling text.

Books for electromagnetic theory are: Indian Author: 1. Elements of Electromagnetics -- N.O. Sadiku 2. Principles of Electromagnetics -- R.G. Kaduskar 3. Principles of Electromagnetics -- S.C. Mahapatra. Foreign Author: 1. Electromagnetics with applic.

Assuming $E(x,t) = 2\cos(3\times10^{15}t - 10^7x)$ V/m, calculate the wave velocity. Assume we have a transmission line in which air separates the two perfect conductors. Assume the impedance of the line is 50 ohm and the phase constant is 20 (rad/m). File Size: 1MB.

Hon Tat Hui, Transmission Lines – Basic Theories, NUS/ECE EE: Example 1. A Ω transmission line is connected to a load consisting of a Ω resistor in series with a pF capacitor. (a) Find the reflection coefficient Г.

Electromagnetic Field Theory and Transmission Lines is an ideal textbook for a single semester, first course on Electromagnetic Field Theory (EMFT) at the undergraduate level.
This book uses plain and simple English, diagrammatic representations and real life examples to explain the fundamental concepts, notations, representation and principles that govern the field of EMFT.

Transmission Line Theory. Introduction: Our analysis of transmission lines will include the derivation of the transmission line equations. The electric and magnetic fields on the line are transverse to the direction of wave propagation. An important property of TEM waves is that the fields E and H are uniquely related to voltage V and current I. File Size: 1MB.

Transmission Lines And Waveguides. Technical Publications. The book will be very much useful not only to the students but also. (Index terms: ratio, substituting value, susceptance, TE10 mode, TM waves, transmission line, unit length, voltage and current, voltage minima, VSWR, wave impedance, wave propagation, wavelength is given.)

Transmission Lines and Wave Propagation, 3rd Edition addresses this broad topic as a text for upper division students in electrical engineering. It contains a wealth of information that makes it an essential reference volume for libraries and professionals.

In these cases, we can say that the transmission lines in question are electrically short, because their propagation effects are much quicker than the periods of the conducted signals. By contrast, an electrically long line is one where the propagation time is a large fraction or. (Author: Tony R. Kuphaldt)

Need for Transmission Lines, Types of Transmission lines, Characterization in terms of primary and secondary constants, Characteristic impedance, General wave equation, Lossless propagation, Propagation constant, Wave reflection at discontinuities, Voltage standing wave ratio, Transmission line of finite length, The Smith Chart, Smith Chart calculations for lossy lines, Impedance matching by.

EEE RF TL Waves & Impedances: For a wave reflecting from a dielectric or conducting boundary, transmitted and reflected waves are required to satisfy all the boundary conditions. Waves can exist traveling independently in either direction on a linear transmission line. File Size: KB.

Transmission Lines and Wave Propagation book: These results are helpful reference points from which to survey the multiparameter transmission line. Series expansions prove useful in showing quantitatively the effects caused by small deviations from the respective limiting case. (Philip C. Magnusson, Gerald C. Alexander, Vijai K. Tripathi, Andreas Weisshaar)

Transmission Lines: A transmission line connects a generator to a load. Transmission lines include: propagation. Example of TEM Mode: Electric Field E is radial. Wave Equations for Transmission Line. Impedance and Shunt Admittance of the line. Solution of Wave Equations (cont.).

A simple, concise and completely general way to present the wave propagation on transmission lines, including a thorough study of the line equations in characteristic form; frequency and time domain multiport representations of any linear transmission line.
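The reflection-coefficient example quoted from the NUS lecture notes above lost its numeric values, but the calculation itself is the standard Γ = (Z_L − Z_0)/(Z_L + Z_0). A sketch with assumed values (Z0, R, C and f below are made up purely for illustration):

```python
import numpy as np

Z0 = 50.0                        # line impedance, ohms (assumed)
R, C, f = 75.0, 10e-12, 100e6    # series R-C load and frequency (assumed)

w = 2 * np.pi * f
ZL = R + 1 / (1j * w * C)        # impedance of a resistor in series with a capacitor
gamma = (ZL - Z0) / (ZL + Z0)    # reflection coefficient

print(abs(gamma), np.degrees(np.angle(gamma)))  # magnitude and phase in degrees
```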
2021-08-03 11:48:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4879922866821289, "perplexity": 2576.299658996103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154457.66/warc/CC-MAIN-20210803092648-20210803122648-00061.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/given-second-order-prototype-characteristics-equation-s2-2-omegans-omegan2-0-desired-find--q1142458
Image text transcribed for accessibility: Given the second-order prototype characteristic equation: $s^2 + 2\zeta\omega_n s + \omega_n^2 = 0$. It is desired to find a set of admissible solutions (this is a region of space in the s-plane) that have the following properties: $20 < P.O. < 25$ (percent overshoot), $1.5 < T_s < 2.0$ sec, settling band is < 5% (use #TC = 3). Show the admissible solution as an area (polygon) in the s-plane. This admissible area could conceptually be correlated with the tolerance of components of a system. Hints: The plot region should be defined by axis([-4 0 0 6]) so everyone gets the same plot. Use the relation $\zeta = \cos\theta$ to define the P.O. boundaries. The result should "look like" (but is not) the following graph: the angled blue lines correspond to one overshoot locus, the black lines to the other. The horizontal lines correspond to the 1.5 sec and 2.0 sec settling time loci for a given damping ratio; the colors of the horizontal lines match the angled line colors. You need to indicate the admissible region on the plot by coloring it in (by hand is acceptable). Think carefully as to which region meets the specifications given above.
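One way to compute the four boundary loci is sketched below in Python (an assumption; the axis([-4 0 0 6]) hint suggests the course expects MATLAB). The P.O.-to-ζ relation follows from the standard second-order step response, and with a 5% band and #TC = 3 the settling-time loci sit at Re(s) = −3/Ts:

```python
import numpy as np
import matplotlib.pyplot as plt

def zeta_from_po(po_percent):
    # invert P.O. = 100 * exp(-pi*zeta / sqrt(1 - zeta^2))
    L = np.log(po_percent / 100.0)
    return -L / np.sqrt(np.pi**2 + L**2)

z1, z2 = zeta_from_po(25.0), zeta_from_po(20.0)  # zeta bounds from the P.O. bounds
s1, s2 = 3.0 / 2.0, 3.0 / 1.5                    # zeta*wn = 3/Ts for Ts = 2.0, 1.5 s

im = np.linspace(0.0, 6.0, 200)
for z, c in ((z1, "b"), (z2, "k")):
    # constant-damping ray: zeta = cos(theta), so Re(s) = -Im(s)*zeta/sqrt(1-zeta^2)
    plt.plot(-z / np.sqrt(1 - z**2) * im, im, c)
for s in (s1, s2):
    plt.axvline(-s, linestyle="--")              # constant settling-time locus
plt.axis([-4, 0, 0, 6])
plt.xlabel("Re(s)"); plt.ylabel("Im(s)")
plt.show()
```

The admissible polygon is the region bounded between the two constant-ζ rays and between the two settling-time loci.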
2015-01-31 23:23:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8618656396865845, "perplexity": 1282.2518939556962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115861162.19/warc/CC-MAIN-20150124161101-00073-ip-10-180-212-252.ec2.internal.warc.gz"}
http://blog.manfredas.com/tag/tutorial/
# Expectation-Maximization Algorithm for Bernoulli Mixture Models (Tutorial)

Even though the title is quite a mouthful, this post is about two really cool ideas:

1. A solution to the "chicken-and-egg" problem (known as the Expectation-Maximization method, described by A. Dempster, N. Laird and D. Rubin in 1977), and
2. An application of this solution to automatic image clustering by similarity, using Bernoulli Mixture Models.

For the curious, an implementation of the automatic image clustering is shown in the video below. The source code (C#, Windows x86/x64) is also available for download!

Automatic clustering of handwritten digits from MNIST database using Expectation-Maximization algorithm

While automatic image clustering nicely illustrates the E-M algorithm, E-M has been successfully applied in a number of other areas: I have seen it being used for word alignment in automated machine translation, valuation of derivatives in financial models, and gene expression clustering/motif finding in bioinformatics.

As a side note, the notation used in this tutorial closely matches the one used in Christopher M. Bishop's "Pattern Recognition and Machine Learning". This should hopefully encourage you to check out his great book for a broader understanding of E-M, mixture models or machine learning in general.

Alright, let's dive in!

#### 1. Expectation-Maximization Algorithm

Imagine the following situation. You observe some data set $\mathbf{X}$ (e.g. a bunch of images). You hypothesize that these images are of $K$ different objects... but you don't know which images represent which objects. Let $\mathbf{Z}$ be a set of latent (hidden) variables that tell precisely that: which images represent which objects. Clearly, if you knew $\mathbf{Z}$, you could group images into the clusters (where each cluster represents an object), and vice versa: if you knew the groupings, you could deduce $\mathbf{Z}$. A classical "chicken-and-egg" problem, and a perfect target for an Expectation-Maximization algorithm.

Here's a general idea of how the E-M algorithm tackles it. First of all, all images are assigned to clusters arbitrarily. Then we use this assignment to modify the parameters of the clusters (e.g. we change what object is represented by that cluster) to maximize the clusters' ability to explain the data, after which we re-assign all images to the expected most-likely clusters. Wash, rinse, repeat, until the assignment explains the data well enough (i.e. images from the same clusters are similar enough).

(Notice the words in bold in the previous paragraph: this is where the expectation and maximization stages in the E-M algorithm come from.)

To formalize (and generalize) this a bit further, say that you have a set of model parameters $\mathbf{\theta}$ (in the example above, some sort of cluster descriptions).
To solve the problem of cluster assignments we effectively need to find model parameters $\mathbf{\theta'}$ that maximize the likelihood of the observed data $\mathbf{X}$, or, equivalently, the model parameters that maximize the log likelihood $\ln \,\text{Pr}(\mathbf{X} | \theta)$.

Using some simple algebra we can show that for any latent variable distribution $q(\mathbf{Z})$, the log likelihood of the data can be decomposed as

\begin{align} \ln \,\text{Pr}(\mathbf{X} | \theta) = \mathcal{L}(q, \theta) + \text{KL}(q || p), \label{eq:logLikelihoodDecomp} \end{align}

where $\text{KL}(q || p)$ is the Kullback-Leibler divergence between $q(\mathbf{Z})$ and the posterior distribution $\,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta)$, and

\begin{align} \mathcal{L}(q, \theta) := \sum_{\mathbf{Z}} q(\mathbf{Z}) \left( \mathcal{L}(\theta) - \ln q(\mathbf{Z}) \right) \end{align}

with $\mathcal{L}(\theta) := \ln \,\text{Pr}(\mathbf{X}, \mathbf{Z}| \mathbf{\theta})$ being the "complete-data" log likelihood (i.e. log likelihood of both observed and latent data).

To understand what the E-M algorithm does in the expectation (E) step, observe that $\text{KL}(q || p) \geq 0$ for any $q(\mathbf{Z})$ and hence $\mathcal{L}(q, \theta)$ is a lower bound on $\ln \,\text{Pr}(\mathbf{X} | \theta)$. Then, in the E step, the gap between $\mathcal{L}(q, \theta)$ and $\ln \,\text{Pr}(\mathbf{X} | \theta)$ is minimized by minimizing the Kullback-Leibler divergence $\text{KL}(q || p)$ with respect to $q(\mathbf{Z})$ (while keeping the parameters $\theta$ fixed). Since $\text{KL}(q || p)$ is minimized at $\text{KL}(q || p) = 0$ when $q(\mathbf{Z}) = \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta)$, at the E step $q(\mathbf{Z})$ is set to the conditional distribution $\,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta)$.

To maximize the model parameters in the M step, the lower bound $\mathcal{L}(q, \theta)$ is maximized with respect to the parameters $\theta$ (while keeping $q(\mathbf{Z}) = \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta)$ fixed; notice that $\theta$ in this equation corresponds to the old set of parameters, hence to avoid confusion let $q(\mathbf{Z}) = \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta^\text{old})$). The function $\mathcal{L}(q, \theta)$ that is being maximized w.r.t. $\theta$ at the M step can be re-written as

\begin{align*} \theta^\text{new} &= \underset{\mathbf{\theta}}{\text{arg max }} \left. \mathcal{L}(q, \theta) \right|_{q(\mathbf{Z}) = \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta^\text{old})} \\ &= \underset{\mathbf{\theta}}{\text{arg max }} \left. \sum_{\mathbf{Z}} q(\mathbf{Z}) \left( \mathcal{L}(\theta) - \ln q(\mathbf{Z}) \right) \right|_{q(\mathbf{Z}) = \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta^\text{old})} \\ &= \underset{\mathbf{\theta}}{\text{arg max }} \sum_{\mathbf{Z}} \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta^\text{old}) \left( \mathcal{L}(\theta) - \ln \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta^\text{old}) \right) \\ &= \underset{\mathbf{\theta}}{\text{arg max }} \mathbb{E}_{\mathbf{Z} | \mathbf{X}, \theta^\text{old}} \left[ \mathcal{L}(\theta) \right] - \sum_{\mathbf{Z}} \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta^\text{old}) \ln \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \theta^\text{old}) \\ &= \underset{\mathbf{\theta}}{\text{arg max }} \mathbb{E}_{\mathbf{Z} | \mathbf{X}, \theta^\text{old}} \left[ \mathcal{L}(\theta) \right] - (C \in \mathbb{R}) \\ &= \underset{\mathbf{\theta}}{\text{arg max }} \mathbb{E}_{\mathbf{Z} | \mathbf{X}, \theta^\text{old}} \left[ \mathcal{L}(\theta) \right], \end{align*}

i.e.
in the M step the expectation of the joint log likelihood of the complete data is maximized with respect to the parameters $\theta$. So, just to summarize,

• Expectation step: $q^{t + 1}(\mathbf{Z}) \leftarrow \,\text{Pr}(\mathbf{Z} | \mathbf{X}, \mathbf{\theta}^t)$
• Maximization step: $\mathbf{\theta}^{t + 1} \leftarrow \underset{\mathbf{\theta}}{\text{arg max }} \mathbb{E}_{\mathbf{Z} | \mathbf{X}, \theta^\text{t}} \left[ \mathcal{L}(\theta) \right]$

(where superscript $\mathbf{\theta}^t$ indicates the value of parameter $\mathbf{\theta}$ at time $t$).

Phew. Let's go to the image clustering example, and see how all of this actually works. Continue reading

# Backpropagation Tutorial

The PhD thesis of Paul J. Werbos at Harvard in 1974 described backpropagation as a method of teaching feed-forward artificial neural networks (ANNs). In the words of Wikipedia, it led to a "renaissance" in ANN research in the 1980s. As we will see later, it is an extremely straightforward technique, yet most of the tutorials online seem to skip a fair amount of detail. Here's a simple (yet still thorough and mathematical) tutorial of how backpropagation works from the ground up, together with a couple of example applets. Feel free to play with them (and watch the videos) to get a better understanding of the methods described below!

Training a single perceptron
Training a multilayer neural network

##### 1. Background

To start with, imagine that you have gathered some empirical data relevant to the situation that you are trying to predict - be it fluctuations in the stock market, chances that a tumour is benign, likelihood that the picture that you are seeing is a face or (like in the applets above) the coordinates of red and blue points. We will call this data training examples and we will describe the $i$th training example as a tuple $(\vec{x_i}, y_i)$, where $\vec{x_i} \in \mathbb{R}^n$ is a vector of inputs and $y_i \in \mathbb{R}$ is the observed output. Ideally, our neural network should output $y_i$ when given $\vec{x_i}$ as an input. In case that does not always happen, let's define the error measure as a simple squared distance between the actual observed output and the prediction of the neural network: $E := \sum_i (h(\vec{x_i}) - y_i)^2$, where $h(\vec{x_i})$ is the output of the network.

#### 2. Perceptrons (building-blocks)

The simplest classifiers out of which we will build our neural network are perceptrons (fancy name thanks to Frank Rosenblatt). In reality, a perceptron is a plain-vanilla linear classifier which takes a number of inputs $a_1, ..., a_n$, scales them using some weights $w_1, ..., w_n$, adds them all up (together with some bias $b$) and feeds everything through an activation function $\sigma : \mathbb{R} \rightarrow \mathbb{R}$. A picture is worth a thousand equations:

Perceptron (linear classifier)

To slightly simplify the equations, define $w_0 := b$ and $a_0 := 1$. Then the behaviour of the perceptron can be described as $\sigma(\vec{a} \cdot \vec{w})$, where $\vec{a} := (a_0, a_1, ..., a_n)$ and $\vec{w} := (w_0, w_1, ..., w_n)$. To complete our definition, here are a few examples of typical activation functions:

• sigmoid: $\sigma(x) = \frac{1}{1 + \exp(-x)}$,
• hyperbolic tangent: $\sigma(x) = \tanh(x)$,
• plain linear: $\sigma(x) = x$, and so on.

Now we can finally start building neural networks.
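Before the full derivation, here is a minimal concrete sketch of training a single sigmoid perceptron by gradient descent on the squared-error measure $E$ defined above. This is an illustrative Python/NumPy translation, not the applets' own code; the toy data and learning rate are made-up values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# toy training examples (x_i, y_i); purely illustrative
X = np.array([[0.2, 0.7], [0.9, 0.1], [0.5, 0.5]])
y = np.array([0.0, 1.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights w_1..w_n
b = 0.0                  # bias w_0

eta = 0.5                # learning rate (assumed)
for _ in range(1000):
    h = sigmoid(X @ w + b)
    # dE/dz for E = sum_i (h_i - y_i)^2, using sigma'(z) = h (1 - h)
    grad = 2.0 * (h - y) * h * (1.0 - h)
    w -= eta * (X.T @ grad)  # chain rule through the weighted sum
    b -= eta * grad.sum()

print(sigmoid(X @ w + b))    # predictions approach y
```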
Continue reading

# Eigenfaces Tutorial

The main purpose behind writing this tutorial was to provide a more detailed set of instructions for someone who is trying to implement an eigenface-based face detection or recognition system. It is assumed that the reader is familiar (at least to some extent) with the eigenface technique as described in the original M. Turk and A. Pentland papers (see "References" for more details).

#### 1. Introduction

The idea behind eigenfaces is similar (to a certain extent) to the one behind the periodic signal representation as a sum of simple oscillating functions in a Fourier decomposition. The technique described in this tutorial, as well as in the original papers, also aims to represent a face as a linear composition of the base images (called the eigenfaces). The recognition/detection process consists of initialization, during which the eigenface basis is established, and face classification, during which a new image is projected onto the "face space" and the resulting image is categorized by the weight patterns as a known-face, an unknown-face or a non-face image.

#### 2. Demonstration

To download the software shown in the video for the 32-bit x86 platform, click here. It was compiled using Microsoft Visual C++ 2008 and uses GSL for Windows.

#### 3. Establishing the Eigenface Basis

First of all, we have to obtain a training set of $M$ grayscale face images $I_1, I_2, ..., I_M$. They should be:

1. face-wise aligned, with eyes at the same level and faces of the same scale,
2. normalized so that every pixel has a value between 0 and 255 (i.e. one byte per pixel encoding), and
3. of the same $N \times N$ size.

So just capturing everything formally, we want to obtain a set $\{ I_1, I_2, ..., I_M \}$, where

\begin{align} I_k = \begin{bmatrix} p_{1,1}^k & p_{1,2}^k & ... & p_{1,N}^k \\ p_{2,1}^k & p_{2,2}^k & ... & p_{2,N}^k \\ \vdots \\ p_{N,1}^k & p_{N,2}^k & ... & p_{N,N}^k \end{bmatrix}_{N \times N} \end{align}

and $0 \leq p_{i,j}^k \leq 255.$

Once we have that, we should change the representation of a face image $I_k$ from an $N \times N$ matrix to a point $\Gamma_k$ in $N^2$-dimensional space. Now here is how we do it: we concatenate all the rows of the matrix $I_k$ into one big vector of dimension $N^2$. Can it get any simpler than that? Continue reading
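For readers who want the basis-building step as code, here is a compact sketch in Python/NumPy (an assumption; the downloadable demo above is C++ with GSL). It uses the standard Turk-Pentland trick of diagonalizing the small $M \times M$ matrix instead of the huge $N^2 \times N^2$ covariance matrix:

```python
import numpy as np

def eigenface_basis(images, k):
    """images: list of M flattened face vectors Gamma_k of length N*N (k <= M)."""
    A = np.stack(images).astype(float)      # shape (M, N*N)
    mean_face = A.mean(axis=0)
    A -= mean_face                          # work with differences from the mean face
    # eigenvectors of the small M x M matrix A A^T ...
    vals, vecs = np.linalg.eigh(A @ A.T)
    top = np.argsort(vals)[::-1][:k]
    # ... lifted back to image space: if (A A^T) v = s v, then (A^T A)(A^T v) = s (A^T v)
    faces = A.T @ vecs[:, top]              # shape (N*N, k)
    faces /= np.linalg.norm(faces, axis=0)  # normalize each eigenface
    return mean_face, faces
```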
2020-04-09 02:59:07
{"extraction_info": {"found_math": true, "script_math_tex": 71, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 72, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9156851768493652, "perplexity": 1072.0783598327514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00076.warc.gz"}
https://www.electro-tech-online.com/threads/plz-help-me.5447/
# plz help me !!

Status: Not open for further replies.

#### pike ##### Member

hey guys, I am having trouble trying to get the internal oscillator of the 16F819 to actually work. Do I need any commands in the program to calibrate or start up the oscillator? I'm using IC-Prog for now and it gives me the options of: LS, HS, XT, EXT Clock, IntRC, IntRC Clockout, ExtRC and ExtRC Clockout.

- I know I should use either IntRC or IntRC Clockout. But what's the difference between the 2 and which one do I need???

I also have the options of using fuses: watch dog timer, power up timer, Master clear, Brown out, Debugger and CCP1 selection.

- Can anyone give me a brief description of these fuses, particularly "brown out"???

- So how do I get my internal oscillator working??? (well, how did you guys get yours working)

plz help me guys, i'm totally stumped on this problem :?

#### Nigel Goodwin ##### Super Moderator

The two IntRC options both use the internal oscillator, but the second one outputs itself on one of the PIC pins - if you don't require this (and most applications won't) choose the plain IntRC.

I've not used a 16F819, but the 'brown out' option affects the reset capability of the chip - if the HT supply to the chip drops it can corrupt the running program; the 'brown out' option causes the chip to do a hardware reset if the HT drops sufficiently.

If you read the datasheet they will all be explained; mostly you tend to set them the same once you've decided on how you want them - only altering them when you have a particular need.

If you set the 'watch dog timer' you need to place 'CLRWDT' instructions throughout your program - if one of these doesn't appear before the timer runs out the PIC is reset - it's intended to rescue a program stuck in an endless loop.

#### Exo ##### Active Member

pike said: hey guys, I am having trouble trying to get the internal oscillator of the 16F819 to actually work. Do I need any commands in the program to calibrate or start up the oscillator? I'm using IC-Prog for now and it gives me the options of: LS, HS, XT, EXT Clock, IntRC, IntRC Clockout, ExtRC and ExtRC Clockout. - I know I should use either IntRC or IntRC Clockout. But what's the difference between the 2 and which one do I need???

Both settings activate the internal RC oscillator. But IntRC Clockout outputs the frequency generated by the IntOsc on a certain pin (see datasheet to see which one). This allows you to use this frequency elsewhere in your circuit. The normal IntRC setting does not output the frequency; instead the pin is a normal I/O line.

pike said: I also have the options of using fuses: watch dog timer

The watchdog timer runs in the background. Your software must reset the timer at certain intervals (with a CLRWDT command). If your software fails to do so in time (because it is stuck in an infinite loop for example) the watchdog timer will overflow and this will reset the pic. If you turn it off then there is no need for CLRWDT commands in your code.

pike said: power up timer

When power is applied to the pic it will wait a short time (less than half a sec) before it starts running. This could come in handy if there is equipment connected to the pic that needs time to stabilize when power is applied (an LCD for example).

pike said: Master clear

With master clear switched to external you can reset the PIC at any time by driving the MCLR pin low (see datasheet to know which pin is MCLR). To make the pic run you must pull MCLR high.
With master clear switched to internal you can no longer reset the pic from outside (of course, if you turn the pic off and back on it is also reset) but the MCLR pin can now be used as a digital input pin.

pike said: Brown out

When supply voltage drops below a certain voltage the pic may start acting weird due to this voltage shortage. To prevent this you can enable brown-out. With brown-out enabled the pic will keep resetting itself as long as the supply voltage is too low. Once the voltage is restored the pic will start running again.

pike said: debugger

The debugger option sets some pins on the pic so they can be connected to an in-circuit debugger device which allows you to monitor what the pic is doing while it is running. The pins will no longer be available as normal I/O lines.

pike said: - So how do I get my internal oscillator working??? (well, how did you guys get yours working)

Just set your fuse settings to INTOSC RC.

#### Exo ##### Active Member

hehe, you have to type pretty fast to beat nigel :lol:

#### Nigel Goodwin ##### Super Moderator

Exo said: hehe, you have to type pretty fast to beat nigel :lol:

Fastest fingers in the west!!! :lol:

#### pike ##### Member

wow, you guys are fast eh?? Exo, what software programmer are you running?? IC-Prog doesn't have a fuse called INTOSC RC. Maybe I have an old version of IC-Prog. If I enabled "debugger", would I need an LCD to display what's happening inside the PIC or would I use my computer??

#### Exo ##### Active Member

I'm using both Winpicprog and IC-Prog. In IC-Prog 1.05C it's called "IntRC" at oscillator settings. But you should add the config settings in your code. When you are programming your assembler file you should add a command to the top of your code: __CONFIG number (that's 2 underscores: __). In MPLAB IDE (a free PIC development program available from Microchip's website) you can generate the number you need. This way the settings are incorporated into your code itself and you don't need to set them in IC-Prog.

The debugger option is for an in-circuit debugger. This is a device connected to the PC.

#### pike ##### Member

How would I write that in BASIC?? Sorry, I'm not familiar with assembler. But either way they should work, right?? Otherwise I think my 4 brand new 819's are dead and useless :x

#### Exo ##### Active Member

PicBasic normally sets the configuration settings for you. Just change the oscillator to IntRC in IC-Prog. Leave the other settings; they should be set right by Basic.

#### pike ##### Member

Thanks for your input guys, I found (I think) the problem. The problem was the programmer or programmer software. Verify occasionally works, and when it does it shows that the chip was programmed to use an LP oscillator. I'm pointing the finger at the software most likely. :evil: When verify doesn't work it is because IC-Prog has automatically set the fuse Code Protect on.

#### Exo ##### Active Member

What programmer hardware do you use? Are you using long and/or unshielded cables between PC/programmer and programmer/PIC? This tends to give 'occasional' problems.

#### pike ##### Member

I'm using a JDM based programmer. Yeh, the cable is shielded, and it's 1.5 metres long. Funny, the programmer wasn't playing up last night.

I just did some more experimenting. The oscillator works, but at a very low speed. I programmed this into it last night when the programmer was working.
Code:

Poke trisB, 0    'configure portb to output only
Test:
poke portB, 1    'turn RB0 high
poke portb, 0    'turn RB0 low
goto test        'repeat forever

When the program runs it runs flawlessly except for the timing. In a normal PIC, you wouldn't see the leds light up or respond due to the quick timing of the code. But with the (faulty??) 819's I have, you can actually see the leds light up for about 10ms. I can program other chips successfully (16F84a) but not the 819. Sounds like I've killed the chips.

#### Exo ##### Active Member

If it runs then the pic is probably not defective. Some oscillator setting will still be wrong or something. I would try to write a little assembly program and see how it acts. Could also be the basic acting up.

Status: Not open for further replies.
2021-07-30 20:46:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33898836374282837, "perplexity": 4477.838113210515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153980.55/warc/CC-MAIN-20210730185206-20210730215206-00564.warc.gz"}
https://math.stackexchange.com/questions/657551/if-a-is-a-matrix-with-complex-entries-then-there-exists-a-matrix-b-such-tha
# If $A$ is a matrix with complex entries, then there exists a matrix $B$ such that $AB=0$ and $\operatorname{rank} A+ \operatorname{rank}B=n$.

I have some difficulty writing the proof for this one. This problem appeared in a shortlist for a mathematical olympiad (for high school):

Let $A$ be an $n\times n$ matrix with complex entries.

1. Prove that there exists an $n\times n$ matrix with complex entries $B$ such that $AB=0$ (the null matrix) and $\operatorname{rank} A + \operatorname{rank}B = n$.
2. If $1 < \operatorname{rank} A < n$, prove that there exists an $n\times n$ matrix with complex entries $C$ such that $AC=0$, $CA \neq 0$ and $\operatorname{rank}A + \operatorname{rank} C = n$.

Well, since this problem was proposed for a mathematical olympiad, I suppose that the solution involves only basic linear algebra concepts, i.e. without vector spaces, linear transformations, etc. So far, I have only come up with the fact that if $A$ is invertible, then $B$ is the zero matrix, and that using the Sylvester rank inequality with $A$ singular gives $\operatorname{rank} A + \operatorname{rank} B \leq n$.

Also, the above inequality only bounds $\operatorname{rank} B$; it doesn't by itself imply that there exists a $B$ with $AB=0$ and $\operatorname{rank} B = n - \operatorname{rank} A$.

• Throw $A$ into Jordan form. It'll have some number of $0$ eigenvectors. Construct your matrix $B$ in the same basis as $A$ by giving it an eigenspace for $1$ where $A$ has an eigenspace for $0$. Then the two matrices have complementary rank and their product is $0$. – Ian Coley Jan 30 '14 at 18:29
• Do you still have a link to that shortlist? Or maybe the problems written somewhere? I'm looking for some problems of this kind and it would help me a lot. – Asix Feb 19 '18 at 15:45

If $\operatorname{rank} A = m < n$, then there are $n-m$ linearly independent vectors $b_1, \dots, b_{n-m}$ such that $Ab_j = 0$. What would happen if you used these vectors as the columns of $B$ (with $0$ columns otherwise), and what could you say about the rank of $B$?

For 1, any $B$ such that $\operatorname{Im}(B)= \operatorname{Ker}(A)$ will do. For example, the projection onto $\operatorname{Ker}(A)$ along some subspace complementary to $\operatorname{Ker}(A)$.

For 2, modify the above by taking $C=BD$, where $D$ is an isomorphism (thus maintaining $\operatorname{rank}(C)= \operatorname{rank}(B)$ and $AC=0$), such that $D(\operatorname{Im}(A))$ intersects $\operatorname{Ker}(A)$ non-trivially. You can for example take any linear isomorphism $D$ that sends some non-zero vector in $\operatorname{Im}(A)$ to some non-zero vector in $\operatorname{Ker}(A)$.
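To make the hint concrete, here is a small worked instance of the construction (my own illustration, not part of the original thread): take $n=2$ and the rank-one matrix

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \operatorname{Ker}(A) = \operatorname{span}\left\{\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right\}, \qquad B = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$

Then $AB = 0$ and $\operatorname{rank} A + \operatorname{rank} B = 1 + 1 = 2 = n$; here $B$ is exactly the projection onto $\operatorname{Ker}(A)$ along $\operatorname{Im}(A)$, as in the second answer.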
2019-06-26 01:43:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9606093168258667, "perplexity": 80.3832883941759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00293.warc.gz"}
http://italiageorgia.it/kyuh/lstm-forecasting-github.html
## LSTM Forecasting (GitHub)

Having followed the online tutorial here, I decided to use data at time (t-2) and (t-1) to predict the value of var2 at time step t. In this post we present the results of a competition between various forecasting techniques applied to multivariate time series. This module was built with a Recurrent Neural Network (RNN) on top of TensorFlow and Keras. Temporal Pattern Attention for Multivariate Time Series Forecasting. Unlike standard feed-forward neural networks, LSTM has feedback connections. Thus, we explode the time series data into a 2D array of features called 'X', where the input data consists of overlapping lagged values at the desired number of lags. The problem to be solved is the classic stock market prediction. Each LSTM cell has its cell state (c) and has the ability to add or remove information to it.

Financial Time Series Prediction with Long Short-Term Memory. Authors: Daniel Binsfeld, David Alexander Fradin, Malte Leuschner. Introduction. For predicting the future, you will need stateful=True LSTM layers. An LSTM network needs to update the weight matrices for each LSTM cell, which requires a large amount of data across numerous dimensions. LSTMs were introduced by Hochreiter & Schmidhuber (1997), and were refined and popularized by many people in following work. This shows that there is a "displacement" between the predicted values and the real ones. Experiments have shown that the CNN-LSTM neural network, which combines a convolutional neural network (CNN) and long short-term memory (LSTM), can extract complex features of energy consumption. The gates make it possible to put different weights on different inputs, to decide which data points should be more preponderant in order to make an accurate prediction. However, due to the high noise in financial data, it is inevitable that deep neural networks trained on the raw data fail to accurately predict the stock price. In this thesis, LSTM (long short-term memory) recurrent neural networks are used to perform financial time series forecasting on return data of three stock indices. You can think of it as a compile step, for simplicity. Build a bidirectional LSTM neural network in Keras and TensorFlow 2 and use it to make predictions.

Long Short-Term Memory Network for Time Series Forecasting: Introduction. To understand the terms frequently used in the context of machine learning in a simple way, read my post: Machine Learning Basics. As can be seen, the "Adj Close" data are quite erratic, showing neither an upward nor a downward trend. In this video, a household power consumption dataset is used to predict future power consumption. The long short-term memory works on the sequential framework, which considers all of the predecessor data. The art of forecasting stock prices has been a difficult task for many researchers and analysts. J. Su*, W. Byeon*, F. Huang, J. Kautz, A. Anandkumar, "Convolutional Tensor-Train LSTM for Spatio-temporal Learning", arXiv, 2020 (* equal contributions). Prediction at a particular timestamp is strongly dependent upon electricity consumption at previous timestamps.
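The "explode into overlapping lagged values" step described above can be sketched as follows. This is a minimal illustration; the function name and example values are mine, not from the original post:

```python
import numpy as np

def make_lagged_features(series, n_lags):
    """Explode a 1-D series into a 2-D array X of overlapping lagged values
    and a target vector y holding the next value after each window."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:len(series) - n_lags + i] for i in range(n_lags)], axis=1)
    y = series[n_lags:]
    return X, y

# e.g. values at (t-2) and (t-1) are used to predict the value at t
X, y = make_lagged_features([1, 2, 3, 4, 5, 6], n_lags=2)
# X -> [[1, 2], [2, 3], [3, 4], [4, 5]],  y -> [3, 4, 5, 6]
```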
Specify the input to be sequences of size 3 (the feature dimension of the input data). Then at time step $t$, your hidden vector is $h(x_1(t), x_2(t), x_3(t))$, a function of the three input features.

LSTM for time series forecasting: I'm constantly amazed by the number of "LSTM for forecasting time series" blog posts and Kaggle kernels which then show a forecast that is essentially a one-step-ahead naive forecast. I just used the code given in the following link. Time series forecasting using a hybrid ARIMA and LSTM model, Oussama Fathi, Velvet Consulting, 64, Rue la Boétie, 75008. In addition, each layer has to be controlled by reliability tests, as it is the input to the following layers of the regression model. So I got the predicted values of the series from the 121st to the 150th step. Long short-term memory (LSTM) recurrent neural networks are often used for forecasting. How to represent data for time series neural networks.

The task here will be to predict values for a time series given the history of 2 million minutes of a household's power consumption. We'll start with a simple example of forecasting the values of the sine function using a simple LSTM network. I am new to deep learning and LSTM. Demo project for electricity load forecasting with an LSTM (abbr. of long short-term memory) network. Bitcoin Time Series Prediction with LSTM: Python notebook using data from multiple data sources · 26,085 views · 3y ago. Long short-term memory (LSTM) units are units of a recurrent neural network (RNN). First, the time-varying graph signals for each vertex are segmented to be fed into the LSTM. LSTM uses are currently rich in the world of text prediction, AI chat apps, self-driving cars, and many other areas. Contents: Models; Stacking models. Figuring out how to reshape the data based on N_TIMESTEPS, N_FEATURES and the length of the data was actually the tricky part. We will forecast the number of confirmed cases in Iran for the validation set and the next 7 days from today. For example, he won the M4 Forecasting competition (2018) and the Computational Intelligence in Forecasting International Time Series Competition 2016 using recurrent neural networks. A simple sine wave is used as a model dataset for time series forecasting.

Demand Forecasting 3: Neural networks, by Semantive, August 20, 2018. This post is part of our series exploring different options for long-term demand forecasting. Unlike regression predictive modeling, time series also adds the complexity of a sequence dependence among the input variables. We are going to use a multi-layered LSTM recurrent neural network to predict the last value of a sequence of values. The S&P 500 index increases over time, bringing about the problem that most values in the test set are outside the scale of the training set, and thus the model has to predict numbers it has never seen before.
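Since the reshaping step trips people up, here is a minimal sketch of the 3-D layout that Keras LSTM layers expect. The array sizes are illustrative values of mine:

```python
import numpy as np

N_TIMESTEPS, N_FEATURES = 10, 3  # illustrative values

# X_flat: one row per sample, with timesteps and features flattened together
X_flat = np.random.rand(500, N_TIMESTEPS * N_FEATURES)

# Keras LSTM layers expect a 3-D tensor: (num_samples, num_time_steps, num_features)
X = X_flat.reshape(-1, N_TIMESTEPS, N_FEATURES)
print(X.shape)  # (500, 10, 3)
```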
In this paper, we propose a CNN-LSTM neural network that can extract spatial and temporal features to effectively predict housing energy consumption. Demo project for electricity load forecasting with an LSTM. Getting started. Today, I'll teach you how to train an LSTM neural network for text generation, so that it can write in H. P. Lovecraft's style. Weather forecast using a recurrent neural network: motivation.

We import LSTM for adding the long short-term memory layer, and Dropout for adding dropout layers that prevent overfitting. We add the LSTM layer and later add a few Dropout layers to prevent overfitting. Recall, a convolutional network is most often used for image data, like the MNIST dataset (a dataset of handwritten images). Presented by Jayeol Chun and Sang-Hyun Eun, June 9, 2016. Box and Jenkins auto-regressive models. Time series prediction (forecasting) has experienced dramatic improvements in predictive accuracy as a result of the data science, machine learning and deep learning evolution. In part B, we try to predict long time series using a stateless LSTM. The code for this framework can be found in the following GitHub repo (it assumes Python version 3.x). Contribute to rakshita95/DeepLearning-time-series development by creating an account on GitHub.
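A minimal Keras sketch of the LSTM-plus-Dropout stack just described. The layer sizes, the (10, 3) input shape, and the 20% dropout fraction are illustrative assumptions of mine, not values from the original post:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(10, 3)),  # (timesteps, features)
    Dropout(0.2),   # randomly deactivate 20% of units during training
    LSTM(50),
    Dropout(0.2),
    Dense(1),       # one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")
```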
It claims to have better performance than the previously implemented LSTNet, with the additional advantage that an attention mechanism automatically tries to determine the important parts of the time series. If you lump all your 365 time steps into one sample, then the first dimension will be 1: one single sample! For this purpose, I am using the Rossmann Sales data from Kaggle. A sequence of vibrational signals (signals that last 50 seconds) leading up to the current time is used as input to the LSTM model, which then tries to predict the next data point. Long Short-Term Memory (LSTM) Recurrent Neural Network and Dropout Regularization Strategy: 20% of the neurons will be randomly selected and set inactive during the training process, in order to make the model less flexible and avoid over-fitting. In a strongly noisy financial market, accurate volatility forecasting is the core task in risk management. The Beta-PSO-LSTM, Beta-IM-LSTM, Beta-PSO-BP, Beta-LSTM, Norm-LSTM, and LSSVM models are adopted to realize prediction intervals for wind power series. A Comparison of LSTMs and Attention Mechanisms for Forecasting Financial Time Series, 12/18/2018, by Thomas Hollis et al. It jointly models the normal-condition traffic and the pattern of accidents. They seemed to be complicated and I've never done anything with them before. So we slowly start predicting on the predictions, and hence are forecasting the next 50 steps forward.
Part 05: LSTM for Time Series Forecasting. CNTK 106: Part B - Time series prediction with LSTM (IoT data). In part A of this tutorial we developed a simple LSTM network to predict future values in a time series. Tailoring our LSTM model. 04 Nov 2017 | Chandler. LSTM is designed to forecast, predict and classify time series data even with long time lags between the vital events. Time Series Analysis in Python with statsmodels: Wes McKinney, Josef Perktold, Skipper Seabold; 10th Python in Science Conference, 13 July 2011. Social coding platforms, such as GitHub, can serve as natural laboratories for studying the diffusion of innovation through tracking the pattern of code adoption by programmers.

Popular choices are the Long Short-Term Memory (LSTM) unit or the Gated Recurrent Unit (GRU). Use the final output of the repeated iterates $s_{T+1}$ as the forecast, or as input to the next non-recurrent layer. RNNs accommodate variable-length inputs, can take into account long-term dependencies, and work with input like text. Training a stateful LSTM means that the full given input data X is separated into batches and there are actually n LSTMs trained at the same time (n = batch size) that share the gradient. The LSTM was proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber.

Dataset 1: 36 Month Shampoo Sales Data. The first time series examined is univariate monthly sales data for shampoo provided by DataMarket: Time Series Data Library (citing Makridakis, Wheelwright and Hyndman (1998)). The goal is to predict the temperature of the next 12 or 24 hours; time series data for weather forecasting was tested. PyTorch for time series forecasting: Hi all, I am interested in using PyTorch for modelling time series data. Hyperparameters: num_layers is the number of layers; learning_rate is the step size for gradient updates; size_layer is the size of each layer; timestamp is the length of each time series window fed into the LSTM model at every iteration; epoch is the number of training loops; dropout_rate is the fraction of units dropped.

12 Sep 2018 • gantheory/TPA-LSTM • To obtain accurate predictions, it is crucial to model long-term dependency in time series data, which can be achieved to some good extent by a recurrent neural network (RNN) with an attention mechanism. High-Dimensional Sequence Learning, Spatio-Temporal Learning. Include the markdown at the top of your GitHub README.md file to showcase the performance of the model. Finally, specify nine classes by including a fully connected layer of size 9, followed by a softmax layer and a classification layer. Understanding the upward or downward trend in statistical data holds vital importance.
However, traffic forecasting has always been considered an open scientific issue, owing to the constraints of the urban road network's topological structure and the law of dynamic change with time, namely spatial dependence and temporal dependence. Due to the complicated spatio-temporal dependency and highly non-linear dynamics in road networks, the traffic prediction task is still challenging. To improve prediction accuracy, a spatiotemporal traffic flow prediction method is proposed that combines k-nearest neighbours (KNN) with a long short-term memory network (LSTM), called KNN-LSTM. Accurate prediction results are the precondition of traffic guidance, management, and control.

(1) As demonstrated in tutorial Part 1: Define the Graph, let us define a tf.Graph() named lstm_graph and a set of tensors to hold the input data (inputs, targets, and learning_rate) in the same way. If you want to demystify the mystery behind LSTM, I would suggest you take a look at my previous article. In this tutorial, we will investigate the use of lag observations as features in LSTM models in Python. Using a specific window of several sensor signals, differentiated features can be extracted to forecast the power consumption using the prediction model. Then, first you predict the entire X_train (this is needed for the model to understand at which point of the sequence it is; in technical words, to create a state).
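In TF 1.x style, the graph-definition step referenced above might be sketched like this. The tensor names follow the text; the shapes and the compat import are my assumptions rather than the original tutorial's exact code:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

lstm_graph = tf.Graph()
with lstm_graph.as_default():
    # placeholders to hold input data, targets, and the learning rate
    inputs = tf.placeholder(tf.float32, [None, 10, 1], name="inputs")
    targets = tf.placeholder(tf.float32, [None, 1], name="targets")
    learning_rate = tf.placeholder(tf.float32, None, name="learning_rate")
```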
The vanilla LSTM didn't work well:
• It did not exhibit superior performance compared to the baseline model, which included a combination of univariate forecasting and machine learning elements.
• The vanilla model could not adapt to time series with domains it was not trained on, which led to poor performance when using a single neural network.

In this paper, we propose a generic framework employing long short-term memory (LSTM) and a convolutional neural network (CNN) for adversarial training to forecast the high-frequency stock market. In this blog I will demonstrate how we can implement time series forecasting using LSTM in R. Long Short-Term Memory networks, or LSTMs for short, can be applied to time series forecasting. There are many types of LSTM models that can be used for each specific type of time series forecasting problem. Here are a few pros and cons; we will briefly discuss the various variants, one advantage being easier handling of multivariate data. The output is normalized data, so we apply the inverse transformations afterwards. LSTM: forecast vs. observed series (housing starts; consumer sentiment). Code: https://github.
The 'input_shape' argument in 'LSTM' has 1 as the time step and 3 as the number of features during training. I have a very simple question. This model can be seen in detail below: a simple LSTM. As an example, we want to predict the daily output of a solar panel based on the initial readings of the day. This is covered in two parts: first, you will forecast a univariate time series, then you will forecast a multivariate time series. In fact, investors are highly interested in the research area of stock price prediction. There exist many optimiser variants that can be used. Long short-term memory is a recurrent neural network architecture introduced by Sepp Hochreiter and Jürgen Schmidhuber in 1997 [6]. NY Stock Price Prediction RNN LSTM GRU: Python notebook using data from the New York Stock Exchange · 68,670 views · 2y ago · time series, lstm, rnn. Specify a bidirectional LSTM layer with 100 hidden units, and output the last element of the sequence. For an introductory look at high-dimensional time series forecasting with neural networks, you can read my previous blog post. Its potential applications include predicting stock markets, predicting faults and estimating the remaining useful life of systems, forecasting weather, etc.
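A Keras sketch of the bidirectional layer just described. The original sentence reads like MATLAB's bilstmLayer with 'OutputMode','last', so this is the equivalent idea rather than the exact code; the input shape is illustrative, and the Dense(9) head mirrors the nine-class softmax mentioned earlier:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

model = Sequential([
    # 100 hidden units; return_sequences=False outputs only the last element
    Bidirectional(LSTM(100, return_sequences=False), input_shape=(50, 3)),
    Dense(9, activation="softmax"),  # nine output classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```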
From predicting sales to finding patterns in stock market data, long short-term memory (LSTM) networks are very effective at solving problems. I would like to forecast the heat load of a district heating network given its past values, the temperature, and the 3-day-ahead forecast of the temperature, using an LSTM RNN. We propose an ensemble of long short-term memory (LSTM) neural networks for intraday stock predictions, using a large variety of technical analysis indicators as network inputs. For this problem the long short-term memory (LSTM) recurrent neural network is used. You'll then discover how RNN models are trained and dive into different RNN architectures, such as LSTM (long short-term memory) and GRU (gated recurrent unit). Time series forecasting with the TensorFlow.js framework: machine learning is becoming increasingly popular these days, and a growing number of the world's population see it as a magic crystal ball. To address the problem, the wavelet threshold-denoising method, which has been widely applied, is used. In this tutorial, we apply a variant of a convolutional long short-term memory (LSTM) RNN to this problem. This article covers an implementation of LSTM recurrent neural networks for prediction. This blog post analyzes the tweets of the 2020 presidential candidates using fastText and a CNN. The proposed GraphCNN-LSTM model is validated using data from the DiDi Chuxing GAIA Open Data Initiative, which supported the Transportation Forecasting Competition (TRANSFOR19) organized by the Standing Committee on Artificial Intelligence and Advanced Computing Applications (ABJ70) of the Transportation Research Board and the IEEE ITS Technical Committee. The input shape for an LSTM must be (num_samples, num_time_steps, num_features). Use more data if you can.
In this tutorial, you will discover how to develop an LSTM forecast model for a one-step univariate time series forecasting problem. Predicting the future. Some interesting applications are time series forecasting, (sequence) classification and anomaly detection. Long Short-Term Memory layer - Hochreiter 1997. Predicting how the stock market will perform is one of the most difficult things to do. The details of this procedure can be found in the GitHub repository. The summary of the weather dataset (pandas df.info() output):

    RangeIndex: 145460 entries, 0 to 145459
    Data columns (total 24 columns):
    Date           145460 non-null  object
    Location       145460 non-null  object
    MinTemp        143975 non-null  float64
    MaxTemp        144199 non-null  float64
    Rainfall       142199 non-null  float64
    Evaporation     82670 non-null  float64
    Sunshine        75625 non-null  float64
    WindGustDir    135134 non-null  object
    (remaining columns truncated in the source)

Since this problem also involves a sequence of similar sorts, an LSTM is a great candidate to be tried. In stateless mode, long-term memory does not mean that the LSTM will remember the content of the previous batches; in a stateless model, Keras allocates an array for the states. Before anything, you reset the model's states with model.reset_states(). The idea of this post is to provide a brief and clear understanding of the stateful mode, introduced for LSTM models in Keras.
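A sketch of the stateful-prediction recipe described above: reset the states, replay the known history so the network rebuilds its internal state, then forecast forward step by step. Here `model` is assumed to be a stateful Keras LSTM built with batch size 1, and `X_train` its training windows; shapes are illustrative:

```python
import numpy as np

model.reset_states()                      # start from a clean state
model.predict(X_train, batch_size=1)      # replay history so the LSTM "knows where it is"

window = X_train[-1:]                     # shape (1, timesteps, features)
forecast = []
for _ in range(10):                       # predict 10 steps ahead
    step = model.predict(window, batch_size=1)
    forecast.append(float(step[0, 0]))
    # slide the window: drop the oldest step, append the new prediction
    window = np.concatenate([window[:, 1:, :], step.reshape(1, 1, -1)], axis=1)
```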
The art of forecasting stock prices has been a difficult task for many researchers and analysts. There are so many factors involved in the prediction: physical factors as well as psychological, rational and irrational behaviour, and so on. Why apply an RNN (LSTM) to time series datasets? The expression "long short-term" refers to the fact that LSTM is a model for short-term memory which can last for a long period of time. Does my problem come under multivariate multi-step forecasting, or multivariate single-step forecasting? LSTM is one of the most powerful algorithms out there for dealing with time series forecasting. I am confused about how long short-term memory neural networks work and what they actually do. Common areas of application include sentiment analysis, language modeling, speech recognition, and video analysis. (From "Time-series Extreme Event Forecasting with Neural Networks at Uber".) Short-term forecasting in the renewable energy sector is becoming increasingly important. (A) Memory h and input x are multiplied by weight matrices W and U, the results are added and then run through an element-wise sigmoid function. An introduction to recurrent neural networks. The hyperparameters used: learning_rate = 0.01, size_layer = 128, timestamp = 5, epoch = 500, dropout_rate = 0.… (value truncated). Just two days ago, I found an interesting project on GitHub. In this tutorial, you will discover how to develop a suite of LSTM models for a range of standard time series forecasting problems. I have multiple variables; my data is multivariate time series data, and I want to predict 2019 data by using test data of 2018. Architecture: input LSTM layer (20 neurons), 1 hidden LSTM layer (20 neurons), 1 output dense layer, batch size of 1.
Accurate energy forecasting is a very active research field, as reliable information about future electricity generation allows for the safe operation of the power grid and helps to minimize excessive electricity production. A neural network based on long short-term memory (LSTM) units. LSTM makes it possible to learn long time series by determining the optimal time lags for prediction. LSTM (Long Short-Term Memory) [1] is one of the most promising variants of RNN. The notebook for this blog post was written in collaboration with Nicolas Juguet and can be found on GitHub. LSTM neural networks have seen a lot of use recently, both for text and music generation and for time series forecasting. The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. In this implementation, I want to show evidence of the LSTM autoencoder's power as a tool for creating relevant features for time series forecasting. Neural networks like long short-term memory (LSTM) recurrent neural networks are able to almost seamlessly model problems with multiple input variables. GitHub Gist: instantly share code, notes, and snippets.
The Long Short-Term Memory network, or LSTM network, is a type of recurrent neural network. We asked a data scientist, Neelabh Pant, to tell you about his experience of forecasting exchange rates using recurrent neural networks. The skill of the proposed LSTM architecture lies in rare-event demand forecasting and the ability to reuse the trained model on unrelated forecasting problems. The short-term forecast of rail transit is one of the most essential issues in urban intelligent transportation systems (ITS). GitHub and YouTube results. (ii) The proposed model improves prediction performance by 9% over a single-pipeline deep learning model and by over a factor of six over a support vector machine regressor on the S&P 500 grand challenge dataset. (iii) We illustrate the improvement in prediction. My Top 10% Solution for the Kaggle Rossmann Store Sales Forecasting Competition, 16 Jan 2016: this is the first time I have participated in a machine learning competition and my result turned out to be quite good, 66th out of 3303. A 2017 Uppsala Big Data Meetup LSTM solution; LSTM spoke Zarathustra; Student Project 04 on Power Forecasting, Part 1. The predictors are the training sequences without the final time step. PyTorch implementation of DeepAR, MQ-RNN, Deep Factor Models, LSTNet, and TPA-LSTM.
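Since several of the references above are PyTorch implementations, here is a minimal sketch of how an LSTM's hidden state tuple (h_0, c_0) is initialized and passed in PyTorch; the `hidden = (torch.` fragment in the source is the start of exactly this pattern, and all sizes below are illustrative:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=32, num_layers=1, batch_first=True)
hidden = (torch.zeros(1, 8, 32),   # h_0: (num_layers, batch, hidden_size)
          torch.zeros(1, 8, 32))   # c_0: same shape

x = torch.randn(8, 20, 1)          # (batch, seq_len, features)
out, hidden = lstm(x, hidden)
print(out.shape)                   # torch.Size([8, 20, 32])
```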
We can define a simple univariate problem as a sequence of integers, fit the model on this sequence, and have the model predict the next value in the sequence. A flattened Analytics Zoo snippet, with the import path truncated in the source:

    #import forecast models
    from zoo. ... import LSTMForecaster, MTNetForecaster
    #build a lstm forecast model
    lstm_forecaster = LSTMForecaster(horizon=1, feature_dim=4)
    #build a mtnet forecast model
    mtnet_forecaster = MTNetForecaster(horizon=1, feature_dim=4,
                                       lb_long_steps=1, lb_long_stepsize=3)

Bidirectional LSTM networks and Gated Recurrent Units. GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. While deep learning has successfully driven fundamental progress in natural language processing and image processing, one pertinent question is whether the technique will be equally successful at beating other models in the classical statistics and machine learning areas to yield a new state-of-the-art methodology. An accurate forecast can provide support for the forewarning of passenger-flow outbursts and enables passengers to make an appropriate travel plan.
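The "sequence of integers" toy problem mentioned at the start of this passage can be sketched end to end as follows. Shapes follow the (samples, timesteps, features) convention; the series, scaling, and layer sizes are illustrative choices of mine:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

series = np.arange(10, 100, 10)            # 10, 20, ..., 90
X = np.array([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]                             # the next value after each window
X = X.reshape((X.shape[0], 3, 1)) / 100.0  # scale to ease training
y = y / 100.0

model = Sequential([LSTM(32, input_shape=(3, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=300, verbose=0)

x_new = np.array([70, 80, 90]).reshape((1, 3, 1)) / 100.0
print(model.predict(x_new, verbose=0) * 100)  # should approach 100
```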
The code, which may span several lines when dealing with models such as ARIMA, can be completed within a couple of lines using LSTM. If you are a student or a deep learning beginner, work on deep learning projects that leverage your deep learning skills in diverse ways and solve real-world use cases that interest you the most. This example shows how to forecast time series data using a long short-term memory (LSTM) network. In business, time series are often related, e.g. when considering product sales in regions. The first step is to convert your time series problem into a supervised learning problem, i.e. the inputs are specified so that the observation at the previous time step is used as an input to predict the output at the next time step. For an example showing how to classify sequence data using an LSTM network, see Sequence Classification Using Deep Learning. Collected data from CoinGecko.com for 1,742 cryptocurrencies.
As we explain in detail below, the convolutional architecture is well-suited to model the geospatial structure of the temperature grid, while the RNN can capture temporal correlations in sequences of variable length. For this task, we will use a convolutional LSTM neural network to forecast next-day sea temperatures for a given sequence of temperature grids. Good and effective prediction systems for the stock market help traders, investors, and analysts. Background: the 2019 novel coronavirus (SARS-CoV-2), formerly known as 2019-nCoV, is an enveloped, positive-sense, single-stranded RNA coronavirus. Short-term wind forecasting using recurrent neural networks (LSTM) and Keras. In this paper, we apply a GARCH model and an LSTM model to predict stock index volatility. This article focuses on using a deep LSTM neural network architecture to provide multidimensional time series forecasting using Keras and TensorFlow, specifically on stock market datasets, to provide momentum indicators of the stock price. The tutorial can be found at CNTK 106: Part A - Time series prediction with LSTM (Basics), and uses the sine wave function to predict time series data. TensorFlow Core. Comparison between a classical statistical model (ARIMA) and deep learning techniques (RNN, LSTM) for time series forecasting. This tutorial demonstrates a way to forecast a group of short time series with a type of recurrent neural network called long short-term memory (LSTM), using Microsoft's open-source Computational Network Toolkit (CNTK). We recently showed how Long Short-Term Memory (LSTM) models developed with the Keras library in R could be used to take advantage of autocorrelation to predict the next 10 years of monthly sunspots. As many articles say, the Forex time series is close to a random-walk series (it is completely non-stationary).
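A sketch of a convolutional LSTM for grid-valued series like the sea-temperature task described above, using Keras's ConvLSTM2D layer. All sizes (sequence length, grid size, filter counts) are illustrative assumptions, not values from the original post:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, Conv2D

model = Sequential([
    # input: a sequence of temperature grids, (time, rows, cols, channels)
    ConvLSTM2D(16, kernel_size=(3, 3), padding="same",
               return_sequences=False, input_shape=(10, 32, 32, 1)),
    Conv2D(1, kernel_size=(3, 3), padding="same"),  # next-day grid forecast
])
model.compile(optimizer="adam", loss="mse")
```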
Failing to forecast the weather can get us wet in the rain; failing to predict stock prices can cause a loss of money; and an incorrect prediction of a patient's medical condition can lead to health impairments or to death. I also had a talk, "Time series shootout: ARIMA vs. LSTM". One such application is the prediction of the future value of an item based on its past values. An RNN composed of LSTM units is often called an LSTM network. Example hyperparameters from one such tutorial: size_layer = 128, timestamp = 5, epoch = 500. We'll build three different models with Python and inspect their results. However, due to the existence of high noise in financial data, it is inevitable that deep neural networks trained on the original data fail to accurately predict the stock price. Predicting the future: forecasting time series data has been around for several decades with techniques like ARIMA. Dataset attribution: "PM2.5". Popular choices are the Long Short Term Memory (LSTM) unit or the Gated Recurrent Unit (GRU). Use the final output of the repeated iterates $$s_{T+1}$$ as the forecast or as input to the next non-recurrent layer. RNNs accommodate variable-length inputs, can take into account long-term dependences, and work with input like text. Data collected from sensors is subject to uncertainty. From the Social LSTM work (Alahi et al. 2016): the black line is the ground-truth trajectory, the gray line is the past, and the heatmap is the predicted distribution; the Social LSTM learned to turn around a group (slide: Alexandre Alahi). I study the physics of clouds, which is one of the most complex processes to accurately simulate in a global weather model. LSTM uses are currently rich in the world of text prediction, AI chat apps, self-driving cars… and many other areas. The purpose of this article is to explain the Artificial Neural Network (ANN) and the Long Short-Term Memory Recurrent Neural Network (LSTM RNN) and to enable you to use them in real life, building the simplest ANN and LSTM recurrent neural networks for time series data. We apply Deep LSTM to forecast peak-hour traffic and manage to identify unique characteristics of the traffic data. LSTMs excel in learning, processing, and classifying sequential data. Step 3: prepare the TensorFlow program (compile). See also: TensorFlow - Time series forecasting; Understanding LSTM Networks. This is covered in two parts: first, you will forecast a univariate time series, then you will forecast a multivariate time series. Define the LSTM network architecture. The predictors are the training sequences without the final time step. This paper presents a novel energy load forecasting methodology based on deep neural networks, specifically Long Short Term Memory (LSTM) algorithms.
I implemented this because I needed it for a university experiment, so I am writing it down here. An explanation of the Convolutional LSTM: the name gives it away completely, but a Convolutional LSTM is an LSTM whose connections are changed from fully connected to convolutional. This is convenient, for example, when feeding images to an RNN, because positional information is not lost. The detailed Jupyter Notebook is available. In part A, we predict short time series using a stateless LSTM. Use more data if you can. In a stateless model, Keras allocates an array for the states of a fixed size. An LSTM is well-suited to classify, process and predict time series given time lags of unknown size and duration between important events. Slawek has ranked highly in international forecasting competitions. This shows that there is a "displacement" between the predicted values and the real ones. There are so many factors involved in the prediction - physical factors vs. … Deep learning system to predict stock prices of the next day (one-step time series forecast) and also for a specific period of time (multi-step time series forecast). Note that the motion map is calculated by mapping the output 2D coordinates from the LSTM to heatmaps and concatenating them on depth. From RNNs to LSTMs: we start by reviewing the standard Recurrent Neural Network (RNN). After completing this tutorial, you will know how to develop and evaluate univariate and multivariate encoder-decoder LSTMs for multi-step time series forecasting. Multidimensional LSTM Networks to Predict Bitcoin Price. But after training on my data set (~1500 training examples), my forecasting seems completely useless due to the lag days. For profit maximization, model-based stock price prediction can give valuable guidance to investors. An in-depth discussion of all of the features of an LSTM cell is beyond the scope of this article (for more detail see the excellent reviews here and here). Brief introduction: load the necessary libraries and the dataset, prepare the data, and build the model. In mid 2017, R launched the keras package, a comprehensive library which runs on top of TensorFlow, with both CPU and GPU capabilities. The Long Short-Term Memory recurrent neural network has the promise of learning long sequences of observations.
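Since several of the snippets above reference the Keras package in R, here is a minimal univariate LSTM forecasting sketch in that spirit; the window length, layer size, training settings, and the use of the built-in AirPassengers series are illustrative assumptions rather than choices taken from any of the quoted posts:

```r
library(keras)  # R interface to Keras; assumes a working TensorFlow backend

series <- as.numeric(AirPassengers)               # toy univariate series
series <- (series - mean(series)) / sd(series)    # simple standardization

lag <- 12  # assumed window length: use 12 past values to predict the next one
X <- t(sapply(seq_len(length(series) - lag),
              function(i) series[i:(i + lag - 1)]))
y <- series[(lag + 1):length(series)]
X <- array(X, dim = c(nrow(X), lag, 1))  # (samples, time steps, features)

model <- keras_model_sequential() %>%
  layer_lstm(units = 32, input_shape = c(lag, 1)) %>%
  layer_dense(units = 1)
model %>% compile(loss = "mse", optimizer = "adam")
model %>% fit(X, y, epochs = 20, batch_size = 16, verbose = 0)

preds <- model %>% predict(X)  # in-sample one-step-ahead predictions
```

Iterating the last window forward, feeding each prediction back in as the newest time step, turns this one-step model into a multi-step forecaster.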
2020-08-09 23:36:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3999953269958496, "perplexity": 1400.898695385635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738595.30/warc/CC-MAIN-20200809222112-20200810012112-00132.warc.gz"}
http://www.electrical-installation.org/enwiki/Harmonics_standards
# Harmonics standards

Harmonic emissions are subject to various standards and regulations:

• Compatibility standards for distribution networks
• Emissions standards applying to the equipment causing harmonics
• Recommendations issued by Utilities and applicable to installations

In view of rapidly attenuating the effects of harmonics, a triple system of standards and regulations is currently in force based on the documents listed below.

## Standards governing compatibility between distribution networks and products

These standards determine the necessary compatibility between distribution networks and products:

• The harmonics caused by a device must not disturb the distribution network beyond certain limits
• Each device must be capable of operating normally in the presence of disturbances up to specific levels
• Standard IEC 61000-2-2 is applicable for public low-voltage power supply systems
• Standard IEC 61000-2-4 is applicable for LV and MV industrial installations

## Standards governing the quality of distribution networks

• Standard EN 50160 stipulates the characteristics of electricity supplied by public distribution networks
• Standard IEEE 519 presents a joint approach between Utilities and customers to limit the impact of non-linear loads. What is more, Utilities encourage preventive action in view of reducing the deterioration of power quality, temperature rise and the reduction of power factor. They will be increasingly inclined to charge customers for major sources of harmonics

## Standards governing equipment

• Standard IEC 61000-3-2 for low-voltage equipment with rated current under 16 A
• Standard IEC 61000-3-12 for low-voltage equipment with rated current higher than 16 A and lower than 75 A

## Maximum permissible harmonic levels

International studies have collected data resulting in an estimation of typical harmonic contents often encountered in electrical distribution networks. Figure M23 presents the levels that, in the opinion of many Utilities, should not be exceeded.

| Harmonic order h | LV | MV | HV |
|---|---|---|---|
| Odd harmonics, non-multiples of 3: | | | |
| 5 | 6 | 5 | 2 |
| 7 | 5 | 4 | 2 |
| 11 | 3.5 | 3 | 1.5 |
| 13 | 3 | 2.5 | 1.5 |
| 17 ≤ h ≤ 49 | $2.27\frac{17}{h}-0.27$ | $1.9\frac{17}{h}-0.2$ | $1.2\frac{17}{h}$ |
| Odd harmonics, multiples of 3: | | | |
| 3 | 5 | 4 | 2 |
| 9 | 1.5 | 1.2 | 1 |
| 15 | 0.4 | 0.3 | 0.3 |
| 21 | 0.3 | 0.2 | 0.2 |
| 21 < h ≤ 45 | 0.2 | 0.2 | 0.2 |
| Even harmonics: | | | |
| 2 | 2 | 1.8 | 1.4 |
| 4 | 1 | 1 | 0.8 |
| 6 | 0.5 | 0.5 | 0.4 |
| 8 | 0.5 | 0.5 | 0.4 |
| 10 ≤ h ≤ 50 | $0.25\frac{10}{h}+0.25$ | $0.25\frac{10}{h}+0.22$ | $0.19\frac{10}{h}+0.16$ |
| THD | 8 | 6.5 | 3 |

Fig. M23: Maximum admissible harmonic voltages and distortion (%)
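As a worked illustration of the interpolation formulas in Figure M23, the sketch below evaluates the LV limit for odd harmonic orders that are not multiples of 3; the helper function name is ours, and the sketch covers only that one row group of the table:

```r
# Hypothetical helper: LV compatibility level (%) for odd harmonic orders
# that are not multiples of 3, following the Fig. M23 rows, including
# 2.27*(17/h) - 0.27 for orders 17 <= h <= 49
lv_limit_odd_non3 <- function(h) {
  stopifnot(h %% 2 == 1, h %% 3 != 0)
  if (h == 5)  return(6)
  if (h == 7)  return(5)
  if (h == 11) return(3.5)
  if (h == 13) return(3)
  if (h >= 17 && h <= 49) return(2.27 * (17 / h) - 0.27)
  NA_real_  # outside the tabulated range
}
lv_limit_odd_non3(25)  # 2.27 * 17/25 - 0.27, i.e. about 1.27 %
```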
2018-09-23 17:36:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43147146701812744, "perplexity": 2499.8762952941274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159570.46/warc/CC-MAIN-20180923173457-20180923193857-00361.warc.gz"}
http://math.stackexchange.com/questions/63136/integral-of-nx/63144
# Integral of $n^x$ A simple high school math problem (it's been too long since I went there). I need to find the average of the yearly output when the degradation of the yearly output is given by $100 \times 0.98^x$. I need to be able to find the average between $x_1$ and $x_2$ where $x_1 \geq 0$ and $x_1 \lt x_2 \lt 40$. I need it for a business case that I have to verify. The degradation is the expected degradation of solar panels, and the calculation will give an estimate of the scrap value of a solar panel power plant at year $x$. - Hint: For $a>0$ and $a\not=1$, $a^x=e^{x\cdot \ln(a)}$. –  user5137 Sep 9 '11 at 16:39 @Jack thanks for the hint. I was however hoping for a solution. No it's not homework :) –  Rune FS Sep 9 '11 at 16:48 Let $a = 0.98$. Then, we have $$\int a^x dx = \int e^{x \ln a} dx = \frac{e^{x \ln a}}{\ln a} + C = \frac{a^x}{\ln a} + C.$$ To evaluate the integral in between, we made the substitution $y = x \ln a$, and used $$\int e^y dy = e^y + C.$$ The $C$ is an arbitrary constant. For the purposes of computing the definite integral below, we can ignore the $C$. So, the average output is given by: $$\frac{1}{x_2 - x_1} \int_{x_1}^{x_2} 100 a^x dx = \left. \frac{1}{x_2-x_1} \cdot \frac{100a^x}{\ln a} \right|_{x_1}^{x_2} = \frac{100}{\ln a} \frac{a^{x_2} - a^{x_1}}{x_2 - x_1},$$ writing it in a slightly nicer form. To make it easy to calculate, we can massage this answer to: $$\frac{100 \ \log_{10} e}{\log_{10} (0.98)} \times \frac{0.98^{x_2} - 0.98^{x_1}}{x_2 - x_1}$$ Here, $e \approx 2.718\ldots$ is Euler's number. Note: Be careful with the signs a bit. Since $a < 1$, both $a^{x_2} - a^{x_1}$ and $\log_{10} a$ will turn out to be negative. Added. The above formula is correct but not that great for numerical computations. I recommend using $$\frac{100 \ \log_{10} e}{\log_{10} (\frac{1}{0.98})} \times \frac{0.98^{x_1} \cdot (1 -0.98^{x_2 - x_1})}{x_2 - x_1}$$ instead. You will need to know some standard identities of exponential functions to derive this. Also, note that I have "corrected" the sign problem as well: all the terms will now be positive. Discrete case. Though not asked by the OP, I will mention how to proceed when you have a discrete sequence of outputs, rather than a continuous function $f(x)$ (like, $a^x$). Specifically, imagine that the output in the starting year is $100$, and every subsequent year, this drops to $98 \%$ of the previous year's output. In this case, the outputs for the first $40$ years form a geometric sequence $100, 100 a, 100 a^2, \ldots, 100 a^{40-1}$. Suppose you want to calculate the average of the outputs of all years between the $n_1$th and $n_2$th year (both endpoints included). This is given by: $$\frac{100 a^{n_1 - 1} + 100 a^{n_1 + 1 - 1} + \ldots + 100 a^{n_2 - 1}}{n_2 - n_1 + 1}.$$ In this case, the numerator can be summed by the geometric series formula. I will only mention the final answer without going into the details: $$\frac{100}{1-a} \times \frac{a^{n_1-1} \cdot (1 - a^{n_2 - n_1 +1})}{n_2 - n_1 +1}.$$ Notice the similarities between the two answers. Plugging in $a = 0.98$, we get: $$5000 \times \frac{0.98^{n_1-1} \cdot (1 - 0.98^{n_2 - n_1 +1})}{n_2 - n_1 +1}.$$ - Well, since $\frac{d}{dx}\left(e^x\right)=e^x$, we have $\frac{d}{dx}\left(e^{ax}\right)=ae^{ax}$ by the chain rule. Thus, $\int e^{ax}\,dx=\frac{e^{ax}}{a}+C$ (for $a\not=0$, of course).
Thus, if I've done my number crunching correctly, the average value over $[0,40]$ of $f(x)=100\cdot 0.98^x$ is: $\frac{1}{40}\int_0^{40} 100 \cdot 0.98^x\,dx=\frac{100}{40}\int_0^{40} e^{x\cdot \ln(0.98)}\,dx=\frac{5}{2}\cdot \frac{1}{\ln(0.98)}e^{x\cdot \ln(0.98)}\big\vert_0^{40}$ $=\frac{5}{2\cdot \ln(0.98)}\left(0.98^{40}-1\right)$.
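As a quick numerical sanity check of this closed form (a sketch using R's built-in integrate; the function name f is ours):

```r
f <- function(x) 100 * 0.98^x
avg_closed  <- 5 / (2 * log(0.98)) * (0.98^40 - 1)        # closed form above
avg_numeric <- integrate(f, lower = 0, upper = 40)$value / 40
c(avg_closed, avg_numeric)  # both approximately 68.6
```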
2014-03-07 12:22:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9508641362190247, "perplexity": 233.73565379848296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642307/warc/CC-MAIN-20140305060722-00092-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.authorea.com/users/6459/articles/6804/_show_article
# Validation of methods for Low-volume RNA-seq

# Abstract

Recently, a number of protocols extending RNA-sequencing to the single-cell regime have been published. However, we were concerned that the additional steps to deal with such minute quantities of input sample would introduce serious biases that would make analysis of the data using existing approaches invalid. In this study, we performed a critical evaluation of several of these low-volume RNA-seq protocols, and found that they performed slightly less well in metrics of interest to us than a more standard protocol, but with at least two orders of magnitude less sample required. We also explored a simple modification to one of these protocols that, for many samples, reduced the cost of library preparation to approximately \$20/sample.

# Introduction

Second-generation sequencing of RNA (RNA-seq) has proven to be a sensitive and increasingly inexpensive approach for a number of different experiments, including annotating genes in genomes, quantifying gene expression levels in a broad range of sample types, and determining differential expression between samples. As technology improves, transcriptome profiling has been able to be applied to smaller and smaller samples, allowing for more powerful assays to determine transcriptional output. For instance, our lab has used RNA-seq on single Drosophila embryos to measure zygotic gene activation (Lott 2011) and medium-resolution spatial patterning (Combs 2013). Further improvements will allow an even broader array of potential experiments on samples that were previously too small. For instance, over the past few years, a number of groups have published descriptions of protocols to perform RNA-seq on single cells (typically mammalian cells) (Tang 2009, Ramsköld 2012, Sasagawa 2013, Hashimshony 2012, Islam 2011). A number of studies, both from the original authors of the single-cell RNA-seq protocols and from others, have assessed various aspects of these protocols, both individually and competitively (Bhargava 2014, Wu 2014, Marinov 2013). One particularly powerful use of these approaches is to sequence individual cells in bulk tissues, revealing different states and cellular identities (Buganim 2012, Treutlein 2014). However, we felt that published descriptions of single-cell and other low-volume protocols did not adequately address whether a change in concentration of a given RNA between two samples would result in a proportional change in the FPKM (or any other measure of transcriptional activity) between those samples. While there are biases inherent to any protocol, we were concerned that direct amplification of the mRNA would select for PCR-compatible genes in difficult-to-predict, and potentially non-linear, ways. For many of the published applications of single-cell RNA-seq, this is not likely a critical flaw, since the clustering approaches used are moderately robust to quantitative changes. However, to measure spatial and temporal activation of genes across an embryo, it is important that the output is monotonic with respect to concentration, and ideally linear. While it is possible to estimate absolute numbers of cellular RNAs from an RNA-seq experiment, doing so requires spike-ins of known concentration and estimates of total cellular RNA content (Mortazavi 2008, Lin 2012).
However, many RNA-seq experiments do not do these controls, nor are such controls strictly necessary under reasonable, though often untested, assumptions of approximately constant RNA content. While ultimately absolute concentrations will be necessary to fully predict properties such as noise tolerance of the regulatory circuits (Gregor 2007, Gregor 2005), many current modeling efforts rely only on scaled concentration measurements, often derived from in situ-hybridization experiments (Garcia 2013, Ilsley 2013, He 2010). Given that, we felt it was not important that different protocols should necessarily agree on any particular expression value for a given gene, nor are we fully convinced that absolute expression of any particular gene can truly reliably be predicted in a particular experiment. In order to convince ourselves that data generated from limiting samples would be suitable for our purposes, we evaluated several protocols for performing RNA-seq on extremely small samples. We also investigated a simple modification to one of the protocols that reduced sample preparation cost per library by more than 2-fold. Finally, we evaluated the effect of read depth on quality of the data. This study provides a single, consistent comparison of these diverse approaches, and shows that in fact all data from the low-volume protocols we examined are usable in similar contexts to the earlier bulk approach.

# Results

## Experiment 1: Evaluation of Illumina TruSeq

In our hands, the Illumina TruSeq protocol has performed extremely reliably with samples on the scale of ~100 ng of total RNA, the manufacturer-recommended lower limit of the protocol. However, attempts to create libraries from much smaller samples yielded low-complexity libraries, corresponding to as much as 30-fold PCR duplication of fragments. Anecdotally, less than 5% of libraries made with at least 90 ng of total RNA yielded abnormally low concentrations, which we observed correlated with low complexity (data not shown). To determine the lower limit of input needed to reliably produce libraries, we attempted to make libraries from 40, 50, 60, 70, and 80 ng of Drosophila total RNA, each in triplicate.

Total TruSeq cDNA library yields made with a given amount of input total RNA. Yields measured by Nanodrop of cDNA libraries resuspended in 25$$\mu L$$ of EB. The italicized samples (the 57 ng and 115 ng yields) were unusually low, and when analyzed with a Bioanalyzer, showed an abnormal size distribution of cDNA fragments.

| Amount Input RNA | Replicate A | Replicate B | Replicate C |
|---|---|---|---|
| 40 ng | 57 ng | 425 ng | 672 ng |
| 50 ng | 435 ng | 768 ng | 755 ng |
| 60 ng | 115 ng | 663 ng | 668 ng |
| 70 ng | 300 ng | 593 ng | 653 ng |
| 80 ng | 468 ng | 550 ng | 840 ng |

\label{table:truseqtitration}

We considered the two libraries with lower than usual concentration to be failures. While a failure rate of approximately 1 in 3 might be acceptable for some purposes, we ultimately wanted to perform RNA sequencing on precious samples, where a failure in any one of a dozen or more libraries would necessitate regenerating all of the libraries. Furthermore, due to the low sample volumes involved (less than approximately 500 pg of poly-adenylated mRNA), common laboratory equipment is not able to determine the particular point in the protocol where the failures occurred. Thus, we consider 70 ng of total RNA to be the conservative lower limit of the protocol. While this is about 30% smaller than the manufacturer suggests, it is still several orders of magnitude larger than we needed it to be.
We therefore considered using other small-volume and “single-cell” RNA-seq kits, with which we had less experience and in whose data we had less faith.

## Experiment 2: Competitive Comparison of Low-volume RNA-seq protocols

We first sought to determine whether the available low-volume RNA-seq protocols faithfully recapitulate linear changes in abundance of known inputs. We generated synthetic spike-ins by combining D. melanogaster and D. virilis total RNA in known, predefined proportions of 0, 5, 10, and 20% D. virilis RNA. For each of the low-volume protocols, we used 1 ng of total RNA as input, whereas for the TruSeq protocol we used 100 ng. Although pre-defined mixes of spike-in controls have been developed and are commercially available (Jiang 2011), we felt it was important to ensure that a given protocol would function reproducibly with natural RNA, which almost certainly has a different distribution of 6-mers, which could conceivably affect random cDNA priming and other amplification effects. Furthermore, our spike-in sample more densely covers the approximately $$10^5$$-fold range typical of RNA abundances. It should be noted, however, that our sample is not directly comparable to any other standards, nor is the material of known strandedness. We assumed that the majority of each sample is from the standard annotated transcripts, but did not verify this prior to library construction and sequencing.

The different protocols varied in library yield from 6 fmole (approximately 3.6 billion molecules) to 2,400 femtomoles, with the TruSeq a clear outlier at the high end of the range, and the other protocols all below 200 fmole (Table \ref{tab:protocols}). All of these quantities are sufficient to generate hundreds of millions of reads - far more than is typically required for an RNA-seq experiment. We pooled the samples, attempting equimolar fractions in the final pool; however, due to a pooling error, we generated significantly more reads than intended for the TruSeq protocol, and correspondingly fewer in the other protocols. Unless otherwise noted, we therefore sub-sampled the mapped reads to the lowest number of mapped reads in any sample in order to provide a fair comparison between protocols.

We were interested in the fold-change of each D. virilis gene across the four samples, rather than the absolute abundance of any particular gene. Therefore, after mapping and gene quantification, we normalized the abundance $$A_{ij}$$ of every gene $$i$$ across the $$j=4$$ samples by a weighted average of the quantity $$Q_j$$ of D. virilis in sample $$j$$, as shown in equation \ref{eqn:norm}. Thus, within a given gene, a linear fit of $$\hat{A}_{ij}$$ vs $$Q_j$$ should have a slope of one and an intercept of zero. $\label{eqn:norm} \hat{A}_{ij} = A_{ij} \div \frac{\sum_j Q_j A_{ij}}{\sum_j (Q_j)^2}$ We filtered the D. virilis genes for those with at least 20 mapped fragments in the sample with 20% D. virilis, then calculated an independent linear regression for each of those genes. As expected, for every protocol, the mean slope was 1 ($$t$$-test, $$p<5\times10^{-7}$$ for all protocols). Similarly, the average intercept for all protocols was 0 ($$t$$-test, $$p<5\times10^{-7}$$ for all protocols). Also unsurprisingly, the TruSeq protocol had a noticeably higher mean correlation coefficient ($$0.98 \pm 0.02$$) than any of the other protocols ($$0.95 \pm 0.06$$, $$0.92\pm0.09$$, and $$0.95 \pm 0.06$$ for Clontech, TotalScript, and SMART-seq2, respectively).
The mean correlation coefficient was statistically and practically indistinguishable between the Clontech samples and the SMART-seq2 samples ($$t$$-test $$p = .11$$, Figure \ref{fig:fithists}). Indeed, the only major differentiator we could find between the low-volume protocols we measured was cost. For only a handful of libraries, the kit-based, all-inclusive model of the Clontech and TotalScript kits could be a significant benefit, allowing the purchase of only as much of the reagents as required. By contrast, the SMART-seq2 protocol requires the a la carte purchase of a number of reagents, some of which are not available, or are more expensive per unit, in smaller quantities. Furthermore, there could potentially be a “hot dogs and buns” problem, where reagents are sold in non-integer multiples of each other, leading to leftovers. Many of these reagents are not single-purpose, however, so leftovers could in principle be repurposed in other experiments.
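As a small illustration of the normalization in equation \ref{eqn:norm} and the per-gene fits described above, here is a sketch in R; the toy abundance matrix is invented and is not data from this study:

```r
# Toy abundances: rows = D. virilis genes, columns = the four spike-in samples
A <- rbind(g1 = c(1, 22, 41, 79),
           g2 = c(0, 11, 20, 44))
Q <- c(0, 5, 10, 20)  # percent D. virilis RNA in each sample

# Eqn (norm): A_hat_ij = A_ij / ( sum_j Q_j A_ij / sum_j Q_j^2 )
A_hat <- t(apply(A, 1, function(a) a / (sum(Q * a) / sum(Q^2))))

# Independent linear fit per gene; a slope near 1 and intercept near 0
# indicate that measured abundance tracks the input linearly
t(apply(A_hat, 1, function(a) coef(lm(a ~ Q))))
```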
2016-10-23 12:05:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.573858380317688, "perplexity": 2651.7185226697898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719273.37/warc/CC-MAIN-20161020183839-00341-ip-10-171-6-4.ec2.internal.warc.gz"}
https://bookdown.org/michela_cameletti/psbf2122_rlab_notes/lab-8---03122021.html
# Chapter 10 Lab 8 - 03/12/2021

In this lecture:

- we give some hints on RMarkdown, which you must know in order to take the PSBF exam
- we study multiple linear regression in R

## 10.1 RMarkdown

RMarkdown is the best framework for data science (official website: https://rmarkdown.rstudio.com/). It makes it possible to obtain reproducible reports that include text, R code and the corresponding output. See this video for an introduction to RMarkdown: https://vimeo.com/178485416. To use RMarkdown you have to install the rmarkdown and knitr packages.

For the exercise part of the PSBF exam you will receive an RMarkdown document, i.e. a file with the .Rmd extension (see for example the file PSBF_Exam_FacSimile.Rmd available on the PSBF Moodle page). You can open the file (by double clicking) with RStudio. In the top part of the file, as shown in Figure 10.1, you just have to write your Surname, Name and Student ID. The rest must not be modified. You can compile the RMarkdown file to obtain an html file by using the Knit button; see the yellow circle in Figure 10.1. If the compilation completes correctly, a web page will open with your html document. Moreover, the file PSBF_Exam_FacSimile.html will be saved in the same folder. If you want to see the web page directly in the bottom right panel of RStudio, click on the wheel button (see the purple circle in Figure 10.1) and select Preview in Viewer Pane (then compile your document again with Knit; you will see your document in the Viewer pane).

In Figure 10.2 you can view the beginning of Exercise 1 (see the purple rectangle) and the first sub-exercise (1.). This must not be modified. You have instead to write your code and your comments (preceded by #) in the yellow area delimited by the symbols ```{r} and ``` (this area is known as a code chunk). To check what your code produces you can:

• run each line of code separately by using Ctrl/Cmd Enter. This is the approach you have used so far; you will find your results in the console and the new objects in your environment (see the top right panel).
• use the arrow located in the right part of the code chunk (see the orange arrow in Figure 10.2). This will run all the code lines included in the chunk. You will find your results in the console and the new objects in your environment (see the top right panel).

In any case you have to compile your document (using the Knit button) after each sub-exercise. This will make it possible for you to check the final html file step by step. At the end of your exam you will have to deliver both the .Rmd and the .html file (you will upload them to the PSBF Moodle page). Remember that if some code gives you errors or doesn't work, you can always keep it in your .Rmd file by commenting it out.

## 10.2 Preliminaries on multiple linear regression

We introduce the multiple linear regression model, given by the following equation: $Y = \beta_0+\beta_1 X_1 +\beta_2X_2+\ldots+\beta_pX_p+ \epsilon$ where $$\beta_0$$ is the intercept and $$\beta_j$$ is the coefficient of regressor $$X_j$$ ($$j=1,\ldots,p$$). The term $$\epsilon$$ represents the error, with mean equal to zero and variance $$\sigma^2_\epsilon$$.

We will use the same data as in Lab 7 (file datareg_logreturns.csv), regarding the daily log-returns (i.e. relative changes) of:

• the NASDAQ index
• ibm, lenovo, apple, amazon, yahoo
• gold
• the SP500 index
• the CBOE treasury note Interest Rate (10 Year)

data_logreturns = read.csv("files/datareg_logreturns.csv", sep=";")
str(data_logreturns)
## 'data.frame': 1258 obs. of 10 variables:
## $ Date : chr "27/10/2011" "28/10/2011" "31/10/2011" "01/11/2011" ...
## $ ibm : num 0.02126 0.00841 -0.01516 -0.01792 0.01407 ...
## $ lenovo: num -0.00698 -0.02268 -0.04022 -0.00374 0.07641 ...
## $ apple : num 0.010158 0.000642 -0.00042 -0.020642 0.002267 ...
## $ amazon: num 0.04137 0.04972 -0.01769 -0.00663 0.01646 ...
## $ yahoo : num 0.02004 -0.00422 -0.05716 -0.04646 0.01132 ...
## $ nasdaq: num 0.032645 -0.000541 -0.019456 -0.029276 0.012587 ...
## $ gold : num -0.023184 0.006459 0.030717 0.043197 -0.000674 ...
## $ SP : num 0.033717 0.000389 -0.025049 -0.02834 0.015976 ...
## $ rate : num 0.0836 -0.0379 -0.0585 -0.0834 0.0025 ...

## 10.3 Multiple linear regression model

We start by implementing (again) the simple linear regression model already described in Section 9.1.2, which considers nasdaq as the dependent (response) variable and SP as the independent variable.

mod1 = lm(nasdaq ~ SP, data=data_logreturns)
summary(mod1)
##
## Call:
## lm(formula = nasdaq ~ SP, data = data_logreturns)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.0128494 -0.0018423 0.0002159 0.0020178 0.0115080
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 7.443e-05 8.777e-05 0.848 0.397
## SP 1.085e+00 1.019e-02 106.471 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.003109 on 1256 degrees of freedom
## Multiple R-squared: 0.9003, Adjusted R-squared: 0.9002
## F-statistic: 1.134e+04 on 1 and 1256 DF, p-value: < 2.2e-16

anova(mod1)
## Analysis of Variance Table
##
## Response: nasdaq
## Df Sum Sq Mean Sq F value Pr(>F)
## SP 1 0.109579 0.10958 11336 < 2.2e-16 ***
## Residuals 1256 0.012141 0.00001
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

We now implement the full model by including all the available $$p=8$$ regressors. In the R function lm the regressors are specified by name and separated by +:

mod2 = lm(nasdaq ~ ibm + lenovo + apple + amazon + yahoo + gold + SP + rate, data= data_logreturns)
summary(mod2)
##
## Call:
## lm(formula = nasdaq ~ ibm + lenovo + apple + amazon + yahoo +
##     gold + SP + rate, data = data_logreturns)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.011522 -0.001574 0.000089 0.001626 0.010487
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.097e-06 7.208e-05 0.085 0.9326
## ibm -9.625e-03 7.666e-03 -1.256 0.2095
## lenovo 5.123e-03 3.361e-03 1.524 0.1277
## apple 9.209e-02 5.008e-03 18.389 <2e-16 ***
## amazon 5.706e-02 4.285e-03 13.316 <2e-16 ***
## yahoo 3.901e-02 4.681e-03 8.333 <2e-16 ***
## gold -5.936e-03 2.931e-03 -2.025 0.0431 *
## SP 8.884e-01 1.462e-02 60.762 <2e-16 ***
## rate 2.604e-03 3.471e-03 0.750 0.4532
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.002547 on 1249 degrees of freedom
## Multiple R-squared: 0.9334, Adjusted R-squared: 0.933
## F-statistic: 2189 on 8 and 1249 DF, p-value: < 2.2e-16

In the summary table we have the parameter estimates ($$\hat \beta_0,\hat \beta_1,\ldots,\hat\beta_p$$ and $$\sigma_\epsilon=\sqrt{\frac{SSE}{n-1-p}}$$) with the corresponding standard errors (i.e. estimate precision). By means of the t value and the corresponding p-value we can test the hypotheses $$H_0:\beta_j=0$$ vs $$H_1:\beta_j\neq 0$$ separately for each covariate coefficient (for $$j=1,\ldots,p$$). In the considered case we do not reject the $$H_0$$ hypothesis for the following regressors: ibm, lenovo, rate (i.e.
the corresponding parameters can be considered null and there is no evidence of a linear relationship). Note that for gold there is weak evidence from the data against $$H_0$$.

What is the interpretation of a generic $$\hat\beta_j$$? For apple, for example, the interpretation is the following: when the apple log-return changes by one unit, we expect the nasdaq log-return to increase by 0.0920924 (when all other covariates are held fixed).

### 10.3.1 Analysis of variance (sequential and global F tests)

We now apply the function anova to the multiple regression model. This provides us with sequential tests on the single regressors (which are entered one at a time in a hierarchical fashion) and it is useful to assess the effect of adding a new predictor in a given order.

anova(mod2)
## Analysis of Variance Table
##
## Response: nasdaq
## Df Sum Sq Mean Sq F value Pr(>F)
## ibm 1 0.040144 0.040144 6186.8396 <2e-16 ***
## lenovo 1 0.006941 0.006941 1069.7375 <2e-16 ***
## apple 1 0.020520 0.020520 3162.4311 <2e-16 ***
## amazon 1 0.013850 0.013850 2134.4469 <2e-16 ***
## yahoo 1 0.006197 0.006197 955.0328 <2e-16 ***
## gold 1 0.000006 0.000006 0.8585 0.3543
## SP 1 0.025956 0.025956 4000.1962 <2e-16 ***
## rate 1 0.000004 0.000004 0.5629 0.4532
## Residuals 1249 0.008104 0.000006
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

In the anova table, 0.0081042 is the value of SSE (note that it is lower than the corresponding value observed for mod1). As shown also in Figures 10.3 and 10.4, the SS values reported in the column Sum Sq should be read as the increase in $$SS_R$$ when a new variable is added to the predictors already in the model. It's important to point out that the order of the regressors (used in the lm formula) is relevant and has an effect on the output of the anova function.

The value 0.0401439 represents the additional SSR of the model $$Y=\beta_0+\beta_1 ibm+\epsilon$$ with respect to the model with no regressors $$Y=\beta_0+\epsilon$$ (for the latter SSR = 0). Note that the two models differ in just one predictor (that's why df = 1). Analyzing the corresponding F-statistic value (with 1 and 1249 degrees of freedom) and p-value, we conclude that the coefficient of ibm is significantly different from zero (and thus ibm is a useful predictor). Similarly, the value 0.0069411 represents the additional SSR of the model $$Y=\beta_0+\beta_1 ibm+\beta_2 lenovo + \epsilon$$ with respect to the model containing only ibm: $$Y=\beta_0+\beta_1 ibm+\epsilon$$. Also in this case the p-value leads us to conclude that lenovo is a useful covariate. This reasoning can be repeated step by step for all the regressors. At the end, by summing all the sequential additional SSRs, we obtain the total SSR for the model with $$p=8$$ regressors:

mod2_an = anova(mod2)
SSR = sum(mod2_an$`Sum Sq`[1:8])
SSR
## [1] 0.1136159

The value of SSE, instead, is contained in the last line of the anova table under the name Residuals:

SSE = mod2_an$`Sum Sq`[9]
SSE
## [1] 0.008104249

The values of SSR and SSE can then be used for the global F-test, also reported in the last line of the summary output. The considered hypotheses are: $$H_0: \beta_1=\beta_2=\ldots=\beta_p=0$$ (this corresponds to the model $$Y=\beta_0+\epsilon$$) vs $$H_1: \text{at least one } \beta\neq 0$$. As reported in Figure 10.5, the corresponding F-statistic is given by $\text{F-value}=\frac{SS_R/p}{SS_E/(n-1-p)}=\frac{MS_R}{MS_E}$ which, under $$H_0$$, follows an F distribution with $$p$$ and $$n-1-p$$ degrees of freedom.
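The global F-statistic can be evaluated by hand from the SSR and SSE objects computed above (a quick sketch; p and n are defined here for convenience and are re-defined in the next section):

```r
p = 8
n = nrow(data_logreturns)
F_value = (SSR / p) / (SSE / (n - 1 - p))
F_value  # ~2189, matching the F-statistic in summary(mod2)
pf(F_value, df1 = p, df2 = n - 1 - p, lower.tail = FALSE)  # global-test p-value
```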
If we reject $$H_0$$ it means that at least one of the covariates has predictive power in our linear model, i.e. that using a regression is predictively better than just using the average $$\bar y$$ (this is the best prediction in the case of the trivial model with no regressors, $$Y=\beta_0+\epsilon$$). The p-value of this test is reported in the last line of the summary output (## F-statistic: 2189 on 8 and 1249 DF, p-value: < 2.2e-16). As expected, the p-value is very small and there is evidence for rejecting $$H_0$$.

### 10.3.2 Adjusted $$R^2$$

By using SSR, SSE, SST and the corresponding degrees of freedom (see Figure 10.5), it is possible to compute the adjusted $$R^2$$ goodness of fit index: $adjR^2 =1- \frac{\frac{SS_E}{n-1-p}}{\frac{SS_T}{n-1}}=1-\frac{MS_E}{MS_T}$ Since the formula of $$adjR^2$$ involves $$p$$, there is a penalization for the number of regressors. Thus $$adjR^2$$, differently from the standard $$R^2$$ introduced in Section 9.1.3, can increase or decrease. In particular, it increases only when the added variables decrease the SSE enough to compensate for the increase in $$p$$. It can be computed manually as follows:

p = 8
n = nrow(data_logreturns)
SST = sum((data_logreturns$nasdaq - mean(data_logreturns$nasdaq))^2)
1 - (SSE/(n-1-p))/(SST/(n-1))
## [1] 0.9329925

but it is also reported in the summary output: Adjusted R-squared: 0.933. This value denotes a very high goodness of fit.

### 10.3.3 Remove non-significant regressors one by one

From the t tests reported in the summary output of mod2, we observed that some of the regressors are not significant. For this reason we remove them one by one, starting from the one with the highest p-value (rate). The following is the new model:

mod3 = lm(nasdaq ~ ibm + lenovo + apple + amazon + yahoo + gold + SP, data= data_logreturns)
summary(mod3)
##
## Call:
## lm(formula = nasdaq ~ ibm + lenovo + apple + amazon + yahoo +
##     gold + SP, data = data_logreturns)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.0114711 -0.0015527 0.0000856 0.0016361 0.0104669
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.496e-06 7.204e-05 0.062 0.9502
## ibm -9.537e-03 7.664e-03 -1.244 0.2136
## lenovo 5.244e-03 3.357e-03 1.562 0.1185
## apple 9.211e-02 5.007e-03 18.396 <2e-16 ***
## amazon 5.692e-02 4.280e-03 13.298 <2e-16 ***
## yahoo 3.906e-02 4.680e-03 8.346 <2e-16 ***
## gold -6.045e-03 2.927e-03 -2.066 0.0391 *
## SP 8.913e-01 1.409e-02 63.258 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.002547 on 1250 degrees of freedom
## Multiple R-squared: 0.9334, Adjusted R-squared: 0.933
## F-statistic: 2502 on 7 and 1250 DF, p-value: < 2.2e-16

We note that $$R^2_{adj}$$ doesn't change (thus including rate doesn't decrease the SSE significantly). We go on by removing ibm, which still has a high p-value:

mod4 = lm(nasdaq ~ lenovo + apple + amazon + yahoo + gold + SP, data= data_logreturns)
summary(mod4)
##
## Call:
## lm(formula = nasdaq ~ lenovo + apple + amazon + yahoo + gold +
##     SP, data = data_logreturns)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.0111649 -0.0015513 0.0000687 0.0016327 0.0108501
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.350e-06 7.199e-05 0.116 0.9077
## lenovo 5.410e-03 3.355e-03 1.612 0.1071
## apple 9.214e-02 5.008e-03 18.398 <2e-16 ***
## amazon 5.711e-02 4.278e-03 13.349 <2e-16 ***
## yahoo 3.941e-02 4.672e-03 8.435 <2e-16 ***
## gold -5.886e-03 2.925e-03 -2.013 0.0444 *
## SP 8.823e-01 1.208e-02 73.027 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.002547 on 1251 degrees of freedom
## Multiple R-squared: 0.9333, Adjusted R-squared: 0.933
## F-statistic: 2918 on 6 and 1251 DF, p-value: < 2.2e-16

We note that $$R^2_{adj}$$ doesn't change. We go on by removing lenovo:

mod5 = lm(nasdaq ~ apple + amazon + yahoo + gold + SP, data= data_logreturns)
summary(mod5)
##
## Call:
## lm(formula = nasdaq ~ apple + amazon + yahoo + gold + SP, data = data_logreturns)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.0114026 -0.0015730 0.0000785 0.0016151 0.0107557
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.403e-06 7.202e-05 0.089 0.929
## apple 9.191e-02 5.009e-03 18.348 <2e-16 ***
## amazon 5.702e-02 4.281e-03 13.321 <2e-16 ***
## yahoo 3.987e-02 4.666e-03 8.545 <2e-16 ***
## gold -5.763e-03 2.925e-03 -1.970 0.049 *
## SP 8.871e-01 1.171e-02 75.727 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.002549 on 1252 degrees of freedom
## Multiple R-squared: 0.9332, Adjusted R-squared: 0.9329
## F-statistic: 3496 on 5 and 1252 DF, p-value: < 2.2e-16

The p-value of gold is very close to $$\alpha=0.05$$. This is weak evidence against the hypothesis $$H_0:\beta_{gold}=0$$. We try to remove gold as well and see what happens:

mod6 = lm(nasdaq ~ apple + amazon + yahoo + SP, data= data_logreturns)
summary(mod6)
##
## Call:
## lm(formula = nasdaq ~ apple + amazon + yahoo + SP, data = data_logreturns)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.0114132 -0.0015924 0.0000558 0.0016247 0.0106998
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 7.367e-06 7.211e-05 0.102 0.919
## apple 9.186e-02 5.015e-03 18.318 <2e-16 ***
## amazon 5.731e-02 4.283e-03 13.380 <2e-16 ***
## yahoo 3.964e-02 4.670e-03 8.489 <2e-16 ***
## SP 8.871e-01 1.173e-02 75.638 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.002552 on 1253 degrees of freedom
## Multiple R-squared: 0.933, Adjusted R-squared: 0.9327
## F-statistic: 4359 on 4 and 1253 DF, p-value: < 2.2e-16

We observe that the $$R^2_{adj}$$ of mod6 is very similar to that of mod5 (and it's still a very high goodness of fit!) but mod6 is more parsimonious because it has one parameter less (it's less complex). For this reason mod6 should be preferred.

### 10.3.4 AIC computation

To compare models it is also possible to use the Akaike Information Criterion (AIC), given, in the case of the multiple regression model, by $\text{AIC}=c + n \log({\hat\sigma^2_\epsilon})+2(1+p).$ where $$c$$ is a constant not important for model comparison. The criterion for AIC is: the lower, the better. It is possible to compute the AIC by using the extractAIC function:

extractAIC(mod5)
## [1] 6.00 -15019.69
extractAIC(mod6)
## [1] 5.0 -15017.8

The function returns a vector of length 2: the first element represents the total number of parameters ($$p$$ + 1, for the intercept), while the second element is the AIC value. In this case the AIC is lower for mod5; indeed the two AIC values are very similar.
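For lm fits, extractAIC uses the scale-free criterion n*log(RSS/n) + 2*edf (see ?extractAIC), which differs from AIC() by an additive constant, so the value above should be reproducible by hand (a quick sketch):

```r
n = nrow(data_logreturns)
rss6 = sum(resid(mod6)^2)           # residual sum of squares of mod6
n * log(rss6 / n) + 2 * length(coef(mod6))  # should match extractAIC(mod6)[2]
```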
For this reason, considering also what was said in the previous section, we still prefer mod6, because it has a very high goodness of fit but it's less complex.

## 10.4 An automatic procedure

As an alternative we can use the step() function, which implements a sequential procedure to find an optimal (actually sub-optimal) model:

mod_step <- step(mod2)
## Start: AIC=-15018.43
## nasdaq ~ ibm + lenovo + apple + amazon + yahoo + gold + SP +
##     rate
##
## Df Sum of Sq RSS AIC
## - rate 1 0.0000037 0.008108 -15020
## - ibm 1 0.0000102 0.008114 -15019
## <none> 0.008104 -15018
## - lenovo 1 0.0000151 0.008119 -15018
## - gold 1 0.0000266 0.008131 -15016
## - yahoo 1 0.0004506 0.008555 -14952
## - amazon 1 0.0011505 0.009255 -14853
## - apple 1 0.0021942 0.010298 -14719
## - SP 1 0.0239564 0.032061 -13290
##
## Step: AIC=-15019.86
## nasdaq ~ ibm + lenovo + apple + amazon + yahoo + gold + SP
##
## Df Sum of Sq RSS AIC
## - ibm 1 0.0000100 0.008118 -15020
## <none> 0.008108 -15020
## - lenovo 1 0.0000158 0.008124 -15019
## - gold 1 0.0000277 0.008136 -15018
## - yahoo 1 0.0004518 0.008560 -14954
## - amazon 1 0.0011470 0.009255 -14855
## - apple 1 0.0021951 0.010303 -14720
## - SP 1 0.0259556 0.034064 -13216
##
## Step: AIC=-15020.3
## nasdaq ~ lenovo + apple + amazon + yahoo + gold + SP
##
## Df Sum of Sq RSS AIC
## <none> 0.008118 -15020
## - lenovo 1 0.000017 0.008135 -15020
## - gold 1 0.000026 0.008144 -15018
## - yahoo 1 0.000462 0.008580 -14953
## - amazon 1 0.001156 0.009274 -14855
## - apple 1 0.002196 0.010314 -14721
## - SP 1 0.034606 0.042724 -12933

summary(mod_step)
##
## Call:
## lm(formula = nasdaq ~ lenovo + apple + amazon + yahoo + gold +
##     SP, data = data_logreturns)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.0111649 -0.0015513 0.0000687 0.0016327 0.0108501
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.350e-06 7.199e-05 0.116 0.9077
## lenovo 5.410e-03 3.355e-03 1.612 0.1071
## apple 9.214e-02 5.008e-03 18.398 <2e-16 ***
## amazon 5.711e-02 4.278e-03 13.349 <2e-16 ***
## yahoo 3.941e-02 4.672e-03 8.435 <2e-16 ***
## gold -5.886e-03 2.925e-03 -2.013 0.0444 *
## SP 8.823e-01 1.208e-02 73.027 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.002547 on 1251 degrees of freedom
## Multiple R-squared: 0.9333, Adjusted R-squared: 0.933
## F-statistic: 2918 on 6 and 1251 DF, p-value: < 2.2e-16

We can compare the AIC index between model 6 and the model identified by the step function:

extractAIC(mod_step)
## [1] 7.0 -15020.3
extractAIC(mod6)
## [1] 5.0 -15017.8

As expected, model 6 shows a higher value of the AIC; nevertheless we still decide to consider this model, having in mind the considerations we made above.

### 10.4.1 Plot of observed and predicted values

In the case of the multiple regression model it is not possible to plot the estimated regression model, as we did for the simple regression model in Section 9.1.2. This is because it's a multivariate hyperplane, which cannot be represented in a 2-D plot. However, we can plot together the observed ($$y$$) and predicted values ($$\hat y$$, also known as fitted values) of the response variable to get an idea of how good the model is in prediction:

plot(data_logreturns$nasdaq, mod6$fitted.values)

The cloud of points is thin and narrow, and this is a sign of a strong linear relationship (the correlation is equal to 0.9658989) and a good performance of the model.
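A small optional addition makes the plot easier to read: draw the identity line and print the correlation quoted above.

```r
plot(data_logreturns$nasdaq, mod6$fitted.values,
     xlab = "observed nasdaq log-returns", ylab = "fitted values")
abline(0, 1, col = "red")  # points on this line are predicted exactly
cor(data_logreturns$nasdaq, mod6$fitted.values)  # ~0.966
```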
## 10.5 Variance Inflation Factor

The variance inflation factor (VIF) tells how much the variance of $$\hat \beta_j$$ for a given regressor $$X_j$$ is increased by having other predictors in the model ($$j=1,\ldots,p$$). It is given by $VIF_j = \frac{1}{1-R^2_j}\geq 1$ where $$R^2_j$$ is the goodness of fit index for the model which has $$X_j$$ as dependent variable and the other remaining regressors as independent variables. Note that $$VIF_j$$ doesn't provide any information about the relationship between the response and $$X_j$$. Rather, it tells us only how correlated $$X_j$$ is with the other predictors. We want the VIF to be close to one (the latter is the value of the VIF when all the regressors are independent). In R the VIF can be computed by using the vif function contained in the faraway library:

library(faraway)
vif(mod6)
## apple amazon yahoo SP
## 1.335137 1.370225 1.381315 1.967353

All four values (one for each regressor in the model) are close to one, so we can conclude that there are no problems of collinearity. A standard threshold for identifying a problematic collinearity situation is the value of 5 (or 10).

## 10.6 Exercise Lab 8

### 10.6.1 Exercise 1

A linear regression model with three predictor variables was fit to a dataset with 40 observations. The correlation between the observed data $$y$$ and the predicted values $$\hat y$$ is 0.65. The total sum of squares (SST) is 100.

1. What is the value of the non-adjusted goodness of fit index ($$R^2$$)? You have to think of an alternative way to compute $$R^2$$… see the theory lectures. Comment on the value.
2. What is the value of the residual sum of squares (SSE)?
3. What is the value of the regression sum of squares (SSR)?
4. What is the estimate of $${\sigma^2_\epsilon}$$?
5. Fill in the following ANOVA table. To compute the p-value associated with the F value of the F statistic you have to use the pf function (see ?pf):

| Source of variability | Df | Sum of Squares (SS) | Mean Sum of Squares (MS) | F value | p(>F) |
|---|---|---|---|---|---|
| Regressors | | | | | |
| Residuals | | | | | |
| Total | | | | | |

6. What do you think about the fitted model?

### 10.6.2 Exercise 2

1. Complete the following ANOVA table for the linear regression model $$Y=\beta_0+\beta_1X_1+\beta_2X_2+\epsilon$$. Explain all the steps of your computations. Suggestion: start with the degrees of freedom and then the p-value. To obtain the F value given the p-value you will need the qf function (see ?qf).

| Source of variability | Df | Sum of Squares (SS) | Mean Sum of Squares (MS) | F value | p(>F) |
|---|---|---|---|---|---|
| Regressors | | | | | 0.04 |
| Residuals | | 5.66 | | | |
| Total | 15 | | | | |

2. What do you think about the fitted model?
3. Determine the value of $$R^2$$ and of the adjusted $$R^2$$. Comment.

### 10.6.3 Exercise 3

Use again the data in the prices_5Y.csv file already used for Lab 7. They refer to daily prices (Adj.Close) for the period 05/11/2012-03/11/2017 for the following assets: Apple (AAPL), Intel (INTC), Microsoft (MSFT) and Google (GOOGL). Import the data in R.

1. Create a new data frame containing the log returns for all the assets.
2. Estimate the parameters of the multiple linear model which considers GOOGL as dependent variable and includes all the remaining explanatory variables. Are all the coefficients significantly different from zero? Comment on the results.
3. Do you suggest removing some of the regressors?
4. Comment on the goodness of fit of the model.
5. Comment on the anova table for the multiple linear regression model. Moreover, derive from the table the values of SSR, SSE and SST.
6. Compute manually the value of the F-statistic reported in the last line of the summary output. Compute also the corresponding p-value and comment on the result.
7. Plot the observed and predicted values of the response variable.
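As a hint for the exercises (not a full solution), the pf and qf calls mentioned above work as follows; the F value here is made up, and the degrees of freedom are the ones implied by Exercise 1 (3 predictors, 40 observations):

```r
# p-value for an observed F value with df1 = p and df2 = n - 1 - p
pf(4.2, df1 = 3, df2 = 36, lower.tail = FALSE)
# inverse direction: the F value whose upper-tail probability is 0.05
qf(0.05, df1 = 3, df2 = 36, lower.tail = FALSE)
```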
2022-06-26 20:35:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5383031964302063, "perplexity": 2659.0330450327833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00593.warc.gz"}
http://dict.cnki.net/h_50051137000.html
Translation results for "explanation" (from the CNKI dictionary): 解释 (5281), 阐释 (1004), 说明 (1308)

解释 ("explanation; interpretation"):
• Research on Hempel's Theory of Scientific Explanation - 亨普尔科学解释理论研究
• A POSSIBLE EXPLANATION OF THE DOUBLE BACKBENDING OF YRAST BAND FOR NUCLEI ~(122,124,126)Te - ~(122,124,126)Te的双回弯的一种可能解释
• A THEORETICAL EXPLANATION ON THE RELATION BETWEEN LATTICE CONSTANT AND TRANSITION TEMPERATURE OF A-15 SUPERCONDUCTING COMPOUNDS - A-15型超导化合物转变温度与晶格常数关系的理论解释
• THEORETICAL EXPLANATION OF ARRESTING THE SOFTENING OF A-15 V_3Si IN SUPERCONDUCTING STATE - 超导态抑止A-15V_3Si软化的理论解释
• ON THE EXPLANATION OF INDUCED FIELD OF DIAMAGNETISM - 关于抗磁性的感生场解释

阐释 ("elucidation; interpretive explanation"):
• Novel Writing and Film & Video Explanation in the New Period - 新时期的小说书写与影像阐释
• A NEW EXPLANATION OF THE M-LINE SPECTRA AS FORMED BY A PRISM COUPLED TO AN OPTICAL WAVEGUIDE - 棱镜-波导耦合M线光谱的新阐释
• The Explanation of Ascending Order Adjacency Matrix - 升阶邻接矩阵的阐释
• A Study on the Principle of Text Explanation and the Dissemination of Documents - 文本阐释原理与文献传播研究
• This article first gives the meaning of information technology and, through an explanation of modern information technology and of its influence on our country's international trade, shows the necessity of using information technology to promote the development of foreign trade. - 本文首先给出了信息技术的含义,并通过对现代信息技术的阐释和信息技术对我国国际贸易的影响,写出利用信息技术促进对外贸易发展的必要性。

说明 ("explanation; illustration"):
• On Gaseous Solitons of Two-Component Disks and an Explanation of the Spiral Structure of Galaxies and the Titius-Bode Rule - 气盘孤立波及其对星系旋涡结构和Titius-Bode定则的说明
• Explanation on the Use of the Computer Method and its Program in the Calculations of a Ship's Longitudinal Launching - 船舶纵向下水计算机方法及程序的使用说明
• Suggested Item and Its Explanation About Soil-Structure Interaction in a Seismic Design Code for Industrial and Civil Buildings - 工业与民用建筑抗震设计规范中关于土与结构相互作用的条文建议及说明
• Function Design and its Explanation of the Comprehensive Office Automation Branch System in Yunfo Sulphur-Iron Mine - 云浮硫铁矿综合办公分系统的设计及其说明
• THE EXPLANATION OF WORKING OUT "THE TEST TECHNIQUE OF DUCTILE FRACTURE TOUGHNESS J_(IC) AT LOW TEMPERATURE" - 《低温延性断裂韧度J_(IC)测试技术》编制说明

讲解 ("explanation; lecturing"):
• Explanation of Examples of VF Programming - VF编程实例讲解
• Here the author hopes to re-examine the traditional approach of explaining the teaching material and to reform the instruction of reading in such aspects as the aims, ethics, approaches and links of teaching, expecting to use the theory of dialogue more effectively with the new textbooks. - 反思篇部分,提出了对话在阅读教学中应用所引起的一些思考,希望对传统的讲解法予以重新审视,希望应用对话能从教学目的、伦理、方式、环节等方面深刻变革阅读教学,期待新教材出现以利于对话更有效地应用于阅读教学。
• Explanation of approaches to realizing mental health - 讲解实现心理健康之途径和方法
• The Design and Explanation of Introductory Examples for Concepts of Higher Mathematics - 高等数学概念引例的设计与讲解
• The need for community service was strong: 28.0% and 27.0% of them chose individual instruction and explanation, respectively. - 同时,他们对社区护理服务需求很多,更多选择是医护人员的个别指导与讲解,分别为28.0%与27.0%。

Bilingual example sentences containing "explanation":
• This gives a geometric explanation for the appearance of pattern avoidance in the study of singularities of Schubert varieties.
• We propose a representation theoretic explanation of a link between the intertwining operators on the tensor products of ${\rm Y}(\mathfrak{gl}_n)$-modules, and the "extremal cocycle" on the Weyl group of $\mathfrak{gl}_m$ defined by D.
In view of reasonable explanation of intermittent subharmonics and chaos that can be gained from coupling filter between circuits, this paper discusses a method that maps time bifurcation with parameter bifurcation. Based on the fundamental processes of birth, death, dispersal and speciation, the neutral theory provided the first mechanistic explanation of species abundance distribution commonly observed in natural communities. The applicability of the nucleophilicity/electrophilicity concept to the explanation of mechanisms of formation and transformation of humic substances was considered.
2020-02-21 01:08:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5454908013343811, "perplexity": 5789.670815458255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145316.8/warc/CC-MAIN-20200220224059-20200221014059-00420.warc.gz"}
http://stackexchange.com/newsletters/newsletter?site=cs.stackexchange.com
## Top new questions this week: ### Is computation expression the same as monad? I'm still learning functional programming (with f#) and I recently started reading about computation expressions. I still don't fully understand the concept and one thing that keeps me unsure when … ### How to simulate a die given a fair coin Suppose that you're given a fair coin and you would like to simulate the probability distribution of repeatedly flipping a fair (six-sided) die. My initial idea is that we need to choose appropriate … probability-theory randomness sampling
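The standard construction for the die question is rejection sampling from coin flips; here is a minimal Python sketch of that idea (an illustration of the usual approach, not the thread's accepted answer):

```python
import random

def coin() -> int:
    """One fair coin flip: returns 0 or 1."""
    return random.randrange(2)

def roll_die() -> int:
    """Simulate a fair six-sided die from fair coin flips.

    Three flips yield a uniform value in 0..7; results 6 and 7 are
    rejected and the flips retried, so the surviving values 0..5 stay
    uniform. The expected cost is 3 * 8/6 = 4 flips per roll.
    """
    while True:
        value = (coin() << 2) | (coin() << 1) | coin()
        if value < 6:
            return value + 1

print([roll_die() for _ in range(10)])
```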
2014-08-22 05:59:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6834697723388672, "perplexity": 645.1009248184744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823169.67/warc/CC-MAIN-20140820021343-00057-ip-10-180-136-8.ec2.internal.warc.gz"}
https://blog.theleapjournal.org/2011/06/freedom-of-speech-in-pakistan-and-india.html
## Monday, June 06, 2011

### Freedom of speech in Pakistan and India

One of Pakistan's more remarkable journalists, Syed Saleem Shahzad, was tortured and murdered, probably by Pakistan's ISI.

In one view of the world, freedom of speech is something that you are gifted by your founding fathers. As an example, if you have the good fortune of having a well drafted Constitution, it would say "Congress shall make no law ... abridging the freedom of speech, or of the press". This would block the ability of politicians to enact legislation that is inimical to freedom of speech. Then, as long as rule of law prevails, we get freedom of speech. This makes it seem like a palace coup: rather easy, as long as you have the right intellectual capabilities in the hands of those who draft the Constitution of a country.

We in India or Pakistan are not blessed thusly. The Indian Constitution is not clear-headed about freedom of speech, and anti-defamation law of colonial vintage continues to be on the books. This is an important tool for harassment and intimidation. And then, there is the question of rule of law. What is going on in Pakistan is way beyond questions of how the Constitution should be drafted.

It is, instead, more useful to think that democracy and freedom are made of a million battles, small and large. Freedom of speech is won, piece by piece, through a million mutinies. It is important to constantly think, and speak, and write. Each little act of writing about troublesome issues pushes the envelope of freedom of speech, and creates a culture of honest discussion and discourse.

I feel the media in India has become quite complacent about the tawdry condition of free speech in India. All too often, journalists can be warned off a seamy story by a tiny exercise of power or influence. All too often, the crooks are able to buy the loyalty of a journalist quite easily. There isn't enough intellectualism going around among the men and women in the media. Eshwar Sundaresan, writing in Dawn, says that India badly needs more journalists of the character of Pakistan's Najam Sethi.

This is one of many areas where India's success in the last 20 years is leading to an erosion of the very foundations of that success.
2019-08-24 13:47:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18215738236904144, "perplexity": 2664.595533681326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321140.82/warc/CC-MAIN-20190824130424-20190824152424-00019.warc.gz"}
https://en.m.wikipedia.org/wiki/300_(number)
# 300 (number)

300 (three hundred) is the natural number following 299 and preceding 301.

← 299 300 301 →
Cardinal: three hundred
Ordinal: 300th (three hundredth)
Factorization: 2^2 × 3 × 5^2
Greek numeral: Τ´
Roman numeral: CCC
Binary: 100101100_2
Ternary: 102010_3
Senary: 1220_6
Octal: 454_8
Duodecimal: 210_12
Hebrew: ש (Shin)

## Mathematical properties

The number 300 is a triangular number and the sum of a pair of twin primes (149 + 151), as well as the sum of ten consecutive primes (13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47). It is palindromic in 3 consecutive bases: 300_10 = 606_7 = 454_8 = 363_9, and also in base 13. Its factorization is 2^2 × 3 × 5^2. 300^64 + 1 is prime.

## Integers from 301 to 399

### 300s

#### 301

301 = 7 × 43 = $\left\{{7 \atop 3}\right\}$. 301 is the sum of three consecutive primes (97 + 101 + 103), a happy number in base 10,[1] and a lazy caterer number (sequence A000124 in the OEIS).

#### 302

302 = 2 × 151. 302 is a nontotient,[2] a happy number,[1] and the number of partitions of 40 into prime parts.[3]

#### 303

303 = 3 × 101. 303 is a palindromic semiprime. The number of compositions of 10 which cannot be viewed as stacks is 303.[4]

#### 304

304 = 2^4 × 19. 304 is the sum of six consecutive primes (41 + 43 + 47 + 53 + 59 + 61), the sum of eight consecutive primes (23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), a primitive semiperfect number,[5] an untouchable number,[6] and a nontotient.[2] 304 is the smallest number such that no square has a set of digits complementary to the digits of the square of 304: the square of 304 is 92416, while no square exists using the set of the complementary digits 03578.

#### 305

305 = 5 × 61. 305 is the convolution of the first 7 primes with themselves.[7]

#### 306

306 = 2 × 3^2 × 17. 306 is the sum of four consecutive primes (71 + 73 + 79 + 83), a pronic number,[8] and an untouchable number.[6]

#### 307

307 is a prime number, a Chen prime,[9] and the number of one-sided octiamonds.[10]

#### 308

308 = 2^2 × 7 × 11. 308 is a nontotient,[2] the totient sum of the first 31 integers, a heptagonal pyramidal number,[11] and the sum of two consecutive primes (151 + 157).

#### 309

309 = 3 × 103, a Blum integer, and the number of primes <= 2^11.[12]

### 310s

#### 310

310 = 2 × 5 × 31. 310 is a sphenic number,[13] a noncototient,[14] and the number of Dyck 11-paths with strictly increasing peaks.[15]

#### 311

311 is a prime number. 4^311 - 3^311 is prime.

#### 312

312 = 2^3 × 3 × 13, an idoneal number.

#### 313

313 is a prime number.

#### 314

314 = 2 × 157. 314 is a nontotient,[2] and the smallest composite number in the Somos-4 sequence.[16]

#### 315

315 = 3^2 × 5 × 7 = $D_{7,3}$, a rencontres number and a highly composite odd number, having 12 divisors.[17]

#### 316

316 = 2^2 × 79. 316 is a centered triangular number[18] and a centered heptagonal number.[19]

#### 317

317 is a prime number, an Eisenstein prime with no imaginary part, a Chen prime,[9] and a strictly non-palindromic number. 317 is the exponent (and number of ones) in the fourth base-10 repunit prime.[20]

#### 318

318 = 2 × 3 × 53. It is a sphenic number,[13] a nontotient,[2] and the sum of twelve consecutive primes (7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47).

#### 319

319 = 11 × 29. 319 is the sum of three consecutive primes (103 + 107 + 109), a Smith number,[21] cannot be represented as the sum of fewer than 19 fourth powers, and is a happy number in base 10.[1]

### 320s

#### 320

320 = 2^6 × 5 = (2^5) × (2 × 5). 320 is a Leyland number,[22] and the maximum determinant of a 10 by 10 matrix of zeros and ones.

#### 321

321 = 3 × 107, a Delannoy number.[23]

#### 322

322 = 2 × 7 × 23.
322 is a sphenic number,[13] a nontotient, an untouchable number,[6] and a Lucas number.[24]

#### 323

323 = 17 × 19. 323 is the sum of nine consecutive primes (19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), the sum of 13 consecutive primes (5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47), and a Motzkin number.[25] It is a Lucas and Fibonacci pseudoprime. See 323 (disambiguation).

#### 324

324 = 2^2 × 3^4 = 18^2. 324 is the sum of four consecutive primes (73 + 79 + 83 + 89), the totient sum of the first 32 integers, a square number,[26] and an untouchable number.[6]

#### 325

325 = 5^2 × 13. 325 is a triangular number, a hexagonal number,[27] a nonagonal number,[28] and a centered nonagonal number.[29] 325 is the smallest number to be the sum of two squares in 3 different ways: 1^2 + 18^2, 6^2 + 17^2 and 10^2 + 15^2. 325 is also the smallest (and only known) 3-hyperperfect number.

#### 326

326 = 2 × 163. 326 is a nontotient, a noncototient,[14] and an untouchable number.[6] 326 is the sum of the 14 consecutive primes (3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47), and a lazy caterer number (sequence A000124 in the OEIS).

#### 327

327 = 3 × 109. 327 is a perfect totient number,[30] and the number of compositions of 10 whose run-lengths are either weakly increasing or weakly decreasing.[31]

#### 328

328 = 2^3 × 41. 328 is a refactorable number,[32] and it is the sum of the first fifteen primes (2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47).

#### 329

329 = 7 × 47. 329 is the sum of three consecutive primes (107 + 109 + 113), and a highly cototient number.[33]

### 330s

#### 330

330 = 2 × 3 × 5 × 11. 330 is the sum of six consecutive primes (43 + 47 + 53 + 59 + 61 + 67), a pentatope number (and hence a binomial coefficient $\tbinom{11}{4}$), a pentagonal number,[34] divisible by the number of primes below it, and a sparsely totient number.[35]

#### 331

331 is a prime number, a super-prime, a cuban prime,[36] the sum of five consecutive primes (59 + 61 + 67 + 71 + 73), a centered pentagonal number,[37] a centered hexagonal number,[38] and a number at which the Mertens function returns 0.[39]

#### 332

332 = 2^2 × 83. The Mertens function returns 0 at 332.[39]

#### 333

333 = 3^2 × 37. The Mertens function returns 0 at 333.[39]

#### 334

334 = 2 × 167, a nontotient.[40]

#### 335

335 = 5 × 67. 335 is divisible by the number of primes below it, and is the number of Lyndon words of length 12.

#### 336

336 = 2^4 × 3 × 7, an untouchable number,[6] and the number of partitions of 41 into prime parts.[3]

#### 337

337 is a prime number, an emirp, a permutable prime with 373 and 733, a Chen prime,[9] and a star number.

#### 338

338 = 2 × 13^2, a nontotient, and the number of square (0,1)-matrices without zero rows and with exactly 4 entries equal to 1.[41]

#### 339

339 = 3 × 113, an Ulam number.[42]

### 340s

#### 340

340 = 2^2 × 5 × 17. 340 is the sum of eight consecutive primes (29 + 31 + 37 + 41 + 43 + 47 + 53 + 59), the sum of ten consecutive primes (17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), the sum of the first four powers of 4 (4^1 + 4^2 + 4^3 + 4^4), divisible by the number of primes below it, a nontotient, and a noncototient.[14] It is the number of regions formed by drawing the line segments connecting any two of the 12 perimeter points of a 3 times 3 grid of squares (sequences A331452 and A255011 in the OEIS).

#### 341

341 = 11 × 31. 341 is the sum of seven consecutive primes (37 + 41 + 43 + 47 + 53 + 59 + 61), an octagonal number,[43] a centered cube number,[44] and a super-Poulet number.
341 is the smallest Fermat pseudoprime; it is the least composite odd modulus m greater than the base b that satisfies the Fermat property "b^(m-1) - 1 is divisible by m" for the bases up to 128 of b = 2, 15, 60, 63, 78, and 108.

#### 342

342 = 2 × 3^2 × 19, a pronic number[8] and an untouchable number.[6]

#### 343

343 = 7^3, the first nice Friedman number that is composite, since 343 = (3 + 4)^3. It is the only known example of x^2 + x + 1 = y^3, in this case x = 18, y = 7. It is z^3 in a triplet (x, y, z) such that x^5 + y^2 = z^3.

#### 344

344 = 2^3 × 43, an octahedral number,[45] a noncototient,[14] the totient sum of the first 33 integers, and a refactorable number.[32]

#### 345

345 = 3 × 5 × 23, a sphenic number[13] and an idoneal number.

#### 346

346 = 2 × 173, a Smith number[21] and a noncototient.[14]

#### 347

347 is a prime number, an emirp, a safe prime,[46] an Eisenstein prime with no imaginary part, a Chen prime,[9] a Friedman prime since 347 = 7^3 + 4, and a strictly non-palindromic number.

#### 348

348 = 2^2 × 3 × 29, the sum of four consecutive primes (79 + 83 + 89 + 97), and a refactorable number.[32]

#### 349

349 is a prime number and the sum of three consecutive primes (109 + 113 + 127). 5^349 - 4^349 is a prime number.[47]

### 350s

#### 350

350 = 2 × 5^2 × 7 = $\left\{{7 \atop 4}\right\}$, a primitive semiperfect number,[5] divisible by the number of primes below it, and a nontotient. A truncated icosahedron of frequency 6 has 350 hexagonal faces and 12 pentagonal faces.

#### 351

351 = 3^3 × 13, a triangular number, the sum of five consecutive primes (61 + 67 + 71 + 73 + 79), a member of the Padovan sequence[48] and the number of compositions of 15 into distinct parts.[49]

#### 352

352 = 2^5 × 11, the number of n-Queens Problem solutions for n = 9. It is the sum of two consecutive primes (173 + 179) and a lazy caterer number (sequence A000124 in the OEIS).

#### 353

353 is a prime number, a Chen prime,[9] a Proth prime,[50] an Eisenstein prime with no imaginary part, a palindromic prime, and a number at which the Mertens function returns 0.[39] 353 is the base of the smallest 4th power that is the sum of 4 other 4th powers, discovered by Norrie in 1911: 353^4 = 30^4 + 120^4 + 272^4 + 315^4. 353 is an index of a prime Lucas number.[51]

#### 354

354 = 2 × 3 × 59 = 1^4 + 2^4 + 3^4 + 4^4,[52][53] a sphenic number,[13] a nontotient, and also the SMTP code meaning start of mail input. It is also the sum of the absolute values of the coefficients of Conway's polynomial.

#### 355

355 = 5 × 71, a Smith number,[21] a number at which the Mertens function returns 0,[39] and divisible by the number of primes below it. 355 is the numerator of the best simplified rational approximation of pi having a denominator of four digits or fewer. This fraction (355/113) is known as Milü and provides an extremely accurate approximation for pi.

#### 356

356 = 2^2 × 89. The Mertens function returns 0 at 356.[39]

#### 357

357 = 3 × 7 × 17, a sphenic number.[13]

#### 358

358 = 2 × 179, the sum of six consecutive primes (47 + 53 + 59 + 61 + 67 + 71), a number at which the Mertens function returns 0,[39] and the number of ways to partition {1,2,3,4,5} and then partition each cell (block) into subcells.[54]

#### 359

359 is a prime number, a Sophie Germain prime,[55] a safe prime,[46] an Eisenstein prime with no imaginary part, a Chen prime,[9] and a strictly non-palindromic number.

### 360s

#### 360

360 is a triangular matchstick number.[56]

#### 361

361 = 19^2, a centered triangular number,[18] a centered octagonal number, a centered decagonal number,[57] and a member of the Mian–Chowla sequence;[58] it is also the number of positions on a standard 19 x 19 Go board.
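Two of the arithmetic claims above are easy to verify mechanically: that 341 = 11 × 31 nevertheless passes the base-2 Fermat test, and Norrie's 1911 identity for 353. A quick Python check (an editorial sketch, not part of the article):

```python
# 341 is composite, yet 2^(341-1) mod 341 = 1: a base-2 Fermat pseudoprime.
assert 341 == 11 * 31
assert pow(2, 340, 341) == 1      # passes the base-2 Fermat test
assert pow(3, 340, 341) != 1      # fails base 3, exposing 341 as composite

# Norrie (1911): the smallest 4th power that is the sum of 4 other 4th powers.
assert 353**4 == 30**4 + 120**4 + 272**4 + 315**4
print("all claims check out")
```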
#### 362

362 = 2 × 181 = σ_2(19), the sum of the squares of the divisors of 19.[59] The Mertens function returns 0 at 362,[39] and it is a nontotient and a noncototient.[14]

#### 363

363 = 3 × 11^2, the sum of nine consecutive primes (23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 + 59), a number at which the Mertens function returns 0,[39] and a perfect totient number.[30]

#### 364

364 = 2^2 × 7 × 13, a tetrahedral number,[60] the sum of twelve consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), a number at which the Mertens function returns 0,[39] and a nontotient. It is a repdigit in base 3 (111111), base 9 (444), base 25 (EE), base 27 (DD), base 51 (77) and base 90 (44), the sum of six consecutive powers of 3 (1 + 3 + 9 + 27 + 81 + 243), and the twelfth non-zero tetrahedral number.[61]

#### 365

365 = 5 × 73.

#### 366

366 = 2 × 3 × 61, a sphenic number,[13] a number at which the Mertens function returns 0,[39] a noncototient,[14] the number of complete partitions of 20,[62] and a 26-gonal and 123-gonal number.

#### 367

367 is a prime number, a Perrin number,[63] a happy number, a prime index prime, and a strictly non-palindromic number.

#### 368

368 = 2^4 × 23. It is also a Leyland number.[22]

#### 369

369 = 3^2 × 41. It is the magic constant of the 9 × 9 normal magic square and of the n-queens problem for n = 9; there are 369 free polyominoes of order 8. With 370, it forms a Ruth–Aaron pair with only distinct prime factors counted.

### 370s

#### 370

370 = 2 × 5 × 37, a sphenic number,[13] the sum of four consecutive primes (83 + 89 + 97 + 101), a nontotient, and with 369 part of a Ruth–Aaron pair with only distinct prime factors counted. It is a base-10 Armstrong number since 3^3 + 7^3 + 0^3 = 370.

#### 371

371 = 7 × 53, the sum of three consecutive primes (113 + 127 + 131), the sum of seven consecutive primes (41 + 43 + 47 + 53 + 59 + 61 + 67), and the sum of the primes from its least to its greatest prime factor (sequence A055233 in the OEIS); the next such composite number is 2935561623745. It is an Armstrong number since 3^3 + 7^3 + 1^3 = 371.

#### 372

372 = 2^2 × 3 × 31, the sum of eight consecutive primes (31 + 37 + 41 + 43 + 47 + 53 + 59 + 61), a noncototient,[14] an untouchable number,[6] and a refactorable number.[32]

#### 373

373 is a prime number, a balanced prime,[64] a two-sided prime, the sum of five consecutive primes (67 + 71 + 73 + 79 + 83), a permutable prime with 337 and 733, and a palindromic prime in 3 consecutive bases: 565_8 = 454_9 = 373_10, and also in base 4: 11311_4.

#### 374

374 = 2 × 11 × 17, a sphenic number[13] and a nontotient. 374^4 + 1 is prime.[65]

#### 375

375 = 3 × 5^3, the number of regions in a regular 11-gon with all diagonals drawn.[66]

#### 376

376 = 2^3 × 47, a pentagonal number,[34] a 1-automorphic number,[67] a nontotient, and a refactorable number.[32]

#### 377

377 = 13 × 29, a Fibonacci number, a centered octahedral number,[68] a Lucas and Fibonacci pseudoprime, and the sum of the squares of the first six primes.

#### 378

378 = 2 × 3^3 × 7, a triangular number, a cake number, a hexagonal number,[27] and a Smith number.[21]

#### 379

379 is a prime number, a Chen prime,[9] a lazy caterer number (sequence A000124 in the OEIS) and a happy number in base 10. It is the sum of the 15 consecutive primes (3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53). 379! - 1 is prime.

### 380s

#### 381

381 = 3 × 127, palindromic in base 2 and base 8. It is the sum of the first 16 prime numbers (2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53).

#### 382

382 = 2 × 191, the sum of ten consecutive primes (19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 + 59), and a Smith number.[21]

#### 383

383 is a prime number, a safe prime,[46] a Woodall prime,[69] a Thabit number, an Eisenstein prime with no imaginary part, and a palindromic prime.
It is also the first number where the sum of a prime and the reversal of the prime is also a prime.[70] 4^383 - 3^383 is prime.

#### 385

385 = 5 × 7 × 11, a sphenic number,[13] a square pyramidal number,[71] and the number of integer partitions of 18. 385 = 10^2 + 9^2 + 8^2 + 7^2 + 6^2 + 5^2 + 4^2 + 3^2 + 2^2 + 1^2.

#### 386

386 = 2 × 193, a nontotient, a noncototient,[14] a centered heptagonal number,[19] and the number of surface points on a cube with edge-length 9.[72]

#### 387

387 = 3^2 × 43, the number of graphical partitions of 22.[73]

#### 388

388 = 2^2 × 97, the solution to the postage stamp problem with 6 stamps and 6 denominations,[74] and the number of uniform rooted trees with 10 nodes.[75]

#### 389

389 is a prime number, an emirp, an Eisenstein prime with no imaginary part, a Chen prime,[9] a highly cototient number,[33] and a strictly non-palindromic number. It is the smallest conductor of a rank 2 elliptic curve.

### 390s

#### 390

390 = 2 × 3 × 5 × 13, the sum of four consecutive primes (89 + 97 + 101 + 103), and a nontotient. $\sum_{n=0}^{10} 390^n$ is prime.[76]

#### 391

391 = 17 × 23, a Smith number[21] and a centered pentagonal number.[37]

#### 392

392 = 2^3 × 7^2, an Achilles number.

#### 393

393 = 3 × 131, a Blum integer. The Mertens function returns 0 at 393.[39]

#### 394

394 = 2 × 197 = S_5, a Schröder number,[77] a nontotient and a noncototient.[14]

#### 395

395 = 5 × 79, the sum of three consecutive primes (127 + 131 + 137), the sum of five consecutive primes (71 + 73 + 79 + 83 + 89), and the number of (unordered, unlabeled) rooted trimmed trees with 11 nodes.[78]

#### 396

396 = 2^2 × 3^2 × 11, the sum of twin primes (197 + 199), the totient sum of the first 36 integers, a refactorable number,[32] a Harshad number, and a digit-reassembly number.

#### 397

397 is a prime number, a cuban prime,[36] and a centered hexagonal number.[38]

#### 398

398 = 2 × 199, a nontotient. $\sum_{n=0}^{10} 398^n$ is prime.[76]

#### 399

399 = 3 × 7 × 19, a sphenic number,[13] the smallest Lucas–Carmichael number, and a Leyland number of the second kind. 399! + 1 is prime.

## References

1. ^ a b c Sloane, N. J. A. (ed.). "Sequence A007770 (Happy numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21.
2. Sloane, N. J. A. (ed.). "Sequence A005277 (Nontotients)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21.
3. ^ a b Sloane, N. J. A. (ed.). "Sequence A000607 (Number of partitions of n into prime parts)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
4. ^
5. ^ a b Sloane, N. J. A. (ed.). "Sequence A006036 (Primitive pseudoperfect numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21.
6. Sloane, N. J. A. (ed.). "Sequence A005114 (Untouchable numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21.
7. ^ Sloane, N. J. A. (ed.). "Sequence A014342 (Convolution of primes with themselves)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
8. ^ a b c Sloane, N. J. A. (ed.). "Sequence A002378 (Oblong numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21.
9. Sloane, N. J. A. (ed.). "Sequence A109611 (Chen primes)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21.
10. ^ Sloane, N. J. A. (ed.). "Sequence A006534". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2022-05-10.
11. ^ Sloane, N. J. A. (ed.). "Sequence A002413 (Heptagonal pyramidal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22.
12. ^ Sloane, N. J.
A. (ed.). "Sequence A007053 (Number of primes <= 2^n)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2022-06-02. 13. Sloane, N. J. A. (ed.). "Sequence A007304 (Sphenic numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 14. Sloane, N. J. A. (ed.). "Sequence A005278 (Noncototients)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 15. ^ 16. ^ Sloane, N. J. A. (ed.). "Sequence A006720 (Somos-4 sequence)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 17. ^ "A053624 - OEIS". oeis.org. 18. ^ a b Sloane, N. J. A. (ed.). "Sequence A005448 (Centered triangular numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 19. ^ a b Sloane, N. J. A. (ed.). "Sequence A069099 (Centered heptagonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 20. ^ Guy, Richard; Unsolved Problems in Number Theory, p. 7 ISBN 1475717385 21. Sloane, N. J. A. (ed.). "Sequence A006753 (Smith numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 22. ^ a b Sloane, N. J. A. (ed.). "Sequence A076980 (Leyland numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 23. ^ Sloane, N. J. A. (ed.). "Sequence A001850 (Central Delannoy numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 24. ^ Sloane, N. J. A. (ed.). "Sequence A000032 (Lucas numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 25. ^ Sloane, N. J. A. (ed.). "Sequence A001006 (Motzkin numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 26. ^ "A000290 - OEIS". oeis.org. Retrieved 2022-10-23. 27. ^ a b Sloane, N. J. A. (ed.). "Sequence A000384 (Hexagonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 28. ^ Sloane, N. J. A. (ed.). "Sequence A001106 (9-gonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 29. ^ Sloane, N. J. A. (ed.). "Sequence A060544 (Centered 9-gonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 30. ^ a b Sloane, N. J. A. (ed.). "Sequence A082897 (Perfect totient numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 31. ^ Sloane, N. J. A. (ed.). "Sequence A332835 (Number of compositions of n whose run-lengths are either weakly increasing or weakly decreasing)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2022-06-02. 32. Sloane, N. J. A. (ed.). "Sequence A033950 (Refactorable numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 33. ^ a b Sloane, N. J. A. (ed.). "Sequence A100827 (Highly cototient numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 34. ^ a b Sloane, N. J. A. (ed.). "Sequence A000326 (Pentagonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 35. ^ Sloane, N. J. A. (ed.). "Sequence A036913 (Sparsely totient numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 36. ^ a b Sloane, N. J. A. (ed.). "Sequence A002407 (Cuban primes)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 37. 
^ a b Sloane, N. J. A. (ed.). "Sequence A005891 (Centered pentagonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 38. ^ a b Sloane, N. J. A. (ed.). "Sequence A003215 (Hex numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 39. Sloane, N. J. A. (ed.). "Sequence A028442 (Numbers n such that Mertens' function is zero)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 40. ^ Sloane, N. J. A. (ed.). "Sequence A003052 (Self numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-21. 41. ^ 42. ^ Sloane, N. J. A. (ed.). "Sequence A002858 (Ulam numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 43. ^ Sloane, N. J. A. (ed.). "Sequence A000567 (Octagonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 44. ^ Sloane, N. J. A. (ed.). "Sequence A005898 (Centered cube numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 45. ^ Sloane, N. J. A. (ed.). "Sequence A005900 (Octahedral numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 46. ^ a b c Sloane, N. J. A. (ed.). "Sequence A005385 (Safe primes)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 47. ^ Sloane, N. J. A. (ed.). "Sequence A059802 (Numbers k such that 5^k - 4^k is prime)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 48. ^ Sloane, N. J. A. (ed.). "Sequence A000931 (Padovan sequence)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 49. ^ Sloane, N. J. A. (ed.). "Sequence A032020 (Number of compositions (ordered partitions) of n into distinct parts)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2022-05-24. 50. ^ Sloane, N. J. A. (ed.). "Sequence A080076 (Proth primes)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 51. ^ Sloane, N. J. A. (ed.). "Sequence A001606 (Indices of prime Lucas numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 52. ^ Sloane, N. J. A. (ed.). "Sequence A000538 (Sum of fourth powers: 0^4 + 1^4 + ... + n^4)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 53. ^ Sloane, N. J. A. (ed.). "Sequence A031971 (a(n) = Sum_{k=1..n} k^n)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 54. ^ Sloane, N. J. A. (ed.). "Sequence A000258 (Expansion of e.g.f. exp(exp(exp(x)-1)-1))". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 55. ^ Sloane, N. J. A. (ed.). "Sequence A005384 (Sophie Germain primes p: 2p+1 is also prime)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 56. ^ Sloane, N. J. A. (ed.). "Sequence A045943 (Triangular matchstick numbers: a(n) = 3*n*(n+1)/2)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 57. ^ Sloane, N. J. A. (ed.). "Sequence A062786 (Centered 10-gonal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 58. ^ Sloane, N. J. A. (ed.). "Sequence A005282 (Mian-Chowla sequence)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 59. ^ Sloane, N. J. A. (ed.). "Sequence A001157 (a(n) = sigma_2(n): sum of squares of divisors of n)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 60. ^ Sloane, N. J. A. (ed.). 
"Sequence A000292 (Tetrahedral numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 61. ^ Sloane, N. J. A. (ed.). "Sequence A000292 (Tetrahedral (or triangular pyramidal) numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 62. ^ Sloane, N. J. A. (ed.). "Sequence A126796 (Number of complete partitions of n)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 63. ^ Sloane, N. J. A. (ed.). "Sequence A001608 (Perrin sequence)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 64. ^ Sloane, N. J. A. (ed.). "Sequence A006562 (Balanced primes)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 65. ^ Sloane, N. J. A. (ed.). "Sequence A000068 (Numbers k such that k^4 + 1 is prime)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 66. ^ 67. ^ Sloane, N. J. A. (ed.). "Sequence A003226 (Automorphic numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 68. ^ Sloane, N. J. A. (ed.). "Sequence A001845 (Centered octahedral numbers (crystal ball sequence for cubic lattice))". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2022-06-02. 69. ^ Sloane, N. J. A. (ed.). "Sequence A050918 (Woodall primes)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 70. ^ Sloane, N. J. A. (ed.). "Sequence A072385 (Primes which can be represented as the sum of a prime and its reverse)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2019-06-02. 71. ^ Sloane, N. J. A. (ed.). "Sequence A000330 (Square pyramidal numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 72. ^ Sloane, N. J. A. (ed.). "Sequence A005897 (a(n) = 6*n^2 + 2 for n > 0, a(0)=1)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 73. ^ Sloane, N. J. A. (ed.). "Sequence A000569 (Number of graphical partitions of 2n)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 74. ^ 75. ^ Sloane, N. J. A. (ed.). "Sequence A317712 (Number of uniform rooted trees with n nodes)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 76. ^ a b Sloane, N. J. A. (ed.). "Sequence A162862 (Numbers n such that n^10 + n^9 + n^8 + n^7 + n^6 + n^5 + n^4 + n^3 + n^2 + n + 1 is prime)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2022-06-02. 77. ^ Sloane, N. J. A. (ed.). "Sequence A006318 (Large Schröder numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-22. 78. ^
2023-03-30 06:42:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7263363003730774, "perplexity": 1896.0007958188842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00171.warc.gz"}
https://eclassmate.in/important-points-of-electricity/
# Important points of electricity

## Important Points of Electricity

Current: The rate of flow of charge (Q) through a conductor is called current. Current (I) is given by

Current = $\frac{\text{Charge}}{\text{Time}}$, or I = $\frac{Q}{t}$

The SI unit of current is the ampere (A): 1 A = 1 C/s. The current flowing through a circuit is measured by a device called an ammeter. The ammeter is connected in series with the conductor. The direction of the current is taken as the direction of the flow of positive charge.

Ohm's law: At any constant temperature, the current (I) flowing through a conductor is directly proportional to the potential difference (V) applied across it. Mathematically,

I = V/R, or V = IR

Resistance: Resistance is the property of a conductor by virtue of which it opposes the flow of electricity through it. Resistance is measured in ohms. Resistance is a scalar quantity.

Resistivity: The resistance offered by a cube of a substance having a side of 1 metre, when current flows perpendicular to its opposite faces, is called its resistivity (ρ). The SI unit of resistivity is the ohm-metre (ohm.m).

• Equivalent resistance: A single resistance which can replace a combination of resistances so that the current through the circuit remains the same is called the equivalent resistance.

• Law of combination of resistances in series: When a number of resistances are connected in series, their equivalent resistance is equal to the sum of the individual resistances. If R1, R2, R3, etc. are combined in series, then the equivalent resistance (R) is given by

R = R1 + R2 + R3 + …

The equivalent resistance of a number of resistances connected in series is higher than each individual resistance.

• Law of combination of resistances in parallel: When a number of resistances are connected in parallel, the reciprocal of the equivalent resistance is equal to the sum of the reciprocals of the individual resistances. If R1, R2, R3, etc. are combined in parallel, then the equivalent resistance (R) is given by

$\frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}$

The equivalent resistance of a number of resistances connected in parallel is less than each of the individual resistances.
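As a quick numerical illustration of the two combination laws above, a short Python sketch (added for illustration; not part of the original notes):

```python
def series(*resistances):
    """Equivalent resistance in series: R = R1 + R2 + R3 + ..."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance in parallel: 1/R = 1/R1 + 1/R2 + 1/R3 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Three 6-ohm resistors:
print(series(6, 6, 6))    # 18   -> higher than each individual resistance
print(parallel(6, 6, 6))  # 2.0  -> lower than each individual resistance
```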
2021-03-07 09:17:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7462442517280579, "perplexity": 465.5335982401325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00633.warc.gz"}
https://sharepoint.stackexchange.com/questions/245047/how-to-show-unique-values-in-a-lookup-column/263667
# How to show unique values in a lookup column?

On one of the projects I am working on, I have a site lookup column being used in many libraries across various subsites. The list column that the lookup column references has some duplicate values; the reason is that the items have other metadata fields that contain different values, which is why the lookup column shows some duplicates. Is there any way to show only unique values in the lookup drop-down by leveraging JSOM, since I am working with Office 365/SharePoint Online?

There's no inherent way to filter out duplicates. You could write a client-side script to scan the options and filter out the duplicates, but which of the duplicates should it take? Is it just the "Title" that's important? If it creates a link to the lookup item, is that reference important?

Here's what I would do.

1. Create a calculated column that combines Title + Some Other Field to create a unique, non-ambiguous value.
2. Change the lookup to reference the new calculated column for its display, instead of the default Title or whatever it's set to.
3. When you pick from the dropdown now, you'll see your calculated value that makes it clear which item you're selecting.

UPDATE! OK, if you really only want the unique values from a field in another list, you might consider creating a separate list that stores these values and have both lists reference this same list. In terms of relational design, that would be optimal. However, in SharePoint, we know things aren't always optimal. You have to cut corners. So a similar approach would be to create the third list with the unique values, and automatically add items to this list through a workflow whenever the original "lookup" list has an item added/updated. Then your second list would reference this instead.

Again, you could also just create a script to scan the select options and remove duplicates by name... jQuery could do this in a few lines of code, but this feels bad to me. Just my 2c.

This works for me. Wait until the entire page is finished loading and run this script:

// Removes duplicate communities from the lookup dropdown
function runLast() {
    var usedNames = {};
    $("select[title='Indigenous Relationship'] > option").each(function () {
        if (usedNames[this.text]) {
            // Display text already seen: drop the duplicate option
            $(this).remove();
        } else {
            usedNames[this.text] = this.value;
        }
    });
}
2019-10-16 02:38:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18519116938114166, "perplexity": 1397.5098875387903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986661296.12/warc/CC-MAIN-20191016014439-20191016041939-00489.warc.gz"}
http://gps.ijl.univ-lorraine.fr/Groupe_Physique_Statistique/seminaire.php?theme=defaut-GPS&lang=fr_FR&titre=Corner%20contribution%20to%20cluster%20numbers
# Groupe de Physique Statistique

## Team 106, Institut Jean Lamour

### Group seminar

Corner contribution to cluster numbers

Ferenc Iglói, Wigner Institute (Budapest)

Monday, 12 May 2014, 10:25 am, Statistical Physics group seminar room

For the two-dimensional $Q$-state Potts model at criticality, we consider Fortuin-Kasteleyn and spin clusters and study the average number $N_\Gamma$ of clusters that intersect a given contour $\Gamma$. To leading order, $N_\Gamma$ is proportional to the length of the curve. Additionally, however, there occur logarithmic contributions related to the corners of $\Gamma$. These are found to be universal and their size can be calculated employing techniques from conformal field theory. For the Fortuin-Kasteleyn clusters relevant to the thermal phase transition, we find agreement with these predictions in large-scale numerical simulations. For the spin clusters, on the other hand, the cluster numbers are not found to be consistent with the values obtained by analytic continuation, as conventionally assumed. We mention possible extensions of the results to systems with quenched disorder, as well as to three-dimensional problems.
2017-11-20 15:36:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4809200167655945, "perplexity": 1298.9351231856704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806070.53/warc/CC-MAIN-20171120145722-20171120165722-00706.warc.gz"}
https://math.dartmouth.edu/~colloq/s11/ChantalDavid.phtml
NB: A PDF version of this announcement (suitable for posting) is also available.

## Fluctuations in the number of points of curves over finite fields

### Thursday, May 19, 2011

007 Kemeny Hall, 4 pm. Tea at 3:30 pm, 300 Kemeny Hall.

Abstract: We study in this talk the distribution of the number of points for two families of curves over a finite field with $q$ elements: cyclic covers of $\mathbb{P}^1$ and smooth plane curves. The Katz-Sarnak philosophy makes predictions about the statistics for such families in the large $q$ limit when the genus is fixed. We are looking at the complementary statistics, when the genus varies but the field of definition is fixed. In that case, one can obtain statistics for the distribution of the number of points by sieving the families of curves. This is joint work with A. Bucur, B. Feigon and M. Lalin.

This talk will be accessible to graduate students.
2021-10-28 00:53:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2452419102191925, "perplexity": 682.6463644936683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588246.79/warc/CC-MAIN-20211028003812-20211028033812-00520.warc.gz"}
https://www.sparrho.com/item/method-and-system-for-digital-noise-reduction-of-scaled-compressed-video-pictures/df6670/
# Method and system for digital noise reduction of scaled compressed video pictures

Imported: 17 Feb '17 | Published: 23 Sep '14

USPTO - Utility Patents

## Abstract

In a video processing device, scale of a video image is detected for vertical and horizontal directions based on pixel information, for example, per pixel vertical and horizontal gradients. Gradients are utilized or discarded based on picture format, standard deviation of luma levels and pixel location relative to black border edges, graphics and/or overlaid content. Mosquito noise filters are adapted based on scale and/or noise strength. Median and/or linear filter results are selected based on a weakest, a strongest and/or a blended result. Horizontal and vertical operations are performed separately for edge detection, edge strength determination, filtering and filter correction control. Horizontal and vertical block grid spacing and grid shift are determined. Block noise strength is determined. Block noise filters are configured based on scaling and/or noise strength. Filter corrections are limited based on block noise strength. Noise reduction results may be blended to generate a pixel correction value.

## Description

### CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE

This application makes reference to:

- U.S. patent application Ser. No. 11/087,491 filed Mar. 22, 2005;
- U.S. patent application Ser. No. 11/083,597 filed Mar. 18, 2005;
- U.S. patent application Ser. No. 11/090,642 which was filed on Mar. 25, 2005;
- U.S. patent application Ser. No. 11/089,788 which was filed on Mar. 25, 2005; and
- U.S. patent application Ser. No. 11/491,599 which was filed on Jul. 24, 2006.

Each of the above stated applications is hereby incorporated herein by reference in its entirety.

### FIELD OF THE INVENTION

Certain embodiments of the invention relate to communication systems. More specifically, certain embodiments of the invention relate to digital noise reduction of scaled compressed video pictures.

### BACKGROUND OF THE INVENTION

Advances in compression techniques for audio-visual information have resulted in cost effective and widespread recording, storage, and/or transfer of movies, video, and/or music content over a wide range of media. The Moving Picture Experts Group (MPEG) family of standards is among the most commonly used digital compression formats. A major advantage of MPEG compared to other video and audio coding formats is that MPEG-generated files tend to be much smaller for the same quality. This is because MPEG uses very sophisticated compression techniques. However, MPEG compression may be lossy and, in some instances, it may distort the video content. In this regard, the more the video is compressed, that is, the higher the compression ratio, the less the reconstructed video resembles the original information. Some examples of MPEG video distortion are a loss of texture, detail, and/or edges. MPEG compression may also result in ringing on sharper edges and/or discontinuities on block edges. Because MPEG compression techniques are based on defining blocks of video image samples for processing, MPEG compression may also result in visible "macroblocking" due to bit errors. In MPEG, a macroblock is the area covered by a 16×16 array of luma samples in a video image. Luma may refer to a component of the video image that represents brightness.
Moreover, noise due to quantization operations, as well as aliasing and/or temporal effects, may all result from the use of MPEG compression operations. When MPEG video compression results in loss of detail in the video image it is said to "blur" the video image. In this regard, operations that are utilized to reduce compression-based blur are generally called image enhancement operations. When MPEG video compression results in added distortion on the video image it is said to produce "artifacts" on the video image. For example, the term "mosquito noise" may refer to MPEG artifacts that may be caused by the quantization of high spatial frequency components in the image. Mosquito noise may also be referred to as "ringing" or the "Gibbs effect." In another example, the term "block noise" may refer to MPEG artifacts that may be caused by the quantization of low spatial frequency information in the image. Block noise may appear as edges on 8×8 blocks and may give the appearance of a mosaic or tiling pattern on the video image.

Mosquito noise commonly appears near sharp luma edges, making credits, text, and/or cartoons particularly susceptible to this form of artifact. Mosquito noise may be more common, and generally more severe, at low bit rates. For example, mosquito noise may be more severe when macroblocks are coded with a higher quantization scale and/or on a larger quantization matrix. Mosquito noise may tend to appear as very high spatial frequencies within the processing block. In some instances, when the input video to the MPEG compression operation has any motion, the mosquito noise generated may tend to vary rapidly and/or randomly, resulting in flickering noise. Flickering noise may be particularly objectionable to a viewer of the decompressed video image. In other instances, when the input video to the MPEG compression operation is constant, the mosquito noise that results is generally constant as well. Horizontal edges tend to generate horizontal ringing while vertical edges tend to generate vertical ringing. While mosquito noise may also occur in the color components or chroma of a video image, it may generally be less of a problem since it is less objectionable to a viewer of the decompressed video image.

Block noise may generally occur near a block boundary. While block noise may occur anywhere on an image, it is more commonly seen in nearly smooth regions, such as the sky and faces, or in high motion or high variance regions, such as moving water. Block noise may be more common, and generally more severe, at low bit rates. For example, block noise may be more severe when macroblocks are coded with a higher quantization scale and/or on a larger quantization matrix. While block noise is typically caused by quantization of low spatial frequency terms that result from the DCT operation, it is not generally caused by the quantization of the DC term. For example, MPEG compression generally provides at least 8 bits when quantizing the DC term of intra coded blocks. Block noise may also appear at discontinuities located at or near the block edges. The block boundaries may remain fixed even when the video image moves. In this regard, a static block pattern may stand out strongly against a moving background, a condition that may be highly objectionable from a viewer's perspective. In some instances, however, motion vectors generated during MPEG compression may cause block noise to move with the video image, but this is generally less common and less objectionable from a viewer's perspective.
Block noise may be more objectionable on vertical edges than on horizontal edges, particularly on an interlaced display. Block noise may generally be more pronounced in certain picture coding types. For example, block noise may often be worse in intra coded pictures or I-pictures and in predicted pictures or P-pictures. While block noise is generally associated with the luma component of a video image, it may also occur in the chroma component of a video image. However, the block noise in the chroma component may generally be less of a problem since it is less objectionable to a viewer of the decompressed video image.

Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.

### BRIEF SUMMARY OF THE INVENTION

A system and/or method for digital noise reduction of scaled compressed video pictures. Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.

### DETAILED DESCRIPTION OF THE INVENTION

Certain embodiments of the invention can be found in a method and system for digital noise reduction of scaled compressed video pictures. In accordance with various embodiments of the invention, in a video processing device, scale may be detected in a video image. The scale may be detected in one or both of vertical direction and horizontal direction based on pixel information for the video image. One or both of a first video noise reduction operation and a second video noise reduction operation, which are utilized for processing at least a portion of the video image, may be controlled based on the detected scale. A pixel correction value may be generated based on one or both of results from the first video noise reduction operation and results from the second video noise reduction operation. The results from the first video noise reduction operation and the results from the second video noise reduction operation may be blended to generate the pixel correction value. At least one pixel value may be corrected for the video image utilizing the generated pixel correction value. The first video noise reduction operation and the second video noise reduction operation may comprise mosquito noise reduction and block noise reduction. The scale may be determined based on one or both of a per pixel vertical gradient measurement and a per pixel horizontal gradient measurement. Which gradient measurements to utilize and/or which gradient measurements to discard may be determined based on one or more of the following: configured picture format information associated with the video image, standard deviation of luma levels in one or both of a vertical window and a horizontal window about a current pixel of said video image, and a current pixel location relative to edges of black borders, graphics and/or overlaid content associated with said video image. During one or both of the first video noise reduction operation and the second video noise reduction operation, horizontal operations which may correspond to the horizontal direction may be performed separately from vertical operations which may correspond to the vertical direction.
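As a rough illustration of the blending step just described, a Python sketch (the patent gives no code; the function name, the single blend weight `alpha`, and the 8-bit clamp are assumptions made here for concreteness):

```python
def apply_blended_correction(pixel, mn_correction, bn_correction, alpha=0.5):
    """Blend mosquito-noise (MN) and block-noise (BN) correction values into
    one per-pixel correction, apply it, and clamp to the 8-bit luma range.
    `alpha` in [0, 1] weights the MN result; the blending rule itself is
    left unspecified in the text.
    """
    correction = alpha * mn_correction + (1.0 - alpha) * bn_correction
    corrected = int(round(pixel + correction))
    return min(max(corrected, 0), 255)

# A luma sample of 128 with MN correction -6 and BN correction +2:
print(apply_blended_correction(128, -6.0, 2.0))  # -> 126
```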
The video noise operations may comprise: detecting horizontal and vertical edges, determining strength of horizontal and vertical edges, filtering horizontal and vertical edges and controlling the amount of horizontal filtering and the amount of vertical filtering. For example, horizontal filtering and/or vertical filtering may be adapted based on the determined horizontal direction scale and/or the determined vertical direction scale. Furthermore, the horizontal and/or vertical filtering may be adapted based on the determined strength of the horizontal and the vertical edges. Results from one or both of the first video noise reduction operation and the second video noise reduction operation may be determined based on one or more of: selecting a weakest filter correction from a median filter and one or more linear filters, blending filter corrections from a median filter and one or more linear filters and selecting a strongest filter correction from a median filter and one or more linear filters. Horizontal spacing of a block noise grid, vertical spacing of a block noise grid, horizontal shift of a block noise grid, vertical shift of a block noise grid and/or block noise strength may be determined during one or both of the first video noise reduction operation and the second video noise reduction operation. Determination of which pixels to filter in a picture may be based on one or more of the horizontal spacing, the vertical spacing, the horizontal shift and the vertical shift. Vertical and/or horizontal block noise filters may be configured based on one or more of the horizontal direction scale, the vertical direction scale, the horizontal spacing and the vertical spacing. Filter corrections may be limited based on block noise strength when determining the results from the first video noise reduction operation and/or the results from the second video noise reduction operation. In this manner, digital noise may be reduced in scaled compressed video pictures, for example, when video pictures are scaled after being compressed.

FIG. 1 is a block diagram that illustrates exemplary functions in a system which performs digital noise reduction (DNR) of video pictures utilizing scale factor detection information, in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown a video display device digital noise reduction system 100 which comprises a video input 120, a scale factor detection unit 102, a mosquito noise detection unit 104, a block grid detection unit 108, a mosquito noise reduction unit 106 and a block noise reduction unit 110. The video display device digital noise reduction system 100 may be integrated in a video display device or digital television, for example. The video display device digital noise reduction system 100 may be operable to receive the video input 120 from, for example, a set top box or disk device such as a Blu-ray device. The video input 120 may comprise a sequence and/or stream of decoded video data. The video input 120 may comprise pictures or images that may, for example, comprise progressive frames, interlaced fields and/or converted pictures. For simplicity, a field and/or a frame may sometimes be referred to as a picture or an image. The following discussion will generally use the terms "image" and "picture" interchangeably. Accordingly, notions of a difference between the terms "image" and "picture" should not limit the scope of various aspects of the present invention. The video input 120 may or may not be spatially scaled.
Scaling may change the size of a picture in one or both of the horizontal direction (x) and the vertical direction (y). When video input 120 is not scaled, it may be referred to as native content or as comprising a native resolution. Digital noise reduction (DNR) may be utilized to reduce noise artifacts, and may comprise removing mosquito noise (MN), block noise (BN) and contours. Noise artifacts may comprise added distortion, for example, ringing, while blurring may comprise a visual loss of detail. The video input 120 may have been compressed utilizing transform encoding by a video source and may have been decompressed by a video processing device such as, for example, a set top box or digital television, prior to being sent to the video display device digital noise reduction system 100. For example, the video input 120 may have been compressed or encoded utilizing MPEG 2 where 8×8 blocks of pixels undergo discrete cosine transforms (DCT). The image and/or video transform encoding may introduce ringing artifacts on edges within a video picture. Also, block noise may be introduced along transform block boundaries. When the video input 120 pictures are decompressed, various digital noise artifacts such as, for example, block grid noise and mosquito noise may be found in all or a portion of a video picture.

Video input 120 pictures may have been scaled prior to compression coding, subsequent to compression coding and/or subsequent to decompression. In instances when scaling is performed prior to compression or when video is not scaled, digital noise such as block grid noise and mosquito noise may not be scaled up or down with the video content. The location and/or frequency spectrum of the noise may be somewhat predictable in that it may occur according to a known 8×8 DCT block pattern, for example. However, in instances when picture scaling is performed subsequent to compression and/or subsequent to decompression, an 8×8 block noise pattern and other noise components may be scaled along with the content of the video input 120. As a result, the location and/or frequency spectrum of digital noise such as block grid noise and/or mosquito noise may be unknown in the video input 120. Detecting a scale factor in the received video input 120 may improve the efficacy of digital noise detection and digital noise reduction.

The scale factor detection unit (SFD) 102 comprises suitable logic, circuitry, interfaces and/or code that may be operable to receive the video input 120 and detect whether the video input 120 has been scaled. The SFD 102 may be operable to determine a scale factor for one or more pictures in the video input 120. The scale factor may be communicated to the mosquito noise detection unit 104, the block grid detection unit 108, the mosquito noise reduction unit 106 and/or the block noise reduction unit 110. The SFD 102 may be operable to determine the scale factor based on gradients that are determined with respect to luma pixel values and/or, in various embodiments of the invention, with respect to chroma pixel levels. Furthermore, the SFD 102 may be operable to exclude certain picture content from gradient analysis in instances when sharp edges in the content itself would yield large gradient values which may be misleading. For example, picture areas comprising high frequency content such as superimposed graphics, subtitles, and black borders which may comprise letter box, pillar box and postage stamp borders, may be flagged and gradient analysis may be excluded in these regions.
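The exclusion of black borders and superimposed graphics from gradient analysis can be illustrated with a small sketch. The black-level and standard deviation thresholds below are assumed values chosen only for illustration; the disclosure states the principle (flagging such regions) without fixing particular thresholds.

```python
import statistics

def usable_for_gradient(window, black_level=20, max_std=40.0):
    """Decide whether a small luma window may contribute gradient
    measurements. Windows touching near-black (border) pixels or showing
    uncharacteristically high variation (e.g., overlaid graphics or
    subtitles) are excluded."""
    if min(window) <= black_level:
        return False  # likely letter box / pillar box / postage stamp border
    if statistics.pstdev(window) > max_std:
        return False  # sharp superimposed content would mislead the gradient
    return True

print(usable_for_gradient([120, 122, 119, 121]))  # True: ordinary content
print(usable_for_gradient([16, 16, 16, 180]))     # False: black border edge
```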
The mosquito noise detection unit (MND) 104 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to estimate mosquito noise strength of a current picture in the video input 120. Output from the MND 104 may be utilized to adapt mosquito noise reduction (MNR) filters to varying qualities of pictures within the input video 120. In this regard, MNR filters may be adjusted or selected to handle different amounts of mosquito noise in pictures of the video input 120. Mosquito noise strength estimation (MNSE) in the MND 104 may be based on mosquito noise filtering. In some instances, various logic, circuitry, interfaces and/or code that may be operable to perform mosquito noise filtering may also be utilized for MNSE. The MND 104 may estimate the amount of mosquito noise in a picture by determining an amount of filtering that may be applied to all or a portion of the picture. The MND 104 may then normalize the amount of filtering based on a sum of edge strength that may be determined for the entire picture, for example, to determine the MNSE.

The mosquito noise reduction unit (MNR) 106 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to perform separate noise detection for horizontal and vertical edges within pictures of the video input 120. In this regard, the vertical edges of mosquito noise may tend to propagate horizontally and the horizontal edges may tend to propagate vertically. The MNR unit 106 may also be operable to perform separate horizontal and vertical filtering where the amount of filtering may be controlled based on the separate horizontal and vertical mosquito noise detection. The MNR unit 106 may be operable to select from a plurality of filter types such as median filters and linear low pass filters for reducing the horizontal and/or the vertical noise components. In this regard, the MNR unit 106 may apply a reduced or minimal amount of filtering to the video input 120, depending on picture content and/or noise characteristics. For example, median filters may work better in areas with sharp edges in video content, whereas linear low pass filters may tend to blur the content. In areas of high frequency patterns and/or smooth surfaces, linear low pass filters may work best, while median filters may invert some high frequency patterns or may cause contouring on smooth surfaces. The MNR unit 106 may apply both filters to the video input 120 and may utilize the one that resulted in the least filtering. The MNR unit 106 may be operable to adjust the size of a filter; for example, the size of the MNR filters may be programmable. In this regard, the number of horizontal and vertical taps and/or coefficients may vary depending on the size and/or scale factor of the video input 120.

The block grid detection unit (BGD) 108 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to estimate the location of grid lines associated with block noise and assess the strength of the block grid in the video input 120. For example, a block grid size, which may comprise the distance between two consecutive block boundaries, may be determined for the vertical direction and also for the horizontal direction in a video picture. Also, a block grid shift value or phase may be determined which may comprise an offset of the first horizontal grid edge from the top of the picture and an offset of the first vertical edge from the left of the picture.
In addition, a per picture measurement of block noise strength may be determined for each horizontal and/or vertical direction in a picture. The block grid detection unit 108 may be operable to handle any suitable block size and/or shift value and may operate without prior knowledge of scaling values of the input video 120. The determined block grid localization and strength information may be communicated to the block noise reduction unit 110 for filtering of the block noise. The block grid detection output may be utilized to determine an amount of filtering to apply to video content. Block noise may be present in all or a portion of a picture.

The block noise reduction unit 110 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to detect block edges on the block grid of a picture and filter the edges appropriately. The block noise reduction unit 110 may be operable to determine limits that may be utilized to control block noise filtering. For example, in instances when a strong edge is found, a large limit may allow strong filtering. In instances when no edge is found, a zero limit may disable filtering. The block noise reduction unit 110 may utilize horizontal and vertical markers from the block grid detection unit 108 when detecting edges in pictures of the video input 120. In various embodiments of the invention, the noise detection units, for example, the MND 104 and/or the BGD 108, may be operable to perform scale factor detection. For example, scale factor analysis by the MND 104 and/or the BGD 108 may be combined with that of the SFD 102 to refine scale factor detection.

In operation, digital picture coding that utilizes block based compression algorithms such as MPEG-2, for example, may introduce block noise and mosquito noise artifacts that may need to be filtered out of a picture. By determining the location and frequency spectrum of the digital noise, 2-D filtering may be adjusted to suppress the noise artifacts with minimal effect on other active regions of video content. In instances when an image has been scaled up, subsequent filtering of block and/or mosquito noise may be affected. For example, when video data is encoded prior to up scaling and/or de-interlacing, the scaling will affect the source data as well as the block noise and mosquito noise artifacts. In this regard, the frequency spectrum of the noise artifacts and gradients determined for an image may be reduced when the digital noise is scaled up. Digital filters may be adjusted to mitigate the lower frequency, scaled noise artifacts.

FIG. 2 is a block diagram that illustrates an exemplary scale factor detector in a digital noise reduction system, in accordance with an embodiment of the invention. Referring to FIG. 2, there is shown a digital noise reduction system 200 comprising a video bus receiver 202, line stores 204, a scale factor detection unit (SFD) 206, a digital noise reduction (DNR) unit 208, a chroma delay and filter 210, and a video bus transmitter 212. The video bus receiver (VB RCV) 202 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive MPEG-coded and/or decoded images in a format that is in accordance with a bus protocol. The VB RCV 202 may also be adapted to convert the received MPEG-coded and/or decoded video images into a different format for transfer to the line stores 204.
The line stores 204 may comprise suitable logic, circuitry, interfaces and/or code that may be adapted to convert raster-scanned luma data from a current MPEG-coded video image into parallel lines of luma data. The line stores block 204 may be adapted to operate in a high definition (HD) mode or in a standard definition (SD) mode. Moreover, the line stores block 204 may also be adapted to convert and delay-match the raster-scanned chroma information into a single parallel line. The SFD 206 may be similar and/or substantially the same as the SFD 102 shown in FIG. 1. The SFD 206 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive lines of pixels from the line stores 204 and determine horizontal and/or vertical scale factors for a video picture corresponding to the lines of pixels. In this regard, the SFD 206 may be operable to determine an overall gradient for each video picture and may utilize the overall gradient to determine a scale factor for each video picture. The SFD 206 may communicate the scale factor to the digital noise reduction (DNR) unit 208 to be utilized in noise detection and/or noise reduction.

The DNR unit 208 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to perform mosquito noise strength estimation, mosquito noise reduction, block grid detection and block noise reduction. In this regard, the DNR unit 208 may comprise the mosquito noise detection unit 104, the block grid detection unit 108, the mosquito noise reduction unit 106 and the block noise reduction unit 110 which are described with respect to FIG. 1. The DNR unit 208 may be operable to output noise reduced luma data to the video bus transmitter 212. The chroma delay and filter 210 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to delay the transfer of chroma pixel information in the chroma data line to the video bus transmitter (VB XMT) 212 to substantially match the time at which the luma data from the digital noise reduction unit 208 is transferred to the VB XMT 212. The VB XMT 212 may comprise suitable logic, circuitry, and/or code that may be adapted to assemble noise-reduced MPEG-coded video images into a format that is in accordance with the bus protocol supported by the VB.

In operation, the digital noise reduction system 200 may utilize luma data to estimate spatial scaling in a pixel domain for video pictures. Scaling may comprise stretching or compressing in either or both of a horizontal (x) direction and a vertical (y) direction. In instances when an image is scaled up in a particular direction, a sharp luma edge along that direction may become less sharp or dull in the corresponding scaled image. Therefore, a measurement of gradient at the edge in the duller scaled picture would result in a lower gradient than a measurement of gradient for the corresponding sharper edge in the original picture. In this manner, luma gradients may be utilized in the estimation of scale factor.

In an exemplary embodiment of the invention, the VB RCV 202 may be operable to receive MPEG coded and/or decoded video content and may convert the content into a different format for transfer to the line stores 204. The line stores 204 may receive the video content corresponding to a current video picture and may convert raster-scanned luma data from the current picture into parallel lines of luma data.
The line stores 204 may communicate the parallel lines to the scale factor detection (SFD) unit 206 and to the digital noise reduction (DNR) unit 208. The line stores block 204 may also convert and delay match raster-scanned chroma information for the current picture into a single parallel line and may communicate the chroma information to the chroma delay and filter unit 210. The SFD unit 206 may determine an overall gradient for the MPEG coded and/or decoded picture and may estimate a scale factor for the picture. The scale factor may be communicated to the DNR 208 to be utilized in digital noise detection and/or digital noise reduction. The DNR unit 208 may process the received video picture lines and may generate a plurality of noise correction parameters. In this regard, the DNR unit 208 may detect mosquito noise strength in horizontal and vertical directions and may determine whether the mosquito noise has been scaled. The DNR unit 208 may determine block grid locations and offsets and may determine whether block noise has been scaled. The mosquito noise and block noise information may be utilized to adjust filtering parameters, for example, filter strength and filter range. Filters may be applied separately in the horizontal and vertical directions and may be controlled based on edge strength in each direction. The DNR unit 208 may apply block noise filters to the video picture pixels and may communicate the filtered video picture to the VB XMT 212 for transmission to the video bus.

FIG. 3 is a block diagram that illustrates exemplary scale factor detection architecture that is operable to determine scale factor in video pictures that are scaled subsequent to compression and/or decompression, in accordance with an embodiment of the invention. Referring to FIG. 3, there is shown the scale factor detection unit (SFD) 206 which may comprise a gradient determination unit 350, a horizontal IIR 352, a vertical IIR 354, a weighted average unit 356, a slicer with hysteresis 358, a convert to 3 bit unit 358 and a scale factor override 360. In addition, there is shown the video bus receiver (VB RCV) 202 and the line stores 204. The VB RCV 202, the line stores 204 and the SFD 206 are described with respect to FIG. 2 and FIG. 1.

The gradient determination unit 350 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive luma pixel information from the line stores 204 and may output an overall scale factor for a video image and/or a sequence of images. The gradient determination unit 350 may be operable to generate basic metrics for horizontal and vertical gradients and may be operable to output the separate horizontal and vertical maximum gradients for a video image. The horizontal and vertical gradient metrics may be utilized in the gradient determination unit 350 to generate a decision as to whether a video picture has been scaled. The horizontal and vertical gradients may also be output for use in digital noise reduction and/or to determine separate horizontal and vertical scale factors for a video image. The gradient determination unit 350 may utilize luma data to estimate spatial scaling in the pixel domain for video pictures. Scaling may comprise stretching or compressing in either or both of a horizontal (x) direction and a vertical (y) direction. In instances when an image is scaled up in a particular direction, a sharp luma edge along that direction may become less sharp or dull in the corresponding scaled image.
Therefore, a measurement of gradient at the edge in the duller scaled picture would result in a lower gradient than a measurement of gradient for the corresponding sharper edge in the original picture. In this manner, luma gradients may be utilized in the detection of scaling and/or estimation of scale factor. For example, the gradient determination unit 350 may perform the following operations:

• (1) a maximum and a minimum luma value, MAX(Y) and MIN(Y), may be determined over a vertical window of pixels that is centered about the pixel Y(y, x). The pixels may be received from the available line stores 204. In the case of a vertical gradient, the lines may be horizontally subsampled;
• (2) an absolute difference in luma of adjacent pixels may be determined as, for example, |Y(y+1, x)−Y(y, x)|;
• (3) a mean luma level and a standard deviation may be determined for a top portion of the vertical window of pixels and a mean luma level and a standard deviation may be determined for the bottom portion of the vertical window of pixels;
• (4) the presence of black pixels and/or a black edge that may belong to a superimposed black border or graphics area may be determined and flagged; and
• (5) a "safe region" may be determined where edges within the safe region are not a result of superimposed borders or graphics and where pixels may be effectively utilized to determine gradient.

In an exemplary embodiment of the invention, a vertical luma gradient for a current pixel at Y(y, x) may be determined as follows:

$\mathrm{gradient}_y = \dfrac{\left|Y(y+1, x) - Y(y, x)\right| \cdot 256}{\mathrm{MAX}(Y) - \mathrm{MIN}(Y) + \mathrm{GRADY\_BIAS\_OUT}} = \dfrac{\mathrm{Abs\_diff} \cdot 256}{\mathrm{range}}$

The difference between MAX(Y) and MIN(Y) in the denominator may be referred to as the range. The range may be utilized by decision logic as well as in determination of the gradient. The range in the denominator may provide a normalizing factor that may affect how gradual or sharp a luma step change is relative to other pixel levels in the vertical window of pixels. The term GRADY_BIAS_OUT in the denominator may prevent a divide by zero situation, for example, in regions where luma may have a flat value or where a range is very small.

The gradient determination unit 350 may be operable to determine a vertical gradient for a current pixel in the vertical window of pixels, a top standard deviation and bottom standard deviation for the top and bottom portions of pixels in the vertical window of pixels and a black pixel flag for the top and bottom portions of the window of pixels. The gradient determination unit 350 may consider these values as well as one or more previously determined gradients to determine a vertical gradient value for the current pixel and/or to determine whether to discard or retain the vertical gradient for use in determining a maximum vertical gradient for the current video picture. In various embodiments of the invention, a picture format may be stored in memory and may be known to the SFD 206. In this regard, horizontal and vertical gradient limits may be asserted based on the known picture format.

The horizontal IIR 352 and the vertical IIR 354 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive the determined maximum horizontal gradients and vertical gradients for a video picture from the gradient determination unit 350 and may smooth the max gradient values over a sequence of video pictures. Outputs from the horizontal IIR 352 and the vertical IIR 354 may be communicated to the weighted average unit 356.
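The vertical gradient equation above can be exercised with a short Python sketch, assuming 8-bit luma values; the window contents and the GRADY_BIAS_OUT value of 1 are assumptions for illustration only.

```python
GRADY_BIAS_OUT = 1  # prevents divide-by-zero in flat regions (assumed value)

def vertical_gradient(window, i):
    """Gradient of window[i] against window[i + 1], normalized by the luma
    range (MAX(Y) - MIN(Y)) over the vertical window of pixels."""
    abs_diff = abs(window[i + 1] - window[i])
    rng = max(window) - min(window)
    return (abs_diff * 256) // (rng + GRADY_BIAS_OUT)

# A sharp edge in unscaled content yields a large gradient (254 here) ...
print(vertical_gradient([16, 16, 16, 235, 235], 2))
# ... while the same edge smeared by vertical upscaling yields a smaller
# gradient (86 here), which is what makes scaling detectable.
print(vertical_gradient([16, 16, 90, 164, 235], 2))
```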
The horizontal IIR 352 and the vertical IIR 354 may be updated at the end of each picture or frame based on the current value of the maximum horizontal and vertical gradients over the picture, respectively. Moreover, the outputs from the horizontal IIR 352 and the vertical IIR 354 may be directly output from the SFD 206. The weighted average unit 356 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to determine an overall average gradient of the horizontal and vertical gradients on a per picture basis utilizing a weighted average, for example. In various embodiments of the invention, the weights may be programmable. Output from the weighted average may be communicated to the slicer with hysteresis unit 358.

The slicer with hysteresis unit 358 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive the weighted average gradient values and adjust or convert the gradient values to corresponding quantized levels. For example, gradients that may range between 65 and 245 may be assigned to one of twenty corresponding levels; however, the invention is not limited in this regard. The slicer with hysteresis unit 358 may also consider previous changes in gradient levels relative to a current level and/or relative to a hysteresis threshold, in order to determine whether a current change in gradient level is valid. In this manner, unnecessary volatility in the overall gradient level may be reduced. The output of the slicer with hysteresis unit 358 may be utilized to determine the extent of scaling in an image sequence. The sliced levels may enable better filter selection for the removal of digital noise artifacts. The convert to 3 bit unit 358 may reduce unnecessary precision of the slicer with hysteresis unit 358 output. In an alternative embodiment of the invention, outputs from the horizontal IIR 352 and the vertical IIR 354 filters may be utilized to determine separate horizontal and vertical scale factors. In this regard, the weighted average unit 356 may be bypassed and the output from each IIR filter may be communicated to a slicer with hysteresis and convert to 3 bit unit to determine the separate scale factors. The scale factor override 360 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive a scale factor from an external source that may be used instead of the scale factor determined based on the gradient determination unit 350. In various embodiments of the invention, the scale factor override 360 may be utilized to select the output from the slicer with hysteresis or the scale factor from the external source.

In operation, the SFD 206 may generate a scale factor for an image and/or a sequence of images. The SFD 206 may also provide horizontal and vertical maximum gradients for the image or sequence of images. In various embodiments of the invention, the SFD 206 may be operable to flag an image which is scaled in either or both of the horizontal and vertical directions and may provide corresponding scale factors. For example, the scale information may be communicated to digital noise detection and/or digital noise reduction units and may be utilized to detect noise and/or to adjust digital noise filters. The SFD 206 may be configured to define a safe region within active video content for performing gradient measurements. The safe region may aid in avoiding high contrast regions of, for example, black borders and/or edges of graphics.
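A minimal sketch of the per-picture IIR smoothing and the slicer with hysteresis described above is shown below. The 65 to 245 range, the twenty levels and the use of a hysteresis threshold follow the text; the smoothing coefficient, the level arithmetic and the threshold of one level are assumptions.

```python
def iir_update(state, picture_max, k=0.25):
    """First-order IIR update toward the current picture's maximum gradient.
    The coefficient k = 0.25 is an assumed value."""
    return state + k * (picture_max - state)

def slice_with_hysteresis(value, prev_level, lo=65, hi=245, levels=20, hyst=1):
    """Quantize a gradient in [lo, hi] to one of `levels` steps, rejecting a
    level change unless it exceeds the hysteresis threshold."""
    clamped = min(max(value, lo), hi)
    level = int((clamped - lo) * levels // (hi - lo + 1))
    return level if abs(level - prev_level) > hyst else prev_level

gradient, level = 100.0, 0
for picture_max in (180, 190, 185, 188):  # max gradients over four pictures
    gradient = iir_update(gradient, picture_max)
    level = slice_with_hysteresis(gradient, level)
print(round(gradient, 1), level)  # smoothed gradient and its quantized level
```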
The SFD 206 may be operable to determine gradients on a pixel by pixel basis in the horizontal and vertical directions utilizing luma information, for example. The SFD 206 may be operable to determine which gradients to retain and which to discard for the determination of a scale factor. For example, a gradient may be discarded when it may cause an image to appear to be at native resolution regardless of whether or not the image was scaled. In this regard, standard deviation of luma levels may be determined to indicate areas of uncharacteristic high frequency noise. The standard deviation information may enable determination of which gradients to retain and which to neglect. This may provide an amount of immunity from noisy or near Nyquist regions within an image. For example, when vertical luma spatial variation is close to Nyquist, pixels which are interpolated within a region of high frequency might still yield a gradient which may appear as a gradient in an un-scaled image. Furthermore, sharp contrast edges at black borders and/or graphics borders may be avoided when determining max gradients for scale factor detection. The SFD 206 may determine an overall maximum horizontal gradient and an overall maximum vertical gradient for an image. The maximum gradients may be IIR filtered and averaged together. The average maximum gradient for an image may be quantized prior to determining a scale factor for the image, and the results may be distributed to other digital noise processing units such as one or more of block grid detection, mosquito noise detection, block grid noise reduction and mosquito noise reduction.

FIG. 6 is a diagram that illustrates various aspects of mosquito noise in video systems, in accordance with an embodiment of the invention. Referring to FIG. 6, there is shown a video image 600 comprising typical mosquito noise. Mosquito noise refers to MPEG artifacts that may be caused by quantization of high frequency components. Mosquito noise may be an artifact of the 8×8 block discrete cosine transform (DCT) and may originate within blocks. Motion compensation and/or scaling may carry mosquito noise beyond a block boundary. Mosquito noise may occur near sharp luma edges; for example, credits, text and cartoons may be highly susceptible. Mosquito noise may occur and may be more severe at low bit rates. Mosquito noise may appear as high frequency noise within a block. In some instances, horizontal edges may cause horizontal ringing and vertical edges may cause vertical ringing. Vertical and horizontal ringing may be additive, for example. When the edges are diagonal, a checkerboard pattern may occur near the diagonal edge. The checkerboard patterns may be stronger near an intersection between a horizontal and a vertical edge than the ringing that occurs along horizontal or vertical edges. Moreover, mosquito noise may not fade away from edges as does the fast Fourier transform (FFT) ringing that occurs as a result of the Gibbs phenomenon. In some instances, the largest mosquito noise spike may actually occur farthest from the edge.

FIG. 7 is a block diagram that illustrates an exemplary mosquito noise reduction system that may be operable to perform mosquito noise strength estimation and mosquito noise reduction, in accordance with an embodiment of the invention. Referring to FIG. 7, there is shown a mosquito noise reduction (MNR) system 700 that may comprise a block variance unit 710, a MNR filter and limit unit 720, a local variance unit 730 and a memory 740.
The MNR system 700 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to perform separate noise detection for horizontal and vertical edges. The MNR detection may operate over a programmable range. The MNR system 700 may also be operable to perform separate horizontal and vertical filtering of noise artifacts. The amount of applied filtering may be determined and/or controlled based on the separate horizontal and vertical noise detection. Also, the MNR filtering may operate over a programmable range. The amount of applied filtering and the size of the filters, for example, the number of horizontal and/or vertical taps utilized, may be determined based on a determined scale factor, for example. Additional information regarding mosquito noise reduction may be found in U.S. patent application Ser. No. 11/087,491 filed Mar. 22, 2005 and U.S. patent application Ser. No. 11/089,788 which was filed on Mar. 25, 2005. Each of the above stated applications is hereby incorporated herein by reference in its entirety.

The block variance unit 710 may comprise suitable logic, circuitry, interfaces and/or code and may be operable to find the strongest vertical and horizontal edges in a specified region of pixels. The region may be referred to as a block. In this regard, block variance refers to estimating edge strength within a block that comprises the current pixel. The size of the specified region utilized for determining block variance may vary and/or may be programmable. Block variance may be determined for data at native resolution or for data scaled at various horizontal and/or vertical scale factors. For example, in instances when a picture or sequence of pictures is scaled up, a larger block of pixels may be utilized to search for edge strength. Exemplary specified blocks or regions may comprise 8×8 pixel blocks which correspond to one MPEG 2 encoding block, 16×16 pixel blocks, 48×48 pixel blocks, 56×40 pixel blocks and 56×56 pixel blocks. In an exemplary embodiment of the invention, video picture data may be scaled by 3× and the specified regions for determining block variance may comprise 48×48 pixels per block. In this instance, the 48×48 blocks may be subdivided into 8×8 blocks for block variance determination operations. Block variance determination may differentiate edges from noise and/or from content texture. Intermediate results from horizontal and vertical block variance operations may be stored in on-chip memory 740, for example. Results of horizontal and vertical block variance for previous pictures may be stored in off-chip memory, for example.

The local variance unit 730 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to estimate local variance around a current pixel. The local variance may be utilized to control mosquito noise filtering operations. The size of the pixel region for determining local variance may vary and/or may be programmable. The MNR filter and limit unit 720 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to perform separate horizontal and vertical filtering to reduce mosquito noise artifacts. The MNR filter and limit unit 720 may comprise, for example, a median filter which may be controlled by a median filter limit. Moreover, the MNR filter and limit unit 720 may comprise linear filters such as, for example, a horizontal FIR filter and a vertical FIR filter which may be controlled by a horizontal limit and a vertical limit, respectively.
One or both of the median and FIR filters may be utilized to filter the mosquito noise. In this regard, the extent of filtering may be controlled by corresponding limits. The limits may be determined for each pixel and may be determined based on block variances and local variances. Additional median filtering, which may be applied to the pixel input prior to determining edge strengths, may be utilized for texture preservation. The MNR filter and limit unit 720 may support filtering of scaled content and may operate over a programmable range of pixels based on scale factor detection, for example. The MNR filter and limit unit 720 may be operable to adapt to scale factor detection (SFD) and mosquito noise strength estimation (MNSE) and may comprise adjustable filter taps, adjustable coefficients, adjustable search range and/or adjustable strength.

FIG. 8 is a diagram that illustrates a region of 8×8 pixel blocks that may be utilized to determine horizontal variances, vertical variances, and/or block variances, in accordance with an embodiment of the invention. Referring to FIG. 8, there is shown a block of pixels 800. The block of pixels 800 may comprise a plurality of 8×8 blocks for computing horizontal and vertical variances, which may be referred to as H_var and V_var respectively, and for determining maximum horizontal and vertical variances. In this regard, variance may refer to the absolute value of the difference between two pixel luma levels where the pixels may be horizontal or vertical neighbors and/or may be vertically or horizontally aligned but separated by one or more pixels. The luma variances may indicate a luma edge strength and a maximum variance may indicate the maximum luma edge strength in a block of pixels. Mosquito noise may be indicated where large edge strengths are found. An 8×8 block size may be utilized for determining block statistics, for example; however, the invention is not limited in this regard and larger block sizes may be utilized.

Mosquito noise may span a plurality of 8×8 pixel blocks. For example, when content is scaled vertically and/or horizontally, mosquito noise may be drawn away or stretched from a current 8×8 block to one or more other blocks. A plurality of 8×8 blocks may be utilized for determining statistics for a larger block area from which maximum edge strengths may be determined. This may be referred to as merging blocks. For example, for vertical merging, a maximum block variance may be determined from up to three blocks above and three blocks below a current 8×8 block, as sketched below. Similarly, for horizontal merging, up to three blocks to the left and three blocks to the right of a current 8×8 block may be merged to determine a maximum horizontal variance. However, the invention is not limited with regard to any specific number of merged blocks. In some instances, when merging blocks, statistics from a previous picture may be utilized with statistics from a current picture to determine block statistics. Referring to FIG. 8, lines of luma pixels in the un-shaded middle area, that are currently stored in the line stores and available for processing, may be utilized to determine new block edges and maximum block edges for a specified block area.
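Before continuing with FIG. 8, the vertical block-merging rule just described can be sketched as follows; the grid layout and the variance values are illustrative assumptions.

```python
def merged_max_v_var(v_vars, row, col, reach=3):
    """Maximum vertical variance over a vertical run of merged 8x8 blocks:
    the current block plus up to `reach` block rows above and below it."""
    lo = max(0, row - reach)
    hi = min(len(v_vars) - 1, row + reach)
    return max(v_vars[r][col] for r in range(lo, hi + 1))

# Per-block Max V_var values for a 5x3 grid of 8x8 blocks.
v_vars = [[10, 5, 0],
          [12, 7, 0],
          [90, 6, 0],   # a strong edge lands in this block row
          [11, 4, 0],
          [10, 5, 0]]
# The edge two block rows below the current block still dominates: prints 90.
print(merged_max_v_var(v_vars, row=0, col=0))
```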
The middle un-shaded section of the block of pixels 800 represents current lines which may be utilized for determining horizontal variance (H_var), vertical variance (V_var), maximum horizontal variance (Max H_var), maximum vertical variance (Max V_var) and, in some instances, combined horizontal and vertical variances and/or combined maximum variances. One or more of these results may be referred to as statistics, pixel statistics and/or block statistics. In various embodiments of the invention, when determining the statistics for a block, for example, an 8×8 block or a larger block size, processing may begin with a current pixel at the top, left corner of the block and may continue through the block in a raster scan fashion, calculating statistics one pixel at a time, from the top left corner of the block through to the last pixel in the block; however, the invention is not limited in this regard.

Referring to FIG. 8, statistics that may be available from previously processed rows of a current picture are represented in the upper shaded area of the block of pixels 800. These results may be stored in the on-chip memory 740 shown in FIG. 7, for example. The lower shaded area of the block of pixels 800 may represent results from a previous picture that are located in the previous picture, in the same location as shown, relative to the current pixel. The lower shaded area statistics results which are determined from a previous picture may be utilized when estimating statistics for a current pixel. For example, a maximum variance for pixels below the location of a current pixel in a block, from a previously processed picture, may be available for determining a current maximum variance for the current block. Previous picture statistics results may be utilized when a block size spans the pixels with results available from the current picture and results available from the previous picture. In various embodiments of the invention, the previous picture results may be stored in off-chip memory 454, for example.

FIGS. 9A and 9B are diagrams that illustrate exemplary horizontal variance and vertical variance determination, in accordance with an embodiment of the invention. Referring to FIG. 9A, there is shown a portion of a block of pixels 900A. The block variance unit 710, shown in FIG. 7, may be operable to perform determination of horizontal and vertical luma variances for luma edge detection on a pixel by pixel basis and may determine a horizontal and vertical block variance parameter (block_var) based on the detected edges over an entire block or picture. The sharpness of a detected luma edge rather than its length may determine the strength of the mosquito noise. For example, a large step in luma value between compared pixels may indicate mosquito noise. Gently sloping contents in an image block may not generate mosquito noise.

In operation, the block variance unit 710 may determine the block variance parameter by serially calculating and/or determining a maximum horizontal variance parameter (Max H_var) and/or a maximum vertical variance parameter (Max V_var). The value of Max H_var may correspond to the maximum left and right difference between horizontally aligned pixels in a pixel block. The value of Max V_var may correspond to the maximum top and bottom difference between vertically aligned pixels in an image block. The aligned pixels may be immediate neighbors or may be separated by one or more pixels.
In instances when a picture comprises native resolution, the aligned pixels may be immediate neighbors. In instances when a picture or a portion of a picture is scaled, the aligned pixels may be separated by one or more pixels. The values for Max H_var and Max V_var may be reset to a default value at the start of each pixel block for SD pictures or may be scaled from previously determined values for HD pictures. In this regard, a reset default value may be zero.

Referring to FIG. 9A, there is shown two vertically aligned neighboring pixels A and E. There is also shown two horizontally aligned neighboring pixels A and B. In operation, in order to determine horizontal and vertical edge strengths, the following horizontal variance and vertical variance operations may be performed:

H_var = abs(A − B)
V_var = abs(A − E)

where the horizontal and vertical variances may be determined by an operation comprising: absolute value of the difference in luma between two horizontally neighboring pixels and between two vertically neighboring pixels, respectively.

Referring to FIG. 9B, there is shown a portion of a block of pixels 900B. The portion of the block of pixels 900B comprises four vertically aligned pixels A, E, F and G and four horizontally aligned neighboring pixels D, B, A and C. In some instances, a picture may be scaled in one or both of horizontal and vertical directions. Variance may be determined over a wider distance to account for scaling in an image, for example, as follows:

For a horizontal range of 0: H_var = abs(A − B)
For a horizontal range of 1: H_var = abs(C − B)
For a horizontal range >= 2: H_var = abs(C − D)
For a vertical range of 0: V_var = abs(A − E)
For a vertical range of 1: V_var = abs(A − F)
For a vertical range >= 2: V_var = abs(A − G)

where the horizontal and vertical variances may be determined by an operation comprising: absolute value of the difference in luma between two horizontally aligned pixels and between two vertically aligned pixels, respectively. In various embodiments of the invention, prior to determining edge strength, the input may be filtered, for example, by a median filter. In this manner, block variance results may avoid instances where content texture might erroneously be interpreted as mosquito noise edge strength.

In an exemplary embodiment of the invention, a maximum horizontal and maximum vertical variance, over an 8×8 pixel block, may be determined based on intermediate horizontal and vertical variance values determined for each pixel in the block. Prior to the start of variance determination for an 8×8 block of pixels, a maximum horizontal variance (Max H_var) and a maximum vertical variance (Max V_var) may be initialized to zero. At each location in a block, for example in each of 64 locations in an 8×8 block, the horizontal and vertical variances may be determined and Max H_var and/or Max V_var may be updated with the maximum horizontal and maximum vertical variances, respectively, found thus far. Exemplary operations for determining maximum variances for luma pixel values may comprise the following:

Max H_var = MAX[Max H_var, abs(p1 − p2)], and
Max V_var = MAX[Max V_var, abs(p1 − p3)]

where p1 and p2 are horizontally aligned pixels, p1 and p3 are vertically aligned pixels and p1 may represent the current pixel. In various embodiments of the invention, intermediate and/or final results may be stored in on-chip memory 740 and/or off-chip memory 745.
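A minimal Python sketch of the range-dependent variance operations and the running-maximum updates above follows; it assumes an 8-bit luma plane stored as a list of rows, and maps range settings 0, 1 and >= 2 to pixel separations of 1, 2 and 3 per the pixel labels of FIGS. 9A and 9B.

```python
import random

def _sep(rng):
    """Pixel separation for a range setting (0 -> 1, 1 -> 2, >= 2 -> 3)."""
    return 1 if rng == 0 else (2 if rng == 1 else 3)

def h_var(luma, y, x, rng=0):
    """Absolute luma difference between two horizontally aligned pixels."""
    return abs(luma[y][x + _sep(rng)] - luma[y][x])

def v_var(luma, y, x, rng=0):
    """Absolute luma difference between two vertically aligned pixels."""
    return abs(luma[y + _sep(rng)][x] - luma[y][x])

def block_max_vars(luma, y0, x0, size=8, rng=0):
    """Raster-scan a block, keeping running maxima as in the MAX updates."""
    max_h = max_v = 0
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            max_h = max(max_h, h_var(luma, y, x, rng))
            max_v = max(max_v, v_var(luma, y, x, rng))
    return max_h, max_v

random.seed(0)
luma = [[random.randint(16, 235) for _ in range(12)] for _ in range(12)]
print(block_max_vars(luma, 0, 0))  # (Max H_var, Max V_var) for one 8x8 block
```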
In various embodiments of the invention, the mosquito noise reduction (MNR) system 700 may be configured to merge variance results from a plurality of 8×8 blocks in order to find a "block variance" for a larger window of pixels, which may also be referred to as a block. For example, this may occur in instances when the SFD unit 102 or the SFD unit 206 indicates that an image has been scaled in one or both of the vertical and horizontal directions. Merging blocks may be helpful when scaling causes mosquito noise to be expanded over a plurality of 8×8 blocks. Also, merging results from a plurality of 8×8 blocks over a wider window may enable a reduction in memory and/or processing costs.

FIGS. 10A and 10B are diagrams that illustrate a group of local pixels that may be utilized to determine local variances, in accordance with an embodiment of the invention. Referring to FIG. 10A, there is shown a larger block of pixels 1004 and a smaller block of pixels 1002. Within the larger block of pixels 1004 there is shown a region of pixels represented with narrowly spaced hashed lines. These pixels may correspond to a luma edge in the lower left corner of the larger block 1004. The pixels represented with widely spaced hashed lines may correspond to mosquito noise artifacts that may occur in the image block as a result of MPEG coding, for example. The inset smaller block of pixels 1002 may correspond to a current portion of the larger block 1004 for which local variance may be determined.

Referring to FIG. 10B, there is shown the smaller block 1012 that comprises exemplary pixel labels that correspond to the pixels in the inset smaller block 1002 of FIG. 10A. The pixel labeled B11 may correspond to a current pixel for which local variances may be determined based on the smaller block 1012. A local variance (local_var) may be determined for the current pixel in the smaller block 1012. In this regard, local_var may be determined based on a luma value for the current pixel B11 and maximum and minimum luma values of other pixels in the smaller block 1012. A local maximum luma value (local_max) and local minimum luma value (local_min) may be determined by the following expressions, for example:

local_max = MAX[A10, A11, A12, B10, B11, B12, C10, C11, C12]
local_min = MIN[A10, A11, A12, B10, B11, B12, C10, C11, C12]

where [A10, A11, A12, B10, B11, B12, C10, C11, C12] may represent luma values at their respective pixel locations shown in FIG. 10B. The local variance (local_var) may be determined for each pixel. For example, for the current pixel B11 and the smaller block 1012, the local variance may be determined as follows:

local_var = MAX(local_max − B11, B11 − local_min)

where B11 represents the luma value of the current pixel located in the B11 position of the smaller block 1012.

FIG. 11 is a block diagram that illustrates exemplary mosquito noise strength estimation and mosquito noise reduction functions, in accordance with an embodiment of the invention. Referring to FIG. 11, there is shown the block variance unit 710, the local variance unit 730, the mosquito noise strength estimation unit (MNSE) 1108 and the mosquito noise reduction (MNR) filter 1110. Functionality of the block variance unit 710 and the local variance unit 730 is described with respect to FIGS. 7, 8, 9A, 9B, 10A and 10B. The block variance unit 710 comprises suitable logic, circuitry, interfaces and/or code that may be operable to estimate edge strength within a pixel block that comprises a current pixel.
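Returning to the local variance expressions above, the computation for current pixel B11 over the 3×3 window [A10 .. C12] reduces to the following sketch; the window values are illustrative.

```python
def local_var(window3x3):
    """Local variance for the centre pixel of a 3x3 luma window, per
    local_var = MAX(local_max - B11, B11 - local_min)."""
    flat = [p for row in window3x3 for p in row]
    local_max, local_min = max(flat), min(flat)
    b11 = window3x3[1][1]  # current pixel
    return max(local_max - b11, b11 - local_min)

window = [[100, 104, 99],   # A10 A11 A12
          [101, 102, 98],   # B10 B11 B12
          [103, 100, 97]]   # C10 C11 C12
print(local_var(window))    # max(104 - 102, 102 - 97) = 5
```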
In various instances, the pixel block may be greater than 8×8 pixels; for example, a plurality of 8×8 pixel blocks may be merged for determining a block variance. The local variance unit 730 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to estimate a local variance about a current pixel. The MNSE unit 1108 may be operable to estimate how much mosquito noise is present in a pixel block or an entire picture, by pre-filtering the block or picture and comparing the input pixel levels to the filtered output pixel levels, for example, as follows:

$Q = \dfrac{1}{N \cdot M} \sum_{x=1}^{N} \sum_{y=1}^{M} \mathrm{ABS}\big(\mathrm{in}(x, y) - \mathrm{filtered}(x, y)\big)$

where in(x, y) represents a current pixel, filtered(x, y) represents a corresponding filtered pixel and Q represents a delta over an N×M block of pixels. In this regard, the N×M pixels may comprise an entire video picture. The delta Q may be normalized relative to the edge strength over the pixel block, or entire picture, as follows:

$E = \dfrac{1}{N \cdot M} \sum_{x=1}^{N} \sum_{y=1}^{M} \mathrm{EDGE\_STRENGTH}(x, y)$

The raw mosquito noise strength for a given pixel block or for an entire picture may be represented as:

$\mathrm{MNS}_{raw} = \dfrac{Q}{E}$

The difference or delta Q may also be an indication of filter corrections that may be applied to the input pixels to reduce mosquito noise by the MNR filter 1110. A limit may be determined to modify filter corrections performed by the MNR filter 1110 based on estimated edge strength. In this regard, the value of limit may control how much filter correction may be applied to input pixels from the line stores, by the MNR filter 1110. The MNSE 1108 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to estimate mosquito noise strength in a block of pixels and/or may generate a MNR filtering limit. The MNR filtering limit may comprise one or more parameters that may be utilized for making adjustments to MNR filter 1110 coefficients. An exemplary MNR filtering limit may be determined as follows:

limit = block_var − α · local_var

where α may comprise a user input parameter and limit may comprise a horizontal limit, a vertical limit or a combined horizontal and vertical 2D limit, depending on the type of block variance utilized, as follows:

block_var = Max H_var (utilized for horizontal filtering)
block_var = Max V_var (utilized for vertical filtering)
block_var = MAX(H_var, V_var) (utilized for 2D filtering)

In an exemplary embodiment of the invention, in instances when α = 1 and block_var is much greater than local_var, the MNSE unit 1108 may determine that mosquito noise strength is large for a current pixel. Thus, the limit value may be large and may enable stronger filter coefficients in the MNR filter 1110. In instances when block_var is not much greater than local_var, the MNSE unit 1108 may estimate that the presence of mosquito noise is not likely in the current pixel. In this regard, the value of limit may be small and the MNR filter 1110 may reduce the strength of filter coefficients. In various embodiments of the invention, the limit may be "clamped" to a range of values, for example, greater than or equal to zero.

In operation, the block variance unit 710 and/or the local variance unit 730 may be operable to receive luma pixel input from line stores such as the line stores 204, for example.
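The MNSE quantities above reduce to a few lines of code. In this sketch Q is the mean absolute filtering delta, MNS_raw is Q normalized by the mean edge strength E, and the per-pixel limit is block_var − α · local_var clamped at zero; the clamp at zero follows the text, while the example numbers are assumptions.

```python
def mean_abs_delta(orig, filtered):
    """Q: average |in(x, y) - filtered(x, y)| over an N x M region."""
    total = sum(abs(a - b)
                for row_o, row_f in zip(orig, filtered)
                for a, b in zip(row_o, row_f))
    return total / (len(orig) * len(orig[0]))

def raw_mn_strength(q, mean_edge_strength):
    """MNS_raw = Q / E (caller must ensure E is non-zero)."""
    return q / mean_edge_strength

def mnr_limit(block_var, local_var, alpha=1.0):
    """Per-pixel filter-correction limit, clamped to be non-negative."""
    return max(0.0, block_var - alpha * local_var)

print(mnr_limit(60, 12))  # strong block edge, smooth locale -> limit 48.0
print(mnr_limit(15, 20))  # busy locale -> limit clamps to 0.0, no filtering
```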
The block variance unit 710 may be operable to determine maximum horizontal, maximum vertical and/or maximum combined horizontal and vertical block variances over a block of pixels which may be greater than 8×8 pixels. The local variance unit 730 may be operable to determine a local variance for each pixel in the pixel block. The MNSE unit 1108 may estimate mosquito noise edge strength in horizontal and vertical directions for each pixel in the pixel block and may generate horizontal, vertical and combined limit values for adjusting MNR filter coefficients based on the estimated edge strengths. The MNR filter 1110 may receive the luma pixel input from the line stores and may receive the maximum horizontal variance, the maximum vertical variance and/or the maximum combined horizontal and vertical variances over the block of pixels from the block variance unit 710. The MNR filter 1110 may receive the local variances on a pixel by pixel basis from the local variance unit 730. In addition, the MNR filter 1110 may receive the limit value determined by the MNSE unit 1108. The MNR filter 1110 may determine horizontal and vertical filter coefficients for the block of pixels based on the block variances and may adjust the filter coefficients based on the horizontal, vertical and combined limits. The MNR filter 1110 may output filtered pixels.

FIG. 12 is a block diagram that illustrates exemplary mosquito noise reduction, in accordance with an embodiment of the invention. Referring to FIG. 12, there is shown a mosquito noise reduction filter unit 1202, a filter limiter unit 1204, a vertical linear filter 1210, a vertical filter limiter 1212, a horizontal linear filter 1214, a horizontal filter limiter 1216, a 2D median filter 1206, a 2D filter limiter 1208 and a blend and/or select unit 1218. The mosquito noise reduction filter (MNR) 1202 may be similar or substantially the same as the mosquito noise reduction unit 106, the MNR filter and limit unit 720 and/or the mosquito noise reduction filter 1110. The mosquito noise reduction filter 1202 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to perform separate horizontal and vertical filtering and/or may be operable to perform 2D filtering, on a pixel by pixel basis. In this regard, the MNR filter 1202 may comprise median and/or linear filters, for example, FIR filters. MNR 1202 filtering may be adjusted to target horizontal mosquito noise edges, vertical mosquito noise edges and diagonal or curved mosquito noise edges that comprise horizontal and vertical components. The MNR filter 1202 may be configurable based on mosquito noise strength estimation (MNSE) which may be performed on a pixel by pixel basis for horizontal and/or vertical directions. For example, mosquito noise strength estimation may be determined based on a difference value that may result from comparing an original pixel value to a corresponding filtered pixel value. The MNR filter 1202 may select which type of filter to utilize and/or may determine how much filtering may be contributed by one or more filters, based on the MNSE. Furthermore, the size of the MNR filter 1202 may be configurable. For example, the number of taps utilized and/or coefficient values may be adapted based on scale factor information and/or based on whether interlaced versus progressive video is present. Also, the size of the MNR filter 1202 may be adapted based on the pixel block size utilized for MNSE.
The filter limiter unit 1204 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive horizontal, vertical and/or block variances and a local variance and may generate one or more of a horizontal limit, a vertical limit and a combined horizontal and vertical 2D limit as described with respect to FIG. 11. The limits may be utilized to clamp and/or limit filter corrections depending upon the extent of estimated mosquito noise strength associated with a current pixel.

The vertical linear filter 1210 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive a plurality of luma pixels from line stores that may correspond to a column of pixels in a video image. The vertical linear filter 1210 may output filter correction values on a pixel by pixel basis to the vertical limiter 1212. The filter corrections may correspond to a horizontal edge strength in the column of pixels. The number of taps and/or coefficient values may be configurable and/or programmable in the vertical linear filter 1210. The vertical limiter 1212 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to adjust the filter corrections received from the vertical linear filter 1210 based on a vertical limit determined by the limiter unit 1204. The adjusted filter correction value may be output to the horizontal linear filter 1214 as part of a corresponding horizontal row of pixels in the video image. The horizontal linear filter 1214 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive a plurality of luma pixels that may correspond to a row in the video image and have been vertically filtered and limited and may output correction values on a pixel by pixel basis to the horizontal limiter 1216. The number of taps and/or coefficient values may be configurable and/or programmable in the horizontal linear filter 1214. The horizontal limiter 1216 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to adjust the filter correction values received from the horizontal linear filter 1214 based on a horizontal limit from the limiter unit 1204. The adjusted filter corrections may be output to the blend and/or select unit 1218.

The 2D median filter 1206 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive a plurality of luma pixels from line stores and may output filter corrections based on 2D combined or separate horizontal and vertical median filtering. The 2D median filter 1206 may output median filter corrections to the 2D limiter 1208. The 2D limiter 1208 may be operable to clamp or limit the median filter corrections based on a combined horizontal and vertical limit from the limiter 1204. The 2D limiter 1208 may output the 2D limited median filter corrections to the blend and/or select unit 1218.

The blend and/or select unit 1218 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive the limited median filter correction values and the limited linear filter correction values and may blend the limited filter corrections and/or select one of the filter corrections. For example, the blend and/or select unit 1218 may be operable to compare the limited filter correction value from the median filters and the limited filter correction value from the linear filters and may be configured to select the minimum of the correction values.
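The blend and/or select behavior can be summarized by the following sketch, which picks the weakest (smallest magnitude) of the limited median and linear corrections, the strongest, or a programmable blend; the mode names and the default blend fraction are assumptions.

```python
def select_correction(median_corr, linear_corr, mode="min", median_frac=0.25):
    """Combine limited median and linear filter corrections. Mode 'min'
    keeps the weakest correction (least filtering), 'max' the strongest,
    and 'blend' mixes them by median_frac (assumed default of 0.25)."""
    if mode == "min":
        return min(median_corr, linear_corr, key=abs)
    if mode == "max":
        return max(median_corr, linear_corr, key=abs)
    return median_frac * median_corr + (1.0 - median_frac) * linear_corr

print(select_correction(-8, -3, mode="min"))    # -3: least filtering wins
print(select_correction(-8, -3, mode="blend"))  # -4.25: 25% median, 75% linear
```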
In this manner, the MNR filter 1202 may be operable to reduce impairment to video content that may be caused by excessive filtering. Alternatively, the blend and/or select unit 1218 may be configured to select the maximum of the two limited median and linear filter corrections. In various embodiments of the invention, the blend and/or select unit 1218 may be configured to blend the limited median and linear filter corrections. In this regard, a portion or percentage from 0 to 100% of each of the corrections may be combined. For example, a combined filter correction may utilize 25% of the limited median filter correction and 75% of the limited linear filter correction. The resulting selected and/or blended filter correction for a current pixel may be added to the original current pixel value from the line stores. In operation, the MNR filter 1202 may be operable to generate MNR filter corrections that are adapted to mosquito noise strength and/or adapted to mosquito noise edge strength that may be oriented in horizontal, vertical, curved and/or diagonal directions. The MNR filter 1202 may determine horizontal, vertical and/or combined horizontal and vertical 2D adjustments for limiting linear and median filter corrections based on mosquito noise strength estimation (MNSE). The MNSE may be determined for horizontal and vertical directions. The MNR filter 1202 may be configurable with regard to the size of the vertical linear filter 1210, the horizontal linear filter 1214 and/or the 2D median filter 1206. In this regard, the number of taps and/or coefficient values may be configured based on scale factor detection and/or pixel block size. The MNR filter 1202 may be operable to determine an MNR filter correction for a current pixel by blending and/or combining the limited median filter corrections and the limited linear filter corrections. Alternatively, the MNR filter correction may be determined by selecting one of the limited median or linear filter corrections, for example, the largest or the smallest correction. The MNR filter 1202 may add the determined MNR filter correction value to the original current pixel luma value. FIG. 13 is a block diagram that illustrates exemplary visual artifacts that may result from block noise, in accordance with an embodiment of the invention. Referring to FIG. 13, there is shown a first image 1302 and a second image 1304. The first and second images 1302 and 1304 comprise visual artifacts that result from block noise in nearly-smooth regions of the image. Various aspects of video content and processing, for example, coding type, bit rate and video motion, may contribute to the presence of block noise in the first image 1302 and in the second image 1304. Block noise is an MPEG artifact caused by quantization of low-frequency information. Block noise may appear as edges on 8×8 blocks of pixels and may give the appearance of a mosaic, or tiles. Block noise may originate as an 8×8 block discrete cosine transform (DCT) artifact occurring on block boundaries. Block noise may be seen in nearly-smooth regions, for example, in images of the sky or in faces. Also, block noise may occur in areas of high motion and high variance regions, for example, in images of moving water. Notwithstanding, block noise may be found in many other types of image content. Block noise may commonly occur in images coded at low bit rates and may become more severe as the bit rate decreases.
More specifically, block noise may be more severe in macroblocks coded with a higher quantization scale and/or a larger quantization matrix. Block noise may appear as discontinuities at the block edges. In some instances, the block boundaries remain fixed, even when motion occurs in the underlying video image. In this regard, a static block pattern may stand out strongly against a moving background. In other instances, motion vectors may cause block noise to move with the video image. Block noise is often worse for intra blocks and for I or P pictures. Although block noise may occur in chroma, it may occur less frequently in chroma than in luma and may be perceived as being less objectionable in chroma. Vertical block noise edges may be perceived as being more objectionable than horizontal block noise edges, especially when images are viewed on an interlaced display. Block noise may often result from quantization of low-frequency terms. Block noise may be stretched and/or shifted when an image is scaled subsequent to coding and/or decoding. In addition, block noise may be shifted when interlaced content is scaled vertically or progressive content is scaled vertically into interlaced content. In this regard, a block grid shift in the vertical direction may differ between fields of opposite polarity, for example, between top and bottom fields or frames derived from top and bottom fields. Because mosquito noise and block noise may be related to the MPEG block structure, several factors, including field or frame coding of macroblocks, chroma coding format, for example, 4:4:4/4:2:2/4:2:0, and field or frame raster scan from a feeder, may need to be considered for an effective noise reduction implementation. For example, in MPEG2 main profile and in MPEG2 simple profile, chroma may be coded as 4:2:0 and may generally have block noise on 16×16 image blocks or macroblocks. The original video content may be coded into macroblocks as field data or as frame data. The original video may be coded as frame pictures by utilizing a field or frame DCT coding. When the frame DCT coding is utilized, an 8×8 luma block may comprise 4 lines from each field. When the field DCT coding is utilized, an 8×8 luma block may comprise 8 lines from a single field. The original video may also be coded as field pictures in which case an 8×8 luma block may comprise 8 lines from a single field. FIG. 14 is a block diagram that illustrates an exemplary block noise reduction system, in accordance with an embodiment of the invention. Referring to FIG. 14, there is shown a block noise reduction system 1400 that comprises a block grid detection (BGD) unit 1402, a horizontal block edge and limit unit 1404, a vertical block edge and limit unit 1406, a horizontal block noise reduction (HBNR) filter 1408, a vertical block noise reduction (VBNR) filter 1410 and a combiner 1412. The BGD unit 1402 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to locate a block grid in an image and/or a sequence of images. In this regard, the block grid may correspond to an 8×8 DCT block grid or, for example, a scaled and/or shifted block grid. The block grid may be detectable or visible over an entire image or in only a portion of an image. In instances when a block grid appears in only a portion of a picture, it may be referred to as “localized” block noise. In a sequence of images, localized block noise may appear in different locations of different images in the sequence.
The BGD unit 1402 may be operable to detect a block grid within video content in horizontal and vertical directions independently. In each direction, a block grid size may be determined which may comprise the distance between two consecutive block boundaries. Also in each direction, a block grid shift may be determined. Block grid shift may provide an offset that indicates a distance from the top of an image to the first horizontal block grid edge and/or an offset that indicates a distance from the left side of an image to the first vertical edge, for example. The BGD unit 1402 may be operable to handle any suitable block size and/or grid shift. In this regard, the input content may be scaled and shifted in any manner. Notwithstanding, BGD may be restricted in various embodiments of the invention, based on a scale factor, for example, in instances when block size is scaled down to nearly a picture resolution level or when a particular implementation is designed to handle limited up-scaling. The BGD unit 1402 may also be operable to determine block noise strength on a per picture basis. Block noise strength in a picture may be determined for each of a horizontal direction and a vertical direction. Block noise strength may be determined when block noise is localized or is visible over an entire picture. Block noise strength detection may be enhanced for pictures with localized block noise. In various embodiments of the invention, block grid detection may be performed independently on each color component. In other embodiments of the invention, block grid detection may be performed for one color component, for example, luma and may be extrapolated to other color components. For example, block grid detection may be performed for luma only in instances when one expects that chroma grids are mostly aligned with a corresponding luma grid and the chroma grids may be inferred from luma grid detection results. The BGD unit 1402 may be operable to handle vertically scaled content which may have been interlaced. For example, the content may be scaled from interlaced video to progressive video, from progressive video to interlaced video and/or from interlaced video to interlaced video. In this regard, a block grid shift in the vertical direction may differ between fields of opposite polarity, for example, between top and bottom fields or frames derived from top and bottom fields. Horizontal block grid size, shift and strength information may be referred to as horizontal markers and may be communicated to one or both of the horizontal block edge and limit unit 1404 and the HBNR filter 1408. Vertical block grid size, shift and strength information may be referred to as vertical markers and may be communicated to one or both of the vertical block edge and limit unit 1406, and the VBNR filter 1410. The horizontal block edge and limit (HBEL) unit 1404 and the vertical block edge and limit (VBEL) unit 1406 may each comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive the horizontal and vertical markers and may detect block edges which occur in each picture along the block grid. The HBEL unit 1404 and/or VBEL unit 1406 may be operable to determine horizontal and vertical limits, respectively, that may be utilized to control or clamp block noise filter corrections by the HBNR filter 1408 and/or the VBNR filter 1410. For example, in instances when a strong block noise edge is detected in a picture, a large limit may be generated that may enable stronger filtering. 
In instances when block noise is not found, a zero limit may disable block noise filtering. The horizontal limit may be communicated to the HBNR filter 1408 and/or to the VBEL unit 1406. The vertical limit may be communicated to the HBEL unit 1404. The HBNR filter 1408 and VBNR filter 1410 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to filter pixels in regions near vertical block noise edges and in regions near horizontal block noise edges separately. The HBNR filter 1408 and the VBNR filter 1410 may be operable to adjust the number of taps utilized and coefficients in horizontal and vertical filtering, based on horizontal and/or vertical scaling in a picture. The HBNR filter 1408 may operate on pixels in horizontal rows to filter vertical block noise edges that may intersect the horizontal rows. Similarly, the VBNR filter 1410 may operate on pixels in vertical columns and may filter horizontal block noise edges that may intersect the vertical columns. The HBNR filter 1408 and VBNR filter 1410 may determine a pixel correction based on the difference between the filtered result and the original pixel value and may communicate the pixel correction to the combiner 1412. Horizontal and vertical difference values may be adjusted based on the horizontal and vertical limits from the HBEL unit 1404 and the VBEL unit 1406. The adjusted differences may be combined by the combiner 1412 to produce the final BNR pixel correction. The differences may also be combined with other filter outputs, for example, from the MNR 1202. The final pixel correction may be added to the original pixel value. In operation, the block noise reduction system 1400 may be operable to read rows of pixels from the line stores to the block grid detection (BGD) unit 1402 and may perform block grid detection in horizontal and vertical directions. The BGD unit 1402 may be operable to determine block grid location that may comprise a distance between horizontal grid lines and a distance between vertical grid lines. The BGD unit 1402 may determine horizontal and vertical block grid shifts that indicate the distance from the top of a picture to the first horizontal edge and/or from the side of the picture to the first vertical edge, for example. In addition, the BGD unit 1402 may determine block noise strength on a per picture basis. The BGD 1402 may communicate horizontal and vertical markers and block noise strength information to the HBEL unit 1404 and the VBEL unit 1406. The HBEL 1404 and/or VBEL 1406 may determine horizontal and vertical limits for controlling filter corrections based on block noise strength in the vicinity of a pixel. The horizontal and vertical limits may be communicated to the HBNR filter 1408 and VBNR filter 1410. The HBNR filter 1408 and the VBNR filter 1410 may utilize horizontal and/or vertical scale factor information from the SFD 206 to configure the size of each of the BNR filters in terms of the number of taps utilized and coefficient values. The combiner 1412 may combine the horizontal and the vertical BNR pixel corrections. In various embodiments of the invention, the BNR filter corrections may be combined with other filter corrections, for example, from the MNR unit 1202.
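To make the marker concept concrete, the following Python sketch derives block boundary positions from a detected grid size and grid shift; the function name, the rounding choice and the example values are assumptions for illustration only.

    def grid_markers(picture_extent, grid_size, grid_shift):
        # Hedged sketch: walk from the grid shift (offset of the first
        # boundary) in steps of the possibly fractional grid size and round
        # each boundary to the nearest pixel position.
        markers = []
        position = float(grid_shift)
        while position < picture_extent:
            markers.append(int(round(position)))
            position += grid_size
        return markers

    # An 8-pixel grid up-scaled by 1.2 (size 9.6) with a shift of 3 pixels.
    print(grid_markers(64, 9.6, 3.0))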
FIG. 15 is a block diagram that illustrates exemplary horizontal and vertical block grid detection arrays for a current pixel, in accordance with an embodiment of the invention. Referring to FIG. 15, there is shown a block grid detection system 1500 that may comprise a vertical pixel kernel memory 1502, a horizontal pixel kernel memory 1504, a current pixel 1516, a horizontal block edge detection array memory 1506 and a vertical block edge detection array memory 1508. In addition, there is shown the accumulators 1510 and 1512. The block grid detection system 1500 may be operable to determine vertical and horizontal detection arrays that may be utilized to determine block grid edges and/or block grid strength for a video picture and/or for a sequence of video pictures. Block grid edges in the horizontal direction may be determined independently from vertical grid edges. The vertical pixel kernel memory 1502 and the horizontal pixel kernel memory 1504 may each comprise a plurality of pixel samples, for example, eight pixel samples; however, the invention is not limited with respect to any specific kernel size. The number of pixel samples that are utilized in each direction may be determined based on horizontal and/or vertical scaling in a picture. For example, a greater number of pixels may be utilized in pictures that have been up-scaled. The horizontal block edge detection array memory 1506 and the vertical block edge detection array memory 1508 may be utilized to store horizontal block edge likelihood scores and vertical block edge likelihood scores, respectively. The horizontal and vertical block edge detection array memories 1506 and 1508 may be utilized to determine block grid location, shift and strength in both directions. In an exemplary operation, the vertical pixel kernel memory 1502 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to detect horizontal block noise edges. In this regard, a horizontal block edge likelihood score for a current pixel 1516 may be determined and may be summed by the accumulator 1512 with other horizontal block edge likelihood scores for pixels from the same horizontal row. The sum may be stored in the horizontal block edge detection array memory 1506. In a similar manner, vertical edge likelihood scores may be accumulated and stored in the vertical block edge detection array memory 1508. A higher block edge likelihood score for a row or column of pixels may indicate that a block grid line may be aligned with or near the row or column. A horizontal block edge likelihood score may be determined by comparing luma values of pixels above the current pixel in a vertical current pixel kernel with luma values of pixels below the current pixel in the vertical current pixel kernel. Based upon the difference in luma values, a horizontal likelihood score that may vary with the luma difference may be determined. In a similar manner, a vertical block edge likelihood score may be determined by comparing luma pixel values to the left of the current pixel with luma pixel values to the right of the current pixel.
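The score accumulation might be sketched in Python as follows; the clipped luma step used here as a likelihood score is a hypothetical stand-in for whatever measure the kernels 1502 and 1504 actually implement, and the example frame is fabricated.

    import numpy as np

    def horizontal_detection_array(luma, clip=32):
        # Hedged sketch: for each boundary between row r and row r+1, clip
        # the per-pixel luma step (so strong real image edges cannot
        # dominate) and accumulate one likelihood score per row boundary.
        luma = np.asarray(luma, dtype=np.float64)
        step = np.abs(luma[1:, :] - luma[:-1, :])
        return np.clip(step, 0, clip).sum(axis=1)

    # 8x8 frame with a horizontal step between rows 3 and 4: the score
    # spikes at that boundary, as in the detection arrays of FIG. 16.
    column = np.r_[np.full(4, 20.0), np.full(4, 28.0)]
    frame = np.tile(column[:, None], (1, 8))
    print(horizontal_detection_array(frame))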
FIG. 16 is a block diagram that illustrates exemplary horizontal and vertical block grid detection arrays for a video image, in accordance with an embodiment of the invention. Referring to FIG. 16, there is shown a horizontal block edge detection array 1606 and a vertical edge detection array 1608. The horizontal block edge detection array 1606, which may be referred to as the horizontal detection array, and the vertical edge detection array 1608, which may be referred to as the vertical detection array, may be similar to the detection arrays accumulated in the horizontal and vertical block edge detection array memories 1506 and 1508 described with respect to FIG. 15. The detection arrays may be as wide and/or as high as the corresponding picture and, for each column and/or row, may comprise a rough estimate of the presence of a block grid boundary at that column and/or row location. In this regard, the spikes or higher levels in the detection arrays may indicate where the accumulated likelihood scores are greatest for a specified picture and may indicate where horizontal and/or vertical grid edges are located. In an exemplary operation, for each input picture, a coarse block size may be estimated for grid line spacing along a horizontal axis and another coarse block size for spacing along a vertical axis. In this regard, a coarse block size may be determined by autocorrelating a detection array with a version of itself where one array is shifted by an integer block size. In this regard, a plurality of suitable integer block sizes within a specified range may be tested. The integer block size with the greatest autocorrelation score may be retained. Precision of the integer block size may then be refined into a fractional block size estimate that may still be referred to as a coarse block size. The horizontal and vertical detection arrays at the coarse block size may be utilized to determine edge strength. Edge strength may be based on the accumulated likelihood scores and a signal to noise ratio (SNR). The SNR may be determined for each direction, for example, by comparing the strength of accumulated likelihood scores for the coarse block size to the accumulated likelihood scores for the other tested block sizes. The per-picture coarse block sizes in each direction may be filtered to determine a narrow band coarse block size based on temporal analysis. For example, in instances when coarse block size estimates for a sequence of pictures are consistent or similar from picture to picture, are above a minimum strength threshold and/or are above a minimum SNR threshold, the BGD unit 1402 may lock to a new block size. The new block sizes may correspond to an average of the per-picture horizontal coarse block sizes and to an average of the vertical coarse block sizes in the sequence of pictures. The new block sizes may be temporally filtered by narrow band filters as the new horizontal and vertical coarse block sizes are received. The narrow band filters may output slow changing stable values that may be referred to as a narrow band coarse block size for horizontal and vertical edges. Block grid size precision may be further refined, for example, to a fractional precision of 1/512. Further refinement of the block grid size, for example, to a precision of 1/4096, may be utilized in instances of up to 250 blocks per picture. In this regard, 250/4096 would yield a worst case pixel error of 1/16 with regard to the location of any block edge in a picture. In this regard, the narrow band coarse block size may be utilized to spatially and/or temporally filter the per-picture detection arrays.
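A minimal Python sketch of the integer block size search described above follows; the candidate range, the scoring and the test array are assumptions for illustration.

    import numpy as np

    def coarse_block_size(detection_array, candidate_sizes=range(4, 33)):
        # Hedged sketch: autocorrelate the detection array with a copy of
        # itself shifted by each candidate integer block size and retain
        # the size with the greatest autocorrelation score.
        d = np.asarray(detection_array, dtype=np.float64)
        best_size, best_score = None, float("-inf")
        for size in candidate_sizes:
            score = float(np.dot(d[:-size], d[size:]))
            if score > best_score:
                best_size, best_score = size, score
        return best_size, best_score

    d = np.zeros(64)
    d[3::8] = 100.0               # likelihood spikes every 8 rows, shift 3
    print(coarse_block_size(d))   # expected: (8, 70000.0)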
The filtered array results may comprise arrays as wide and/or as high as each corresponding picture and, for each column and/or row in the picture, may comprise the likelihood of the presence of a block grid boundary. The filtered detection array results may be analyzed by integrating their content for each pair of high precision block grid shift and block grid size. The high precision block sizes under test may be selected in a neighborhood of the narrow band coarse block size. The block grid shift, block grid size pair in each direction which yields the highest integration value may be selected and may be referred to as the fine block grid size and/or fine block grid shift. In this regard, for a suitable block grid shift, block grid size pair, the block grid size may be near the narrow band coarse block size. In an exemplary embodiment of the invention, as the per picture detection arrays are autocorrelated, a block noise strength may be determined. The strength measurement may be temporally filtered among pictures in a sequence for stability and may be biased to rise quickly in an instance where block noise suddenly appears. Once the block noise is found, the corresponding block grid may be tracked. The tracking may persist even in instances when the grid may disappear, for example, the block grid may intermittently disappear. In this manner, block grid detection and/or block noise reduction operations may be prepared to immediately handle spurious noise, for example, a scene change, without having to go through a process of re-syncing to a block grid. In instances when block noise is localized in a picture, horizontal and vertical detection arrays may accumulate lower likelihood scores for locations where block noise is weak or not present. In this regard, block noise strength for the picture may be low since it may be determined based on an average of likelihood scores in each detection array for a picture. Also, since block noise strength is utilized to control or limit filter corrections in the HBNR and VBNR filters 1408 and 1410, localized noise in an image may not be filtered adequately and may be perceived after block noise filtering. To compensate for low block noise strength in pictures with localized block noise, a standard deviation, σ, of the detection array values may be utilized to modify the block noise strength, where: σ^2=(N*SUM[d[i]^2]−(SUM[d[i]])^2)/N^2, where N is the number of samples of accumulated likelihood scores in the detection array along the block grid and d[i] is the ith sample. Then, block noise strength may be modified as follows: block noise strength=block noise strength+α*standard deviation, where α may be a user adjustable parameter.
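A short Python sketch of this variance formula and strength adjustment follows; alpha is the user adjustable parameter named above, and the detection array samples are fabricated to show the localized-noise case.

    def modified_block_noise_strength(strength, d, alpha=0.5):
        # Hedged sketch: sigma^2 = (N*SUM[d[i]^2] - (SUM[d[i]])^2) / N^2
        # over the detection array samples, then raise the per-picture
        # block noise strength by alpha times the standard deviation.
        n = len(d)
        variance = (n * sum(x * x for x in d) - sum(d) ** 2) / float(n * n)
        return strength + alpha * variance ** 0.5

    # Localized noise: a few strong scores among many weak ones give a
    # large standard deviation, boosting an otherwise low average strength.
    print(modified_block_noise_strength(10.0, [0, 0, 0, 0, 80, 90, 0, 0]))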
FIG. 17 is a diagram that illustrates exemplary different row shifts for interlaced and/or vertically scaled picture sequences, in accordance with an embodiment of the invention. Referring to FIG. 17, there is shown a portion of an original top field 1702, a portion of a frame 1704 which is up-scaled from the original top field 1702, an original bottom field 1706 and a portion of a frame 1708 which is up-scaled from the original bottom field 1706. Block noise may be shifted when interlaced content is scaled vertically or progressive content is scaled vertically into interlaced content. In this regard, a block grid shift in the vertical direction may differ between fields of opposite polarity, for example, between top and bottom fields or frames derived from top and bottom fields. Referring to FIG. 17, prior to performing up-scaling of the original top field 1702, a vertical block boundary may be located at a position that corresponds to line (L)3.5 which is a position located between original top field lines 3 and 4. After up-scaling the original top field 1702 to arrive at the frame 1704, the block boundary is located at line 7. Furthermore, prior to performing up-scaling of the original bottom field 1706, a vertical block boundary may be located at a position that corresponds to L 3.5. After up-scaling the original bottom field 1706 to arrive at the frame 1708, the block boundary is located at line 8. Thus, it can be seen that a block grid shift in vertically scaled pictures that may, at some time, comprise interlaced rows, may differ for alternating rows. Therefore, the BGD system 1500 may be operable to determine vertical block grid shift for pictures N, N+2, N+4, . . . together to yield one block grid shift and may be operable to determine block grid shift for pictures N+1, N+3, N+5, . . . together to yield a different block grid shift. FIG. 18A illustrates an exemplary variable size, vertical edge detection region for block noise reduction, in accordance with an embodiment of the invention. Referring to FIG. 18A, there is shown a first image block 1802 adjacent to a second image block 1804 in a video image. The image blocks shown may comprise, for example, an 8×8 array of pixels. The left vertical border of image block 1802 may correspond to a left vertical edge for block noise reduction processing. The right vertical border of image block 1802, which corresponds to the left vertical border of image block 1804, may correspond to a current vertical edge for block noise reduction processing. The right vertical border of image block 1804 may correspond to a right vertical edge for block noise reduction processing. Any of the current vertical edge, the left vertical edge, or the right vertical edge may also be referred to as a vertical edge. Edge-related parameters may be determined for each of the vertical edges and may be utilized to determine whether the vertical edge that coincides with the current vertical edge may be a result of blocking artifacts. Referring to FIG. 14, vertical edges may be detected by comparing pixels in a horizontal row and thus, vertical edges may be detected by the horizontal block edge detection and limit unit 1404 and may be filtered in the HBNR filter 1408. Referring back to FIG. 18A, when determining edge-related parameters for any one of the vertical edges, a portion of the image comprising pixels neighboring the vertical edge may be utilized. These neighboring pixels may be referred to as background pixels and may include a plurality of pixels to the left and to the right of the selected vertical edge. The width, in terms of the number of pixels utilized for the background pixels, may vary. Moreover, the width of the vertical edge may vary as well. In this regard, the number of pixels utilized within the edge width and/or in the background width may vary depending on a scale factor determined for the image. Referring to FIG. 18A, in an exemplary embodiment of the invention, eighteen pixels may be utilized per vertical edge. The eighteen pixels are shown as narrowly spaced hashed-lined pixels for the left vertical edge and the right vertical edge.
The eighteen pixels for the current vertical edge are shown as eight narrowly-spaced hashed pixels in the previous and the next rows of pixels and four widely-spaced hashed pixels in the current row of pixels for the current vertical edge. The width of the edge is indicated by cross hatched pixels. The eighteen pixels may correspond to two pixels to the left and two pixels to the right of the vertical edge in a previous row of pixels, two pixels to the left and two pixels to the right in a current row of pixels, and two pixels to the left and two pixels to the right of the vertical edge in a next row of pixels. The number of pixels utilized for determining edge parameters may depend on scale factor, the application and/or the available memory. In this regard, more or fewer than eighteen pixels may be utilized and more or fewer than three rows of pixels may be utilized when determining edge parameters. After determining the edge parameters, the widely spaced hashed-lined pixels in the current row of pixels for the current vertical edge may be further processed to reduce artifacts that may be related to block noise. FIG. 18B illustrates an exemplary vertical edge detection region for block noise reduction, in accordance with an embodiment of the invention. Referring to FIG. 18B, there is shown a first image block 1812 adjacent to a second image block 1814 in a video image. Similar to FIG. 18A, the right vertical edge, the left vertical edge and the current vertical edge between the 8×8 blocks 1812 and 1814 may be referred to as vertical edges. Edge-related parameters may be determined for each of the vertical edges and may be utilized to determine whether the vertical edge that coincides with the current vertical edge may be a result of blocking artifacts. Referring to FIG. 18B, in an exemplary embodiment of the invention, twelve pixels may be utilized per vertical edge. For the left vertical edge and right vertical edge, the twelve pixels are shown as narrowly spaced hashed-lined pixels. For the current vertical edge, the twelve pixels are shown as eight narrowly-spaced hashed pixels in the previous and the next rows of pixels and as four widely-spaced hashed pixels in the current row of pixels. The width of the edge comprises two columns. The left background comprises one column of three pixels and the right background comprises one column of three pixels. The twelve pixels may correspond to four pixels in a previous row of pixels, four pixels in a current row of pixels, and four pixels in a next row of pixels. After determining the edge parameters, the widely spaced hashed-lined pixels in the current row of pixels for the current vertical edge may be further processed to reduce artifacts that may be related to block noise. FIG. 19A illustrates an exemplary image portion for vertical edge detection, in accordance with an embodiment of the invention. Referring to FIG. 19A, there is shown a region of pixels 1900. The region of pixels 1900 may be utilized for determining a plurality of edge-related parameters for a vertical edge. As shown in FIG. 18B, the edge may be two pixels wide and each of the left and right background widths may be one pixel wide. The pixels may be labeled A0, B0, C0, and D0 for the previous row of pixels, A1, B1, C1, and D1 for the current row of pixels, and A2, B2, C2, and D2 for the next row of pixels.
A vertical edge variance parameter for the vertical edge being processed may be determined by utilizing, for example, the following expression: edge_var=ABS(B0−C0)+ABS(B1−C1)+ABS(B2−C2), where ABS corresponds to an absolute value operation. A background variance parameter for the image portion defined in FIG. 19A may be determined by utilizing, for example, the following expression: backgnd_var=MAX[(ABS(A0−B0)+ABS(A1−B1)+ABS(A2−B2)), (ABS(C0−D0)+ABS(C1−D1)+ABS(C2−D2))], where the first value in the MAX operation corresponds to a left vertical variance parameter and the second value in the MAX operation corresponds to a right vertical variance parameter. A first edge strength parameter (edge_strength) and a second edge strength parameter (edge_strength2) may be determined based on the edge variance parameter and the background variance parameter. For example, the first and second edge strength parameters may be determined as follows: edge_strength=edge_var−b_rel*backgnd_var/4, edge_strength2=edge_var−2*b_rel*backgnd_var/4, where b_rel is a relative weight parameter that may be utilized to control the variance of the edge relative to the background and 4 may correspond to an exemplary scaling factor. In this regard, different values of b_rel may be used to adjust sensitivity. For example, smaller values of b_rel may result in stronger edge strengths and may allow for more filtering. For each vertical edge, a maximum vertical parameter may be determined by the following exemplary expression: vert_max=MAX[ABS(B0−C0),ABS(B1−C1),ABS(B2−C2)]. Moreover, a first vertical edge clamping limit (limit) and a second vertical edge clamping limit (limit2) may be determined for every vertical edge based on edge strength values, the maximum vertical parameter, and a block core limit (b_core). The value of b_core may be determined so as to prevent filtering of very strong edges that are likely to be the result of image content. Exemplary expressions for determining the first and second vertical edge clamping limits may be as follows: limit=MIN[edge_strength,(b_core−vert_max)], limit2=MIN[edge_strength2,(b_core−vert_max)]. The value of b_core may be configurable, for example, in a programmable register. For example, larger values of b_core may allow filtering of stronger edges. The values for limit and limit2 may be determined for the current vertical edge, for the left vertical edge, and/or for the right vertical edge. In this regard, the limits for the current vertical edge may be referred to as current vertical edge clamping limits, the limits for the left vertical edge may be referred to as left vertical edge clamping limits, and the limits for the right vertical edge may be referred to as right vertical edge clamping limits. The clamping limits for the current vertical edge, the left vertical edge, and the right vertical edge may be combined to provide a first vertical combined clamping limit (combined_limit) based on the values of limit for the vertical edges and a second vertical combined clamping limit (combined_limit2) based on the values of limit2 for the vertical edges. In this regard, the first and second vertical combined clamping limits may be utilized for processing the pixels in the current row of pixels for the current vertical edge.
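The parameter computations above translate directly into the following Python sketch; the pixel names follow FIG. 19A and the b_rel and b_core values are placeholders.

    def vertical_edge_limits(A, B, C, D, b_rel=2, b_core=32):
        # Hedged sketch of the FIG. 19A parameters: A, B, C and D are
        # three-element columns (previous, current and next rows) around a
        # vertical edge lying between the B and C pixels.
        edge_var = sum(abs(b - c) for b, c in zip(B, C))
        left_var = sum(abs(a - b) for a, b in zip(A, B))
        right_var = sum(abs(c - d) for c, d in zip(C, D))
        backgnd_var = max(left_var, right_var)
        edge_strength = edge_var - b_rel * backgnd_var / 4
        edge_strength2 = edge_var - 2 * b_rel * backgnd_var / 4
        vert_max = max(abs(b - c) for b, c in zip(B, C))
        limit = min(edge_strength, b_core - vert_max)
        limit2 = min(edge_strength2, b_core - vert_max)
        return limit, limit2

    # A mild step across the edge over a flat background gives positive limits.
    print(vertical_edge_limits([20]*3, [20]*3, [28]*3, [28]*3))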
The values of combined_limit and combined_limit2 may be determined by the following exemplary expressions: temp=MAX[limit_left,limit_right]+b_core/8, temp2=MAX[limit2_left,limit2_right]+b_core/8, if (temp<lower_limit){temp=lower_limit} if (temp2<lower_limit2){temp2=lower_limit2} combined_limit=MIN(temp,limit_current), combined_limit2=MIN(temp2,limit_current2), where temp corresponds to a temporary variable for storing the maximum of the first left vertical edge clamping limit (limit_left) and the first right vertical edge clamping limit (limit_right), temp2 corresponds to a temporary variable for storing the maximum of the second left vertical edge clamping limit (limit2_left) and the second right vertical edge clamping limit (limit2_right), lower_limit and lower_limit2 may correspond to lower limits that may be allowed for temp and temp2, respectively, MIN corresponds to a minimum value operation, limit_current corresponds to the first current vertical edge clamping limit, limit_current2 corresponds to the second current vertical edge clamping limit, and 8 is an exemplary scaling factor. The values of lower_limit and lower_limit2 may be selected to, for example, avoid negative vertical combined clamping limit values. FIG. 19B illustrates an exemplary notation that may be utilized for vertical edge filtering, in accordance with an embodiment of the invention. Referring to FIG. 19B, there are shown pixels labeled A, B, C, and D that are located in a current row of pixels at the current vertical edge. In this regard, the pixel labeled B is located to the left of the current vertical edge and the pixel labeled A is located to the left of the pixel labeled B. Similarly, the pixel labeled C is located to the right of the current vertical edge and the pixel labeled D is located to the right of the pixel labeled C. The values of the pixels labeled A, B, C, and D may be filtered and the new filtered values A′, B′, C′, and D′ may be given as: A′=(13A+3C+8)/16, B′=(10B+6C+8)/16, C′=(6B+10C+8)/16, and D′=(3B+13D+8)/16. A difference parameter may be determined based on an original pixel value (original_pix) and a filtered pixel value (filt_pix). For example, the difference parameter may be determined by: diff=filt_pix−original_pix. A vertical block noise reduction difference parameter (VBNR_diff) may be determined based on the difference parameter and the clamping limits. An exemplary VBNR_diff may be determined as follows: if (pixel position corresponds to pixel labeled A or D) { VBNR_diff=CLAMP(diff, −combined_limit2, +combined_limit2) } else if (pixel position corresponds to pixel labeled B or C) { VBNR_diff=CLAMP(diff, −combined_limit, +combined_limit) } else { VBNR_diff=0 }, where CLAMP may correspond to a clamping or limiting operation. Limiting the filtering operation may be performed to ensure that strong vertical edges may be filtered while very strong vertical edges may not be filtered since they may correspond to image content. The limits may be soft and may have gradual turn-offs. Edges that occur in relatively flat backgrounds may affect all of the pixels labeled A, B, C, and D. However, when noisier backgrounds occur, the filtering may be limited so that only the pixels labeled B and C may be adjusted.
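The following Python sketch strings the FIG. 19B filtering and the VBNR_diff clamping together; the combined limits are taken as inputs rather than recomputed, and the integer rounding is an assumption.

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    def filter_vertical_edge(A, B, C, D, combined_limit, combined_limit2):
        # Hedged sketch: apply the A'..D' weights from the text, then clamp
        # each correction (diff) by the appropriate combined limit.
        filtered = {"A": (13*A + 3*C + 8) // 16, "B": (10*B + 6*C + 8) // 16,
                    "C": (6*B + 10*C + 8) // 16, "D": (3*B + 13*D + 8) // 16}
        original = {"A": A, "B": B, "C": C, "D": D}
        out = {}
        for name in "ABCD":
            diff = filtered[name] - original[name]
            limit = combined_limit2 if name in "AD" else combined_limit
            out[name] = original[name] + clamp(diff, -limit, limit)
        return out

    # A step edge (20, 20 | 28, 28) is softened, with the outer pixels A
    # and D limited more tightly than B and C.
    print(filter_vertical_edge(20, 20, 28, 28, combined_limit=4, combined_limit2=2))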
FIG. 20 is a block diagram of an exemplary vertical edge block noise reduction (BNR) system, in accordance with an embodiment of the invention. Referring to FIG. 20, there is shown a vertical edge reduction system 2000 which may comprise a variance block 2002, a max latch 2004, a left FIFO position 2006, a current FIFO position 2008, a right FIFO position 2010, an edge strengths and limits (ESL) block 2012, a left FIFO position 2014, a current FIFO position 2016, a right FIFO position 2018, a limits combiner 2020, a latch 2022, a clamping block 2024, a latch 2026, and a block noise reduction (BNR) filter 2028. The variance block 2002 may comprise suitable logic, circuitry, and/or code that may be adapted to determine a vertical edge variance parameter (edge_var) and a maximum vertical parameter (vert_max) for a vertical edge being processed. The max latch 2004, the left FIFO position 2006, the current FIFO position 2008, the right FIFO position 2010, the left FIFO position 2014, the current FIFO position 2016, the right FIFO position 2018, the latch 2022, and the latch 2026 may comprise suitable logic and/or circuitry that may be adapted to store information. The variance block 2002 may transfer the value of vert_max to the max latch 2004 and the value of edge_var to the left FIFO position 2006. The value in the left FIFO position 2006 may be transferred to the current FIFO position 2008 and then from the current FIFO position 2008 to the right FIFO position 2010. For example, after three clock cycles the variance block 2002 may have determined the edge_var and vert_max values for a current vertical edge, a left vertical edge, and a right vertical edge. In instances where the edge width and/or the background width comprise a greater number of pixels, for example, as shown in FIG. 18A, a greater number of clock cycles may be needed to determine the edge_var and vert_max values. The edge strengths and limits (ESL) block 2012 may comprise suitable logic, circuitry, and/or code that may be adapted to receive the vertical edge variance parameters and the maximum vertical parameters for the current vertical edge, the left vertical edge, and the right vertical edge and determine the edge strength parameters (edge_strength, edge_strength2) and the vertical edge clamping limits (limit, limit2) for each of these vertical edges. In this regard, the ESL block 2012 may utilize a relative weight parameter (b_rel) and/or a block core limit (b_core) during processing. The ESL block 2012 may transfer the values for the vertical edge clamping limits to the left FIFO position 2014. The value in the left FIFO position 2014 may be transferred to the current FIFO position 2016 and then from the current FIFO position 2016 to the right FIFO position 2018. The limits combiner 2020 may comprise suitable logic, circuitry, and/or code that may be adapted to receive the left vertical edge clamping limits, the current vertical edge clamping limits, and the right vertical edge clamping limits and determine the first vertical combined clamping limit (combined_limit) and the second vertical combined clamping limit (combined_limit2) to be utilized with the pixels labeled A, B, C, and D in FIG. 19B. The limits combiner 2020 may be adapted to transfer the values for combined_limit and combined_limit2 to the latch 2022. The latch 2022 may be adapted to transfer the values of combined_limit and combined_limit2 to the clamping block 2024. The BNR filter 2028 may comprise suitable logic, circuitry, and/or code that may be adapted to filter the original values of the pixels labeled A, B, C, and D in a horizontal row shown in FIG.
18B and to determine a difference parameter (diff) based on a difference between original and filtered values. The values of the filter coefficients utilized by the BNR filter 2028 may be programmable. The values of the filter coefficients and the number of taps utilized may depend on scale factor and on the widths of the vertical edge and the background pixels, as shown in FIGS. 18A and 18B. The BNR filter 2028 may be adapted to transfer the value of the difference parameter to the latch 2026. The latch 2026 may be adapted to transfer the value of the difference parameter to the clamping block 2024. The clamping block 2024 may comprise suitable logic, circuitry, and/or code that may be adapted to determine the vertical block noise reduction difference parameter (VBNR_diff) based on the values of combined_limit, combined_limit2, and diff. In this regard, the clamping block 2024 may clamp or limit the value of the difference parameter based on the value of combined_limit when processing the pixels labeled B or C in FIG. 18B. Moreover, the clamping block 2024 may clamp or limit the value of the difference parameter based on the value of combined_limit2 when processing the pixels labeled A or D in FIG. 18B. The clamping block 2024 may be adapted to transfer the value of VBNR_diff to the combiner 1412 in FIG. 14. In instances when an image may be up-scaled, the vertical edge and/or background pixel widths may be wider, for example, as shown in FIG. 18A, where the vertical edge is two pixels wide and each of the left and right backgrounds is two pixels wide. In another exemplary embodiment of the invention, the edge width may be four pixels wide and the left and right background widths may each be four pixels wide. Furthermore, the block dimensions which may be indicated by the block edge markers may be greater than 8×8 pixels. In this regard, the vertical edge strength (Edge_strength) may be determined, for example, based on the absolute difference between each pair combination of horizontally neighboring pixels in each row within the vertical edge width. The vertical background strength (Background_strength) may be determined, for example, based on the absolute difference between each pair combination of horizontally neighboring pixels within the background and on the horizontal pairs comprising one vertical edge pixel and one background pixel. The Edge_strength and Background_strength may be utilized to compute a limit as follows: Limit=Edge_strength−(Background_strength*BREL)/256, Limit=MAX(Limit,0), where BREL may be a configurable parameter. A limit determination may be performed at every marker. For a current vertical edge, confidence may be improved by determining a limit for a vertical edge to the left and a vertical edge to the right of the current vertical edge. Also, scale factor may be utilized to determine a limit. For example, for each marker, the following limits may be determined: Limit_left=(limit_left*LR_SCALE)/32; Limit_right=(limit_right*LR_SCALE)/32; H_limit=MIN(current_limit,MAX(Limit_left,Limit_right)); where H_limit is a limit that may be utilized to limit horizontal filter corrections when reducing vertical edge noise in horizontal filters, current_limit is the limit determined for the current vertical edge, limit_right is the limit determined for the right vertical edge, limit_left is the limit determined for the left vertical edge and LR_SCALE is a parameter determined based on the scale factor in the horizontal direction which is determined for the current picture.
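Under the naming assumptions above, the limit computation and the LR_SCALE combination might be sketched in Python as follows; the BREL and LR_SCALE values are placeholders.

    def marker_limit(edge_strength, background_strength, brel=64):
        # Limit = MAX(Edge_strength - (Background_strength*BREL)/256, 0)
        return max(edge_strength - (background_strength * brel) / 256.0, 0.0)

    def combined_h_limit(current_limit, limit_left, limit_right, lr_scale=32):
        # Hedged sketch: scale the neighboring marker limits by LR_SCALE/32,
        # then keep the current limit only up to the stronger scaled neighbor.
        scaled_left = (limit_left * lr_scale) / 32.0
        scaled_right = (limit_right * lr_scale) / 32.0
        return min(current_limit, max(scaled_left, scaled_right))

    current = marker_limit(40, 16)            # 40 - (16*64)/256 = 36.0
    print(combined_h_limit(current, 12, 20))  # -> 20.0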
The LR_SCALE may be utilized to normalize limit values. In this regard, the limits may be determined over variable edge widths, variable background widths and over variable block widths. The LR_SCALE parameter may enable more consistent results across various scale factors. FIGS. 21A-21C illustrate exemplary vertical and/or horizontal edge detection, in accordance with an embodiment of the invention. Referring to FIG. 21A, there is shown a region of an image that comprises a top border of an image block with a top row of pixels that comprises the pixels labeled B2 through B9. The top row of pixels of the image block is indicated by widely spaced hashed lines. When a horizontal edge is detected, the pixels labeled B2 through B9 in the top row of the image block that is adjacent to the horizontal edge may be filtered to reduce the effect of block noise. Referring to FIG. 21B, there is shown an exemplary region of the image that may be utilized for detecting a horizontal edge adjacent to the top border of an image block. The region comprises the pixels labeled A2 through A9, the pixels labeled B2 through B9, and the pixels labeled C2 through C9. Widely spaced hashed lines indicate the pixels in the detection region. Referring to FIG. 21C, to detect the presence of a horizontal edge at the top border of an image block, at least one vertical edge on a vertical border of the image block may also be selected. There is shown in FIG. 21C exemplary regions of the image that may be utilized for detecting at least one vertical edge on a vertical border of the image block. For the left vertical border, the exemplary region may comprise the pixels labeled A0 through A3, the pixels labeled B0 through B3, and the pixels labeled C0 through C3. For the right vertical border, the exemplary region may comprise the pixels labeled A8 through A11, the pixels labeled B8 through B11, and the pixels labeled C8 through C11. Widely spaced hashed lines indicate the pixels in the detection region. While FIGS. 21A-21C indicate an exemplary approach that may be followed for detecting the presence of a horizontal edge adjacent to the top row of pixels in the image block, a similar approach may also be followed for detecting the presence of a horizontal edge adjacent to the bottom row of pixels in the image block. Referring to FIG. 14, horizontal edges may be detected by comparing pixels in a vertical column and thus, horizontal edges may be detected by the vertical block edge detection and limit unit 1406 and may be filtered in the VBNR filter 1410. Referring to FIG. 21C, a horizontal edge variance parameter may be determined for the horizontal edge being processed by computing, for every image block and for vertical pixel pair combinations that fall within the detection region, the following exemplary expressions: vvar_top=SUM[ABS(Ax−Bx)], vvar_bottom=SUM[ABS(Bx−Cx)], max_top=MAX[ABS(Ax−Bx)], max_bottom=MAX[ABS(Bx−Cx)], where SUM corresponds to an addition operation, vvar_top is a top block variance parameter, vvar_bottom is a bottom block variance parameter, max_top is a maximum top block variance, and max_bottom is a maximum bottom block variance. The computations may be performed cumulatively over every horizontal edge. For example, the values for vvar_top, vvar_bottom, max_top, and max_bottom may be determined for all columns of pixels in a horizontal edge along the block. These values may be determined serially as the pixels are shifted through a pixel buffer.
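A Python sketch of these per-block accumulations, using the A, B and C row labels of FIGS. 21A-21C (the example rows are fabricated):

    def horizontal_edge_variances(A, B, C):
        # Hedged sketch: accumulate the top/bottom variance parameters over
        # the columns of one block's detection region, where A, B and C are
        # the rows above, at and below the candidate horizontal edge.
        vvar_top = sum(abs(a - b) for a, b in zip(A, B))
        vvar_bottom = sum(abs(b - c) for b, c in zip(B, C))
        max_top = max(abs(a - b) for a, b in zip(A, B))
        max_bottom = max(abs(b - c) for b, c in zip(B, C))
        return vvar_top, vvar_bottom, max_top, max_bottom

    # A strong step between rows A and B with a flat region below suggests
    # a horizontal block edge above the B row.
    print(horizontal_edge_variances([40]*8, [20]*8, [20]*8))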
Once these values are determined, they may be stored for further processing. For pixels in a row of pixels that is above a horizontal edge in a top field or above a horizontal edge in a progressive frame, a current horizontal edge clamping limit (limit) may be determined by the following expression: limit=vvar_top−b_rel*vvar_bottom/4, where b_rel is a relative weight parameter and 4 corresponds to an exemplary scaling factor. For pixels in a row of pixels that is below a horizontal edge in a bottom field or below a horizontal edge in a frame when progressive video is utilized, a current horizontal edge clamping limit (limit) may be determined by the following expression: limit=vvar_bottom−b_rel*vvar_top/4, where b_rel is again the relative weight parameter and 4 corresponds to an exemplary scaling factor. In any other instance, the value of the current horizontal edge clamping limit (limit) may be set to zero. The value of the parameter limit may also be scaled and further limited by the following expressions: limit=limit/4, limit=MIN[limit,b_core−max_vvar], where b_core may be determined so as to prevent filtering of very strong edges that are likely to be the result of image content, and max_vvar corresponds to the value of max_top when the bottom row of pixels in an image block for bottom fields or progressive video are to be filtered and max_vvar corresponds to the value of max_bottom when the top row of pixels in an image block for top fields or progressive video are to be filtered. For the currently selected image block, the HBEL block 1404 may have been used to determine a left vertical edge clamping limit (limit_left) and a current vertical edge clamping limit (limit_current) that may be utilized for determining whether vertical edges also exist in the current image block. In this regard, a current vertical-horizontal edge clamping limit (hlimit) may be determined as follows: hlimit=MAX[limit_left,limit_current]. When portions of a horizontal edge extend beyond the boundaries of a video image, the horizontal edge may not be filtered. When a horizontal edge starts and/or ends at a video image boundary, and/or close to the video image boundary, it may only have one vertical edge. In this instance, the value of the parameter hlimit may be set to the vertical edge clamping limit value of the existing vertical edge. The value of the current horizontal edge clamping limit (limit) and the value of the current vertical-horizontal edge clamping limit (hlimit) may be combined to determine a horizontal combined clamping limit (combined_limit) based on the following expression: combined_limit=MIN[limit,hlimit], if (combined_limit<0){combined_limit=0}. A filter may be applied to the pixels in the row adjacent to the horizontal edge. For the top row of pixels in an image block comprising a top field or progressive video, the exemplary value of a filtered pixel (filt_pixel) may be given by the following expression: filt_pixel=(B*5+A*3+4)/8, where B corresponds to the value of the B-labeled pixels, A corresponds to the value of the corresponding A-labeled pixels, and 8 is an exemplary scaling factor. For the bottom row of pixels in an image block for bottom fields or progressive video, the exemplary value of a filtered pixel (filt_pixel) may be given by the following expression: filt_pixel=(B*5+C*3+4)/8, where B corresponds to the value of the B-labeled pixels, C corresponds to the value of the corresponding C-labeled pixels, and 8 is an exemplary scaling factor.
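The row filtering above might be sketched as follows; which neighbor row is mixed in depends on field polarity as described, all parameter values are placeholders, and the clamping by the combined limit anticipates the HBNR_diff expression given next.

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    def filter_edge_row(B, neighbor, combined_limit):
        # Hedged sketch: filt_pixel = (B*5 + neighbor*3 + 4)/8, where
        # neighbor is the A row (top fields/progressive) or the C row
        # (bottom fields); the correction is clamped by combined_limit.
        out = []
        for b, n in zip(B, neighbor):
            filt = (b * 5 + n * 3 + 4) // 8
            out.append(b + clamp(filt - b, -combined_limit, combined_limit))
        return out

    # Top row of a block: each B pixel is pulled toward the A pixel above
    # it, but by no more than the combined limit.
    print(filter_edge_row([20]*4, [40]*4, combined_limit=5))  # -> [25, 25, 25, 25]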
A difference parameter may be determined based on an original pixel value (original_pix) and a filtered pixel value (filt_pix). For example, the difference parameter may be determined by: diff=filt_pix−original_pix. A horizontal block noise reduction difference parameter (HBNR_diff) may be determined based on the difference parameter and the horizontal combined clamping limit (combined_limit). An exemplary HBNR_diff may be determined as follows: HBNR_diff=CLAMP(diff,−combined_limit,+combined_limit), where CLAMP may correspond to a clamping or limiting operation. FIG. 22 is a block diagram of an exemplary horizontal edge block noise reduction block, in accordance with an embodiment of the invention. Referring to FIG. 22, the VBEL block 1406 in FIG. 14 may comprise a variance block 2202, a latch 2204, a latch 2206, an edge strengths and limits (ESL) block 2208, a limits combiner 2210, a latch 2212, a clamping block 2214 and a latch 2216, and the VBNR filter 1410 may comprise a filter 2218. The variance block 2202 may comprise suitable logic, circuitry, and/or code that may be adapted to determine the parameters vvar_top, vvar_bottom, max_top, and max_bottom for a horizontal edge being processed. The latches 2204, 2206, 2212, and 2216 may comprise suitable logic and/or circuitry that may be adapted to store information. The variance block 2202 may transfer the values of vvar_top and vvar_bottom to the latch 2204 and the values of max_top and max_bottom to the latch 2206. The ESL block 2208 may comprise suitable logic, circuitry, and/or code that may be adapted to receive the horizontal edge parameters stored in the latches 2204 and 2206 to determine the value of the current horizontal edge clamping limit (limit). In this regard, the ESL block 2208 may utilize the relative weight parameter (b_rel) and the block core limit (b_core) during processing, and/or information regarding whether the video signal is interlaced video and the current field is a top field or a bottom field, or whether the video signal is progressive video. The ESL block 2208 may transfer the value for the current horizontal edge clamping limit to the limits combiner 2210. The limits combiner 2210 may comprise suitable logic, circuitry, and/or code that may be adapted to receive the current horizontal edge clamping limit, the current vertical edge clamping limit, and the left vertical edge clamping limit to determine the horizontal combined clamping limit (combined_limit) to be utilized with the pixels in the row of pixels adjacent to the horizontal edge. The limits combiner 2210 may be adapted to transfer the values for combined_limit to the latch 2212. The latch 2212 may be adapted to transfer the values of combined_limit to the clamping block 2214. The VBNR filter 2218 may comprise suitable logic, circuitry, and/or code that may be adapted to filter the original values of the pixels in the row of pixels adjacent to the horizontal edge and to determine a difference parameter (diff) based on the original and filtered values. The values of the filter coefficients utilized by the VBNR filter 2218 may be programmable via, for example, the host processor 204 and/or via a register direct memory access (DMA). The VBNR filter 2218 may be adapted to transfer the value of the difference parameter to the latch 2216. The latch 2216 may be adapted to transfer the value of the difference parameter to the clamping block 2214.
The clamping block 2214 may comprise suitable logic, circuitry, and/or code that may be adapted to determine the horizontal block noise reduction difference parameter (HBNR_diff) based on the values of combined_limit and diff. The clamping block 2214 may be adapted to transfer the value of HBNR_diff to the combiner 1412 shown in FIG. 14. When processing the first and last vertical edges in a video image, that is, the picture border or boundary, filtering may not be utilized. In this regard, the vertical combined edge clamping limits may be set to zero, for example. When processing the next to the first and next to the last vertical edges in a video image, the values of temp and temp2 may be set to b_core/4, for example. FIG. 23 is a flow chart illustrating exemplary steps for performing mosquito noise and block noise reduction in images that have been compressed prior to scaling, in accordance with an embodiment of the invention. Referring to FIG. 23, the exemplary steps may begin at step 2302. In step 2304, pixels may be read from the line stores. The SFD 206 may determine horizontal and/or vertical scale factors that may be utilized for mosquito noise detection 104, mosquito noise filtering 106, block grid detection 108 and block grid filtering 110. In step 2306, horizontal and/or vertical block variances may be determined by the MNR block variance unit 710. In step 2308, local variance may be determined by the MNR local variance block 730. In step 2310, the MNSE unit 1108 may perform mosquito noise strength estimation and may determine horizontal, vertical and combined limits. In step 2312, the MNR filter 1202 may determine horizontal and vertical mosquito noise reduction filter corrections. In step 2314, the block grid detection unit 1402 and/or the block grid detection system 1500 may perform block grid detection, may determine horizontal and vertical grid markers and may determine block edge strength. In step 2316, the horizontal and vertical block edge detection and limit units 1404 and 1406 may perform horizontal and vertical edge detection. In step 2318, the horizontal and vertical block edge detection and limit units 1404 and 1406 may determine horizontal and vertical limits. In step 2320, the HBNR filter 1408 and the VBNR filter 1410 may determine horizontal and vertical block noise filter corrections. In step 2322, the combiner 1412 may combine MNR and/or BNR filter corrections and may filter pixels. The exemplary steps may end at step 2324. In an embodiment of the invention, in a video processing device, scale may be detected in a video image by the SFD 206, for example. The scale may be detected in one or both of the vertical direction and the horizontal direction based on pixel information for the video image. The scale may be detected utilizing pixel information from the video image. One or both of a first video noise reduction operation, for example, mosquito noise strength estimation and/or mosquito noise reduction, and a second video noise reduction operation, for example, block grid detection and/or block noise reduction, may be controlled based on the detected scale, which may be utilized for processing at least a portion of the video image. A pixel correction value may be generated based on one or both of results from the first video noise reduction operation and results from the second video noise reduction operation.
The results from the first video noise reduction operation and the results from the second video noise reduction operation may be blended to generate the pixel correction value. At least one pixel value may be corrected for the video image utilizing the generated pixel correction value. The first video noise reduction operation and the second video noise reduction operation may comprise mosquito noise reduction and block noise reduction, respectively. The scale may be determined based on one or both of a per pixel vertical gradient measurement and a per pixel horizontal gradient measurement. Which gradient measurements to utilize and/or which gradient measurements to discard may be determined based on one or more of the following: configured picture format information associated with the video image, for example, stored in memory; the standard deviation of luma levels in one or both of a vertical window and a horizontal window about a current pixel of the video image; and the current pixel location relative to edges of black borders, graphics and/or overlaid content associated with the video image.

During one or both of the first video noise reduction operation and the second video noise reduction operation, horizontal operations, which may correspond to the horizontal direction, may be performed separately from vertical operations, which may correspond to the vertical direction, for one or more operations comprising: detecting horizontal and vertical edges, determining the strength of horizontal and vertical edges, filtering horizontal and vertical edges, and controlling the amount of horizontal filtering and the amount of vertical filtering, for example, in one or both of the mosquito noise detection unit 104, the mosquito noise reduction unit 106, the block grid detection unit 108 and the block noise reduction unit 110. Mosquito noise detection may also be referred to as mosquito noise strength estimation. Horizontal filtering and/or vertical filtering may be adapted based on the determined horizontal direction scale, the determined vertical direction scale and/or the determined strength of the horizontal and/or vertical edges.

The results from one or both of the first video noise reduction operation and the second video noise reduction operation may be determined based on one or more of: selecting a weakest filter correction from a median filter 1206 and one or more linear filters 1210 and 1214; blending filter corrections from the median filter and the one or more linear filters 1210 and 1214; and selecting a strongest filter correction from the median filter and the one or more linear filters 1210 and 1214. The horizontal spacing of a block noise grid, the vertical spacing of the block noise grid, the horizontal shift of the block noise grid, the vertical shift of the block noise grid and/or the block noise strength may be determined during one or both of the first video noise reduction operation and the second video noise reduction operation, for example, by the block grid detection unit 1402, the horizontal block edge detection and limit determination unit 1404 and/or the vertical block edge detection and limit determination unit 1406.
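The gradient-selection criteria above reduce to a per-pixel keep/discard decision. The Python sketch below is illustrative only: the text names the three criteria but fixes no numeric values, so the thresholds, the window contents, and the boolean inputs are all assumptions.

```python
import statistics

def keep_gradient(luma_window, near_border_or_overlay, format_ok,
                  min_std=2.0, max_std=40.0):
    """Decide whether one per-pixel gradient measurement is utilized or discarded."""
    if not format_ok:               # configured picture format information check
        return False
    if near_border_or_overlay:      # pixel near black borders, graphics, or overlays
        return False
    spread = statistics.pstdev(luma_window)   # luma standard deviation about the pixel
    return min_std <= spread <= max_std       # reject flat or very busy windows

# Example: a moderately detailed window away from any border is kept.
assert keep_gradient([16, 18, 22, 25, 30], False, True) is True
```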
Determination of which pixels to filter in a picture may be based on one or more of the horizontal spacing, the vertical spacing, the horizontal shift and the vertical shift. Vertical and/or horizontal block noise filters, for example, 1410 and 1408, may be configured based on one or more of the horizontal direction scale, the vertical direction scale, the horizontal spacing and the vertical spacing. Filter corrections may be limited based on block noise strength when determining the results from the first video noise reduction operation and/or the results from the second video noise reduction operation, for example, by the HBNR filter 1408 and/or the VBNR filter 1410. In this manner, digital noise may be reduced in scaled compressed video pictures, for example, when video pictures are scaled after being compressed. In another embodiment of the invention, the number of filter taps utilized and/or the filter coefficient values may be controlled based on the scale, for example, by the horizontal block edge detection and limit determination unit 1404 and/or the vertical block edge detection and limit determination unit 1406.

Other embodiments of the invention may provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for digital noise reduction of scaled compressed video pictures.

Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein. The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
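The selection among weakest, blended, and strongest filter corrections described earlier, together with limiting corrections by block noise strength, can be sketched as a single selection function. Interpreting "weakest" and "strongest" as smallest and largest magnitude, and "blend" as an equal-weight average, are assumptions, as is the form of the final strength clamp.

```python
def combine_corrections(median_corr, linear_corrs, mode="weakest", strength_limit=None):
    """Combine corrections from a median filter and one or more linear filters,
    then optionally limit the result based on block noise strength."""
    candidates = [median_corr, *linear_corrs]
    if mode == "weakest":
        corr = min(candidates, key=abs)           # smallest-magnitude correction
    elif mode == "strongest":
        corr = max(candidates, key=abs)           # largest-magnitude correction
    elif mode == "blend":
        corr = sum(candidates) / len(candidates)  # equal-weight blend (assumed)
    else:
        raise ValueError(f"unknown mode: {mode}")
    if strength_limit is not None:                # limit correction by strength
        corr = max(-strength_limit, min(strength_limit, corr))
    return corr

# Examples: the weakest pick, and a strong correction limited by strength.
assert combine_corrections(-6, [2, 9], mode="weakest") == 2
assert combine_corrections(-6, [2, 9], mode="strongest", strength_limit=4) == 4
```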
Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.

While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

## Claims

1. A method for image processing in a video processing device, the method comprising: detecting, in a video image, a scale factor by which said video image has been scaled from a native resolution of said video image in one or both of vertical or horizontal directions based on pixel information for said video image; controlling, based on said scale factor, one or both of a first video noise reduction operation or a second video noise reduction operation, which are utilized for processing at least a portion of said video image; generating a pixel correction value based on one or both of results from said first video noise reduction operation or results from said second video noise reduction operation; and determining said scale factor by which said video image has been scaled from said native resolution based on one or both of a per pixel vertical gradient measurement or a per pixel horizontal gradient measurement, wherein when determining said scale factor, determining which gradient measurements to utilize or which gradient measurements to discard is based on one or more of: configured picture format information associated with said video image; standard deviation of luma levels in one or both of a vertical window or a horizontal window about a current pixel of said video image; and a current pixel location relative to at least one of edges of black borders, graphics, or overlaid content associated with said video image.
2. The method according to claim 1, comprising blending said results from said first video noise reduction operation and said results from said second video noise reduction operation to generate said pixel correction value.

3. The method according to claim 1, comprising correcting at least one pixel value for said video image utilizing said pixel correction value.

4. The method according to claim 1, wherein said first video noise reduction operation and said second video noise reduction operation comprise mosquito noise reduction and block noise reduction, respectively.

5. The method according to claim 1, further comprising during one or both of said first video noise reduction operation or said second video noise reduction operation, performing horizontal operations, which correspond to said horizontal direction, separately from performing vertical operations, which correspond to said vertical direction, for one or more operations comprising: detecting horizontal and vertical edges; determining strength of horizontal and vertical edges; filtering horizontal and vertical edges; adapting at least one of horizontal filtering or vertical filtering based on at least one of a horizontal direction scale or a vertical direction scale; and adapting at least one of said horizontal filtering or said vertical filtering based on said strength of said horizontal and vertical edges.
6. The method according to claim 1, further comprising determining one or both of said results from said first video noise reduction operation or said results from said second video noise reduction operation, based on one or more of: selecting a weakest filter correction from at least one of a median filter or one or more linear filters; blending filter corrections from at least one of said median filter or said one or more linear filters; and selecting a strongest filter correction from at least one of said median filter or said one or more linear filters.

7. The method according to claim 1, further comprising one or more of: determining at least one of a horizontal spacing of a block noise grid, a vertical spacing of said block noise grid, a horizontal shift of said block noise grid, a vertical shift of said block noise grid, or a block noise strength; determining where to filter pixels in a picture based on one or more of said horizontal spacing, said vertical spacing, said horizontal shift, or said vertical shift; and configuring at least one of vertical or horizontal block noise filters based on one or more of a horizontal direction scale, a vertical direction scale, said horizontal spacing, or said vertical spacing, during one or both of said first video noise reduction operation or said second video noise reduction operation.

8. The method according to claim 1, further comprising limiting filter corrections based on block noise strength when determining said results from said first video noise reduction operation or said results from said second video noise reduction operation.
9. A system comprising: one or more processors, one or more circuits, or any combination thereof for use in a video processing device and being operable to: detect, in a video image, a scale factor by which said video image has been scaled from a native resolution of said video image in one or both of vertical or horizontal directions based on pixel information for said video image; control, based on said scale factor, one or both of a first video noise reduction operation or a second video noise reduction operation, which are utilized for processing at least a portion of said video image; and generate a pixel correction value based on one or both of results from said first video noise reduction operation or results from said second video noise reduction operation, wherein said one or more processors, one or more circuits, or any combination thereof is operable to determine said scale factor by which said video image has been scaled from said native resolution based on one or both of a per pixel vertical gradient measurement or a per pixel horizontal gradient measurement, and wherein, when determining said scale factor, said one or more processors, one or more circuits, or any combination thereof is operable to determine which gradient measurements to utilize or which gradient measurements to discard, based on one or more of: configured picture format information associated with said video image; standard deviation of luma levels in one or both of a vertical window or a horizontal window about a current pixel of said video image; and a current pixel location relative to at least one of edges of black borders, graphics, or overlaid content associated with said video image.
10. The system according to claim 9, wherein said one or more processors, one or more circuits, or any combination thereof is operable to correct at least one pixel value for said video image utilizing said pixel correction value.

11. The system according to claim 9, wherein said first video noise reduction operation and said second video noise reduction operation comprise mosquito noise reduction and block noise reduction, respectively.

12. The system according to claim 9, wherein said one or more processors, one or more circuits, or any combination thereof is operable to blend said results from said first video noise reduction operation and said results from said second video noise reduction operation to generate said pixel correction value.

13. The system according to claim 9, wherein said one or more processors, one or more circuits, or any combination thereof is operable, during one or both of said first video noise reduction operation or said second video noise reduction operation, to perform horizontal operations, which correspond to said horizontal direction, separately from performing vertical operations, which correspond to said vertical direction, for one or more operations comprising: detecting horizontal and vertical edges; determining strength of horizontal and vertical edges; filtering horizontal and vertical edges; adapting at least one of horizontal filtering or vertical filtering based on at least one of a horizontal direction scale or a vertical direction scale; and adapting at least one of said horizontal filtering or said vertical filtering based on said strength of said horizontal and vertical edges.

14. The system according to claim 9, wherein said one or more processors, one or more circuits, or any combination thereof is operable to determine one or both of said results from said first video noise reduction operation or said results from said second video noise reduction operation, based on one or more of: selecting a weakest filter correction from at least one of a median filter or one or more linear filters; blending filter corrections from at least one of said median filter or said one or more linear filters; and selecting a strongest filter correction from at least one of said median filter or said one or more linear filters.
15. The system according to claim 9, wherein said one or more processors, one or more circuits, or any combination thereof is operable to: determine at least one of a horizontal spacing of a block noise grid, a vertical spacing of said block noise grid, a horizontal shift of said block noise grid, a vertical shift of said block noise grid, or a block noise strength; determine where to filter pixels in a picture based on one or more of said horizontal spacing, said vertical spacing, said horizontal shift, or said vertical shift; and configure at least one of vertical or horizontal block noise filters based on one or more of a horizontal direction scale, a vertical direction scale, said horizontal spacing, or said vertical spacing, during one or both of said first video noise reduction operation or said second video noise reduction operation.

16. The system according to claim 9, wherein said one or more processors, one or more circuits, or any combination thereof is operable to limit filter corrections based on block noise strength when determining said results from said first video noise reduction operation or said results from said second video noise reduction operation.

17. A method for image processing, the method comprising: determining, by a video processing device, a scale factor from a native resolution of a video image based on one or both of a per pixel vertical gradient measurement or a per pixel horizontal gradient measurement; when determining said scale factor, determining which gradient measurements to utilize or which gradient measurements to discard, based on one or more of: configured picture format information associated with said video image; a standard deviation of luma levels in one or both of a vertical window or a horizontal window about a current pixel of said video image; and a current pixel location relative to at least one of edges of black borders, graphics, or overlaid content associated with said video image; controlling, based on said scale factor, one or both of a first video noise reduction operation or a second video noise reduction operation; and generating a pixel correction value by blending results from said first video noise reduction operation and results from said second video noise reduction operation.
18. The method according to claim 17, wherein said first video noise reduction operation and said second video noise reduction operation comprise mosquito noise reduction and block noise reduction, respectively.
2021-06-14 23:36:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3029845952987671, "perplexity": 1613.0587324778419}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487614006.8/warc/CC-MAIN-20210614232115-20210615022115-00181.warc.gz"}
https://hocatt.com/technology/ozone/
# Ozone

## Ozone Sauna

The HOCATT has an Oxygen concentrator that sends pure Oxygen (O2) to its two Ozone generators. The Ozone generators then use this Oxygen to make pure Ozone (O3), so you can think of Ozone as a Super-Oxygen. The Ozone is infused into the HOCATT chamber (as shown above), where it then mixes with the steam (H2O) to form Ozone products. This creates a relaxing sauna experience with all the benefits of Transdermal Ozone. The second Ozone generator in the HOCATT is dedicated to delivering pure Ozone to the auxiliary attachments for Ozone Cupping or Vaginal Ozone Insufflations. With the HOCATT, you can save time by using these auxiliary features during a HOCATT sauna session. Alternatively, they can also be used as stand-alone applications.

## Ozone Cupping

You can use a cup, or set of cups, and enjoy Ozone Cupping over specific areas, such as the breasts (as shown above). Cupping is also a form of Transdermal Ozone.

## Vaginal Ozone Insufflations

You can use a disposable catheter for Vaginal Ozone Insufflations (as shown above).
2018-10-22 20:59:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8106736540794373, "perplexity": 10223.565871879946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515539.93/warc/CC-MAIN-20181022201445-20181022222945-00336.warc.gz"}
https://ckms.kms.or.kr/journal/list.html?Vol=18&Num=2&mod=vol/&book=CKMS&aut_box=Y&sub_box=Y&pub_box=Y
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 193—392

Quantum computation algorithms
Dong Pyo Chi
MSC numbers: 81Vxx
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 193—205

A note on symmetric differences of orthomodular lattices
Eunsoon Park, Mi Mi Kim, Jin Young Chung
MSC numbers: 06C15
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 207—214

Monomial characters over finite groups
Eunmi Choi
MSC numbers: Primary 16H, 20C
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 215—223

On projective $BCI$-algebras
Sun Shin Ahn, Keumseong Bang
MSC numbers: 06F35
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 225—233

On the stability of mappings in Banach algebras
Young Whan Lee
MSC numbers: 39B22, 39B72, 46H40
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 235—242

The radial derivatives on weighted Bergman spaces
Si Ho Kang, Ja Young Kim
MSC numbers: Primary 31B05, 31B10; Secondary 32A36
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 243—249

Necessary and sufficient conditions for convergence of Ishikawa iterative schemes with errors to $\phi$-hemicontractive mappings
Zeqing Liu, Jong Kyu Kim, Shin Min Kang
MSC numbers: 47H05, 47H06, 47H10, 47H14
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 251—261

$L^p$ estimates with weights for the $\overline\partial$-equation on real ellipsoids in $\Bbb C^n$
Heungju Ahn
MSC numbers: Primary 32A26, 32W05
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 263—280

Correlation dimensions of Cantor-like sets
Mi Ryeong Lee, Hung Hwan Lee
MSC numbers: 28A80, 37B10
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 281—288

On Ruled surfaces in Minkowski spaces
Dong-Soo Kim, Young Ho Kim, Dae Won Yoon
MSC numbers: 53B25, 53C50
Commun. Korean Math. Soc. 2003 Vol. 18, No. 2, 289—295
2020-02-23 17:06:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25747570395469666, "perplexity": 2534.587374839023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145818.81/warc/CC-MAIN-20200223154628-20200223184628-00512.warc.gz"}
https://cs.stackexchange.com/questions/10245/lookahead-set-determining-minimum-k-such-that-g-is-a-strong-llk-grammar
# Lookahead set: Determining minimum $k$ such that $G$ is a strong $LL(k)$ grammar

How do we determine the minimum $k$ such that $G$ is a strong $LL(k)$ grammar? For example, take the grammar $G$ with the rules $S\rightarrow aAcaa \mid bAbcc$, $A\rightarrow a \mid ab \mid \epsilon$.

I do not believe one can obtain a minimum $k$ directly such that $G$ is a strong $LL(k)$ grammar. However, as it is possible to (dis)prove that a grammar is strong $LL(k)$, one can iterate the proof over $k$. A grammar $G$ is strong $LL(k)$ iff for every pair of distinct production rules $A \to \alpha$ and $A \to \beta$ (with $\alpha \neq \beta$), we have: $$First_k( \alpha \; Follow_k(A) ) \; \cap \; First_k( \beta \; Follow_k(A) ) = \emptyset$$ The steps to obtain a $k$ for a certain grammar $G$ are thus as follows (a sketch of the procedure is shown after this list):

• For each $n = 1, 2, \ldots$:
1. Check whether $G$ satisfies the strong $LL(n)$ condition above.
2. If it does, we have found our minimum $k = n$ (a strong $LL(n)$ grammar is also strong $LL(n+1)$, so the first $n$ that succeeds is minimal).
3. If not, continue with $n + 1$; note that a grammar need not be strong $LL(n)$ for any $n$, so in practice the search must be bounded.
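A minimal Python sketch of that iteration, assuming helper functions first_k and follow_k that return sets of lookahead strings (computing those sets is the real work and is not shown here):

```python
from itertools import combinations

def is_strong_ll(grammar, k, first_k, follow_k):
    """Check the strong LL(k) condition pairwise for every nonterminal.

    grammar maps each nonterminal A to a list of its right-hand sides;
    first_k(alpha, follow_set, k) and follow_k(A, k) are assumed helpers.
    """
    for A, alternatives in grammar.items():
        follow = follow_k(A, k)
        for alpha, beta in combinations(alternatives, 2):
            if first_k(alpha, follow, k) & first_k(beta, follow, k):
                return False   # lookahead sets intersect: not strong LL(k)
    return True

def minimum_strong_ll_k(grammar, first_k, follow_k, k_max=10):
    """Smallest n for which the condition holds, or None if none up to k_max."""
    for n in range(1, k_max + 1):
        if is_strong_ll(grammar, n, first_k, follow_k):
            return n
    return None
```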
2020-01-17 20:23:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8714144825935364, "perplexity": 282.9974024743674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00008.warc.gz"}
https://www.zbmath.org/?q=ut%3Alocally+non-resonant+curves
On the weak limit of rapidly oscillating waves. (English) Zbl 0646.73016

A rapidly oscillating wave can be represented as $$U^{\epsilon}(x,t) = W_N(\theta(x,t)/\epsilon;\, \kappa(x,t), \omega(x,t)) + O(\epsilon),$$ where $W_N(\cdot\,;\kappa,\omega): T^N \to \mathbb{R}$ is defined on the $N$-torus $T^N$ and the $N$-vectors $\theta$, $\kappa$, $\omega$ are real-valued functions of $x$ and $t$ related by $\partial\theta/\partial x = \kappa$ and $\partial\theta/\partial t = \omega$. An averaging theorem is proved on the so-called locally non-resonant curves $\kappa: \mathbb{R} \to \mathbb{R}^n$.

Reviewer: V. Rǎsvan

##### MSC:

74J99 Waves in solid mechanics
35Q99 Partial differential equations of mathematical physics and other areas of application
74J20 Wave scattering in solid mechanics

##### Keywords:

averaging theorem; locally non-resonant curves
2021-03-07 09:35:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7171809673309326, "perplexity": 2424.3506747724996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00553.warc.gz"}