http://eprints.iisc.ernet.in/12763/
# Resistivity Minima and Electron-Electron Interaction in Crystalline Alloys of Transition Metals $[Fe_xNi_{80-x}Cr_{20}, 63 \geq x \geq 50]$

Banerjee, S and Raychaudhuri, AK (1992) Resistivity Minima and Electron-Electron Interaction in Crystalline Alloys of Transition Metals $[Fe_xNi_{80-x}Cr_{20}, 63 \geq x \geq 50]$. In: Solid State Communications, 83 (12). pp. 1047-1051.

## Abstract

We report the low temperature resistivity $[\rho(T)]$ measurement of the $\gamma$-phase crystalline alloys $Fe_xNi_{80-x}Cr_{20}$ $(63 \geq x \geq 50)$ in the temperature range 0.4 K-20 K. We observe resistivity minima occurring in the temperature range $T_{min}$ = 6-11 K. Below the minima $(T < T_{min}/2)$, the rise in $\rho(T)$ follows a $\sqrt{T}$ law, indicating electron-electron interactions as the origin of this rise. In this alloy system, by varying x one can go from a long-range-ordered antiferromagnetic to a ferromagnetic phase, passing through spin glass and reentrant spin glass phases. We find that the $\sqrt{T}$ temperature dependence of $\rho(T)$ for $T < T_{min}$ is preserved irrespective of the type of magnetic order.

Item Type: Journal Article. Copyright of this article belongs to Elsevier Science Ltd. Keywords: electron-electron interaction; crystalline alloys; transition metals. Division of Physical & Mathematical Sciences > Physics. http://eprints.iisc.ernet.in/id/eprint/12763
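The $\sqrt{T}$ law quoted in the abstract is simple to test against measured data. Below is a minimal curve-fitting sketch in Python using synthetic numbers purely for illustration; the coefficients are invented, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic low-temperature data: below T_min/2 the resistivity is expected
# to rise as T decreases, following rho(T) = rho0 - B*sqrt(T) with B > 0.
rng = np.random.default_rng(0)
T = np.linspace(0.4, 3.0, 40)                       # K, well below T_min/2
rho_meas = 100.0 - 0.8 * np.sqrt(T) + rng.normal(0, 0.01, T.size)

def sqrt_T_law(T, rho0, B):
    """Electron-electron interaction form of the low-T resistivity rise."""
    return rho0 - B * np.sqrt(T)

(rho0_fit, B_fit), _ = curve_fit(sqrt_T_law, T, rho_meas, p0=(100.0, 1.0))
print(f"rho0 = {rho0_fit:.2f}, B = {B_fit:.3f}")    # recovers ~100.0 and ~0.8
```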
2015-10-06 06:20:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6766307950019836, "perplexity": 5265.226295008407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736678545.31/warc/CC-MAIN-20151001215758-00128-ip-10-137-6-227.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/116388/prove-that-the-following-language-is-not-regular-0i1j-i-neq-j?noredirect=1
Prove that the following language is not regular: $\{0^i1^j : i \neq j\}$ [duplicate]

I was trying to approach this proof; after multiple reads and attempts I am getting nowhere. If someone could help me out, that would be great. Should I use the pumping lemma, and if so, how should I start and what word should I choose? Or should I use closure properties, and if so, what irregularity should I show? I am genuinely so confused. Any help is appreciated.

• Have you tried the Myhill-Nerode theorem? – Evil Oct 28 '19 at 6:35

Let $L = \{0^i1^j : i \neq j\}$. You can prove that $L$ is not regular in many ways. Here are some examples.

Closure properties: If your language were regular then so would $0^*1^* \setminus L$ be, but that language is $\{0^n1^n : n \in \mathbb N\}$, which presumably you already know isn't regular.

Myhill–Nerode: The words $\{0^i : i \in \mathbb N\}$ are pairwise distinguishable modulo $L$: if $i \neq j$, then $0^i1^i \notin L$ but $0^j1^i \in L$. It follows that $L$ isn't regular.

Pumping lemma: If $L$ is regular then it satisfies the pumping lemma, say with constant $n$. Consider the word $w = 0^n 1^{n+n!} \in L$. According to the pumping lemma, there should be a decomposition $w = xyz$ such that $|xy| \leq n$, $|y| \geq 1$, and $xy^iz \in L$ for all $i \in \mathbb N$. Let $|y| = \ell$, so that $y = 0^\ell$. Pick $i = 1 + n!/\ell$. Then $xy^iz = 0^{n+n!} 1^{n+n!} \notin L$, contradicting the pumping lemma.

• Thank you very much, I understand the basic procedure thanks to you. If I am right, you use closure properties to work your way to a language that we know is irregular. I am still confused about the pumping lemma, but I guess you just get used to it through practice. – CS1234 Oct 29 '19 at 4:28

The language $L=\{0^i 1^j : i \neq j\}$ can be written equivalently as $L=\{0^i 1^j : i \lt j\} \cup \{0^i 1^j : i \gt j\} = L_1 \cup L_2$. Now if we prove using the pumping lemma that either of the languages $L_1$ or $L_2$ is not regular, we are done. Consider any string $x = uvw$ with $x \in L_1$, where $u=0^{i-a}$, $v=0^a1^b$, $w=1^{j-b}$, and $i\lt j$, $0 \le a \le i$, $0 \le b \le j$, $a+b\ge 1$. Now pumping $x$ yields $x' = uv^nw = (0^{i-a})(0^a1^b)^n(1^{j-b}) = 0^{i+a(n-1)}1^{j+b(n-1)}$ for all $n\ge 0$. For $L_1$ to be regular, we would need $x' \in L_1$ for all arbitrary choices of $i, j, a, b, n$ satisfying the above constraints. If one chooses $a, b$ such that $a \gt b$, then it is evident that for some $n$, $i+a(n-1) > j + b(n-1)$ (precisely, for all $n \gt \frac{j-i}{a-b}-1$). Hence there exists $x' \notin L_1$, and therefore $L_1$ is not regular. This concludes that $L$ is NOT regular.

• Try your argument on $L_2 = \{0^i1^j : i \ge j \}$. – Yuval Filmus Oct 28 '19 at 7:35
• The $L_2$ stated above is with $i \gt j$. Proving non-regularity for the language $\{ 0^i1^j : i \ge j\}$ is trivial since it has the language $\{ 0^i1^j: i = j \}$ as its subset. – RandomPerfectHashFunction Oct 28 '19 at 12:24
• The point is that the union of $L_1$ and the new $L_2$ is the language $0^*1^*$, which is regular. – Yuval Filmus Oct 28 '19 at 12:30
• Yes, that's another way to think about it. It just works out for $0^*1^*$, since every regular language is also a CFL. But in general the union of two CFLs is another CFL, which need not be a regular language. – RandomPerfectHashFunction Oct 28 '19 at 17:46
• You cannot prove that a language isn't regular by showing that it has a nonregular subset. It just doesn't follow. I gave you a counterexample to this technique. – Yuval Filmus Oct 28 '19 at 18:13
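The pumping-lemma argument in the first answer can be checked mechanically for a small concrete $n$. The following Python sketch (our illustration, not part of the original thread) enumerates every admissible decomposition $w = xyz$ of $w = 0^n1^{n+n!}$ and verifies that pumping with $i = 1 + n!/\ell$ always produces a word outside $L$:

```python
from math import factorial

n = 5
w = "0" * n + "1" * (n + factorial(n))

def in_L(s):
    """Membership test for L: a block of 0s, then a block of 1s, counts differ."""
    zeros = len(s) - len(s.lstrip("0"))
    rest = s[zeros:]
    return set(rest) <= {"1"} and zeros != len(rest)

for l in range(1, n + 1):              # y = 0^l for every admissible |y| >= 1
    for start in range(0, n - l + 1):  # |xy| <= n, so x and y sit in the 0-block
        x, y, z = w[:start], w[start:start + l], w[start + l:]
        i = 1 + factorial(n) // l      # l <= n, so l divides n! exactly
        # Pumping yields 0^(n + n!) 1^(n + n!), which has equal counts:
        assert not in_L(x + y * i + z), (l, start)
print("every decomposition admits a pump that leaves L")
```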
2021-07-29 16:16:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8395501971244812, "perplexity": 240.86240136070506}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153860.57/warc/CC-MAIN-20210729140649-20210729170649-00323.warc.gz"}
https://deepai.org/publication/on-finding-local-nash-equilibria-and-only-local-nash-equilibria-in-zero-sum-games
# On Finding Local Nash Equilibria (and Only Local Nash Equilibria) in Zero-Sum Games

We propose a two-timescale algorithm for finding local Nash equilibria in two-player zero-sum games. We first show that previous gradient-based algorithms cannot guarantee convergence to local Nash equilibria due to the existence of non-Nash stationary points. By taking advantage of the differential structure of the game, we construct an algorithm for which the local Nash equilibria are the only attracting fixed points. We also show that the algorithm exhibits no oscillatory behaviors in neighborhoods of equilibria and show that it has the same per-iteration complexity as other recently proposed algorithms. We conclude by validating the algorithm on two numerical examples: a toy example with multiple Nash equilibria and a non-Nash equilibrium, and the training of a small generative adversarial network (GAN).

## 1 Introduction

The classical problem of finding Nash equilibria in multi-player games has been a focus of intense research in computer science, control theory, economics and mathematics (Basar and Olsder, 1998; Nisan et al., 2007; Daskalakis et al., 2009). Some connections have been made between this extensive literature and machine learning (see, e.g., Cesa-Bianchi and Lugosi, 2006; Banerjee and Peng, 2003; Foerster et al., 2017), but these connections have focused principally on decision-making by single agents and multiple agents, and not on the burgeoning pattern-recognition side of machine learning, with its focus on large data sets and simple gradient-based algorithms for prediction and inference.
This gap has begun to close in recent years, due to new formulations of learning problems as involving competition between subsystems that are construed as adversaries (Goodfellow et al., 2014), the need to robustify learning systems against actual adversaries (Xu et al., 2009) and against mismatch between assumptions and data-generating mechanisms (Yang, 2011; Giordano et al., 2018), and an increasing awareness that real-world machine-learning systems are often embedded in larger economic systems or networks (Jordan, 2018). These emerging connections bring significant algorithmic and conceptual challenges to the fore. Indeed, while gradient-based learning has been a major success in machine learning, both in theory and in practice, work on gradient-based algorithms in game theory has often highlighted their limitations. For example, gradient-based approaches are known to be difficult to tune and train (Daskalakis et al., 2017; Mescheder et al., 2017; Hommes and Ochea, 2012; Balduzzi et al., 2018), and recent work has shown that gradient-based learning will almost surely avoid a subset of the local Nash equilibria in general-sum games (Mazumdar and Ratliff, ). Moreover, there is no shortage of work showing that gradient-based algorithms can converge to limit cycles or even diverge in game-theoretic settings (Benaïm and Hirsch, 1999; Hommes and Ochea, 2012; Daskalakis et al., 2017; Mertikopoulos et al., 2018b). These drawbacks have led to a renewed interest in approaches to finding the Nash equilibria of zero-sum games, or equivalently, to solving saddle point problems. Recent work has attempted to use second-order information to reduce oscillations around equilibria and speed up convergence to fixed points of the gradient dynamics (Mescheder et al., 2017; Balduzzi et al., 2018). Other recent approaches have attempted to tackle the problem from the variational inequality perspective, but also with an eye on reducing oscillatory behaviors (Mertikopoulos et al., 2018a; Gidel et al., 2018). None of these approaches, however, address a fundamental issue that arises in zero-sum games. As we will discuss, the set of attracting fixed points for the gradient dynamics in zero-sum games can include critical points that are not Nash equilibria. In fact, any saddle point of the underlying function that does not satisfy a particular alignment condition of a Nash equilibrium is a candidate attracting equilibrium for the gradient dynamics. Further, as we show, these points are attracting for a variety of recently proposed adjustments to gradient-based algorithms, including consensus optimization (Mescheder et al., 2017), the symplectic gradient adjustment (Balduzzi et al., 2018), and a two-timescale version of simultaneous gradient descent (Heusel et al., 2017). Moreover, we show by counterexample that these algorithms can all converge to non-Nash stationary points. We present a new gradient-based algorithm for finding the local Nash equilibria of two-player zero-sum games and prove that the only stationary points to which the algorithm can converge are local Nash equilibria. Our algorithm makes essential use of the underlying structure of zero-sum games. To obtain our theoretical results we work in continuous time, via an ordinary differential equation (ODE), and our algorithm is obtained via a discretization of the ODE.
While a naive discretization would require a matrix inversion and would be computationally burdensome, our discretization is a two-timescale discretization that avoids matrix inversion entirely and is of a similar computational complexity as that of other gradient-based algorithms. The paper is organized as follows. In Section 2 we define our notation and the problem we address. In Section 3 we define the limiting ODE that we would like our algorithm to follow and show that it has the desirable property that its only limit points are local Nash equilibria of the game. In Section 4 we introduce local symplectic surgery, a two-timescale procedure that asymptotically tracks the limiting ODE, and show that it can be implemented efficiently. Finally, in Section 5 we present two numerical examples to validate the algorithm. The first is a toy example with three local Nash equilibria and one non-Nash fixed point. We show that simultaneous gradient descent and other recently proposed algorithms for zero-sum games can converge to any of the four points, while the proposed algorithm only converges to the local Nash equilibria. The second example is a small generative adversarial network (GAN), where we show that the proposed algorithm converges to a suitable solution within a similar number of steps as simultaneous gradient descent.

## 2 Preliminaries

We consider a two-player game, in which one player tries to minimize a function $f$ with respect to their decision variable $x \in \mathbb{R}^{d_1}$, and the other player aims to maximize $f$ with respect to their decision variable $y \in \mathbb{R}^{d_2}$, where $d = d_1 + d_2$. We write such a game as $(f, -f)$, since the second player can be seen as minimizing $-f$. We assume that neither player knows anything about the critical points of $f$, but that both players follow the rules of the game. Such a situation arises naturally when training machine learning algorithms (e.g., training generative adversarial networks or in multi-agent reinforcement learning). Without restricting $f$, and assuming both players are non-cooperative, the best they can hope to achieve is a local Nash equilibrium; i.e., a point $(x^*, y^*)$ that satisfies
$$f(x^*, y) \leq f(x^*, y^*) \leq f(x, y^*),$$
for all $x$ and $y$ in neighborhoods of $x^*$ and $y^*$ respectively. Such equilibria are locally optimal for both players with respect to their own decision variable, meaning that neither player has an incentive to unilaterally deviate from such a point. As was shown in Ratliff et al. (2013), generically, local Nash equilibria will satisfy slightly stronger conditions, namely they will be differential Nash equilibria (DNE): A strategy $(x^*, y^*)$ is a differential Nash equilibrium if:

• $D_x f(x^*, y^*) = 0$ and $D_y f(x^*, y^*) = 0$.
• $D^2_{xx} f(x^*, y^*) > 0$, and $D^2_{yy} f(x^*, y^*) < 0$.

Here $D_x f$ and $D_y f$ denote the partial derivatives of $f$ with respect to $x$ and $y$ respectively, and $D^2_{xx} f$ and $D^2_{yy} f$ denote the matrices of second derivatives of $f$ with respect to $x$ and $y$. Both differential and local Nash equilibria in two-player zero-sum games are, by definition, special saddle points of the function $f$ that satisfy a particular alignment condition with respect to the players' decision variables. Indeed, the definition of differential Nash equilibria, which holds for almost all local Nash equilibria in a formal mathematical sense, makes this condition clear: the directions of positive and negative curvature of the function at a local Nash equilibrium must be aligned with the minimizing and maximizing player's decision variables respectively. We note that the key difference between local and differential Nash equilibria is that $D^2_{xx} f$ and $D^2_{yy} f$ are required to be definite instead of semidefinite.
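As a concrete illustration (our toy example, not from the paper), the DNE conditions are easy to check numerically for the scalar quadratic game $f(x, y) = x^2 + 2xy - y^2$, whose only joint critical point is the origin:

```python
import numpy as np

# Check the differential Nash conditions for f(x, y) = x^2 + 2xy - y^2 at
# (0, 0): the gradient vanishes, D^2_xx f > 0, and D^2_yy f < 0.
grad = np.array([0.0, 0.0])   # (Dx f, Dy f) = (2x + 2y, 2x - 2y) at (0, 0)
Dxx, Dyy = 2.0, -2.0          # second derivatives (constant for a quadratic)

is_dne = np.allclose(grad, 0.0) and Dxx > 0 and Dyy < 0
print("differential Nash equilibrium:", is_dne)   # True
```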
This distinction simplifies our analysis while still allowing our results to hold for almost all continuous games.

### 2.1 Issues with gradient-based algorithms in zero-sum games

Having introduced local Nash equilibria as the solution concept of interest, we now consider how to find such solutions, and in particular we highlight some issues with gradient-based algorithms in zero-sum continuous games. The most common method of finding local Nash equilibria in such games is to have both players randomly initialize their variables and then follow their respective gradients. That is, at each step $n$, each agent updates their variable as follows:
$$x_{n+1} = x_n - \gamma_n D_x f(x_n, y_n)$$
$$y_{n+1} = y_n + \gamma_n D_y f(x_n, y_n),$$
where $\{\gamma_n\}$ is a sequence of step sizes. The minimizing player performs gradient descent on their cost while the maximizing player ascends their gradient. We refer to this algorithm as simultaneous gradient descent (simGD). To simplify the notation, we let $z = (x, y)$ and define the vector-valued function $\omega$ as:
$$\omega(z) = \begin{bmatrix} D_x f(x, y) \\ -D_y f(x, y) \end{bmatrix}.$$
In this notation, the simGD update is given by:
$$z_{n+1} = z_n - \gamma_n \omega(z_n). \tag{1}$$
Since (1) is in the form of a discrete-time dynamical system, it is natural to examine its limiting behavior through the lens of dynamical systems theory. Intuitively, given a properly chosen sequence of step sizes, (1) should have the same limiting behavior as the continuous-time flow:
$$\dot z = -\omega(z). \tag{2}$$
We can analyze this flow in neighborhoods of equilibria by studying the Jacobian matrix of $\omega$, denoted $J(z)$:
$$J(z) = \begin{bmatrix} D^2_{xx} f(x, y) & D^2_{yx} f(x, y) \\ -D^2_{xy} f(x, y) & -D^2_{yy} f(x, y) \end{bmatrix}. \tag{3}$$
We remark that the diagonal blocks of $J(z)$ are always symmetric. Thus $J(z)$ can be written as the sum of a block symmetric matrix $S(z)$ and a block anti-symmetric matrix $A(z)$, where:
$$S(z) = \begin{bmatrix} D^2_{xx} f(z) & 0 \\ 0 & -D^2_{yy} f(z) \end{bmatrix}; \qquad A(z) = \begin{bmatrix} 0 & D^2_{yx} f(z) \\ -D^2_{xy} f(z) & 0 \end{bmatrix}.$$
Given the structure of the Jacobian, we can now draw links between differential Nash equilibria and equilibrium concepts in dynamical systems theory. We focus on hyperbolic critical points of $\omega$. A strategy $\bar z$ is a critical point of $\omega$ if $\omega(\bar z) = 0$. It is a hyperbolic critical point if $\mathrm{Re}(\lambda_i(J(\bar z))) \neq 0$ for all $i$, where $\lambda_i(J(\bar z))$ denotes the $i$-th eigenvalue of $J(\bar z)$ and $\mathrm{Re}(\cdot)$ denotes the real part of the eigenvalue. It is well known that hyperbolic critical points are generic among critical points of smooth dynamical systems (see e.g. (Sastry, 1999)), meaning that our focus on hyperbolic critical points is not very restrictive. Of particular interest are locally asymptotically stable equilibria of the dynamics. A strategy $\bar z$ is a locally asymptotically stable equilibrium (LASE) of the continuous-time dynamics $\dot z = -\omega(z)$ if $\omega(\bar z) = 0$ and $\mathrm{Re}(\lambda_i(J(\bar z))) > 0$ for all $i$. LASE have the desirable property that they are locally exponentially attracting under the flow of $-\omega$. This implies that a properly discretized version of $-\omega$ will also converge exponentially fast in a neighborhood of such points. LASE are the only attracting hyperbolic equilibria. Thus, making statements about all the LASE of a certain continuous-time dynamical system allows us to characterize all attracting hyperbolic equilibria. As shown in Ratliff et al. (2013) and Nagarajan and Kolter (2017), the fact that all differential Nash equilibria are critical points of $\omega$, coupled with the structure of $J$ in zero-sum games, guarantees that all differential Nash equilibria of the game are LASE of the gradient dynamics. However the converse is not true. The structure present in zero-sum games is not enough to ensure that the differential Nash equilibria are the only LASE of the gradient dynamics. When either $D^2_{xx} f$ or $-D^2_{yy} f$ is indefinite at a critical point of $\omega$, the Jacobian can still have eigenvalues with strictly positive real parts.
Consider a matrix having the form:
$$M = \begin{bmatrix} a & c \\ -c & -b \end{bmatrix},$$
where $a < 0$ or $b > 0$. These conditions imply that $M$ cannot be the Jacobian of $\omega$ at a local Nash equilibrium. However, if $a > b$ and $c^2 > ab$, both of the eigenvalues of $M$ will have strictly positive real parts, and such a point could still be a LASE of the gradient dynamics. Such points, which we refer to as non-Nash LASE of (2), are what makes having guarantees on the convergence of algorithms in zero-sum games particularly difficult. Non-Nash LASE are not locally optimal for both players, and may not even be optimal for one of the players. By definition, at least one of the two players has a direction in which they would move to unilaterally decrease their cost. Such points arise solely due to the gradient dynamics, and persist even in other gradient-based dynamics suggested in the literature. In Appendix B, we show that three recent algorithms for finding local Nash equilibria in zero-sum continuous games, consensus optimization, the symplectic gradient adjustment, and a two-timescale version of simGD, can converge to such points and therefore have no guarantees of convergence to local Nash equilibria. We note that such points can be very common since every saddle point of $f$ that is not a local Nash equilibrium is a candidate non-Nash LASE of the gradient dynamics. Further, local minima or maxima of $f$ could also be non-Nash LASE of the gradient dynamics. To understand how non-Nash equilibria can be attracting under the flow of $-\omega$, we again analyze the Jacobian of $\omega$. At such points, the symmetric matrix $S(z)$ must have both positive and negative eigenvalues. The sum of $S(z)$ with $A(z)$, however, has eigenvalues with strictly positive real part. Thus, the anti-symmetric matrix $A(z)$ can be seen as stabilizing such points. Previous gradient-based algorithms for zero-sum games have also pinpointed the matrix $A(z)$ as the source of problems in zero-sum games; however, they focus on a different issue. Consensus optimization (Mescheder et al., 2017) and the symplectic gradient adjustment (Balduzzi et al., 2018) both seek to adjust the gradient dynamics to reduce oscillatory behaviors in neighborhoods of stable equilibria. Since the matrix $A(z)$ is anti-symmetric, it has only imaginary eigenvalues. If it dominates $S(z)$, then the eigenvalues of $J(z)$ can have a large imaginary component. This leads to oscillations around equilibria that have been shown empirically to slow down convergence (Mescheder et al., 2017). Both of these adjustments rely on tunable hyper-parameters to achieve their goals. Their effectiveness is therefore highly reliant on the choice of parameter. Further, as shown in Appendix B, neither of the adjustments is able to rule out convergence to non-Nash equilibria. A second promising line of research into theoretically sound methods of finding the Nash equilibria of zero-sum games has approached the issue from the perspective of variational inequalities (Mertikopoulos et al., 2018a; Gidel et al., 2018). In Mertikopoulos et al. (2018a), extragradient methods were used to solve coherent saddle point problems and reduce oscillations when converging to saddle points. In such problems, however, all saddle points of the function are assumed to be local Nash equilibria, and thus the issue of converging to non-Nash equilibria is assumed away. Similarly, by assuming that $\omega$ is monotone, as in the theoretical treatment of the averaging scheme proposed in Gidel et al. (2018), the cost function is implicitly assumed to be convex-concave.
This in turn implies that the Jacobian satisfies the conditions for a Nash equilibrium everywhere. The behavior of their approaches in more general zero-sum games with less structure (like the training of GANs) is therefore not well known. Moreover, since their approach relies on averaging the gradients, they do not fundamentally change the nature of the critical points of simGD. In the following sections we propose an algorithm for which the only LASE are the differential Nash equilibria of the game. We also show that, regardless of the choice of hyper-parameter, the Jacobian of the new dynamics at LASE has real eigenvalues, which means that the dynamics cannot exhibit oscillatory behaviors around differential Nash equilibria.

## 3 Constructing the limiting differential equation

In this section we define the continuous-time flow that our discrete-time algorithm should ideally follow.

###### Assumption 1 (Lipschitz assumptions on f and J)
Assume that $f \in C^2(\mathbb{R}^d, \mathbb{R})$, and that $\omega$ and $J$ are $L_\omega$-Lipschitz and $L_J$-Lipschitz respectively. Finally, assume that all critical points of $\omega$ are hyperbolic. We do not require $J(z)$ to be invertible everywhere, but only at the critical points of $\omega$. Now, consider the continuous-time flow:
$$\dot z = -h(z) = -\frac{1}{2}\left(\omega(z) + J^T(z)\left(J^T(z)J(z) + \lambda(z)I\right)^{-1} J^T(z)\,\omega(z)\right), \tag{4}$$
where $\lambda : \mathbb{R}^d \to \mathbb{R}_{\geq 0}$ is such that $\lambda(z) > 0$ for all $z$ with $\omega(z) \neq 0$, and $\lambda(z) = 0$ if and only if $\omega(z) = 0$. The function $\lambda$ ensures that, even when $J(z)$ is not invertible everywhere, the inverse matrix in (4) exists. The vanishing condition ensures that the Jacobian of the adjustment term is exactly $J^T(z)$ at differential Nash equilibria. The dynamics introduced in (4) can be seen as an adjusted version of the gradient dynamics where the adjustment term only allows trajectories to approach critical points of $\omega$ along the players' axes. If a critical point is not locally optimal for one of the players (i.e., it is a non-Nash critical point), then that player can push the dynamics out of a neighborhood of that point. The mechanism is easier to see if we assume $J(z)$ is invertible and set $\lambda \equiv 0$. This results in the following dynamics:
$$\dot z = -\frac{1}{2}\left(\omega(z) + J^T(z) J^{-1}(z)\,\omega(z)\right). \tag{5}$$
In this simplified form we can see that the Jacobian of the adjustment is approximately $J^T(z)$ when $\|\omega(z)\|$ is small. This approximation is exact at critical points of $\omega$. Adding this adjustment term to $\omega$ exactly cancels out the rotational part of the vector field contributed by the antisymmetric matrix $A(z)$ in a neighborhood of critical points. Since we identified $A(z)$ as the source of oscillatory behaviors and non-Nash equilibria in Section 2, this adjustment addresses both of these issues. The following theorem establishes this formally.

Under Assumption 1, and if $\omega(z)$ is never an eigenvector of $J^T(z)(J^T(z)J(z) + \lambda(z)I)^{-1}J^T(z)$ with eigenvalue $-1$, the continuous-time dynamical system $\dot z = -h(z)$ satisfies:

• $\bar z$ is a LASE of $\dot z = -h(z)$ $\iff$ $\bar z$ is a differential Nash equilibrium of the game $(f, -f)$.
• If $\bar z$ is a critical point of $\omega$, then the Jacobian of $h$ at $\bar z$ has real eigenvalues.

We first show that:
$$h(z) = 0 \iff \omega(z) = 0.$$
Clearly, $\omega(z) = 0 \implies h(z) = 0$. To show the converse, we assume that $h(z) = 0$ but $\omega(z) \neq 0$. This implies that:
$$J^T(z)\left(J^T(z)J(z) + \lambda(z)I\right)^{-1} J^T(z)\,\omega(z) = -\omega(z).$$
Since we assumed that this cannot be true, we must have that $\omega(z) = 0$. Having shown that, under our assumptions, the critical points of $h$ are the same as those of $\omega$, we now note that the Jacobian of $h$ at a critical point must have the form:
$$J_h(z) = \frac{1}{2}\left(J(z) + J^T(z)\left(J^T(z)J(z)\right)^{-1} J^T(z) J(z)\right) = \frac{1}{2}\left(J(z) + J^T(z)\right) = S(z).$$
By assumption, at critical points, $J(z)$ is invertible and $\lambda(z) = 0$. Given that $\omega(z) = 0$, terms that include $\omega(z)$ disappear, and the adjustment term contributes only a factor of $J^T(z)$ to the Jacobian of $h$ at a critical point. This exactly cancels out the antisymmetric part of the Jacobian of $\omega$.
The Jacobian of $h$ is therefore symmetric at critical points of $\omega$ and has positive eigenvalues only when $D^2_{xx} f(\bar z) > 0$ and $-D^2_{yy} f(\bar z) > 0$. Since these are also the conditions for differential Nash equilibria, all differential Nash equilibria of $(f, -f)$ must be LASE of $\dot z = -h(z)$. Further, non-Nash LASE of $\dot z = -\omega(z)$ cannot be LASE of $\dot z = -h(z)$, since by definition either $D^2_{xx} f$ or $-D^2_{yy} f$ is indefinite at such points. To show the second part of the theorem, we simply note that $J_h$ must be symmetric at all critical points, which in turn implies that it has only real eigenvalues. The continuous-time dynamical system $\dot z = -h(z)$ therefore solves both of the problems we highlighted in Section 2, for any choice of the function $\lambda$ that satisfies our assumptions. The assumption that $\omega(z)$ is never an eigenvector of $J^T(z)(J^T(z)J(z)+\lambda(z)I)^{-1}J^T(z)$ with an eigenvalue of $-1$ ensures that the adjustment does not create new critical points. In high dimensions the assumption is mild since the scenario is extremely specific, but it is also possible to show that this assumption can be removed entirely by adding a time-varying term to $h$ while still retaining the theoretical guarantees. We show this in Appendix A. Theorem 3 shows that the only attracting hyperbolic equilibria of the limiting ordinary differential equation (ODE) are the differential Nash equilibria of the game. Also, since $J_h$ is symmetric at critical points of $\omega$, if either $D^2_{xx} f$ or $-D^2_{yy} f$ has at least one negative eigenvalue, then such a point is a linearly unstable equilibrium of $\dot z = -h(z)$ and is therefore almost surely avoided when the algorithm is randomly initialized (Benaïm and Hirsch, 1995; Sastry, 1999). Theorem 3 also guarantees that the continuous-time dynamics do not oscillate near critical points. Oscillatory behaviors, as outlined in Mescheder et al. (2017), are known to slow down convergence of the discretized version of the process. Reducing oscillations near critical points is the main goal of consensus optimization (Mescheder et al., 2017) and the symplectic gradient adjustment (Balduzzi et al., 2018). However, for both algorithms, the extent to which they are able to reduce the oscillations depends on the choice of hyperparameter. The proposed dynamics achieve this for any $\lambda$ that satisfies our assumptions. We close this section by noting that one can pre-multiply the adjustment term by a positive damping function while still retaining the theoretical properties described in Theorem 3. Such a function can be used to ensure that the dynamics closely track a trajectory of simGD except in neighborhoods of critical points. For example, if the matrix $J^T(z)J(z)$ is ill-conditioned, such a term could be used to ensure that the adjustment does not dominate the underlying gradient dynamics. In Section 5 we give an example of such a damping function.

## 4 Two-timescale approximation

Given the limiting ODE, we could perform a straightforward Euler discretization to obtain a discrete-time update having the form:
$$z_{n+1} = z_n - \gamma\, h(z_n).$$
However, due to the matrix inversion, such a discrete-time update would be prohibitively expensive to implement in high-dimensional parameter spaces like those encountered when training GANs. To solve this problem, we now introduce a two-timescale approximation to the continuous-time dynamics that has the same limiting behavior but is much faster to compute at each iteration than the simple discretization. Since this procedure serves to exactly remove the symplectic part of the Jacobian in neighborhoods of hyperbolic critical points, we refer to this two-timescale procedure as local symplectic surgery (LSS).
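Before the two-timescale scheme, here is a minimal numpy sketch (our own, with an illustrative choice of $\lambda$ and illustrative values of $\omega$ and $J$) of one step of the naive Euler discretization $z_{n+1} = z_n - \gamma h(z_n)$. The expensive part is the linear solve, which is exactly what Section 4 replaces with cheap gradient steps on an auxiliary variable $v$:

```python
import numpy as np

def h(omega_z, J_z, lam):
    """Adjusted vector field of eq. (4):
    h(z) = 1/2 (omega + J^T (J^T J + lam I)^(-1) J^T omega)."""
    v_star = np.linalg.solve(J_z.T @ J_z + lam * np.eye(J_z.shape[1]),
                             J_z.T @ omega_z)
    return 0.5 * (omega_z + J_z.T @ v_star)

# Illustrative values of omega(z) and J(z) at some point z:
J = np.array([[-1.0, 2.0], [-2.0, 2.0]])
w = np.array([0.3, -0.1])
lam = min(1.0, float(w @ w))     # vanishes exactly when omega(z) = 0
gamma, z = 0.01, np.array([0.5, -0.5])
z = z - gamma * h(w, J, lam)     # one Euler step of zdot = -h(z)
print(z)
```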
In Appendix A we derive the two-timescale update rule for the time-varying version of the limiting ODE and show that it also has the same properties. The two-timescale approximation to (4) is given by:
$$z_{n+1} = z_n - a_n h_1(z_n, v_n)$$
$$v_{n+1} = v_n - b_n h_2(z_n, v_n), \tag{6}$$
where $h_1$ and $h_2$ are defined as:
$$h_1(z, v) = \frac{1}{2}\left(\omega(z) + J^T(z)v\right)$$
$$h_2(z, v) = J^T(z)J(z)v - J^T(z)\omega(z) + \lambda(z)v,$$
and the sequences of step sizes $\{a_n\}$, $\{b_n\}$ satisfy the following assumptions:

###### Assumption 2 (Assumptions on the step sizes)
The sequences $\{a_n\}$ and $\{b_n\}$ satisfy:

• $\sum_n a_n = \infty$ and $\sum_n a_n^2 < \infty$;
• $\sum_n b_n = \infty$ and $\sum_n b_n^2 < \infty$;
• $a_n / b_n \to 0$.

We note that $h_2$ is Lipschitz continuous in $v$ uniformly in $z$ under Assumption 1. The $v_n$ process performs gradient descent on a regularized least-squares problem, where the regularization is governed by $\lambda(z)$. If the $v_n$ process is on a faster time scale, the intuition is that it will first converge to $v^*(z) = (J^T(z)J(z) + \lambda(z)I)^{-1}J^T(z)\omega(z)$, and then $z_n$ will track the limiting ODE in (4). In the next section we show that this behavior holds even in the presence of noise. The key benefit of the two-timescale process is that $h_1$ and $h_2$ can be computed efficiently since neither requires a matrix inversion. In fact, as we show in Appendix C, the computation can be done with Jacobian-vector products with the same order of complexity as that of simGD, consensus optimization, and the symplectic gradient adjustment. This insight gives rise to the procedure outlined in Algorithm 1.

### 4.1 Long-term behavior of the two-timescale approximation

We now show that LSS asymptotically tracks the limiting ODE even in the presence of noise. This implies that the algorithm has the same limiting behavior as (4). In particular, our setup allows us to treat the case where one only has access to unbiased estimates $\hat h_1$ and $\hat h_2$ of $h_1$ and $h_2$ at each iteration. This is the setting most likely to be encountered in practice, for example in the case of training GANs in a mini-batch setting:
$$\mathbb{E}[\hat h_1(z, v)] = h_1(z, v)$$
$$\mathbb{E}[\hat h_2(z, v)] = h_2(z, v).$$
To place this in the form of classical two-timescale stochastic approximation processes, we write each estimator $\hat h_1$ and $\hat h_2$ as the sum of its mean and zero-mean noise processes $M^z$ and $M^v$ respectively. This results in the following two-timescale process:
$$z_{n+1} = z_n - a_n\left[\omega(z_n) + J^T(z_n)v_n + M^z_{n+1}\right]$$
$$v_{n+1} = v_n - b_n\left[J^T(z_n)J(z_n)v_n - J^T(z_n)\omega(z_n) + \lambda(z_n)v_n + M^v_{n+1}\right]. \tag{7}$$
We assume that the noise processes satisfy the following standard conditions (Benaïm, 1999; Borkar, 2008):

###### Assumption 3 (Assumptions on the noise)
Define the filtration $\mathcal{F}_n = \sigma(z_0, v_0, M^v_1, M^z_1, \ldots, M^z_n, M^v_n)$ for $n \geq 0$. Given $\mathcal{F}_n$, we assume that:

• $M^z_{n+1}$ and $M^v_{n+1}$ are conditionally independent given $\mathcal{F}_n$ for $n \geq 0$.
• $\mathbb{E}[M^z_{n+1} \mid \mathcal{F}_n] = 0$ and $\mathbb{E}[M^v_{n+1} \mid \mathcal{F}_n] = 0$ for $n \geq 0$.
• $\mathbb{E}[\|M^z_{n+1}\|^2 \mid \mathcal{F}_n] \leq c_z\left(1 + \|z_n\|^2 + \|v_n\|^2\right)$ and $\mathbb{E}[\|M^v_{n+1}\|^2 \mid \mathcal{F}_n] \leq c_v\left(1 + \|z_n\|^2 + \|v_n\|^2\right)$ almost surely for some positive constants $c_z$ and $c_v$.

Given our assumptions on the estimators, cost function, and step sizes, we now show that (7) asymptotically tracks a trajectory of the continuous-time dynamics almost surely. Since $h$, $h_1$, and $h_2$ are not uniformly Lipschitz continuous in both $z$ and $v$, we cannot directly invoke results from the literature. Instead, we adapt the proof of Theorem 2 in Chapter 6 of Borkar (2008) to show that $\|v_n - v^*(z_n)\| \to 0$ almost surely. We then invoke Proposition 4.1 from Benaïm (1999) to show that $z_n$ asymptotically tracks a trajectory of the limiting ODE. We note that this approach only holds on the event $\{\sup_n (\|z_n\| + \|v_n\|) < \infty\}$. Thus, if the stochastic approximation process remains bounded, then under our assumptions we are sure to track a trajectory of the limiting ODE.

Under Assumptions 1-3, and on the event $\{\sup_n (\|z_n\| + \|v_n\|) < \infty\}$:
$$(z_n, v_n) \to \left\{(z, v^*(z)) : z \in \mathbb{R}^d\right\}$$
almost surely. We first rewrite (7) as:
$$z_{n+1} = z_n - b_n\left[\frac{a_n}{b_n} h_1(z_n, v_n) + \bar M^z_{n+1}\right]$$
$$v_{n+1} = v_n - b_n\left[h_2(z_n, v_n) + M^v_{n+1}\right],$$
where $\bar M^z_{n+1} = \frac{a_n}{b_n} M^z_{n+1}$. By assumption, $a_n/b_n \to 0$. Since $h_1$ is locally Lipschitz continuous, it is bounded on the event $\{\sup_n (\|z_n\| + \|v_n\|) < \infty\}$. Thus, $\frac{a_n}{b_n} h_1(z_n, v_n) + \bar M^z_{n+1} \to 0$ almost surely.
From Lemma 1 in Chapter 6 of Borkar (2008), the above processes, on the event $\{\sup_n (\|z_n\| + \|v_n\|) < \infty\}$, converge almost surely to internally chain-transitive invariant sets of $\dot v = -h_2(z, v)$ for fixed $z$. Since, for a fixed $z$, $h_2(z, \cdot)$ is a Lipschitz continuous function of $v$ with a globally asymptotically stable equilibrium at $v^*(z)$, the claim follows. Having shown that $\|v_n - v^*(z_n)\| \to 0$ almost surely, we now show that $z_n$ will asymptotically track a trajectory of the limiting ODE. Let us first define $z(t, t_0, z_0)$ for $t \geq t_0$ to be the trajectory of $\dot z = -h(z)$ starting at $z_0$ at time $t_0$. Given Assumptions 1-3, let $t_n = \sum_{k=0}^{n-1} a_k$. On the event $\{\sup_n (\|z_n\| + \|v_n\|) < \infty\}$, for any integer $K > 0$ we have:
$$\lim_{n\to\infty}\ \sup_{0 \leq h \leq K}\ \left\| z_{n+h} - z(t_{n+h}, t_n, z_n) \right\|_2 = 0.$$
The proof makes use of Propositions 4.1 and 4.2 in Benaïm (1999), which are supplied in Appendix E. We first rewrite the process as:
$$z_{n+1} = z_n - a_n\left[h(z_n) - J^T(z_n)\left(v^*(z_n) - v_n\right) + M^z_{n+1}\right].$$
We note that, from Lemma 4.1, $\|v_n - v^*(z_n)\| \to 0$ almost surely. Since this is the case, we can write this process as:
$$z_{n+1} = z_n - a_n\left[h(z_n) - \chi_n + M^z_{n+1}\right],$$
where $\chi_n \to 0$ almost surely. Since $h$ is continuously differentiable, it is locally Lipschitz, and on the event $\{\sup_n \|z_n\| < \infty\}$ it is bounded. It thus induces a continuous globally integrable vector field, and therefore satisfies the assumptions of Proposition 4.1 in Benaïm (1999). Further, by assumption, the sequence of step sizes and the martingale difference sequences satisfy the assumptions of Proposition 4.2 in Benaïm (1999). Invoking Propositions 4.1 and 4.2 in Benaïm (1999) gives us the desired result. Theorem 4.1 guarantees that LSS asymptotically tracks a trajectory of the limiting ODE. The approximation will therefore avoid non-Nash equilibria of the gradient dynamics. Further, the only locally asymptotically stable points for LSS must be the differential Nash equilibria of the game.

## 5 Numerical Examples

We now present two numerical examples that illustrate the performance of both the limiting ODE and LSS. The first is a zero-sum game played over a function in $\mathbb{R}^2$ that allows us to observe the behavior of the limiting ODE around both local Nash and non-Nash equilibria. In the second example we use LSS to train a small generative adversarial network (GAN) to learn a mixture of eight Gaussians. Further numerical experiments and comments are provided in Appendix D.

### 5.1 2-D example

For the first example, we consider the game based on the following function in $\mathbb{R}^2$:
$$f(x, y) = e^{-0.01(x^2 + y^2)}\left((0.3x^2 + y)^2 + (0.5y^2 + x)^2\right).$$
This function is a fourth-order polynomial that is scaled by an exponential to ensure that it is bounded. The gradient dynamics associated with this function have four LASE. By evaluating the Jacobian of $\omega$ at these points we find that three of the LASE are local Nash equilibria. These are denoted by 'x' in Figure 1. The fourth LASE is a non-Nash equilibrium, which is denoted with a star. In Figure 1, we plot the sample paths of both simGD and our limiting ODE from the same initial positions, shown with red dots. We clearly see that simGD converges to all four LASE, depending on the initialization. Our algorithm, on the other hand, only converges to the local Nash equilibria. When initialized close to the non-Nash equilibrium it diverges from the simGD path and ends up converging to a local Nash equilibrium. This numerical example also allows us to study the behavior of our algorithm around LASE. By focusing on a local Nash equilibrium, as in Figure 1B, we observe that the limiting ODE approaches it directly even when simGD displays oscillatory behaviors. This empirically validates the second part of Theorem 3. In Figure 2 we empirically validate that LSS asymptotically tracks the limiting ODE. When the fast timescale has not converged, the process tracks the gradient dynamics.
Once it has converged, however, we see that it closely tracks the limiting ODE, which leads it to converge only to the local Nash equilibria. This behavior highlights an issue with the two-timescale approach: since the non-Nash equilibria of the gradient dynamics are saddle points of the new dynamics, they can slow down convergence. However, the process will eventually escape such points (Benaïm, 1999). In our numerical experiments we also make use of a damping function as described in Section 3. The limiting ODE is therefore given by:
$$\dot z = -\left(\omega(z) + e^{-\xi_2 \|v\|^2}\, v\right),$$
where $v = (J^T(z)J(z) + \lambda(z)I)^{-1} J^T(z)\,\omega(z)$. For the two-timescale process, since there is no noise, we use constant step sizes and the following update:
$$z_{n+1} = z_n - \gamma_1\left(\omega(z_n) + e^{-\xi_2\|J^T(z_n)v_n\|^2}\, J^T(z_n)v_n\right)$$
$$v_{n+1} = v_n - \gamma_2\left(J^T(z_n)J(z_n)v_n + \lambda(z_n)v_n - J^T(z_n)\omega(z_n)\right),$$
for fixed choices of the constants $\gamma_1$, $\gamma_2$, $\xi_2$ and of the function $\lambda$. We now train a generative adversarial network with LSS. Both the discriminator and generator are fully connected neural networks with four hidden layers of 16 neurons each. The tanh activation function is used since it satisfies the smoothness assumptions on our functions. For the latent space, we use a 16-dimensional Gaussian with mean zero. The ground truth distribution is a mixture of eight Gaussians with their modes uniformly spaced around the unit circle. In Figure 3, we show the progression of the generator at four increasing iteration counts for a GAN initialized with the same weights and biases and then trained with A. simGD and B. LSS. We can see empirically that, in this example, LSS converges to the true distribution while simGD quickly suffers mode collapse, showing how the adjusted dynamics can lead to convergence to better equilibria. Further numerical experiments are shown in Appendix D. We caution that convergence rate per se is not necessarily a reasonable metric on which to compare performance in the GAN setting or in other game-theoretic settings. Competing algorithms may converge faster than our method when used to train GANs, but only because the competitors could be converging quickly to a non-Nash equilibrium, which is not desirable. Indeed, the optimal solution is known to be a local Nash equilibrium for GANs (Goodfellow et al., 2014; Nagarajan and Kolter, 2017). LSS may initially move towards a non-Nash equilibrium, while subsequently escaping the neighborhood of such points before converging. This will lead to a slower convergence rate, but a better quality solution.
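The 2-D experiment is straightforward to reproduce. The sketch below derives $\omega$ and its Jacobian for the stated $f$ symbolically with sympy and runs plain simGD for comparison; the step size, iteration count, and initial point are our own choices, since the paper's exact values were lost in extraction:

```python
import numpy as np
import sympy as sp

x, y = sp.symbols("x y")
f = sp.exp(-0.01 * (x**2 + y**2)) * ((0.3 * x**2 + y)**2 + (0.5 * y**2 + x)**2)

# omega = (Dx f, -Dy f) and its Jacobian J, derived symbolically:
omega = sp.Matrix([sp.diff(f, x), -sp.diff(f, y)])
omega_fn = sp.lambdify((x, y), omega, "numpy")
J_fn = sp.lambdify((x, y), omega.jacobian([x, y]), "numpy")

z = np.array([2.0, -1.0])                    # illustrative initial point
for _ in range(5000):                        # simGD, eq. (1), constant step
    z = z - 0.01 * np.asarray(omega_fn(*z), dtype=float).ravel()

# Classify the limit: positive real parts of eig(J) indicate a LASE of simGD.
J_end = np.asarray(J_fn(*z), dtype=float)
print("iterate:", z, " Re(eig(J)):", np.linalg.eigvals(J_end).real)
```

With these settings the iterate should settle near one of the four attracting points shown in Figure 1, though which one depends on the initialization.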
It is important to emphasize that our analysis has been limited to neighborhoods of equilibria; the proposed algorithm can in principle converge to limit cycles in other parts of the space. These are hard to rule out completely. Moreover, some of these limit cycles may actually have some game-theoretic relevance (Hommes and Ochea, 2012; Benaim and Hirsch, 1997). Another limitation of our analysis is that we have assumed the existence of local Nash equilibria in games. Showing that they exist and finding them is very hard to do in general. Our algorithm will converge to local Nash equilibria, but may diverge when the game does not admit equilibria or when the iterates never enter any equilibrium's region of attraction. Thus, divergence of our algorithm is not a certificate that no equilibria exist. Such caveats, however, are the same as those for other gradient-based approaches for finding local Nash equilibria. Another drawback of our approach is the use of second-order information. Though the two-timescale approximation does not need access to the full Jacobian of the gradient dynamics, the update does involve computing Jacobian-vector products. This is similar to other recently proposed approaches, but will be inherently slower to compute than pure first- or zeroth-order methods. Bridging this gap while retaining similar theoretical properties remains an interesting avenue for further research. In all, we have shown that some of the inherent flaws of gradient-based methods in zero-sum games can be overcome by designing our algorithms to take advantage of the game-theoretic setting. Indeed, by using the structure of local Nash equilibria we designed an algorithm that has significantly stronger theoretical support than existing approaches.

## References

• Balduzzi et al. (2018) D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel. The mechanics of n-player differentiable games. In International Conference on Machine Learning, 2018.
• Banerjee and Peng (2003) B. Banerjee and J. Peng. Adaptive policy gradient in multiagent learning. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, 2003.
• Basar and Olsder (1998) T. Basar and G. Olsder. Dynamic Noncooperative Game Theory. Society for Industrial and Applied Mathematics, 2nd edition, 1998.
• Benaïm (1999) M. Benaïm. Dynamics of stochastic approximation algorithms. In Séminaire de Probabilités XXXIII, pages 1–68. Springer Berlin Heidelberg, 1999.
• Benaïm and Hirsch (1995) M. Benaïm and M. Hirsch. Dynamics of Morse-Smale urn processes. Ergodic Theory and Dynamical Systems, 15(6), 1995.
• Benaim and Hirsch (1997) M. Benaim and M. Hirsch. Learning processes, mixed equilibria and dynamical systems arising from repeated games. Games and Economic Behavior, 1997.
• Benaïm and Hirsch (1999) M. Benaïm and M. Hirsch. Mixed equilibria and dynamical systems arising from fictitious play in perturbed games. Games and Economic Behavior, 29:36–72, 1999.
• Borkar (2008) V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Cambridge University Press, 2008.
• Cesa-Bianchi and Lugosi (2006) N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, UK, 2006.
• Daskalakis et al. (2009) C. Daskalakis, P. Goldberg, and C. Papadimitriou. The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39:195–259, 2009.
• Daskalakis et al. (2017) C. Daskalakis, A. Ilyas, V. Syrgkanis, and H. Zeng.
Training GANs with Optimism. arXiv:1711.00141, 2017.
• Foerster et al. (2017) J. Foerster, R. Y. Chen, M. Al-Shedivat, S. Whiteson, P. Abbeel, and I. Mordatch. Learning with opponent-learning awareness. CoRR, abs/1709.04326, 2017.
• Gidel et al. (2018) G. Gidel, H. Berard, P. Vincent, and S. Lacoste-Julien. A variational inequality perspective on generative adversarial nets. CoRR, 2018. URL http://arxiv.org/abs/1802.10551.
• Giordano et al. (2018) R. Giordano, T. Broderick, and M. I. Jordan. Covariances, robustness, and variational Bayes. Journal of Machine Learning Research, 2018.
• Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. arXiv:1406.2661, 2014.
• Heusel et al. (2017) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems 30, 2017.
• Hommes and Ochea (2012) C. H. Hommes and M. I. Ochea. Multiple equilibria and limit cycles in evolutionary games with logit dynamics. Games and Economic Behavior, 74(1):434–441, 2012.
• Jordan (2018) M. I. Jordan. Artificial intelligence: The revolution hasn't happened yet. Medium, 2018.
• Mazumdar and Ratliff E. Mazumdar and L. J. Ratliff. On the convergence of gradient-based learning in continuous games. ArXiv e-prints.
• Mertikopoulos et al. (2018a) P. Mertikopoulos, H. Zenati, B. Lecouat, C. Foo, V. Chandrasekhar, and G. Piliouras. Mirror descent in saddle-point problems: Going the extra (gradient) mile. CoRR, abs/1807.02629, 2018a.
• Mertikopoulos et al. (2018b) P. Mertikopoulos, C. H. Papadimitriou, and G. Piliouras. Cycles in adversarial regularized learning. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, 2018b.
• Mescheder et al. (2017) L. M. Mescheder, S. Nowozin, and A. Geiger. The numerics of GANs. In Advances in Neural Information Processing Systems 30, 2017.
• Nagarajan and Kolter (2017) V. Nagarajan and Z. Kolter. Gradient descent GAN optimization is locally stable. In Advances in Neural Information Processing Systems 30, 2017.
• Nisan et al. (2007) N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani. Algorithmic Game Theory. Cambridge University Press, Cambridge, UK, 2007.
• Ratliff et al. (2013) L. J. Ratliff, S. A. Burden, and S. S. Sastry. Characterization and computation of local Nash equilibria in continuous games. In Proceedings of the 51st Annual Allerton Conference on Communication, Control, and Computing, pages 917–924, Oct 2013.
• Sastry (1999) S. S. Sastry. Nonlinear Systems. Springer New York, 1999.
• Xu et al. (2009) H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10:1485–1510, December 2009. ISSN 1532-4435.
• Yang (2011) L. Yang. Active learning with a drifting distribution. In Advances in Neural Information Processing Systems, 2011.

In this section we analyze a slightly different version of (4) that allows us to remove the assumption that $\omega(z)$ is never an eigenvector of $J^T(z)(J^T(z)J(z) + \lambda(z)I)^{-1}J^T(z)$ with associated eigenvalue $-1$. Though this assumption is relatively mild, since intuitively it will be very rare that $\omega(z)$ is exactly such an eigenvector of the adjustment matrix, we show that by adding a third term to (4) we can remove it entirely while retaining our theoretical guarantees.
The new dynamics are constructed by adding a time-varying term to the dynamics that goes to zero only when $\omega(z)$ is zero. This guarantees that the only critical points of the limiting dynamics are the critical points of $\omega$. The analysis of these dynamics is slightly more involved and requires generalizations of the definition of a LASE to handle time-varying dynamics. We first define an equilibrium of a potentially time-varying dynamical system $\dot z = -g(z, t)$ as a point $\bar z$ such that $g(\bar z, t) = 0$ for all $t \geq 0$. We can now generalize the definition of a LASE to the time-varying setting. A strategy $\bar z$ is a locally uniformly asymptotically stable equilibrium of the time-varying continuous-time dynamics $\dot z = -g(z, t)$ if $\bar z$ is an equilibrium of $\dot z = -g(z, t)$, the linearization $D_z g(\bar z, t)$ is constant in $t$, and $\mathrm{Re}(\lambda_i(D_z g(\bar z))) > 0$ for all $i$. Locally uniformly asymptotically stable equilibria, under this definition, also have the property that they are locally exponentially attracting under the flow $\dot z = -g(z, t)$. Further, since the linearization around a locally uniformly asymptotically stable equilibrium is time-invariant, we can still invoke converse Lyapunov theorems like those presented in Sastry (1999) when deriving the non-asymptotic bounds. Having defined equilibria and a generalization of LASE for time-varying systems, we now introduce a time-varying version of the continuous-time ODE presented in Section 3 which allows us to remove the assumption that $\omega(z)$ is never an eigenvector of the adjustment matrix with associated eigenvalue $-1$. The limiting ODE is given by:
$$\dot z = -h_{TV}(z, t) = -\left(h(z) + g_{TV}(z, t)\right), \tag{8}$$
where $h$ is as described in Section 3, and $g_{TV}$ can be decomposed as:
$$g_{TV}(z, t) = \lambda_1(z)\, u(t),$$
where $\lambda_1 : \mathbb{R}^d \to \mathbb{R}$ satisfies:

• $\lambda_1(z) \geq 0$ for all $z$.
• $\lambda_1(z) = 0 \iff \omega(z) = 0$.
• $\lambda_1$ is bounded,

and where $u : \mathbb{R}_+ \to \mathbb{R}^d$ satisfies:

• there exists $B > 0$ such that $\|u(t)\| \leq B$ for all $t \geq 0$.
• $u$ is continuously differentiable with $D_t u(t) \not\equiv 0$.

Thus we require that the time-varying adjustment term must be bounded and equal to zero only when $\omega(z) = 0$. Most importantly, we require that for any $z$ that is not a critical point of $\omega$, $g_{TV}(z, t)$ must be changing in time. An example of a $g_{TV}$ that satisfies these requirements is:
$$g_{TV}(z, t) = \xi_1\left(1 - e^{-\xi_2\|\omega(z)\|^2}\right)\cos(t)\, u_0, \tag{9}$$
for $\xi_1, \xi_2 > 0$ and a fixed vector $u_0 \neq 0$. These conditions, as the next theorem shows, allow us to guarantee that the only locally uniformly asymptotically stable equilibria are the differential Nash equilibria of the game. Under Assumption 1, the continuous-time dynamical system $\dot z = -h_{TV}(z, t)$ satisfies:

• $\bar z$ is a locally uniformly asymptotically stable equilibrium of $\dot z = -h_{TV}(z, t)$ $\iff$ $\bar z$ is a DNE of the game $(f, -f)$.
• If $\bar z$ is an equilibrium point of $\dot z = -h_{TV}(z, t)$, then the Jacobian of $h_{TV}$ at $\bar z$ is time-invariant and has real eigenvalues.

We first show that:
$$h_{TV}(z, t) \equiv 0 \ \ \forall t \geq 0 \iff \omega(z) = 0.$$
By construction, $\omega(z) = 0 \implies h_{TV}(z, t) \equiv 0$. To show the converse, we assume that there exists a $z$ such that $h_{TV}(z, t) \equiv 0$ but $\omega(z) \neq 0$. This implies that:
$$-g_{TV}(z, t) = \omega(z) + J^T(z)\left(J^T(z)J(z) + \lambda(z)I\right)^{-1}J^T(z)\,\omega(z) \quad \forall t \geq 0.$$
Since the right-hand side is constant in $t$ and $\lambda_1(z) \neq 0$, taking the derivative of both sides with respect to $t$ gives us the following condition on $u$ under our assumption:
$$D_t u(t) = 0 \quad \forall t \geq 0.$$
By assumption this cannot be true. Thus, we have a contradiction and $\omega(z) = 0$. Having shown that the critical points of $h_{TV}$ are the same as those of $\omega$, we now note that the Jacobian of $h_{TV}$, at critical points, must be:
$$J_{TV}(z) = \frac{1}{2}\left(J(z) + J^T(z)\right) + D_z g_{TV}(z, t).$$
Again, by construction, $D_z g_{TV}(z, t) = 0$ when $\omega(z) = 0$. The last term therefore disappears and we have that $J_{TV}(z) = S(z)$. The proof now follows from that of Theorem 3. We have shown that adding a time-varying term to the original adjusted dynamics allows us to remove the assumption that the adjustment term is never exactly $-\omega(z)$. As in Section 3 we can now construct a two-timescale process that asymptotically tracks (8). We assume that $u(t)$ is a deterministic function of a trajectory of an ODE:
$$\dot\theta = -h_3(\theta),$$
with a fixed initial condition $\theta_0$ such that $u(t) = u(\theta(t))$.
We assume that $h_3$ is Lipschitz-continuous and that $u(\theta)$ is continuous and bounded. Note that under our assumptions, $\theta(t)$ is well defined for all $t \geq 0$. The form of $g_{TV}$ introduced in (9) can be written as $g_{TV}(z, \theta) = \lambda_1(z)\,\theta_1 u_0$, where $\theta$ satisfies the linear dynamical system:
$$\dot\theta = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\theta,$$
with $\theta(0) = (1, 0)$, so that $\theta_1(t) = \cos(t)$. Given this setup, the continuous-time dynamics can be written as:
$$\dot\theta = -h_3(\theta), \qquad \dot z = -h_4(z, \theta), \tag{10}$$
where:
$$h_4(z, \theta) = \frac{1}{2}\left(\omega(z) + J^T(z)\left(J^T(z)J(z) + \lambda(z)I\right)^{-1}J^T(z)\,\omega(z)\right) + \lambda_1(z)\,u(\theta).$$
Having made this further assumption on the time-varying term, we now introduce the two-timescale process that asymptotically tracks (10). This process is given by:
$$\theta_{n+1} = \theta_n - a_n h_3(\theta_n)$$
$$z_{n+1} = z_n - a_n \hat h_5(z_n, v_n, \theta_n)$$
$$v_{n+1} = v_n - b_n \hat h_6(z_n, v_n), \tag{11}$$
where
$$\mathbb{E}[\hat h_5(z, v, \theta)] = h_5(z, v, \theta) := \frac{1}{2}\left(\omega(z) + J^T(z)v\right) + \lambda_1(z)\,u(\theta)$$
$$\mathbb{E}[\hat h_6(z, v)] = h_6(z, v) := J^T(z)J(z)v - J^T(z)\omega(z) + \lambda(z)v.$$
Proceeding as in Section 3, we write $\hat h_5 = h_5 + M^z$ and $\hat h_6 = h_6 + M^v$, where $M^z$ and $M^v$ are martingale difference sequences satisfying Assumption 3. We note that the $\theta_n$ process is deterministic. This two-timescale process gives rise to the time-varying version of local symplectic surgery (TVLSS) outlined in Algorithm 2.
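As a quick sanity check (ours, not from the paper), the planar rotation ODE above indeed generates $\cos(t)$ as its first coordinate; a forward-Euler integration over $[0, \pi]$ lands near $(\cos\pi, \sin\pi) = (-1, 0)$:

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # generator of the rotation
theta, dt = np.array([1.0, 0.0]), 1e-3    # theta(0) = (1, 0)
for _ in range(int(np.pi / dt)):
    theta = theta + dt * (A @ theta)      # forward Euler step
print(theta)                              # approx (-1, 0), up to O(dt) error
```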
2020-10-23 05:36:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8508495092391968, "perplexity": 655.316340560815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880656.25/warc/CC-MAIN-20201023043931-20201023073931-00289.warc.gz"}
https://www.intmath.com/blog/computers/weaving-the-web-376
# Weaving the Web

By Murray Bourne, 01 Sep 2006

## The Original Design and Ultimate Destiny of the World Wide Web

#### Summary Review

Tim Berners-Lee wrote Weaving the Web during the height of the hype, in 2000. The dotcom boom was about to crash, but Berners-Lee would not have been too affected by that, since his aims for the WWW were not commercial, but selfless and democratic. It was very interesting to read his recollections on the birth and development of the WWW. His parents were both mathematicians and they had been involved in programming the first stored-program computer at Manchester University in the early 1950s. His father was looking for ways to make this computer more intuitive - to make connections like the human brain can. This conundrum lit a spark in the young Tim, and he thought about it on and off until he graduated in physics and then built his own computer in the 1970s. But it was at CERN (the particle physics laboratory in Geneva) where he wrote the first Web-like program, which he called Enquire. With it, he could remember connections between the other scientists at CERN - what they did, what they wrote, their telephone numbers, and so on. It was then that he started to wonder... Suppose all the information stored on computers everywhere were linked. Enquire allowed Berners-Lee to add new information about people as long as it was linked to something else. In a way, Enquire was working like wikis do today. You add a new page by linking it to something that already exists. The breakthrough was not using the computer to make the connections using rigid matrices or whatever; rather, it was the human making the connections and getting the computer to help remember them. The next thought was to incorporate hypertext into Enquire. Hypertext had been around since the late 1960s, but it was Apple Computer's HyperCard and the Macintosh that pushed things along in this direction. Of course, the Internet had been around since the 1970s and many computers were already networked. But they did not talk nicely to each other. What Berners-Lee needed to mould all of this knowledge management potential into one coherent entity was a new protocol that all computers could understand. He developed HTTP (hypertext transfer protocol) and then set about developing a simple code that could display information in a browser/editor. That code was HTML (hypertext markup language). Notice it was a browser/editor that he first developed (on his NeXT computer). His concept was to allow users to edit anything they found by using this new system. Once again, the wiki is close to his original idea - a place where you could read stuff and add stuff or change stuff, all in a "minimal constraint" setting. So the WWW was made and the rest is history. Berners-Lee has fought hard for an open, knowledge-sharing, editable, democratic, anti-monopolistic and inter-operable Web. It will surely go down in history as one of the great inventions. Berners-Lee is now head honcho of the W3C consortium, driving the Web ever onward. W3C: Leading the Web to Its Full Potential... Personal end-note: Berners-Lee's first daughter was born right at the time the WWW was being born, and coincidentally my own daughter was born then. She just cannot imagine life without the Web...
2022-10-06 11:33:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5822774171829224, "perplexity": 2932.4409641978086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00119.warc.gz"}
https://www.physicsforums.com/threads/vector-properties.803096/
# Vector properties

1. Mar 14, 2015

### nuuskur

Is there something said about $\lim_{\|v\|\to 0} \frac{v_i}{\|v\|}$? Is it correct to assume that if the length of a vector approaches 0, then any component of that vector has to approach 0 as well?

Last edited: Mar 14, 2015

2. Mar 14, 2015

### DannyMoretz

I would say yes... check out the proof I did, which I think works, and similarly with substituting into y.

Attached: Proof.png

3. Mar 14, 2015

### Staff: Mentor

It is against the rules at PF to give answers like that. You should've asked the OP to calculate the limits himself/herself.

4. Mar 14, 2015

### nuuskur

I didn't post this in a homework section and it isn't homework, either. I was merely curious and couldn't find relevant info on it myself, since my English is bad.

5. Mar 14, 2015

### Staff: Mentor

Sorry, I didn't pay attention to the section. But I wanted to mention it to Danny, who is new here. It would still have been better to start by pointing you in the right direction.
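For the record, the affirmative answer to the second question follows from a one-line bound; a minimal worked sketch, assuming the Euclidean norm on $\mathbb{R}^n$:

```latex
\[
  |v_i| \;=\; \sqrt{v_i^{2}} \;\le\; \sqrt{v_1^{2} + \cdots + v_n^{2}} \;=\; \|v\| ,
  \qquad \text{so } \|v\| \to 0 \implies v_i \to 0 .
\]
% The quotient v_i/\|v\| in the thread title is a different matter:
% it has no limit in general, since along v = (t, 0, ..., 0), t > 0,
% it equals 1 for i = 1, while along v = (0, t, 0, ..., 0) it equals 0.
```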
2017-11-25 11:44:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.481269896030426, "perplexity": 2489.088376913153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809778.95/warc/CC-MAIN-20171125105437-20171125125437-00284.warc.gz"}
https://braindecode.org/generated/braindecode.datautil.create_windows_from_events.html
# braindecode.datautil.create_windows_from_events

braindecode.datautil.create_windows_from_events(concat_ds, trial_start_offset_samples, trial_stop_offset_samples, window_size_samples=None, window_stride_samples=None, drop_last_window=False, mapping=None, preload=False, drop_bad_windows=True, picks=None, reject=None, flat=None, on_missing='error')

Create windows based on events in mne.Raw.

This function extracts windows of size window_size_samples in the interval [trial onset + trial_start_offset_samples, trial onset + trial duration + trial_stop_offset_samples] around each trial, with a separation of window_stride_samples between consecutive windows. If the last window around an event does not end at trial_stop_offset_samples and drop_last_window is set to False, an additional overlapping window that ends at trial_stop_offset_samples is created.

Windows are extracted from the following interval:

                  trial onset                    trial onset + duration
    |-------------|------------------------------|-------------|
    trial onset +                                trial onset + duration
    trial_start_offset_samples                   + trial_stop_offset_samples

Parameters

concat_ds: BaseConcatDataset
    A concat of base datasets each holding raw and description.
trial_start_offset_samples: int
    Start offset from original trial onsets, in samples.
trial_stop_offset_samples: int
    Stop offset from original trial stop, in samples.
window_size_samples: int | None
    Window size. If None, the window size is inferred from the original trial size of the first trial and trial_start_offset_samples and trial_stop_offset_samples.
window_stride_samples: int | None
    Stride between windows, in samples. If None, the window stride is inferred from the original trial size of the first trial and trial_start_offset_samples and trial_stop_offset_samples.
drop_last_window: bool
    If False, an additional overlapping window that ends at trial_stop_offset_samples will be extracted around each event when the last window does not end exactly at trial_stop_offset_samples.
mapping: dict(str: int)
    Mapping from event description to numerical target value.
preload: bool
    If True, preload the data of the Epochs objects. This is useful to reduce disk reading overhead when returning windows in a training scenario; however, very large data might not fit into memory.
drop_bad_windows: bool
    If True, call .drop_bad() on the resulting mne.Epochs object. This step allows identifying e.g. windows that fall outside of the continuous recording. It is suggested to run this step here, as otherwise the BaseConcatDataset has to be updated as well.
picks: str | list | slice | None
    Channels to include. If None, all available channels are used. See mne.Epochs.
reject: dict | None
    Epoch rejection parameters based on peak-to-peak amplitude. If None, no rejection is done based on peak-to-peak amplitude. See mne.Epochs.
flat: dict | None
    Epoch rejection parameters based on flatness of signals. If None, no rejection based on flatness is done. See mne.Epochs.
on_missing: str
    What to do if one or several event ids are not found in the recording. Valid keys are 'error' | 'warning' | 'ignore'. See mne.Epochs.

Returns

windows_ds: WindowsDataset
    Dataset containing the extracted windows.
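A minimal usage sketch follows; the MOABBDataset loader, the dataset name "BNCI2014001", and the 250 Hz sampling-rate figure are illustrative assumptions, not part of the reference entry above:

```python
from braindecode.datasets import MOABBDataset
from braindecode.datautil import create_windows_from_events

# Fetch an example motor-imagery dataset (assumed available via MOABB).
dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[1])

# Cut 2-second windows (500 samples at an assumed 250 Hz) starting at
# each trial onset; stride equals the window size, so windows don't overlap.
windows_ds = create_windows_from_events(
    dataset,
    trial_start_offset_samples=0,
    trial_stop_offset_samples=0,
    window_size_samples=500,
    window_stride_samples=500,
    drop_last_window=False,
    preload=True,
)
print(len(windows_ds))  # total number of extracted windows
```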
2021-01-28 14:49:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18303152918815613, "perplexity": 8186.034524332688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704847953.98/warc/CC-MAIN-20210128134124-20210128164124-00215.warc.gz"}
http://mathhelpforum.com/advanced-algebra/76155-fermat-s-little-thm.html
# Math Help - Fermat's little thm

1. ## Fermat's little thm

Compute the remainder when $2^{2^{17}} + 1$ is divided by $19$.

That $+1$ is really throwing me off... I don't know how to deal with it.

2. Originally Posted by Coda202
Compute the remainder when $2^{2^{17}} + 1$ is divided by $19$. That $+1$ is really throwing me off... I don't know how to deal with it.

I assume you want to find the remainder of $2^{2^{17}}+1$ modulo $19$. By the division algorithm we can write $2^{17} = 18q + r$ where $0\leq r < 18$. But then, $2^{2^{17}} = 2^{18q + r} = \left( 2^{18} \right)^q \cdot 2^r \equiv 2^r (\bmod 19)$. To find the remainder $r$ we need to simplify $2^{17}$ modulo $18$. First, $2^{\phi(9)} \equiv 1(\bmod 9) \implies 2^6\equiv 1(\bmod 9)$, and cubing gives $2^{18} = 4\cdot 2^{16} \equiv 1(\bmod 9)$. Multiply both sides by seven (since $7 \cdot 4 \equiv 1 (\bmod 9)$) to get $2^{16}\equiv 7(\bmod 9) \implies 2^{17}\equiv 14(\bmod 18)$. Therefore, $2^{2^{17}}\equiv 2^{14} (\bmod 19)$. Now, $16\cdot 2^{14}\equiv 1(\bmod 19)$ by Fermat's little theorem. Since $16\cdot 6\equiv 1(\bmod 19)$ we see that $2^{14} \equiv 6(\bmod 19)$. But this is the remainder of $2^{2^{17}}$ only; if you want the remainder of $2^{2^{17}}+1$, just add 1 (giving 7).
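For anyone who wants to double-check the arithmetic above, Python's built-in three-argument pow performs modular exponentiation directly:

```python
# pow(base, exp, mod) computes base**exp % mod without ever forming
# the astronomically large intermediate value 2**(2**17).
print(pow(2, 2**17, 19))             # 6 -> remainder of 2^(2^17) mod 19
print((pow(2, 2**17, 19) + 1) % 19)  # 7 -> remainder of 2^(2^17) + 1 mod 19
```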
2015-04-27 12:47:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7966686487197876, "perplexity": 277.9537537709166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658116.80/warc/CC-MAIN-20150417045738-00204-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www2.icp.uni-stuttgart.de/~hilfer/publikationen/html/ZZ-2011-FDRA-207/ZZ-2011-FDRA-207.S2.xhtml
# 2 The Aristotelian Concept of Time

[208.3.1] The concepts of time and time evolution are fundamental for physics (and other sciences). [208.3.2] Aristotle [22, Δ 11] defined time as ἀριθμὸς κινήσεως (i.e. as the integer or rational number of motion)^b, and formulates the idea that past and future are separated by a mathematical point, which he calls τὸ νῦν (the Now). [208.3.4] Newton [24, p. 5] formulates and postulates "Tempus absolutum verum et Mathematicum, in se et natura sua absque relatione ad externum quodvis, aequabiliter fluit, alioque nomine dicitur Duratio"^c.

(Footnote b: While Aristotle was perhaps counting heart beats, days, months, years, or time intervals determined with a κλεψύδρα, the idea to count periods has persisted. [208.3.3] Today the unit of time corresponds to counting 9 192 631 770 periods of oscillation of a certain form of radiation emitted from 133Cs atoms [23].)

(Footnote c: Transl.: "Absolute, true and mathematical time flows uniformly, in itself, according to its own nature, and without relation to anything outside itself; it is also called by the name duration.")

[page 209, §0] [209.0.1] The concept of time in modern physics is based on the ideas of Aristotle in their Newtonian formulation. [209.0.2] Time is viewed as a flux aequabilis (uniform flow) or succession of Aristotelian time instants. [209.1.1] The theoretical and mathematical abstraction of this concept of time from general mathematical theories of physical phenomena has led to the fundamental principle of time translation invariance and energy conservation in modern physics. [209.1.2] All fundamental theories of contemporary physics postulate time translation invariance as a basic symmetry of nature.
2022-01-21 05:32:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6555478572845459, "perplexity": 3158.382574179717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302723.60/warc/CC-MAIN-20220121040956-20220121070956-00564.warc.gz"}
http://hackage.haskell.org/package/propellor-4.9.0/docs/Propellor-Property-Obnam.html
propellor-4.9.0: property-based host configuration management in haskell

Propellor.Property.Obnam

Description

Deprecated: Obnam has been retired; time to transition to something else

Support for the Obnam backup tool http://obnam.org/

This module is deprecated because Obnam has been retired by its author.

# Documentation

An obnam repository can be used by multiple clients. Obnam uses locking to allow only one client to write at a time. Since stale lock files can prevent backups from happening, it's more robust, if you know a repository has only one client, to force the lock before starting a backup. Using OnlyClient allows propellor to do so when running obnam.

Constructors:
  OnlyClient
  MultipleClients

Installs a cron job that causes a given directory to be backed up, by running obnam with some parameters. If the directory does not exist, or exists but is completely empty, this Property will immediately restore it from an existing backup. So, this property can be used to deploy a directory of content to a host, while also ensuring any changes made to it get backed up. For example:

  & Obnam.backup "/srv/git" "33 3 * * *"
      [ "--repository=sftp://[email protected]/~/mygitrepos.obnam" ]
      Obnam.OnlyClient
      requires Ssh.keyImported SshRsa "root" (Context hostname)

How awesome is that?

Note that this property does not make obnam encrypt the backup repository. Since obnam uses a fair amount of system resources, only one obnam backup job will be run at a time. Other jobs will wait their turns to run.

Like backup, but the specified gpg key id is used to encrypt the repository. The gpg secret key will be automatically imported into root's keyring using Propellor.Property.Gpg.keyImported.

Does a backup, but does not automatically restore.

Restores a directory from an obnam backup. Only does anything if the directory does not exist, or exists but is completely empty. The restore is performed atomically: restoring to a temp directory and then moving it to the directory.

Policy for backup generations to keep. For example, KeepDays 30 will keep the latest backup for each day when a backup was made, and keep the last 30 such backups. When multiple KeepPolicies are combined together, backups meeting any policy are kept. See obnam's man page for details.

Constructors:
  KeepHours Int
  KeepDays Int
  KeepWeeks Int
  KeepMonths Int
  KeepYears Int

Constructs an ObnamParam that specifies which old backup generations to keep. By default, all generations are kept. However, when this parameter is passed to the backup or backupEncrypted properties, they will run obnam forget to clean out generations not specified here.
2019-06-18 01:57:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18374556303024292, "perplexity": 7919.355939244933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998600.48/warc/CC-MAIN-20190618003227-20190618025227-00403.warc.gz"}
http://openstudy.com/updates/501d92b2e4b0be43870e493b
## Calcmathlete 3 years ago

Just testing to see if I've memorized this formula again: $\text{Sum of the factors of a number: }\frac{a^{p + 1} - 1}{a - 1} \times \frac{b^{q + 1} - 1}{b - 1} \times \frac{c^{r + 1} - 1}{c - 1}...$ where a, b, and c are prime factors of the given number and where p, q, and r are exponents relating to the multiplicity of the prime factors. Is it correct?

1. experimentX: hmm... I've never seen this... could you elaborate?

2. Calcmathlete: For instance, if we were given the number 12, and you prime factor it: $12 = 2^2 \times 3$. Then a would be 2, p would be 2, b would be 3, and q would be 1.

3. Calcmathlete: Then the sum of the factors would be 2 + 2 + 3 = 7. I think the formula is a spinoff of the number-of-factors formula, but I'm trying to see if I have completely got this formula down.

4. experimentX: so it would be [drawing]

5. Calcmathlete: I believe so... I don't get why it doesn't work though... I'm pretty sure that's the formula... hold on...

6. Calcmathlete: Oh wait, it does work! I'm guessing that means that the formula is correct?

7. experimentX: [drawing] 4 is missing

8. Calcmathlete: But 7 x 4 = 28, and 1 + 2 + 3 + 4 + 6 + 12 = 28

9. experimentX: hmm... try this for any other number, let it be 9

10. Calcmathlete: It should be 13, using the formula.

11. experimentX: [drawing] interesting... do you know why this works?

12. Calcmathlete: I have no idea how this works... it's number theory, but I don't know why it works...

13. Calcmathlete: But does it seem to work as a formula? lol The original question was to see if my formula was correct, since I am in the midst of memorizing a whole bunch of miscellaneous formulas.

14. experimentX: [drawing] what's the deal with this? seems the sum of factors is never a prime no.

15. experimentX: [drawing]

16. Calcmathlete: What? I'm not really following...

17. experimentX: the sum of factors is equal to the... = product of geometric sums of prime divisors.

18. experimentX: wow... nice!!

19. Calcmathlete: That does make sense somewhat. lol I didn't come up with the formula, I'm just trying to memorize it. Thanks for helping me verify :)

20. experimentX: lol... this is easy.

21. experimentX: Not sure this works. Let a number $X$ have prime factors $a^n$ and $b^m$

22. experimentX: let the divisors of X give $1 + a + b + c + d + f + ... + X = (1 + a + a^2 + a^3 + ... + a^n) \times (1 + b + b^2 + b^3 + ... + b^m)$

23. experimentX: eg... X = a^n*b^m, a = 1*a, b = 1*b

24. experimentX: from the left (divisors of X)... choose any number... you should be able to get it... by one of the combinations on the right.

25. Calcmathlete: I see what is going on from there... then you just apply the formula for the sum of a geometric series?

26. experimentX: yep... who wants to calculate that long?? I wouldn't even ask wolf...

27. Calcmathlete: lol... me either... thanks again :)

28. experimentX: yw... and thank you too for... this Q

29. Calcmathlete: lol np :) KG just watching from a distance XD

30. KingGeorge: Watching... And slowly coming to the decision that there may not be a formula that counts the sum of prime factors. Looking at a graph of this function, it appears to be fairly random. On another note, the formula you originally wrote in the question strongly resembles the totient function, which counts how many numbers less than $n$ are coprime to $n$.

31. Calcmathlete: I am not familiar with the totient formula, so I'm clueless...

32. KingGeorge: Actually, ignore what I said about there not being a formula. Suppose you have a number $n=2^4\cdot5^2\cdot11^5$, and call this sum-of-prime-divisors function "sopd"; then $\text{sopd}(n)=2(4)+5(2)+11(5)=73$

33. KingGeorge: From what I can tell, you're mixing up this function with the sum-of-divisors function. The sum-of-divisors function is exactly what you wrote above.

34. KingGeorge: So back to $12=2^2\cdot3$. According to the formula, the sum of divisors should be $\frac{8-1}{2-1}\cdot\frac{9-1}{3-1}=28$. When we look at the divisors and add them manually, $1+2+3+4+6+12=28$

35. KingGeorge: Why this works is actually pretty neat, although a little messy to prove.

36. KingGeorge: And @experimentX nailed it when he said "the sum of factors is equal to the ... = product of geometric sum of prime divisors." That's one of the key points when you want to prove it.
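The sum-of-divisors formula discussed here is easy to machine-check; a small self-contained sketch (plain trial division; the name sigma is just the conventional label for this function, not something from the thread):

```python
def sigma(n):
    """Sum of all positive divisors of n, computed as the product of
    geometric sums over n's prime factorization."""
    total, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:      # extract the full power p**k dividing n
                n //= p
                k += 1
            total *= (p**(k + 1) - 1) // (p - 1)   # 1 + p + ... + p**k
        p += 1
    if n > 1:                      # leftover prime factor with exponent 1
        total *= n + 1             # 1 + n
    return total

print(sigma(12))  # 28 = 1 + 2 + 3 + 4 + 6 + 12
print(sigma(9))   # 13 = 1 + 3 + 9
```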
2016-02-06 18:53:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7978611588478088, "perplexity": 726.8859292595891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701147492.21/warc/CC-MAIN-20160205193907-00105-ip-10-236-182-209.ec2.internal.warc.gz"}
http://www.univie.ac.at/constructivism/archive/fulltexts/3658.html
# Some steps towards a transcendental deduction of quantum mechanics

Bitbol M. (1998) Some steps towards a transcendental deduction of quantum mechanics. Philosophia Naturalis 35: 253–280. Available at http://cepa.info/3658

1 Introduction
2 The Functional a priori
3 Kant's Concepts of a "Transcendental Deduction"
4 A Generalized Transcendental Deduction
5 Transcendental Constraints, Quantum Logic, and Hilbert Space
6 Transcendental Arguments about Connection in Time
7 Conclusion
Bibliography

1 Introduction

My purpose in this paper is to show that the two major options on which the current debate on the interpretation of quantum mechanics relies, namely realism and empiricism (or instrumentalism), are far from being exhaustive. There is at least one more position available; a position which has been widely known in the history of philosophy during the past two centuries but which, in spite of some momentous exceptions,[Note 1] has only attracted little interest until recently in relation to the foundational problems of quantum mechanics. According to this third position, one may provide a theory with much stronger justifications than mere a posteriori empirical adequacy, without invoking the slightest degree of isomorphism between this theory and the elusive things out there. Such an intermediate attitude, which is metaphysically as agnostic as empiricism, but which shares with realism a commitment to considering the structure of theories as highly significant, has been named transcendentalism after Kant. Of course, I have no intention in this paper to rehearse the procedures and concepts developed by Kant himself; for these particular procedures and concepts were mostly adapted to the state of physics in his time, namely to Newtonian mechanics. I rather wish to formulate a generalized version of his method and show how this can yield a reasoning that one is entitled to call a transcendental deduction of quantum mechanics. This will be done in three steps. To begin with, I shall define carefully the word "transcendental", and the procedure of "transcendental deduction", in terms which will make clear how they can have a much broader field of application than Kant ever dared to imagine. Then, I shall show briefly that the main structural features of quantum mechanics can indeed be transcendentally deduced in this modern sense. Finally, I shall discuss the significance, and also the limits, of these results.

2 The Functional a priori

Kant's classical definition of the transcendental attitude, as contained in the introduction to the second edition of the Critique of Pure Reason, develops thus: "I apply the term transcendental to all knowledge which is not so much occupied with objects as with the mode of our knowledge of objects, so far as this mode of knowledge is possible a priori."[Note 2] Such a reversal of focus, from objects to our knowledge of objects, is typical of what Kant called the Copernican revolution. Both transcendent and transcendental considerations go beyond what is immediately given in appearances. But whereas manipulating transcendent entities means trying to account for the link between appearances by invoking something outside the boundaries of human knowledge, using a transcendental strategy is tantamount to ascribing the unity of the manifold of appearances to something which definitely belongs to the human faculty of knowledge, namely to pure understanding.
This shift enables one to stop wondering, or invoking pre-established harmony, when the remarkable agreement between the processes involving physical objects and our representations is at stake. Indeed, the greater part[Note 3] of this agreement arises automatically from the fact that, provided each object is construed as the focus of a dynamic synthesis of phenomena rather than as a thing-in-itself, its very possibility qua object depends on the connecting structures provided in advance by our understanding. Attractive as Kant's original strategy may appear, it has nevertheless some features which do not fit with current philosophical standards, and which will have to be modified if we want to proceed with the transcendental approach. Let us discuss two of these features, which are especially relevant to physics. Firstly, the element of passivity which enters in the way Kant said the objects are presented to us is excessive. True, he insisted that in physics "Reason must approach nature with the view of receiving the information from it, not however in the character of a pupil who listens to all that his master chooses to tell him, but in that of a judge, who compels the witnesses to reply to those questions he himself thinks fit to propose".[Note 4] But this way of anticipating the answers of nature was restricted to the intuitive and intellectual form of knowledge. Regarding what he called the matter of knowledge, Kant relied on the empiricist and Aristotelian tradition, and considered that it is passively received as sensations; that, in other terms, the objects are given to us by means of sensibility[Note 5]. Even though Kant's use of the concept of thing-in-itself can be read as a way of expressing that, in our knowledge of objects, we cannot separate what is provided by our cognitive capacities from what affects us, he never extended his remark one step further, namely from the cognitive forms to the form of experimental activity. And he therefore did not recognize that experimental activity is able to shape appearances and not only to select or order them; that in other terms experimental activity partakes of the constitutive role he ascribed to our cognitive capacities. The idea that phenomena cannot be separated from the irreversible operations of experimental apparatuses is to be ascribed to Bohr, not to Kant.[Note 6] This is one reason why, if we want to apply the transcendental method to quantum mechanics, we must adopt a thoroughly modernized version of it, such as Hintikka's. According to Hintikka, what is needed to make the transcendental method acceptable nowadays is a shift of emphasis from passive reception and purely mental shaping to effective research activities and instrumental shaping[Note 7]. As he writes, "(...) the true basis of the logic of existence and universality lies in the human activities of seeking and finding". The definition he gives of the transcendental attitude is modified accordingly. The transcendental attitude no longer consists in reversing attention from the objects to our knowledge, but rather from the objects to our games of seeking and finding.[Note 8] As a consequence, the objects are no longer regarded as constituents of our experience, but rather as (i) potential aims for our activities of research and resolution and (ii) elements in our strategy for anticipating the outcomes of our activities. The second point which does not fit with current philosophical standards concerns the Latin expression a priori.
In Kant's definition of the term 'transcendental', the use of this expression is misleading. It may sound as if the forms or the connecting structures which we present in anticipation to the appearances are innate, or at least that they are uniquely determined "for all times and for all rational beings"[Note 9]. Actually, Kant has never gone as far as asserting that the a priori forms of intuition and thought are innate. According to him, the forms of intuition and thought are not chronologically but only logically prior to experience. And the reason why they are logically prior to experience, the reason why they cannot be extracted from experience, is that experience is only possible under the condition that it has been shaped by them[Note 10]. It is true however that Kant has maintained a uniqueness and invariability claim about his forms of intuition and thought. Now, it is precisely this invariability claim which makes Kant's version of transcendental philosophy so vulnerable to the criticisms of modern philosophers of science who rightly notice that twentieth century physics has undermined many particular features of his original a priori forms, or at least that it has considerably restricted their range of application to the immediate environment of mankind. The transcendental approach could then only survive and develop in the kind of version proposed by Neo-kantian philosophers such as Hermann Cohen or Ernst Cassirer, who both acknowledged to some extent the possibility of change of the a priori forms and their plurality as well. Nowadays, there is also another flexible and pluralist conception of the a priori; it is the pragmatist version of transcendental philosophy as defined by Putnam after Dewey. According to Putnam, each a priori form has to be considered as purely functional. It is relative to a certain mode of activity, it consists of the basic presuppositions of this mode of activity, and it has therefore to be changed as soon as the activity is abandoned or redefined. Putnam calls it a quasi-a priori when he wants to emphasize this flexibility.[Note 11]

3 Kant's Concepts of a "Transcendental Deduction"

In the first edition of his Critique of Pure Reason, Kant presents us with two varieties of the deduction. The first one develops as an argument from the possibility of experience, and it is called "objective"; the second one is based on the necessity of the unity of apperception (namely the fact that all representations have to be related to their common subject), and it is called "subjective". The first one is weaker than the second one, but also less controversial. Indeed, the "objective" variety of the deduction only aims at deriving the background presuppositions of an experience which just happens to be organized as we know it, whereas the "subjective" variety somehow purports to demonstrate that this organization must obtain.[Note 12] Here, I shall mainly discuss the "objective" variety, but later on I shall also make use of a thoroughly modified version of the "subjective" variety. According to Charles Taylor, a transcendental deduction is "(...) a regression from an unquestionable feature (...)" of our knowledge to "(...) a stronger thesis as the condition of its possibility"[Note 13].
Now, what is the central unquestionable feature from Kant's standpoint? What is the characteristic mark of what he calls experience as against pure fleeting appearances? It is objectivity, since experience has been taken by Kant as equivalent to objective empirical knowledge.[Note 14] Now, transcendental philosophy defines objectivity in two ways. These two ways are closely interrelated in Kant's writings, but it is very important to emphasize the distinction in the context of a study of quantum mechanics. According to the first definition of objectivity, something is objective if it holds for any (human) subject. According to the second (more restrictive) definition, objectivity amounts to the possibility of organizing certain sets of appearances in such a way that their succession can be ascribed selectively to (a plurality of) objects. In order to find the pre-conditions of experience in Kant's most specific sense, one must therefore enquire into how it is possible to represent something as an object. The heart of this enquiry is concentrated in the section of the Critique of Pure Reason entitled The analytic of principles. There, Kant explains that in order to be construed as "objective", a connection of perceptions has to be regarded as universal and necessary. For if it were not the case, if the connection were particular and contingent, nothing could prevent one from ascribing it, at least partly, to the idiosyncratic and temporary situation of the subject of perceptions. Prescription of a necessary temporal connection between appearances according to principles of pure understanding is thus what makes it possible to consider our representations as objective, and more specifically as representations of (a plurality of) objects. It is what gives rise to knowledge properly speaking, provided knowledge is defined as the relation of given representations to well-defined objects. Particular deductions are then carried out by Kant for the three modes of connection in time, namely permanence, succession, and simultaneity; and they yield respectively the principle of the permanence of substance, the law of causality and the law of reciprocity of action. These a priori laws of understanding, which are rules for the employment of categories, are not to be mixed up with the laws of physics. Empirical information is needed in order to know the particular laws of nature.[Note 15] However "all empirical laws are only specific determinations of pure laws of the understanding"[Note 16], since the pure laws of understanding are after all what make possible the very objects whose behaviour is supposed to be ruled by empirical laws. In his Metaphysical foundations of natural science, Kant then gave a hint of how Newton's three laws of motion[Note 17] can be taken as specific determinations of the three mentioned laws of understanding when the latter are applied to the empirical concept of material body[Note 18]. This procedure may be considered as a step towards a transcendental deduction of Newtonian mechanics. Admittedly, however, this deduction is doomed to remain partial not only because a momentous empirical element (the concept of material body) has been used to derive the laws of motion, but also because, once the laws of motion have been obtained, one has to introduce further empirical material (i.e. the Kepler laws) in order to derive the inverse-square law of gravitation.
4 A Generalized Transcendental Deduction

At this stage, our problem is the following: can one transpose Kant's partial transcendental deduction of Newtonian mechanics to quantum mechanics, by mere substitution of the empirical elements which serve to determine the basic laws of understanding? Things are certainly not so simple. Kant's reasoning has to be altered much more than that in order to become applicable to quantum mechanics. But such an alteration need not be deplored. For it yields two substantive advantages with respect to Kant's original undertaking. Firstly, it broadens considerably the scope of the transcendental method, thus making it amenable to an increasing number of applications. Secondly, as we shall see later, it allows a transcendental deduction of quantum mechanics which is in many respects more extensive than Kant's deduction of Newtonian mechanics. Let us first recapitulate the two major steps of the original transcendental deduction. Its departure point is the fact that the flux of appearances is unified in such a way that it has the character of experience, or of representation of objects. And its end result is a set of laws of understanding considered as the conditions of possibility of experience. Both steps have to be thoroughly modified in order to meet the requirements of a transcendental deduction of quantum mechanics. To begin with, let us emphasize that organization of phenomena in such a way that they can be regarded as appearances of a plurality of interacting physical objects having properties (according to the second, restrictive, definition of objectivity) is by no means an indispensable ground of scientific activity. True, this organization is an 'unquestioned feature' of our everyday life; and, as Kant noticed[Note 19], it is also a basic presupposition of judgments. But this feature, which nothing in the manipulations and observations we perform in our immediate environment has ever forced us to question, does not have any reason to remain unchallenged in every domain of experimentation. In some scientific situations, such as contemporary microphysics, the cost of maintaining an object-like organization of phenomena is out of proportion with its advantages. Instead of contenting ourselves with the unquestioned fact of the object-like organization presupposed in our acting and speaking, we should thus try to figure out what is the basic function it fulfils in our lives and in classical science[Note 20]. Once this is done, the familiar object-like organization of the surrounding world is likely to appear as a restricted sub-class of the structures which are able to fulfil this function. What is then the minimum task the object-like organization carries out in our everyday lives? As I have already suggested in §2, this organization enables us to orientate our activities by anticipating the outcome of each act we perform, in such a way that the rules of anticipation can be communicated and collectively improved. That objects operate in our experience as anticipative frameworks has long been noticed by philosophers of the phenomenological tradition[Note 21]. But they are by no means the most general anticipative frameworks one may conceive.
Indeed, their anticipative function is embodied by predicates which (according to Carnap's partial definition method, or S. Blackburn's quasi-realist approach[Note 22]) can be construed operationally as dispositions to manifest again and again a well-defined set of appearances when the same object is put under specified conditions. The anticipative function of the objects thus relies on the possibility of reidentifying a bearer of predicates across time; and the procedure of reidentification in turn requires a sufficient amount of continuity and determinism in the evolution of phenomena. When doubts are raised about the latter condition's being fulfilled, a substitute for the objects qua anticipative structures is required. This substitute can be afforded by the concept of a reproducible global experimental situation. Now, replacing the concept of identity of an object by that of reproduction of experimental situations does three things. It releases, as required, the constraint on reidentification of bearers of predicates; it substitutes the most general sense of objectivity (universal validity of statements) for a restrictive sense (object-like organization of phenomena); and it enables one to use the broadest version of the concept of anticipation, namely that of probabilistic anticipation. Popper's concept of propensity, which characterizes probabilistically types of experimental arrangements rather than individual objects, implements this kind of change. However, everything is not settled at this point. For, if the previous kind of operationalistic anticipative framework is to be efficient at all, it must be grounded on a reliable procedure for ascertaining that (experimental) situations are effectively reproduced. Of course, this procedure could itself amount to describing and performing a second-order experiment, whose anticipated outcome is the stable set-up of the first-order experiment. But the regress has to be stopped somewhere. It is at this point that the object-organization of experience and discourse rises again. Indeed, predicating a property of an object is a way of implying the class of situations in which the appearances arising from the dispositional content of this property are observed. As Kant claimed repeatedly, referring to objects and properties is not tantamount to stepping back into a 'cosmic exile' (that is, into no worldly situation at all), thus talking about things as they are in themselves; it only means that one endorses tacitly the sort of situation which is common to every sentient and rational being inhabiting the environment of mankind. Describing an experimental set-up in terms of reidentifiable objects with properties is therefore a natural way of stopping the regress of explicitly stated situations and anticipations, by presupposing them implicitly. We now see that the object-like organization of the surrounding world is not only one among the many structures which afford communicable anticipations. It is also designed to be the last-order one. Bohr's insistence on everyday language and concepts to describe experiments, and Wittgenstein's remark in On certainty that "no such proposition as 'there are physical objects' can be formulated"[Note 23], are two ways of expressing this special limiting status of the object-like organization. Now we can state precisely what we take as the departure point of our transcendental deduction of quantum mechanics.
As a first step of such a deduction we shall not choose a supposedly ’unquestionable’ feature of knowledge (such as the object-like organization of phenomena), but rather a basic requirement bearing on the mode of anticipation of the results of our game of seeking and finding. The latter requirement can be stated by means of a language which only presupposes the object-like behaviour of the experimental devices, not of the field of investigation. Actually, if one took (as Kant did) an all-encompassing object-like organization as an unquestioned departure point, this would already be a way of requiring implicitly something specific about the mode of anticipation of the result of our game of seeking and finding. Therefore, the type of departure point which has just been suggested for the extended version of the transcendental deduction is a mere generalization of Kant’s. The departure point of the new kind of transcendental deduction having been chosen, let us now wonder which kind of result we should expect from it. In Kant’s reasoning, the end-product of the deduction was a set of laws of understanding, of which the laws of physics are specific determinations. The most crucial among the a priori laws of understanding are those which concern relations in time, especially the law of causality which concerns succession. But one must be careful at this stage. If one does not pay sufficient attention to Kant’s writings, some misunderstandings may arise. Some of his sentences sound as if, in order for experience to be made possible at all, one’s understanding had to impose, say, the law of causality onto the succession of appearances. Actually, things are more subtle. The a priori laws of understanding which concern succession in time are called analogies of experience; they are not constitutive of the content of our intuition[Note 24] , but rather regulative of investigations. They do not allow us to construct the existence of consecutive phenomena, for this would only be acceptable in the most extreme form of idealism; they only provide ”(...) a rule to guide me in the search of (a phenomenon) in experience, and a mark to assist me in discovering it”[Note 25] . The a priori laws of understanding thus do not have to be valid in the absolute within the field of appearances[Note 26] . In order to make experience possible, in order to constitute experience, it is sufficient that we presuppose that appearances necessarily occur according to these laws, and that we always look for them according to such a presupposition. This qualification arises more or less explicitly from many sentences in Kant’s deduction of the law of causality; for instance: “When we know in experience that something happens, we always presuppose that something precedes, whereupon it follows in conformity with a rule. For otherwise I could not say of the object, that it follows (...)”[Note 27] . When carefully analyzed, Kant’s laws of understanding then do not bear directly on some passively received material of knowledge, but rather on the strategies of action and anticipation that we must use in order to get something which deserves to be called objective knowledge. They are not descriptive laws but rather law-like prescriptions; and moreover they are prescribed not so much to the phenomena as to our research-behaviour. 
Let us retain this idea for our modern variety of the transcendental deduction: the end-product of a transcendental deduction is a strong structure of anticipation which is prescribed to our activity of seeking and finding.

5 Transcendental Constraints, Quantum Logic, and Hilbert Space

To recapitulate, a generalized transcendental deduction is a regression from a set of minimal requirements about the process of anticipation of phenomena, to a strong anticipative structure as the condition of possibility for these requirements to be satisfied. As we shall see, quantum mechanics construed as a predictive formalism can mostly be derived this way, provided a small number of very general constraints are imposed on the prediction of phenomena. What are these constraints? To begin with, the phenomena which have to be anticipated are contextual phenomena. This looks like a very drastic constraint indeed; one by which an essential ingredient of quantum mechanics is introduced in the reasoning from the outset, thus threatening our deduction with the charge of circularity. But I think this judgment is wrong. Saying that the phenomena to be anticipated are relative to an experimental context is tantamount to removing a familiar constraint, rather than introducing an additional one; it is tantamount to removing the constraint of de-contextualization. Let me explain this by means of a historical example. As Descartes and Locke realized, large classes of phenomena can only be defined relative to a sensorial or instrumental context. They correspond to the so-called secondary qualities. Kant later generalized this remark in his Prolegomena. According to him the spatial qualities, which were considered as primary or intrinsic by Locke, have also to be construed as appearances[Note 28], although Kant does not say that they are relative to a particular sensory structure but rather that they are relative to the general form of empirical intuition. It was thus widely accepted among philosophers, from the end of the seventeenth century onwards, that a phenomenon is usually relative to a certain context which defines the range of possible phenomena to which it belongs. However, this epistemological remark, with all the consequences that its generalization could have had, did not change the way classical physicists conceived their objects. The reason for this indifference is that as long as the contexts can be combined, or as long as the phenomena can be made indifferent to the order and chronology of use of the contexts, nothing prevents one from merging the distinct ranges of possible phenomena relative to each context into a single range of possible conjunctions of phenomena. This being done, one may consider that the new range of possible compound phenomena is relative to a single ubiquitous context which is not even worth mentioning. Then, once one has forgotten the ubiquitous context, everything goes as if phenomena were reflecting intrinsic properties. Taking for granted the possibility of combining all the contexts, and/or the perspective of a perfect indifference of phenomena to the order of use of the contexts, thus means imposing a drastic constraint. It is equivalent to imposing what we have called the constraint of de-contextualization. The structure of propositions in ordinary language, which allows us to ascribe several characteristics to a single object as if they were intrinsic properties (independent of any context), presupposes that this constraint is obeyed.
Now, as it can easily be shown, this presupposition is closely associated to Boolean logic; for the logical operations between the propositions of a language underpinned by such a presupposition are isomorphic to set-theoretical operations. Moreover, the same presupposition is also closely associated to a Kolmogorovian theory of probabilities; indeed, Kolmogorov's theory relies on classical set theory (or on a logic isomorphic to classical set theory) for the definition of the 'events' on which the probabilistic valuation is supposed to bear. Now what happens if the constraint of de-contextualization is removed? In this situation, the rules of Boolean logic and of the Kolmogorovian theory of probabilities may still subsist, but in a fragmented form. To each experimental context, one may associate a given range of possible determinations and propositions which depend on a Boolean sub-logic. And to determinations chosen within each such range, one may associate real numbers in such a way that they obey the axioms of the Kolmogorovian theory of probabilities. But it is no longer possible to organize the whole set of experimental propositions, depending on several incompatible contexts, according to the structure of a single Boolean logic; nor is it possible to organize the whole set of probabilistic valuations as if they were bearing on a single Kolmogorovian domain of events. At this point, we must introduce the second constraint (or rather the real constraint, since the first one was no constraint at all) in order to overcome the previous dismantling of the logic and probability field. This constraint is that to each experimental preparation, univocally described by means of a language which presupposes the familiar object-like organization, there must correspond a unified (non-Kolmogorovian) mathematical tool of probabilistic[Note 29] prediction, irrespective of the context associated to the measurement which follows the preparation. The sought unification of the predictive tool under the concept of a preparation may be expressed either by means of a single symbol allowing one to calculate the list of probabilities corresponding to any context (the "state vector"), or by using transformation rules for the probabilistic valuations from one context to another (Dirac's "transformation theory"). The previous constraint can be considered as a generalized equivalent of Kant's departure point for his so-called "subjective" transcendental deduction of the categories. The difference is that, whereas Kant demanded "(...) that all the manifold in intuition be subject to conditions of the originally synthetical unity of apperception"[Note 30], we demand that the manifold of probability assignments which bear on measurements following a given type of experimental preparation be subject to the unity of this type of preparation. The unifying pole is no longer a mentalistic entity (the apperception, or the "consciousness of oneself"[Note 31]), but rather the objectified end-product of an experimental activity (the preparation). And the elements to be unified are no longer passively received contents of intuition, but rather formalized acts of anticipation. Taking into account the two former constraints, namely contextuality and unification of the predictive tool under the concept of a preparation, the basic structure of quantum mechanics is close at hand.
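A concrete toy illustration of this unification constraint may help (a standard spin-1/2 example, not drawn from the derivations sketched below): two incompatible measurement contexts, each Kolmogorovian on its own, are served by a single predictive symbol indexed by the preparation alone.

```latex
\[
  |\psi\rangle = \alpha\,|{\uparrow_z}\rangle + \beta\,|{\downarrow_z}\rangle ,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% Context 1 (z-measurement): P(up_z) = |\alpha|^2,  P(down_z) = |\beta|^2.
% Context 2 (x-measurement): P(up_x) = |\alpha+\beta|^2/2,  P(down_x) = |\alpha-\beta|^2/2.
% One symbol, fixed by the preparation, yields the probability valuation
% for every context: a miniature of Dirac's "transformation theory"
% mentioned above.
```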
Here, I shall only give a hint of how the reasoning proceeds, in two steps: the first one concerns quantum logic, and the second one concerns the relation between vectors in Hilbert space and probability valuations[Note 32].

(1) As Patrick Heelan[Note 33] noticed, meta-contextual languages able to unify contextual languages are isomorphic to Birkhoff's and Von Neumann's quantum logic. To show this, he used the following assumptions. To begin with, let us consider two Boolean experimental context-dependent languages $L_A$ and $L_B$. Then, let us define a relation of implication (which operates at a meta-linguistic level "ML"), in such a way that one language implies another language iff every sentence of the first one is also a sentence of the second one. After that, we consider two other languages: $L_0$, which is such that it implies any language, and $L_{AB}$, which is such that it is implied by all the other languages, including the set-theoretical complements $L'_A$ and $L'_B$ of $L_A$ and $L_B$ in $L_{AB}$. The crucial assumption is that $L_{AB}$ is richer than a language made of all the propositions of $L_A$, $L_B$ and their logical conjunctions or disjunctions. This assumption expresses context-dependence; indeed, in the case of context-dependence, a combination of contexts yields experimental consequences which are distinct from mere combinations of what occurs when each context is used separately. Finally, we define two functors ⨂ and ⨁ in the meta-contextual language ML, which are the equivalents of "and" and "or" in a first-level language: ⨂ stands for "least upper bound" and ⨁ for "greatest lower bound" (of the relation of implication). With these definitions and assumptions, it is easy to show that the structure of the meta-contextual language ML can but be an orthocomplemented non-distributive lattice. Then, if this structure is projected onto the first-level language, it takes the form of the familiar "quantum logic". To summarize, the specific structure of "quantum logic" is unavoidable when unification of contextual languages at a meta-linguistic level is demanded. In this sense, one can say that quantum logic has been derived by means of a transcendental argument: it is a condition of possibility of a meta-language able to unify context-dependent experimental languages.

(2) As J. L. Destouches and P. Destouches-Février[Note 34] argued convincingly, the formalism of vectors in a Hilbert space, together with Born's correspondence rule, is the simplest predictive formalism among those which obey the constraint of unicity in a situation where de-contextualization cannot be carried out. To show this, J. L. Destouches starts from a list of context-dependent probability valuations for the results of measurements performed after a preliminary measurement (or, more generally, after a given preparation). The problem is that each probability valuation does not hold beyond a certain couple [preparation$_V$, measurement$_W$]. In order to overcome this lack of unity, one is led to define a set Ξ in such a way that (a) an element $X_V$ of this set is associated to each preparation with index V, and (b) the probability valuation $P_{VW}$ for a couple [preparation$_V$, measurement$_W$] is a function (indexed by W) of $X_V$. $X_V$ is called an "element of prediction" associated to the V-th preparation. Then, J. L. Destouches demonstrates that, provided one adds enough elements to Ξ for transforming it into a vector space Ξ*, the procedure for calculating a probability valuation $P_{VW}$ from an element of prediction $X_V$ can be simplified as follows.
Firstly, one defines special elements of prediction X_VW(i) such that the probability of obtaining the result W_i, if measurement W is performed after preparation V, is equal to 1. Secondly, one replaces X_V by (or, in the simplest, Hilbert-space-like, case, one identifies X_V with) the linear superposition ∑ c_i X_VW(i), where the c_i can be either real or complex. One can then show that the sought probability valuation P_VW is given by: P_VW(W_i) = f(c_i). The next problem is to determine the function f. At this point, P. Destouches-Février[Note 35] demonstrates that, when the probability valuations bear on magnitudes which may be “incompatible” (namely magnitudes which may be such that they cannot be measured simultaneously with an arbitrary precision), the function f is unique, and takes the form f(c_i) = |c_i|². The demonstration relies on a generalized variety of the Pythagoras theorem in the space Ξ. To summarize, the formalism of vectors in a Hilbert space associated with Born’s rule affords the simplest unified meta-contextual probability valuation algorithm, if the contexts are sometimes incompatible (in the above sense), and if each contextual probability sub-structure is Kolmogorovian. It is a minimal structural condition of possibility of a unified system of probabilistic predictions, whenever the constraint of de-contextualization has been released.

6 Transcendental Arguments about Connection in Time

Of course, everything is not settled at this point. The formalism of vectors in a Hilbert space, construed as a meta-contextual probability theory, is not enough to constitute quantum mechanics. Many elements have to be added to it. To begin with, we need a law of evolution of the probabilistic predictive symbols, namely the vectors themselves. Now, it is well known[Note 36] that under several assumptions ensuring: (i) that the numbers computed by means of Born’s rule obey Kolmogorov’s axioms at all times (i.e. that the evolution operators are unitary), and (ii) that the set of evolution operators has the structure of a one-parameter group of linear operators (where the parameter is time), one obtains the general form of both Schrödinger’s and Dirac’s equations, leaving open the structure of the Hamiltonian. The Hamiltonian can eventually be obtained either by means of the correspondence principle with classical physics, or by directly introducing the fundamental symmetries which underlie classical mechanics and/or relativistic mechanics. It is not very difficult to convince ourselves that at each step of this mode of derivation of the law of evolution of the predictive symbol, transcendental arguments play the key role. Some of them are transcendental arguments per se, e.g. the requirement of trans-temporal stability of the probabilistic status of the predictive tool (without it, one would just have to give up the attempt at providing enduring probabilistic valuations for experimental events). The other ones are bridging transcendental arguments. They establish a bridge between the form of transcendental deduction which was used by Kant within the direct spatio-temporal environment of mankind, and the generalized sort of transcendental deduction needed in domains of scientific investigation which may go beyond the human Umwelt. This is especially clear for the correspondence principle, because it ensures a proper connection between (a) the basic (last-order) object-like organization which is common to everyday life and classical mechanics, and (b) the contextual organization of quantum mechanics.
This is also clear for certain symmetry requirements such as time, space, and rotation invariance, which, as Eugene Wigner wrote, “(...) are almost necessary prerequisites that it be possible to discover (...) correlations between events”[Note 37]. Finally (even though this is less obvious), the statement according to which the set of evolution operators must be a one-parameter group of linear unitary operators can also be read as a bridging transcendental argument. Indeed, this condition is tantamount to splitting up the transcendental demand of unity of the predictive tool under the concept of a preparation, according to the three Kantian modes of connection in time (namely permanence, succession, and simultaneity). To see this, one has to realize that imposing the structure of a time-parameter group of linear unitary operators on the set of evolution operators has the three following consequences: (1) It amounts to projecting the continuity of the parameter ’time’ onto the domain of the probabilistic predictive tool (namely the state vector). (2) It entails that the evolution of this predictive tool is deterministic[Note 38]. (3) By the linearity of the evolution operators, the structure of the linear superpositions of state vectors is maintained across time.

Let us analyze these three consequences more precisely: (1’) Continuity makes it possible to identify a certain state vector as the time-transform of the state vector which was initially associated with a given preparation; it fulfills the function of the category of substance, applying it to the predictive tool rather than directly to phenomena. (2’) Determinism ensures that a state vector at a certain time follows state vectors at previous times according to a univocal rule; it fulfills the function of the category of causality, again applying it to the predictive tool rather than directly to phenomena.[Note 39] (3’) As for the constant structure of the linear superpositions of state vectors across time, it means that there is an enduring internal relation between the predictive contents of two or more preparations when they have been combined into one single compound preparation[Note 40]; it fulfills the function of the category of reciprocity, by applying it to the predictive content of coexisting preparations, rather than directly to coexisting phenomena.

To summarize, imposing that the set of evolution operators have the structure of a time-parameter group of linear unitary operators is tantamount to shifting the locus of the categories of understanding, and especially the analogies of experience, from the phenomena to the predictive frame. This move is in good agreement with Schrödinger’s (quasi-)realist construal of ψ-functions, and with G. Cohen-Tannoudji’s remark that Hilbert space, not ordinary space, is the proper place of quantum objectivity[Note 41]. A similar idea was also advocated by P. Mittelstaedt[Note 42]. At this point, it is interesting to draw some philosophical consequences from the fact that the formalism of quantum mechanics, together with some appropriate boundary conditions, enables one to derive both quantization conditions and the prediction of wave-like distributions of phenomena. In the light of the way in which the formalism has been justified, these two effects acquire a meaning which is thoroughly different from what is usually implied in the loosely realist mode of expression of the quantum physicists.
Here, wave-like distributions and quantization no longer appear as contingent aspects of nature. They are a necessary feature of any activity of production of contextual and mutually incompatible phenomena whose level of reproducibility is sufficient for its outcomes to be embeddable in a unified and time-connected meta-contextual system of probabilistic anticipation. Of course, not everything in the quantum predictions can be transcendentally deduced. Just as in Kant’s transcendental deduction of Newtonian mechanics, an empirical element has to be introduced somewhere. However, there are interesting differences between the empirical elements which had to be added to get Newtonian mechanics and the empirical elements which we must introduce to get standard quantum mechanics. In order to complete his deduction of Newtonian mechanics and to obtain the law of gravitation, Kant had to add both an empirical concept (that of material body) and a set of empirical laws (Kepler’s laws)[Note 43]. But in order to complete the transcendental deduction of quantum mechanics construed as a predictive formalism bearing on global experimental situations, we do not need the concept of an object of the investigation[Note 44]. Even less do we have to introduce any empirical law-like structure; for the basic law-like structure of standard quantum mechanics (i.e. Schrödinger’s equation) has already been obtained. We only need one very simple, and non-structural, empirical ingredient, namely the value of the Planck constant. And we also need some additional (“internal”) symmetry principles whose empirical or transcendental status is at present unclear. True, these are crucial ingredients.

Let me insist on the value of the Planck constant. This constant sets quantitatively, through Heisenberg’s relations, the possibility of partially compensating for the mutual incompatibility of experimental contexts. If it were just equal to zero, measurements of conjugate variables would be indifferent to the order of measurements, and a basic condition of de-contextualisation would then be fulfilled. Conversely, the non-zero value of the Planck constant means that the de-contextualisation of experimental outcomes can only be performed up to a certain precision. Hence the need to regard Kant’s original transcendental deduction, which started from de-contextualized premises, as a particular case, and to generalize it to a situation where contextuality becomes unavoidable.

Now, we must not limit our investigation to the framework set by Kant’s Critique of pure reason. The Critique of Judgment introduced a new kind of transcendental argument which is admittedly weaker than the familiar one. This new variety of transcendental argument is not ’determinative’ but ’reflective’, and it is explicitly non-objective. Indeed, according to Kant, it is grounded on our subjective need to think nature as a systematic unity, and to presuppose a teleological order for that. Can’t the value of Planck’s constant be obtained this way, thus complementing the set of transcendental arguments which lead to quantum mechanics? The answer is positive, provided one uses the modern version of the teleological argument for the determination of the universal constants, namely the weak anthropic principle. In fine, there is but one element which is bound to remain beyond the reach of any variety of transcendental argument, be it grounded on subjective requirements: it is the occurrence of a particular outcome, after each single run of an experiment.
This is not very surprising. As R. Omnes[Note 45] rightly pointed out, the actuality of each particular phenomenon cannot be accounted for by any physical theory. The only thing a physical theory does, and the only thing it has to do, is to embed documented actualities in a (deterministic or statistical) framework, and to use this framework to anticipate, to a certain extent, what will occur under well-defined experimental circumstances. What we have shown in this paper is that, at least in the case of standard quantum mechanics, such a framework can be justified as a structural condition for a minimal set of constraints on the prediction of phenomena (and on their predictor) to be obeyed.

7 Conclusion

To conclude, I shall briefly discuss the benefits we can draw from the kind of transcendental deduction I have just outlined, and also its limits. I think the specificity of a transcendental argument is that it starts from our engaged situation in the world, then deriving the basic pre-conditions of our orientation within this situation. In this respect, it is quite at variance with any variety of ontological attitude, be it the positivistic ontology of facts or the realist ontology of objects. Indeed, ontological attitudes systematically favour a disengaged outlook, even though their very undertaking is grounded on the presuppositions of an engaged activity. As Charles Taylor emphasizes, “With hindsight, we can see (Kant’s transcendental deduction) as the first attempt to articulate the background that the modern disengaged picture itself requires for the operation it describes to be intelligible, and to use this articulation to undermine the picture”[Note 46]. But how does the transcendental approach manage to undermine the pictures so cherished by the supporters of the ontological (disengaged) outlook? It does so by showing that the predictive success of some of our most general scientific theories can be ascribed, to a large extent, to the circumstance that they formalize the minimal requirements of any prediction of the outcomes of our activity, be it gestural or experimental. The very structure of these theories is seen to embody the performative structure of the experimental undertaking. As a consequence, there is no need to further explain their efficiency by their ability to reflect in their structure the backbone of nature. The inference to the best explanation, which is the most powerful argument of scientific realists, looks much weaker, because the choice is no longer between the realist explanation of the efficiency of theories and no explanation at all. A third alternative has been proposed: it consists in regarding the structure of the most advanced theories as embodiments of the necessary preconditions of a wide class of activities of seeking and predicting. In the latter perspective, the project of ontologizing certain theoretical entities appears as a mere attempt at hypostasizing the major invariants of these activities. True, ontologizing theoretical entities enables the philosopher to make sense of the intentional attitude and the seriousness with which the physicist aims at his hypothetical objects. However, by doing so too dogmatically, one takes the risk of freezing the ontological structure. Intentional attitudes call for objects, but it would be very imprudent to assert that, conversely, self-existent objects are what justify the intentional attitudes.
As for seriousness, it calls for a sense of the absolute, but it would be very imprudent to assert that, conversely, the existence of an absolute self-structured reality ’out there’ is what justifies seriousness in our striving for structures. By contrast, the transcendental approach is able to afford both a non-metaphysical explanation of the structure and efficiency of theories, and a satisfactory account of the intentional directedness of scientific research in each paradigmatic situation, provided one associates it with some variety of internal realism in Putnam’s sense.

Now let me give a hint of the (alleged or true) shortcomings of the transcendental approach. I can see three of them.

(1) The transcendental account comes too late. It can make sense of physical theories only ex post facto, and it is thus no instrument of discovery. My answer to this criticism is twofold. On the one hand, I accept the criticism to a certain extent, although I think that this is the fate of every sound philosophical argument. As Wittgenstein would have it, philosophers only have to describe (the scientific activity) and leave it as it is. One must acknowledge that, during the preparatory phase of a scientific revolution, the realist discourse and representations prevail. One must also acknowledge that it is by criticizing some of these representations and testing other representations instead, that scientists are able to cross the boundary between the old paradigm and the new one. They do not use directly, during the initial stage of their process of discovery, the pragmatic transcendental method which consists in taking the basic requirements of a certain experimental activity as a departure point and obtaining a theoretical structure as a condition of their possibility. This is so because in order to carry out such a procedure one would have to define the type of activity whose norms are to be formalized, before the corresponding theory has been formulated. But the exact nature of the shift in the type of experimental activity is usually clear only after the theory has been stated. As long as the theory has not been fully formulated, physicists usually act as if they were only probing farther and farther into a traditional domain of objects (which can be thought of as one possible projection of the norms of the old mode of experimental activity). It is the gap between the findings of the scientists and their expectations about these putative objects which motivates a move towards radical changes. And it is by an analysis of the new paradigm that the philosopher is able to disclose retrospectively the shift in the type of experimental activity which made the changes unavoidable. On the other hand, it is not true that philosophy in general, and transcendental philosophy in particular, have had no role whatsoever in the major advances of science. Careful philosophical reflection may contribute, and has contributed in the past, to modifying the language-game of scientific research, thus favouring the evolution of heuristic representations. Transcendental approaches are especially efficient in weakening the ontological rigidities which hinder the major changes needed when the presuppositions of experimental activities have been so widened that their outcomes exceed by far the domain of validity of the accepted theoretical framework. As I mentioned earlier, this ability did not give the transcendental approaches any importance during the preliminary phase of scientific revolutions.
But it enabled a special variety of transcendental procedures, namely the use of principles of relativity, to play a key role during the central phase of the major scientific revolutions of the 17th and 20th centuries. Indeed, principles of relativity operate as a way of emancipating law-like structures from particular situations, thus stating improved conditions of objective knowledge without recourse to ontologization (and even bypassing older ontological systems). Galileo’s principle of relativity bypassed Aristotle’s ontology of natural place. As for Einstein’s principle of special relativity, it bypassed Lorentz’s ontological-like electrodynamic explanation of the contraction of moving bodies and the slowing down of moving clocks. The only circumstance which prevented one from seeing clearly the transcendental nature of these principles of relativity is that their formulation was usually followed by a phase of renewal of ontological-like discourses: discourse about the kinematic and dynamic properties of bodies in the case of classical mechanics, and discourse about the properties of four-dimensional space-time in the case of relativistic mechanics. But in quantum mechanics, the recovery of an ontological-like mode of expression raises an impressive number of problems, and this may make transcendental approaches more permanently attractive in this case than in most other cases.

(2) The pragmatic or functional version of the transcendental approach apparently leads one to relativism. It looks as if it were possible to justify any (right or wrong) physical theory this way. The recipe is simple: take a mathematically coherent theory, display its normative structure, and invent an activity which goes with it. Actually, things are not so straightforward. The reason is that not every type of activity counts as an acceptable experimental activity. When defining an experimental activity, one has to take certain constraints into account, the most fundamental of them being that the activity must be so selected that it fits with the prescription of a sufficient degree of reproducibility and universality. Other constraints, expressed by irreducibly empirical universal constants, lead one to adopt certain classes of activities and their associated physical theories. For instance, the finiteness of the constant c is naturally associated with the (typically relativistic) practice of comparing ruler and clock readings from one inertial frame to another. As for the non-zero value of the constant h, it had the consequence that traditional practices, which presuppose the possibility of manipulating and studying reidentifiable bearers of properties, were explicitly or implicitly superseded by activities of production of (partially incompatible) contextual phenomena. But isn’t acceptance of such constraints tantamount to acknowledging that there exists a pre-given independent reality ’out there’ which imposes its structures on us, and which we ultimately have, as much as we can, to represent faithfully[Note 47]? This consequence does not follow. Saying that an experimental activity is subject to constraints does not amount to saying that certain structural patterns are imposed by something external. When he tried to make sense of the rules of arithmetic, Wittgenstein provided many important insights which clarify this point.
To summarize, he indicated that even though the rules of arithmetic cannot be considered as true to a set of independent facts, they fit elegantly with certain constraints which appear from within the practice of applying them.[Note 48] In other words, the ’facts’ which constrain these rules do not preexist their being used. In the same way, even though the present physical theories cannot be considered as describing a set of intrinsically existent properties, they fit elegantly with certain constraints which appear from within the accepted experimental practices. It is especially manifest in the quantum case that the ’facts’ which constrain the norms of its associated experimental practice do not preexist the enactment of this practice, for they are contextual, and their contextuality cannot in general be compensated for, due to the non-zero value of the Planck constant. As for the value of the Planck constant itself, which sets quantitatively the degree of incompatibility of contexts, it can be considered, from the point of view of the weak anthropic principle, as arising from within the generic situation of mankind (which defines the range of possible human practices), rather than as a completely extrinsic datum. This being granted, a theory like quantum mechanics no longer appears as a reflection of some (exhaustive or non-exhaustive) aspect of a pre-given nature, but as the structural expression of the co-emergence of a new type of experimental activity and of the ’factual’ elements which constrain it.[Note 49]

(3) Charles Taylor writes that “There are certain ontological questions which lie beyond the scope of transcendental arguments”[Note 50]. Actually, we could even assert that transcendental arguments are designed to avoid having to answer ontological questions in the metaphysical sense. But is not this refusal quite unsatisfactory? One might accept the conclusion of the transcendental deduction in its stronger version, namely that the structure of a theory reflects exclusively the necessary pre-conditions of experimental research, and still feel uneasy. For, even if the theory cannot claim to have captured any structural feature of reality, but only the basic underlying structures of a wide class of research activities, it remains that we partake, with our bodies and our experimental apparatuses, of something broader that we can but call ’reality’. Furthermore, the former notion of co-emergence of an experimental activity and its constraining ’factual’ elements, which is so closely akin to the transcendental method, raises the temptation to adumbrate a picture of ’reality’ as an organic whole made of highly interdependent processes. Could not one hope to get an insight into this real reality? I think that such a project is not only doomed to failure due to some contingent boundary between us and the “thing-in-itself”; it is hopeless because it is self-defeating. It is tantamount to assuming that it makes sense to seek what reality is independently of any activity of seeking; or to characterize reality relative to no procedure of characterization at all[Note 51]. Now, let us imagine that this paradoxical search can nevertheless be undertaken. The result one naturally expects in this case is that ’reality is A’ as opposed to ’reality is not-A’, for, if this were not the case, the whole process would have led to nothing worth mentioning. But is not the very statement that reality in the absolute is either A or not-A extremely daring?
I should not venture to think that it is even likely.

Bibliography

Auyang S. Y., How is quantum field theory possible?, Oxford University Press, 1995
Bitbol M., Schrödinger’s philosophy of quantum mechanics, Kluwer, 1996
Bitbol M., Mécanique quantique, une introduction philosophique, Flammarion, 1996
Blackburn S., Essays in quasi-realism, Oxford University Press, 1993
Bouveresse J., La force de la règle, Editions de Minuit, 1987
Buchdahl G., Kant and the dynamics of reason (Essays on the structure of Kant’s philosophy), B. Blackwell, 1992
Cassirer E., Determinism and indeterminism, Yale University Press, 1956
Cassirer E., Kant’s life and thought, Yale University Press, 1981
Cohen-Tannoudji G. & Spiro M., La matière espace-temps, Gallimard, 1990
d’Espagnat B., Veiled reality, Addison-Wesley, 1994
Destouches J. L., Corpuscules et systèmes de corpuscules, Gauthier-Villars, 1941
Destouches-Février P., La structure des théories physiques, P.U.F., 1951
Destouches-Février P., L’interprétation physique de la mécanique ondulatoire et des théories quantiques, Gauthier-Villars, 1956
Dirac P. A. M., The principles of quantum mechanics, Oxford University Press, 1947
Everett H., “‘Relative state’ formulation of quantum mechanics”, Reviews of Modern Physics, 29, 454-462, 1957
Falkenburg B., Teilchenmetaphysik, Spektrum Akademischer Verlag, 1995
Falkenburg B., “Kants zweite Antinomie und die Physik”, Kant-Studien, 86, 4-25, 1995
Forster E. (ed.), Kant’s transcendental deductions, Stanford University Press, 1989
Friedman M., Kant and the exact sciences, Harvard University Press, 1992
Gleason A. M., “Measures on the closed subspaces of a Hilbert space”, Journal of Mathematics and Mechanics, 6, 885-893, 1957
Heelan P., “Complementarity, context-dependence, and quantum logic”, Found. Phys., 1, 95-110, 1970
Heelan P., “Quantum and classical logic: their respective roles”, Synthese, 21, 2-33, 1970
Hermann G., Les fondements philosophiques de la mécanique quantique (Présentation par L. Soler), Vrin, 1996; French translation of G. Hermann, “Die naturphilosophischen Grundlagen der Quantenmechanik”, Abhandlungen der Fries’schen Schule, Sechster Band, 2. Heft, 1935
Hintikka J. & Kulas J., The game of language, Reidel, 1983
Hughes R. I. G., The structure and interpretation of quantum mechanics, Harvard University Press, 1989
Husserl E., Ideas (general introduction to pure phenomenology), G. Allen & Unwin, 1931
Jordan T. F., Linear operators for quantum mechanics, J. Wiley, 1969
Kant I., Critique of pure reason (new edition, by V. Politis), Everyman’s Library, 1993
Kant I., Prolegomena to any future metaphysics that will be able to present itself as science, Manchester University Press, 1971
Kripke S., Wittgenstein on rules and private language, Blackwell, 1982
Mittelstaedt P., Philosophical problems of modern physics, Reidel, 1976
Omnes R., The interpretation of quantum mechanics, Princeton University Press, 1994
Petitot J., La philosophie transcendantale et le problème de l’objectivité, Osiris, 1991
Petitot J., “Objectivité faible et philosophie transcendantale”, in: M. Bitbol & S. Laugier (eds.), Physique et réalité, un débat avec Bernard d’Espagnat, Frontières-Diderot, 1997
Putnam H., Pragmatism, Blackwell, 1995
Taylor C., Philosophical arguments, Harvard University Press, 1995
Varela F., Thompson E. & Rosch E., The embodied mind, MIT Press, 1993
Van Fraassen B., Quantum mechanics, an empiricist view, Oxford University Press, 1991
Von Weizsäcker C. F., Aufbau der Physik, Hanser, 1985
Von Weizsäcker C. F. & Görnitz T.
(1991), “Quantum theory as a theory of human knowledge”, in: P. Lahti & P. Mittelstaedt (eds.), Symposium on the foundations of modern physics 1990, World Scientific
Watanabe S., “The algebra of observation”, Suppl. Prog. Theor. Phys., 37 and 38, 350-367, 1966
Wigner E., Symmetries and reflections, Ox Bow Press, 1979
Wittgenstein L., On certainty, B. Blackwell, 1974

Endnotes

1 See E. Cassirer, Determinism and indeterminism, Yale University Press, 1956 (text of 1936); G. Hermann, “Die naturphilosophischen Grundlagen der Quantenmechanik”, Abhandlungen der Fries’schen Schule, Sechster Band, 2. Heft, 1935, French translation and extensive comment in: Les fondements philosophiques de la mécanique quantique (Présentation par L. Soler), Vrin, 1996; P. Mittelstaedt, Philosophical problems of modern physics, Reidel, 1976; C. F. Von Weizsäcker, Aufbau der Physik, Hanser, 1985; C. F. Von Weizsäcker & Th. Görnitz (1991), “Quantum theory as a theory of human knowledge”, in: P. Lahti & P. Mittelstaedt (eds.), Symposium on the foundations of modern physics 1990, World Scientific; J. Petitot, La philosophie transcendantale et le problème de l’objectivité, Osiris, 1991; J. Petitot, “Objectivité faible et philosophie transcendantale”, in: M. Bitbol & S. Laugier (eds.), Physique et réalité, un débat avec Bernard d’Espagnat, Frontières-Diderot, 1997; S. Y. Auyang, How is quantum field theory possible?, Oxford University Press, 1995; B. Falkenburg, Teilchenmetaphysik, Spektrum Akademischer Verlag, 1995; B. Falkenburg, “Kants zweite Antinomie und die Physik”, Kant-Studien, 86, 4-25, 1995.
2 I. Kant, Critique of pure reason (new edition, by V. Politis), Everyman’s Library, 1993, B25, p. 43
3 Either completely in mathematics, or partly in physics.
4 I. Kant, Critique of pure reason, op. cit., BXIII, p. 14
5 I. Kant, Critique of pure reason, op. cit., A19/B33, p. 48
6 In view of the Bohrian concept of a phenomenon as irreducibly relative to a given experimental context, Grete Hermann pointed out that, far from falsifying transcendental philosophy, quantum physics may be an incentive to radicalizing it. See L. Soler’s introduction, in G. Hermann, Les fondements philosophiques de la mécanique quantique, op. cit., p. 45
7 This idea also fits with recent developments in the cognitive sciences, such as F. Varela’s concept of enaction. F. Varela, E. Thompson & E. Rosch, The embodied mind, MIT Press, 1993.
8 J. Hintikka & J. Kulas, The game of language, Reidel, 1983, p. 33
9 S. Körner, Introduction to E. Cassirer, Kant’s life and thought, Yale University Press, 1981, p.
10 I. Kant, Critique of pure reason, op. cit., A23-B38, p. 50
11 H. Putnam, Pragmatism, Blackwell, 1995
12 W. Carl, “Kant’s first draft of the transcendental deduction”, in: E. Forster (ed.), Kant’s transcendental deductions, Stanford University Press, 1989
13 Ch. Taylor, Philosophical arguments, Harvard University Press, 1995. Kant’s statement runs thus: “The transcendental deduction of all a priori concepts has (...) a principle according to which the whole enquiry must be directed: to show that these concepts are a priori conditions of the possibility of all experience” (I. Kant, Critique of pure reason, op. cit., A93-B125, p. 96)
14 I. Kant, Prolegomena to any future metaphysics that will be able to present itself as science, Manchester University Press, 1971, §21
15 I. Kant, Critique of pure reason, op. cit., B165, p. 117
16 I. Kant, Critique of pure reason, op. cit., A137, p. 127
17 Respectively the law of inertia, the proportionality of force and acceleration, and the equality of action and reaction.
18 M. Friedman, Kant and the exact sciences, Harvard University Press, 1992, chapter 3
19 I. Kant, Critique of pure reason, op. cit., B141, p. 104
20 In this respect, my analysis differs markedly from S. Auyang’s, who rather develops the concept of physical object from a renewed Kantian perspective, and who still takes the concept of particle seriously. See: How is quantum field theory possible?, op. cit., p. 99-100.
21 E. Husserl, Ideas (general introduction to pure phenomenology), G. Allen & Unwin, 1931
22 S. Blackburn, Essays in quasi-realism, Oxford University Press, 1993, chapter 14
23 L. Wittgenstein, On certainty, B. Blackwell, 1974, §36
24 They are not constitutive of the content of intuition, but they are constitutive of experience, in so far as they make its object-like structure possible (see below).
25 I. Kant, Critique of pure reason, op. cit., A179-B222, p. 167
26 See e.g. G. Buchdahl, Kant and the dynamics of reason (Essays on the structure of Kant’s philosophy), B. Blackwell, 1992, p. 204
27 I. Kant, Critique of pure reason, op. cit., A195-B240, p. 176. See also A194, A198, A200.
28 I. Kant, Prolegomena to any future metaphysics that will be able to present itself as science, op. cit., §13, note II, p. 46
29 One could wonder why this second constraint bears selectively on a tool of probabilistic prediction. Couldn’t it have concerned a tool of deterministic prediction? Isn’t this apparently arbitrary choice, and our former insistence on the theory of probability, a way of introducing implicitly one typical feature of quantum mechanics into a reasoning which is supposed to justify transcendentally its main features? Actually, this is not the case. ’Essential’ indeterminism of phenomena can be shown to derive from relaxation of the constraint of de-contextualisation and incompatibility of certain experimental contexts (see P. Destouches-Février, La structure des théories physiques, P.U.F., 1951, p. 277). Use of probabilities in the predictive theory, and irreducibility of the predictions to a deterministic scheme at the level of phenomena (though not necessarily at the level of hidden variables), is thus a natural consequence of our first assumption.
30 I. Kant, Critique of pure reason, op. cit., B135, p. 101
31 I. Kant, Critique of pure reason, op. cit., B68, p. 66
32 For more details, see M. Bitbol, Mécanique quantique, une introduction philosophique, Flammarion, 1996
33 P. Heelan, “Complementarity, context-dependence, and quantum logic”, Found. Phys., 1, 95-110, 1970; P. Heelan, “Quantum and classical logic: their respective roles”, Synthese, 21, 2-33, 1970; also: S. Watanabe, “The algebra of observation”, Suppl. Prog. Theor. Phys., 37 and 38, 350-367, 1966.
34 J. L. Destouches, Corpuscules et systèmes de corpuscules, Gauthier-Villars, 1941; P. Destouches-Février, La structure des théories physiques, P.U.F., 1951; P. Destouches-Février, L’interprétation physique de la mécanique ondulatoire et des théories quantiques, Gauthier-Villars, 1956
35 P. Destouches-Février, La structure des théories physiques, P.U.F., 1951, p. 240. For related theorems see H. Everett, “‘Relative state’ formulation of quantum mechanics”, Reviews of Modern Physics, 29, 454-462, 1957 (last section), and also A. M. Gleason, “Measures on the closed subspaces of a Hilbert space”, Journal of Mathematics and Mechanics, 6, 885-893, 1957. A survey can be found in: R. I. G.
Hughes, The structure and interpretation of quantum mechanics, Harvard University Press, 1989.
36 R. I. G. Hughes, The structure and interpretation of quantum mechanics, op. cit.; B. Van Fraassen, Quantum mechanics, an empiricist view, Oxford University Press, 1991, p. 177-181; T. F. Jordan, Linear operators for quantum mechanics, J. Wiley, 1969
37 E. Wigner, Symmetries and reflections, Ox Bow Press, 1979, p. 29
38 B. Van Fraassen, Quantum mechanics, an empiricist view, op. cit., p. 178
39 The idea that in quantum mechanics the point of application of the category of causality has somehow been shifted from the evolution of phenomena to the evolution of state vectors has been developed in a Kantian context by: P. Mittelstaedt, Philosophical problems of modern physics, Reidel, 1976. The premises of this idea can already be found in M. Born’s papers of 1926.
40 This short statement, according to which a linear superposition of state vectors corresponds to a combination of preparations, refers to a pragmatic reading of the so-called ’principle of superposition’. According to Dirac, the principle of superposition is a new law of nature (P. A. M. Dirac, The principles of quantum mechanics, Oxford University Press, 1947, §2). In the pragmatic-transcendental approach, the principle of superposition boils down to a normative statement of co-extensivity of the Hilbert space formalism and the domain of experimental preparations to which it applies. It says that given two state vectors, each corresponding to a well-defined preparation, there exists a third preparation such that its predictive content is appropriately expressed by a linear superposition of the two previous state vectors. If this were never true, it would mean that the formalism is too general for the phenomena to be predicted. More specifically, it would be tantamount to imposing a generalized superselection principle, thus cancelling the consequences of contextuality. Then, if the constraint of de-contextualization is to be relaxed effectively, some principle of superposition must hold.
41 M. Bitbol, Schrödinger’s philosophy of quantum mechanics, Kluwer, 1996; G. Cohen-Tannoudji & M. Spiro, La matière espace-temps, Gallimard, 1990, p. 162
42 P. Mittelstaedt, Philosophical problems of modern physics, op. cit.
43 This is the method he used in his Metaphysical Foundations of Natural Science, published in 1786; but three years earlier, in his Prolegomena to any future metaphysics, he claimed to be able to provide a (weak) transcendental justification of the inverse-square law of gravitation. This justification relies on the geometrical circumstance that concentric spherical surfaces stand to one another as the squares of their radii. See M. Friedman, Kant and the exact sciences, op. cit., chapter 4
44 We only need that the functions it fulfils in classical mechanics be partially fulfilled in the new situation. These functions are: an order of multiplicity (which can be accounted for in terms of the eigenvalue N of the number observable, rather than in terms of N particles), a criterion of reidentification (which can persist only in fragmented form), and a class (it is not even appropriate to say ’of entities’) which is able to represent certain determinations which could be treated as properties (i.e. the superselective observables).
45 R. Omnes, The interpretation of quantum mechanics, Princeton University Press, 1994, p. 350
46 Ch. Taylor, Philosophical arguments, op. cit., p. 72
47 For a thorough discussion on this point, see: B.
d’Espagnat, Veiled reality, Addison-Wesley, 1994; and M. Bitbol & S. Laugier (eds.), Physique et réalité, un débat avec Bernard d’Espagnat, Editions Frontières, 1997
48 See J. Bouveresse, La force de la règle, Editions de Minuit, 1987; S. Kripke, Wittgenstein on rules and private language, Blackwell, 1982
49 See F. Varela, E. Thompson & E. Rosch, The embodied mind, op. cit., for similar remarks in the general framework of the cognitive sciences.
50 Ch. Taylor, Philosophical arguments, op. cit., p. 26
51 See M. Mugur-Schächter, “Mécanique quantique, réalité et sens” and C. Schmitz, “Objectivité et temporalité”, and the answers by B. d’Espagnat, in: M. Bitbol & S. Laugier (eds.), Physique et réalité, un débat avec Bernard d’Espagnat, Editions Frontières-Diderot, 1997. There, as in his Veiled reality, op. cit., B. d’Espagnat acknowledges that, in general, any description is relative to a given descriptive context. But he also presents a very subtle defence of the idea that certain broad structural features such as non-separability, which are common to any ontologically interpretable theory able to reproduce quantum predictions, can be considered as a reflection of the structure of an «independent reality».
https://www.physicsforums.com/threads/complex-substitution-into-the-equation-of-motion.159087/
# Homework Help: Complex substitution into the equation of motion

1. Mar 3, 2007

### ultimateguy

1. The problem statement, all variables and given/known data

The equation of motion of a mass m relative to a rotating coordinate system is
$$m\frac{d^{2}\vec{r}}{dt^2} = \vec{F} - m\vec{\omega} \times (\vec{\omega} \times \vec{r}) - 2m(\vec{\omega} \times \frac{d\vec{r}}{dt}) - m(\frac{d\vec{\omega}}{dt} \times \vec{r})$$
Consider the case $$\vec{F} = 0$$, $$\vec{r} = \hat{x} x + \hat{y} y$$, and $$\vec{\omega} = \omega \hat{z}$$, with $$\omega$$ a constant. Show that the replacement of $$\vec{r} = \hat{x} x + \hat{y} y$$ by $$z = x + iy$$ leads to $$\frac{d^{2}z}{dt^2} + i2\omega\frac{dz}{dt} - \omega^2z=0$$. Note: this ODE may be solved by the substitution $$z=fe^{-i\omega t}$$.

2. Relevant equations

None.

3. The attempt at a solution

I've calculated that $$-\vec{\omega} \times (\vec{\omega} \times \vec{r})$$ becomes $$\omega^2 z$$ in the complex notation. As far as figuring out how $$-2(\vec{\omega} \times \frac{d\vec{r}}{dt}) - (\frac{d\vec{\omega}}{dt} \times \vec{r})$$ gives the $$i2\omega\frac{dz}{dt}$$ term, I'm lost.

2. Mar 3, 2007

### ultimateguy

I solved it. Turns out that $$-2(\vec{\omega} \times \frac{d\vec{r}}{dt})$$ becomes $$-2i\omega\frac{dz}{dt}$$, and since $$\omega$$ is constant, $$\frac{d\vec{\omega}}{dt} = 0$$, so the last term drops out.
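For what it's worth, the claimed result can be machine-checked. The following sketch (not from the thread) assumes the general solution z(t) = (A + Bt)e^{-iωt}, i.e. f'' = 0 after the suggested substitution, and verifies that it satisfies the ODE:

```python
# A minimal SymPy check that z(t) = (A + B*t) * exp(-I*omega*t) solves
# z'' + 2*I*omega*z' - omega**2 * z = 0, which is exactly what the
# substitution z = f * exp(-I*omega*t) with f'' = 0 predicts.
import sympy as sp

t, omega, A, B = sp.symbols('t omega A B', real=True)
z = (A + B*t) * sp.exp(-sp.I * omega * t)
residual = sp.diff(z, t, 2) + 2*sp.I*omega*sp.diff(z, t) - omega**2 * z
print(sp.simplify(residual))  # prints 0
```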
http://tex.stackexchange.com/questions/75395/how-to-set-this-content-into-the-header-properly-when-ihead-markright-did-not?answertab=active
# How to set this content into the header properly, when \ihead & \markright did not work well / at all

After creating my headline here (How to arrange / align / place pictures and text, to form a given headline?) I guessed I would just place it into the header with \ihead and \thispagestyle{scrheadings} - but NO. The problem is that the following text is placed upon my header. And when I try to use \markright I get either compile errors or no header at all. I read some info about using \protect, but it did not help either. I am guessing that my construct itself is simply not that easy to migrate into a header. I hope I can solve this on my own, but maybe someone else seeks a challenge too :)

Here is the code; the images for it are available through the link above:

\documentclass[
pdftex,
a4paper,
11pt,
DIV15,
BCOR20mm,
parskip,
numbers=noenddot]{scrbook}
\usepackage{graphics}
\usepackage[pdftex]{graphicx}
\usepackage{scrpage2}
\usepackage{setspace}
\begin{document}
\noindent
\textsf{
\begin{singlespace}
\raisebox{-0.5\height}{\rlap{\includegraphics[scale=0.5]{left_demo.png}}}
\hfill
%\includegraphics[height=60px]{center_demo.png}
%\hfill
\raisebox{-0.15\height}{\llap{
\scriptsize %
\begin{tabular}[c]{@{}r@{}}
somedemo txt abcdefghij,\\
texttextte txt texttexttextetscter\\
\\
demodemode demodemode
\end{tabular}}}
\raisebox{-0.525\height}{\includegraphics[height=45px]{right_demo.png}}
\end{singlespace}}}

\huge{text}
\end{document}

I do really appreciate any help on this, thanks!

Edit: Here is what's wrong: the text in the red circle should not be that high (used \ihead here).

- your MWE runs without error, where were you planning to use marks? – David Carlisle Oct 5 '12 at 9:48
- sorry, I don't understand ... I tried using \markright instead of \ihead, but this gives me the error \KV@def has extra } - using \ihead works fine - just the \huge{text} is placed too high – Jook Oct 5 '12 at 10:26
- I don't understand what you are trying to do, actually (\markright typically just takes simple text such as a section title; the complicated layout would normally be part of the page-head design, which then uses the text from \markright where appropriate). That said, if you want to stick the whole lot in a mark, it appears to work to remove \noindent (which looks to be there in error) and replace \ihead by \markright{\unexpanded{{ and add an extra } after \end{singlespace}}} – David Carlisle Oct 5 '12 at 10:55
- after \end{singlespace}? actually I would like to have the whole singlespace section to be IN the header. The \huge{text} is what I want to have beneath my header. – Jook Oct 5 '12 at 11:07
- yes, that's what the above comment would do, although as I say using a \mark isn't really the way to do it (I'd say that's the basic LaTeX way of setting up page headers, but I think the KOMA-Script classes have their own mechanisms, so perhaps someone who has used those before might answer) – David Carlisle Oct 5 '12 at 11:32

The problem is that your images take more vertical space than the default vertical space reserved for the header. When you process your code you will get a warning like this:

Overfull \vbox (57.3689pt too high) has occurred while \output is active

(the actual value for the overfull box will depend on the actual images). The value for \headheight has to be increased accordingly to prevent the warning, and you can do this using headlines= or headheight= (this probably implies a change in the page layout). Here's a little example.
From the comments it's clear that the intent is to change the page style for the title page; one possibility to achieve this is to define a new page style and to use the geometry package to temporarily change the headheight value, without modifications to the page layout made as class options through DIV and BCOR (adjust the length according to your needs); since the contents of the footer for this style were not specified, I used some dummy options in the definition of the new page style:

\documentclass[pdftex,a4paper,11pt,DIV15,BCOR20mm,parskip,numbers=noenddot]{scrbook}
\usepackage{graphicx}
\usepackage{scrpage2}
\usepackage{geometry}
\usepackage{setspace}
\usepackage{lipsum}
\defpagestyle{mystyle}{%
\raisebox{-0.5\height}{\rlap{\includegraphics[scale=0.5]{cat}}}\hfill
%\includegraphics[height=60px]{center_demo.png}
%\hfill
\raisebox{-0.15\height}{\llap{%
\scriptsize %
\begin{tabular}[c]{@{}r@{}}
somedemo txt abcdefghij,\\
texttextte txt texttexttextetscter\\
\\
demodemode demodemode
\end{tabular}}}
\raisebox{-0.525\height}{\includegraphics[height=45px]{cat}}
}
}{\ifoot{inside}\cfoot{center}\ofoot{outside}}
\begin{document}
\begin{titlepage}
\thispagestyle{mystyle}
{\huge text}
\lipsum[1-3]
\end{titlepage}
\clearpage
\restoregeometry
\lipsum[1-4]
\end{document}

Not relevant to the problem mentioned, but the font size switches (\Huge, \huge, \LARGE, etc.) are declarations that do not take arguments, so instead of \huge{text} you should use {\huge text} (to keep the change local, possibly using \par inside the group if required).

- yeah! this is it - but would you be so kind to extend your demo to adjust the headlines just for a single page (e.g. the title page), and to show how to avoid getting page numbering on this page when using \ihead - which is a strange behaviour anyway. – Jook Oct 5 '12 at 13:37
- @Jook I have a lecture starting in 8 minutes, so I won't be able to give the modified example for several hours. As soon as I can I will do it. – Gonzalo Medina Oct 5 '12 at 13:52
- thank you very much! I got rid of the page numbering by using \ofoot{}, but this was accompanied by my bottom text being displayed too far down. And besides, I would like to use a footer on the title page - just without page numbering. – Jook Oct 5 '12 at 14:19
- @Jook I updated my answer with the announced code. – Gonzalo Medina Oct 5 '12 at 18:31
- Thank you! This solved my particular problem. However, in order to understand your code better, I started to read about geometry and page styles and such, and must conclude that LaTeX has quite some huge steps in its learning curve, which one has to conquer. – Jook Oct 8 '12 at 8:41
https://docs.astar.network/docs/xcm/using-xcm/manage-xc20-with-metamask/
## Instructions

### Check on the Portal

As you can see in the screenshot above, we have DOT assets in our EVM wallet. Now let's add the asset to our MetaMask.
https://computergraphics.stackexchange.com/questions/6003/expected-visibility
# Expected visibility

I want to extend the following to work on a solid angle: Suppose we have a volume filled with small surfaces. If we cast a ray from a given point, the probability that the ray will not hit a surface (i.e. is visible to the sky) is given as $P(\text{ray does not hit}) = e^{-\alpha d/\cos\theta}$, where $\alpha$ is some decay factor and $d/\cos\theta$ is the path length of the ray within the volume.

My question: how can we compute the expected visibility if we consider all directions $(\theta, \phi)$, where $0<\theta<\pi/2$ and $0<\phi<2\pi$?

• What do you mean by "consider the directions"? Are you trying to integrate over some solid angle instead of a single ray? Are you trying to extend the volume to be heterogeneous or anisotropic? – Dan Hulme Dec 18 '17 at 12:11
• I am integrating over some solid angle. – Stackmm Dec 18 '17 at 19:59

You need to perform the integration of $P$ over the hemisphere to calculate the solution. There doesn't seem to be a closed-form solution in elementary functions, as it requires the incomplete gamma function: $$2\pi\left(e^{-\alpha d}-\alpha d\,\Gamma(0,\alpha d)\right)$$
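A quick numerical sanity check of the stated closed form may be useful. The snippet below (an illustration, not from the thread; the values of alpha and d are arbitrary) compares the formula against direct integration of $e^{-\alpha d/\cos\theta}$ over the hemisphere, using the identity $\Gamma(0,x) = E_1(x)$. Dividing either value by the hemisphere's solid angle $2\pi$ gives the mean visibility.

```python
# Compare the closed form 2*pi*(exp(-a) - a*Gamma(0, a)), with a = alpha*d,
# against numerical integration of exp(-a/cos(theta)) over the hemisphere.
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1  # exp1(x) == Gamma(0, x) for x > 0

alpha, d = 0.7, 2.0  # arbitrary example values
a = alpha * d

closed_form = 2 * np.pi * (np.exp(-a) - a * exp1(a))

# dOmega = sin(theta) dtheta dphi; the phi integral contributes the factor 2*pi
integrand = lambda theta: np.exp(-a / np.cos(theta)) * np.sin(theta)
numeric = 2 * np.pi * quad(integrand, 0.0, np.pi / 2)[0]

print(closed_form, numeric)  # the two numbers should agree closely
```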
https://mathoverflow.net/questions/287456/partitioning-finite-hypergraphs-with-edges
# Partitioning finite hypergraphs with edges [closed]

Let $H=(V,E)$ be a hypergraph such that $|V|$ is infinite, and the following statements hold:

1. if $a\neq b\in E$ then $|a\cap b|\leq 1$, and
2. every vertex $v\in V$ is contained in at least $2$ members of $E$.

Is there $P\subseteq E$ such that the members of $P$ are pairwise disjoint, and $\bigcup P = V$? (This would amount to a kind of matching in hypergraphs.)

## closed as off-topic by Ben Barber, Jan-Christoph Schlage-Puchta, RP_, Chris Godsil, David Handelman Dec 6 '17 at 3:08

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question does not appear to be about research level mathematics within the scope defined in the help center." – Ben Barber, Jan-Christoph Schlage-Puchta, RP_, David Handelman

If this question can be reworded to fit the rules in the help center, please edit the question.

• What if $H$ is a graph which is the union of infinitely many vertex-disjoint triangles? – bof Dec 1 '17 at 6:58
• In that case, each $v\in V$ is contained only in one member of $E$, or do I misunderstand something? – Dominic van der Zypen Dec 1 '17 at 7:14
• No doubt I misunderstand something. My $H$ is the union of infinitely many vertex-disjoint copies of the graph $K_3.$ Each vertex is in two edges, each edge contains two vertices, and two distinct edges have at most one vertex in common. – bof Dec 1 '17 at 9:22
• The edges don't have to be finite sets, do they? In the first example I thought of, the vertices are the points of the real projective plane, the edges are the lines; two edges intersect in exactly one vertex, each vertex belongs to continuum many edges. – bof Dec 1 '17 at 9:27

A counterexample: $V = \{v,w,x\}\cup\{1,2,3,\dots\}$, $E = \{\color{blue}{\{v,x\}}, \color{red}{\{v,w\}}, \color{orange}{\{w,x,1\}}\}\cup\{\{1,2\}, \{2,3\}, \dots\}$. To cover the vertex $v$ you need to include the blue or the red edge in $P$. But then you can't also include the orange edge. Thus either $x$ or $w$ won't be covered.
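For what it's worth, the finite core of this counterexample can be checked mechanically. The sketch below (an illustration; truncating the infinite path $\{1,2\},\{2,3\},\dots$ at vertex $5$ is an arbitrary choice) confirms by brute force that no pairwise disjoint subfamily of $E$ covers all of $V$:

```python
# Brute-force check that the truncated counterexample admits no partition of V
# into pairwise disjoint edges (an "exact cover").
from itertools import combinations

V = frozenset(['v', 'w', 'x', 1, 2, 3, 4, 5])
E = [frozenset(e) for e in (['v', 'x'], ['v', 'w'], ['w', 'x', 1],
                            [1, 2], [2, 3], [3, 4], [4, 5])]

def has_exact_cover(V, E):
    for r in range(1, len(E) + 1):
        for P in combinations(E, r):
            # sizes summing to |V| together with union == V forces pairwise disjointness
            if sum(len(e) for e in P) == len(V) and frozenset().union(*P) == V:
                return True
    return False

print(has_exact_cover(V, E))  # -> False: either x or w is always left uncovered
```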
https://tex.stackexchange.com/tags/graphics/new
# Tag Info

9 TikZ has a predefined 3d coordinate system, which is not orthographic, but more than sufficient to stack some objects in the z direction. \documentclass[tikz,border=3mm]{standalone} \definecolor{fb}{RGB}{128,158,204} \definecolor{db}{RGB}{111,141,191} \begin{document} \begin{tikzpicture} \foreach \Z in {0,0.5,1,1.5} {\draw[fill=fb,draw=db] (-2,-2,\Z) ...

2 You can draw something like this with TikZ, but only if you provide more information so that one can be sure that you achieve what you want. \documentclass[tikz,border=3mm]{standalone} \usetikzlibrary{chains} \newsavebox\ClippedPicA \newsavebox\ClippedPicB \sbox\ClippedPicA{\begin{tikzpicture} \node[outer sep=0pt,inner sep=0pt] (c) {\phantom{\includegraphics[width=...

0 \documentclass[paper=a4, pagesize, fontsize=11pt]{scrartcl} \usepackage[latin1]{inputenc} \usepackage[T1]{fontenc} \usepackage[ngerman]{babel} \usepackage{float} \usepackage{geometry} \usepackage[demo]{graphicx}%<---------------remove demo option in actuals \usepackage{subfig}%<-----------------------add \begin{document} \begin{figure}[...

1 You want \linewidth not \textwidth in an indented list. \documentclass{scrartcl} \usepackage[demo]{graphicx} \usepackage[export]{adjustbox} \begin{document} \begin{enumerate} \item \includegraphics[width=\linewidth,valign=t]{E53a}% \end{enumerate} \end{document} One way of tackling enumerate in a theorem is to make sure it isn't first (add \mbox{} if ...

2 This redefines \caption locally (inside the \parbox). Every cell in a p column is a separate \parbox. \caption@caption is the original name used by the caption package. This uses \csname to implement the @ symbol, which in turn requires \expandafters. \documentclass[12pt]{article} \usepackage{graphicx} \usepackage{longtable} \usepackage{caption} \begin{...

0 Frames normally overlap, unless specifically set to not overlap. I don't have any experience with \onecolumninarea etc. If I need two columns on a page, I just use two flow frames. \documentclass[a4paper]{report} \usepackage[margin=1in]{geometry} \usepackage{color} \usepackage{flowfram} \usepackage[colorlinks]{hyperref} \usepackage{graphicx} \...

1 Your code was not compiling correctly, so I stripped it of what I believe was not necessary to provide the required output, but it should work all the same in your original code. My proposal is based on Max's very nice answer to one of my previous questions, using TikZ. Using the technique proposed, you can easily show an image then add some overlayed ...

1 A partial attempt. MWE: \documentclass[margin=2mm,tikz]{standalone} \usepackage[american]{circuitikzgit} \usetikzlibrary{arrows, backgrounds, calc, positioning, circuits.logic.US, circuits, arrows,shapes.gates.logic.US,shapes.gates.logic.IEC} \tikzset{flipflop AB/.style={flipflop,flipflop def={t1=S, t3=R, t6=Q, t4= {\ctikztextnot{Q}}, td=rst, nd=1, c2=1, ...

0 You could try without the figure environment: \newcommand{\image}[4][40mm]{ \centering% \fcolorbox{frameColor}{white}{\includegraphics[width=\textwidth, height = #1, keepaspectratio]{#2}} \captionof{figure}{#3}% \label{#4} }

1 The problem is not related to the image; the page-breaking logic would be the same if you had text on that line. To prevent a page break you need to hide the height of the content or make the page bigger. LaTeX has a standard command to do the latter: if you add \enlargethispage{2cm} somewhere on the first page then the content will be allowed to extend ...

1 I did experiments with unicode accents and it seems to be alchemy.
Maybe it heavily depends on the Unicode renderer too. The following example uses OpTeX (because I don't use LaTeX). If you want to use LaTeX then you can use the idea and re-write the code. The constant -8.5pt is a so-called "guess constant" :) \fontfam[lmfonts] % use OpTeX \newbox\gravebox \setbox\...

4 Ok. Sit down because this will be long --- let's see if I can do a tutorial. You have to have done the TikZ tutorial to fully understand this, especially the calc package and the |- coordinate operator. Looking at the circuit to draw, I see that there is a basic block: the flip-flop with the added three-port circuit to the left. The main distance to respect ...

0 The conflict with subfig and \hyperref does exist: it's not just a rumor, which is why I prefer subcaption as I almost invariably need hyperref. Also, \begin{subfigure}[b]{0.475\textwidth} should give a cleaner finish, still with plenty of space between the figures. Place: \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} in the ...

0

0 Here is an option with textpos % !Mode:: "TeX:UTF-8" % !TEX TS-program = xelatex \documentclass[UTF8]{beamer} \usetheme{Warsaw} \usepackage[absolute, overlay]{textpos} \usefonttheme{serif} \begin{document} \begin{frame} \frametitle{title} \begin{textblock}{8}(1.0, 3.0) \begin{itemize} \item<1-> this is an example \item [] \item<2-> this is ...

1 Here is a way to do that with titlesec: \documentclass[11pt]{article} \usepackage{xcolor} \newcommand{\myblackbox}[1]{\colorbox{black}{\parbox{0.5\textwidth}{#1}}} \usepackage{titlesec} \titleformat{\section}{\Large\bfseries}{}{0pt}{\color{white}\myblackbox}% [{\raisebox{1ex}[0pt]{\rule[1ex]{\textwidth}{0.8pt}}}] \begin{document} \section{Properties of ...

1 Here is my version: \documentclass[border=10pt]{standalone} \usepackage[pdftex]{graphicx} \usepackage{pgfplots} \usepgfplotslibrary{colormaps} \begin{document} \begin{tikzpicture}[yscale=2, xscale=1] \begin{axis}[axis line style={draw=none}, view={170}{-20}, grid=major, xmin=0,xmax=2, ymin=0,ymax=2, zmin=0,zmax=4, ...

1 You use an ancient version of the pgfplots package (version 1.9). The current version is 1.17! I strongly encourage you to upgrade it. I suspect that you would like to have the y-axes of the diagrams in line, am I right? This can be achieved by adding the option trim left to each \tikzpicture: \documentclass[ % -- opções da classe memoir -- 12pt, openright, ...

1 Here is a compilable, slightly more compact version of your code. I added some ylabel style options to give a minimum width to the label, to make it independent of the actual letter. I also regrouped the common axis options so as to clarify the code. The small frames are here only to show the actual width of the modified label with this option \documentclass[]...

2 Not knowing what the Y column type is, I replaced it with X. You can either use \raisebox{-0.5\totalheight}{…} or load the adjustbox package with option export and use \includegraphics[valign=c]. \documentclass{article} \usepackage[demo, export]{adjustbox} \usepackage{float, tabularx, booktabs, caption} \begin{document} \setcounter{table}{4} \...

0 \documentclass{article} % \usepackage[flushleft]{threeparttable} \usepackage{graphicx} \usepackage{booktabs} \usepackage{array} \begin{document} \begin{table} \caption{Classification} \label{tbl:Longwall systems} \footnotesize \centering \begin{tabular}{m{0.4\linewidth}m{0.5\linewidth}} \toprule Classifications & ...

4 This is a start.
Drawing a graph with a common vertex is as simple as saying {subgraph I_n [n=12,radius=2.5cm, counterclockwise,phase=105] -- x} where I_n is a standard graph that (in this usage) puts the nodes on a circle with equal distances. We can specify the number of nodes, the radius and the phase. If you want to see which node lives where, ...

5 This does not really produce the figures you want. I am not even sure if you should do a plot of that type when you neither have real data nor a function at hand. Rather, you can just draw the ellipses as you need them. I am posting this because in your link the nontrivial principal axes of the Gaussian were obtained by rotating the picture. Here, on the ...

2 This is because of the different widths of your ylabels, i.e. x, v, a. Each tikzpicture will simply start at the leftmost position. The width of all your plots is not exactly the same! If you move all your ylabels slightly to the right, the plots will be aligned correctly: ylabel style={at={(0.05,0.9)} Another possibility is to put all plots inside one ...

2 You need to use \includepdf{file} from the pdfpages package, and no centering: \documentclass[]{} \usepackage{pdfpages} \begin{document} \includepdf[page=1]{Deckblatt.pdf} \pagenumbering{Roman} \chapter*{Abstract} ... \end{document}

1 In your provided MWE you didn't include the package graphicx; just include that package and your MWE works fine for me. The modified MWE is: \documentclass[a4paper]{article} \usepackage[margin=1.5in]{geometry} % For margin alignment \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{algorithm} \usepackage{arevmath} % For math ...

11 There are plenty of posts for each of these; now there is one more post. For instance, the neural network is very similar to the one of this post. \documentclass[tikz,border=3mm]{standalone} \usetikzlibrary{arrows.meta,chains} \begin{document} \begin{tikzpicture}[neuron/.style={circle,inner sep=1em,draw}, neuron missing/.style={ scale=1.25, ...

2 The graphicx package allows the insertion of external images (in png, jpg or pdf format using pdflatex, or eps format using latex) with the \includegraphics command, but this has nothing to do with the figure environment, which can be used without any package, and without any image: \documentclass{article} \begin{document} \begin{figure} \hfil (...

0 Let me spell out my comment: In your code fragment you have too many closing curly braces in the main caption. The packages subfig, \subfigure (which is obsolete) and subcaption are not compatible. In your code fragment, you use only the last one, so delete the other two. Using the article document class, an MWE (Minimal Working Example) with your code fragment should be: \...

0 Does this meet your requirement? \documentclass{article} % ========== Packages ========== \usepackage{graphicx, geometry} \usepackage{xcolor} \usepackage{caption} \usepackage{subcaption} \usepackage{float} \begin{document} \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{...

2 It seems that you'd like to draw the superposition $0.5 + \sin(x)$. This function in pgfplots is {0.5+sin(180*x)}: \documentclass[margin=3mm]{standalone} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[ xlabel=$x(t)$, ylabel=$t$, grid=major, xmin=-5, xmax=5, ymin=-2, ymax=2, domain=-5:4.5, samples=201] \...
1 Fast answer: include \usepackage{float} and use [H] instead of [h]. Explained answer: here are the errors and recommendations that I have found: Use lowercase letters for labels, paths of folders, and other names that usually refer to objects. For example use \label{tbl:its an example} instead of \label{tbl:Its An Example}. Why? Because we usually forget the uppercase ...

0 You need to specify [H] like the code below \begin{figure}[H] \centering \includegraphics[width=0.95\textwidth]{image.png} \caption{Caption.} \label{fig:image1} \end{figure} Add \usepackage{float}

0 In LuaLaTeX you can use io.popen to capture a list of files. It will return the first file found. As in the "pure LaTeX" answer commented before, you'll need to use --shell-escape. %!TEX program = lualatex \documentclass{article} \usepackage{luacode} \usepackage{graphicx} \usepackage{shellesc} \begin{document} \begin{luacode*} f = assert(io.popen("ls "....

2 Assuming a Unix system: \documentclass{article} \usepackage{xparse} \usepackage{graphicx} \ExplSyntaxOn \NewDocumentCommand{\includegraphicsfromfolder}{O{}mm} {% #1 = options, #2 = folder path, #3 = extension \sys_get_shell:nnN { ls~-m~#2/* } { } \l_tmpa_tl \seq_set_split:NnV \l_tmpa_seq { , } \l_tmpa_tl \seq_map_inline:Nn \l_tmpa_seq { \...

1 I think the problem happens when the TeX Live version is upgraded. You can fix this issue by changing the TeX Live version in the Menu bar to the date when you created your original project. That worked for me.

1 To locally correct the vertical position of the image inserted via \titlegraphic, i.e. without redefining the titlepage, you can use \titlegraphic{\vspace{-<shift>}\raisebox{<shift>}{\includegraphics[<options>]{<image>}}} If you only use \raisebox, the image will be shifted but so will the title. The \vspace makes the shift ...

0 Old question I know, but I was looking for a solution today, so I guess it is still relevant. Here is the solution I am using; I simply put this in the preamble of my document. \usepackage{graphicx} \setkeys{Gin}{width=\linewidth} This works because under the hood what is being used is \includegraphics, and I just set the default width of those images to the width ...

1 Well, yes and no, depending on the actual requirements. You could use the animate package to create animations representing the change of a parameter. See the following example taken from https://www.uweziegenhagen.de/?p=3048 \documentclass{article} \usepackage[paperwidth=5.5cm,paperheight=5.3cm,left=0cm,right=0cm,bottom=0cm,top=0.25cm]{geometry} \...

0 Five years after asking this question, today I faced the same issue. When I used xcolor or colortbl, I got the same error. Thanks to @Mauramz, whose answer here gave me a hint; I figured out that the error comes from the xthesis package, and then I found the solution to this problem here: LaTeX error: \begin{document} ended by \end{table}. The error was in xthesis....

0 In this example, you can see something similar: https://www.latextemplates.com/template/arsclassica-article If you try to transpose it, it could be code like this: \begin{figure}[tb] \centering \subfloat[Image of Open Cluster Messier 37.]{\includegraphics[width=.45\columnwidth]{Plot M37 with parameters to see more stars.png}}\label{fig:0} \quad \subfloat[...
0 \documentclass{article} \usepackage{xcolor,mdframed} \usepackage{graphicx} \usepackage{caption} \captionsetup[figure]{box=colorbox,boxcolor=red,slc=off} \begin{document} \begin{figure}[htbp] \begin{mdframed}[backgroundcolor=blue!50,linecolor=blue!50] \includegraphics[width=\linewidth]{example-image-a4-landscape.pdf} \...

1 The code below defines two new commands, \ChapterImages and \InsertImage, that let you set a comma-separated list of image names and insert the next image, respectively. Using these commands you can set the images for your chapters using \ChapterImages{ example-image-a, example-image-b, example-image-c } and you can insert these images on your ...

1 The intention of the MWE below is to serve as a starting point for how you can write your appendices. Since we don't have your images, it may happen that some figure will not fit on the page. In this case, you should probably reorganize the figures and change the number of sub-figures in them. It is also unusual that your figures have no captions (or it may be that I misunderstood, that ...

-1 If you have multiple subfigures having this issue in a composite figure, the best way to solve it is to make those subfigures have a transparent background. Then the final pdf compiled by LaTeX won't have these boundaries (for each of the subfigures).

1 memoir already includes its own mechanism for subfigures and their captions. You could use it instead of the subcaption package: \documentclass{memoir} \usepackage[T1]{fontenc} \newsubfloat{figure} \setcounter{lofdepth}{2} \usepackage{graphicx} \begin{document} \listoffigures \begin{figure} \subbottom[subfigure caption]{\includegraphics{example-image}}...

1 \documentclass{article} \usepackage[capbesideposition=right, facing=yes,capbesidesep=qquad]{floatrow}%can use quad also \usepackage{rotating} \begin{document} \begin{figure} \fcapside[\FBwidth] {\caption{A nice figure with a very very very very long long caption}\label{key}} {\rule{4cm}{4cm}} %Replace with image \end{figure}...

1 You can (most likely) force the second float to the bottom of the page by using [!b], but only do this after all editing is done, as float positioning depends on the surrounding text, not just the figures themselves.

1 LaTeX tries to avoid placing diagrams in the flow of text. It is widely considered preferable to place diagrams either at the top of the page or at the bottom (or on its own page if it's big enough) and to call it out inline using a \ref. You can give an optional argument to the figure environment to specify which behavior you want: \begin{figure}[...

2 I'm not sure how useful this would be. If you use \includegraphics inside a figure environment, you just specify \centering in the environment. However, here's an implementation that also allows a nocenter option, \documentclass{article} \usepackage{xparse,graphicx} \ExplSyntaxOn \cs_set_eq:NN \plasma_includegraphics:w \includegraphics \cs_new_protected:...

Top 50 recent answers are included
2020-05-27 09:25:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620656967163086, "perplexity": 4615.750140853153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392142.20/warc/CC-MAIN-20200527075559-20200527105559-00154.warc.gz"}
https://cs.stackexchange.com/questions/140752/pushdown-automaton-with-binary-stack
# Pushdown automaton with binary stack

I have a problem where I'm asked to prove that if P is a pushdown automaton, then there exists another pushdown automaton P' with only two symbols in its stack alphabet that accepts the same language as P. I've tried to encode the stack alphabet of P in binary, using the same number of symbols for each representation so I know where the encoding of one symbol starts and ends, but I don't know how to simulate P with P'.

• What have you tried? Where did you get stuck? Please detail your attempts, and what you don't understand, and we'll help from there. This site does not accept homework dumps. May 26 at 18:11

You're on a good path! Keep at it. Remember that a pushdown can have a finite-state control, and you can remember up to a finite amount of information in the finite-state control. Also, it can have $\epsilon$-transitions (so the read head doesn't consume any of the input). Those might be helpful.
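To make the hint concrete, here is a small Python sketch (purely illustrative; the class and names are mine, not a standard construction) of the fixed-width encoding idea: each stack symbol of P gets a k-bit block, and P' pushes/pops one block bit by bit, remembering the partial block in its finite-state control (modeled here as a short loop):

```python
import math

def block_width(gamma):
    """Fixed width k so every stack symbol of P gets a distinct k-bit code."""
    return max(1, math.ceil(math.log2(len(gamma))))

class BinaryStack:
    """Simulates P's stack over the two-symbol alphabet {'0', '1'}."""
    def __init__(self, gamma):
        self.gamma = list(gamma)          # stack alphabet of P
        self.k = block_width(gamma)
        self.bits = []                    # the actual binary stack of P'

    def push(self, symbol):
        code = format(self.gamma.index(symbol), f"0{self.k}b")
        # P' pushes the k bits one by one (k epsilon-moves of P').
        self.bits.extend(reversed(code))  # reversed so the block pops MSB-first

    def pop(self):
        # P' pops k bits, remembering them in its finite control, then decodes.
        code = "".join(self.bits.pop() for _ in range(self.k))
        return self.gamma[int(code, 2)]

s = BinaryStack(["A", "B", "C"])   # hypothetical 3-symbol stack alphabet
s.push("B"); s.push("C")
print(s.pop(), s.pop())            # -> C B
```

In the real construction, the k intermediate bit positions are remembered in P''s states rather than in a Python loop, which is exactly the "finite amount of information in the finite-state control" plus epsilon-transitions mentioned in the answer.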
2021-10-17 15:52:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5060301423072815, "perplexity": 519.7837044931462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585178.60/warc/CC-MAIN-20211017144318-20211017174318-00716.warc.gz"}
https://deepai.org/publication/dual-optimization-for-kolmogorov-model-learning-using-enhanced-gradient-descent
# Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent

Data representation techniques have made a substantial contribution to advancing data processing and machine learning (ML). Improving predictive power was the focus of previous representation techniques, which unfortunately perform rather poorly on interpretability, in terms of extracting the underlying insights of the data. Recently, the Kolmogorov model (KM) was studied, which is an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables. The existing KM learning algorithms using semi-definite relaxation with randomization (SDRwR) or discrete monotonic optimization (DMO) have, however, limited utility for big data applications because they do not scale well computationally. In this paper, we propose a computationally scalable KM learning algorithm, based on regularized dual optimization combined with an enhanced gradient descent (GD) method. To make our method more scalable to large-dimensional problems, we propose two acceleration schemes, namely, an eigenvalue decomposition (EVD) elimination strategy and a proximal EVD algorithm. Furthermore, a thresholding technique, obtained by exploiting the approximation error analysis and leveraging the normalized Minkowski ℓ_1-norm and its bounds, is provided for the selection of the number of iterations of the proximal EVD algorithm. When applied to big data applications, it is demonstrated that the proposed method can achieve comparable training/prediction performance with significantly reduced computational complexity; roughly two orders of magnitude improvement in terms of the time overhead, compared to the existing KM learning algorithms. Furthermore, it is shown that the accuracy of logical relation mining for interpretability by using the proposed KM learning algorithm exceeds 80%.

## I Introduction

The digital era, influencing and reshaping the behaviors, performances, and standards of societies, communities, and individuals, has presented a big challenge for the conventional mode of data processing.
Data consisting of numbers, words, and measurements becomes available in such huge volume, high velocity, and wide variety that it ends up outpacing human-oriented computing. It is urgent to explore the intelligent tools necessary for processing the staggering amount of data. Machine learning (ML), dedicated to providing insights into patterns in big data and extracting pieces of information hidden inside, arises and has been used in a wide variety of applications, such as computer vision [1], telecommunication [2], and recommendation systems [3, 4, 5, 6]. Nevertheless, traditional ML algorithms become computationally inefficient and fail to scale up well as the dimension of data grows. A major issue that remains to be addressed is to find effective ML algorithms that perform well on both predictability and interpretability and are capable of tackling large-dimensional data with low latency.

### I-A Related Work

Data representation, providing driving forces to the advancing ML-based techniques, has lately attracted a great deal of interest because it transforms large-dimensional data into low-dimensional alternatives by capturing their key features, making them amenable to processing, prediction, and analysis. The gamut of data representation techniques, including matrix factorization (MF) [7, 8], singular value decomposition (SVD)-based models [9, 10], nonnegative models (NNM) [11], and deep neural networks [12], have been shown to perform well in terms of predictive power (the capability of predicting the outcome of random variables that are outside the training set). Unfortunately, these techniques perform rather poorly on interpretability (the capability of extracting additional information or insights that are hidden inside the data) because, on the one hand, they are not developed to directly model the outcome of random variables; on the other hand, they fall under the black-box category, which lacks the transparency and accountability of predictive models [13]. Recently, a Kolmogorov model (KM) that directly represents a binary random variable as a superposition of elementary events in probability space was proposed [14]; KM models the outcome of a binary random variable as an inner product between two structured vectors, one probability mass function vector and one binary indicator vector. This inner product structure exactly represents an actual probability. Carefully examining association rules between two binary indicator vectors grants KM its interpretability, which establishes mathematically logical/causal relations between different random variables. Previously, the KM learning was formulated as a coupled combinatorial optimization problem [14] by decomposing it into two subproblems: i) a linearly-constrained quadratic program (LCQP) and ii) a binary quadratic program (BQP), which can be alternatively solved by utilizing block coordinate descent (BCD). An elegant, low-complexity Frank-Wolfe (FW) algorithm [15] was used to optimally solve the LCQP by exploiting the unit probability simplex structure. In contrast, it is not expected that BQP problems can be solved exactly in polynomial time. To get around this challenge, relaxation methods for linear [16, 17], quadratic [18], second-order cone [19, 20], and semi-definite programming (SDP) [21, 22] were proffered to produce a feasible solution close to the optimal solution of the original problem.
Among these relaxation methods, the semi-definite relaxation (SDR) has been shown to have a tighter approximation bound than the others [23, 24]. Thus, an SDR with randomization (SDRwR) method [25] was employed to optimally solve the BQP of the KM learning in an asymptotic sense [14]. To address the high-complexity issue due to the reliance on interior point methods, a branch-reduce-and-bound (BRB) algorithm based on discrete monotonic optimization (DMO) [26, 27] was proposed. However, the DMO approach only shows its efficacy in a low-dimensional setting and starts to collapse as the dimension increases. In short, the existing KM methods [14, 27] suffer from a similar drawback, namely, being unscalable. Unfortunately, the latter limitation hampers their application to large-scale datasets, for instance, the MovieLens 1 million (ML1M) dataset. It is thus crucial to explore low-complexity and scalable methods for KM learning.

Duality often arises in linear/nonlinear optimization models in a wide variety of applications such as communication networks [28], economic markets [29], and structural design [30]. At the same time, the dual problem often possesses good mathematical, geometric, or computational structures that can be exploited to provide an alternative way of handling intricate primal problems by using iterative methods, such as first-order gradient descent (GD) [31, 32] and quasi-Newton methods [33, 34]. It is for this reason that first-order iterative methods are widely used when optimizing/training large-scale data representations (e.g., deep neural networks) and machine learning algorithms. We are motivated by these iterative first-order methods to effectively resolve the combinatorial challenge of KM learning.

### I-B Overview of Methodologies and Contributions

We present a computationally scalable approach to the KM learning problem, based on dual optimization, by proposing an enhanced GD algorithm. Our main contributions are listed below.

• We provide a reformulation of the BQP subproblem of KM learning to a regularized dual optimization problem that ensures strong duality and is amenable to being solved by a simple GD. Compared to the existing SDRwR [14] and DMO [27] methods, the proposed dual optimization approach proffers a more efficient and scalable solution to KM learning.

• Motivated by the fact that EVD is required at each iteration of GD, which introduces a computational bottleneck when applied to large-scale datasets, an enhanced GD that eliminates the EVD computation when it is feasible is proposed to accelerate the computational speed. When the elimination is infeasible and the EVD must be computed, we explore a proximal EVD based on the Lanczos method [35], taking into account the fact that computing an exact, entire EVD is usually impractical. We focus on analyzing the approximation error of the proximal EVD. A thresholding scheme is then proposed to determine the number of iterations of the proximal EVD by exploiting the upper bound of the approximation error and leveraging a normalized Minkowski ℓ_1-norm.

• Extensive numerical simulation results are presented to demonstrate the efficacy of the proposed KM learning algorithm. When applied to large-scale datasets (e.g., the ML1M dataset), it is shown that the proposed method can achieve comparable training and prediction performance with a significantly reduced computational cost of roughly two orders of magnitude, compared to the existing KM learning algorithms.
Finally, the interpretability of the proposed method is validated by exploiting the mathematically logical relations. We show that the accuracy of logical relation mining by using the proposed method exceeds 80%.

Notation: A bold lowercase letter is a vector and a bold capital letter is a matrix. , , , , , , and denote the th entry, th column, trace, main diagonal elements, rank, largest eigenvalue, and largest singular value of , respectively. is the th entry of , , and is a diagonal matrix with on its main diagonal. is the Frobenius inner product of two matrices and , i.e., . indicates that the matrix is positive semi-definite (PSD). is the th column of the identity matrix of appropriate size. and denote the all-one and all-zero vectors, respectively. , , and denote the symmetric matrix space, nonnegative real-valued vector space, and binary vector space with each entry chosen from , respectively. For , where is the th eigenvalue of , . is the support set of and denotes the cardinality of a set . Finally, indicates that one outcome completely implies another one .

## II System Model and Preliminaries

In this section, we briefly discuss the concept of KM and its learning framework in [14, 36].

### II-A Preliminaries

We consider a double-index set of binary random variables , , where ( and are the index sets of and , respectively) denotes the set of all index pairs. Thus, can represent any two-dimensional learning application (involving matrices) such as movie recommendation systems [11], DNA methylation for cancer detection [37], and beam alignment in multiple-antenna systems [38, 39]. We let be the probability that the event occurs. Since the random variable considered here is binary, the following holds . Without loss of generality, we can focus on one outcome, for instance, . Then, the -dimensional KM of the random variable is given by

$$\Pr(X_{u,i}=1)=\theta_u^T \psi_i, \quad \forall (u,i)\in S, \qquad (1)$$

where is the probability mass function vector and is the binary indicator vector. Specifically, is in the unit probability simplex , i.e., , and denotes the support set of (associated with the case when ). The KM in (1) is built under a measurable probability space defined on ( denotes the sample space and is the event space consisting of subsets of ) and satisfies the following conditions: i) , (nonnegativity), ii) (normalization), and iii) for the disjoint events , (countable additivity) [40]. By (1), is modeled as a stochastic mixture of Kolmogorov elementary events. In addition, note that .

### II-B KM Learning

Assume that the empirical probability of , denoted by , is available from the training set . Obtaining the empirical probabilities for the training set depends on the application and context in practical systems; we will illustrate an example for recommendation systems at the end of this section. The KM learning involves training, prediction, and interpretation as described below.

#### II-B1 Training

The KM training proceeds to optimize and by solving the -norm minimization problem (2). To deal with the coupled combinatorial nature of (2), a BCD method [14, 41] was proposed by dividing the problem in (2) into two subproblems: i) LCQP:

$$\theta_u^{(\tau+1)}=\operatorname*{argmin}_{\theta_u\in P}\ \theta_u^T Q_u^{(\tau)}\theta_u-2\,\theta_u^T w_u^{(\tau)}+\varrho_u, \quad \forall u\in U_K, \qquad (3)$$

where , , , , and is the index of BCD iterations, and ii) BQP:

$$\psi_i^{(\tau+1)}=\operatorname*{argmin}_{\psi_i\in B^D}\ \psi_i^T S_i^{(\tau+1)}\psi_i-2\,\psi_i^T v_i^{(\tau+1)}+\rho_i, \quad \forall i\in I_K, \qquad (4)$$

where , , , and .
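Since the LCQP in (3) is a quadratic program over the unit probability simplex, its linear minimization oracle is just a coordinate pick, which is exactly what the Frank-Wolfe method mentioned next exploits. Below is a generic, illustrative Python sketch (my own variable names; the 2/(t+2) step schedule is the textbook default, not necessarily the rule used in [14]):

```python
import numpy as np

def frank_wolfe_simplex(Q, w, num_iters=200):
    """Minimize f(theta) = theta^T Q theta - 2 theta^T w over the unit simplex."""
    K = Q.shape[0]
    theta = np.full(K, 1.0 / K)               # feasible start: uniform pmf
    for t in range(num_iters):
        grad = 2.0 * (Q @ theta - w)          # gradient of the quadratic
        j = int(np.argmin(grad))              # LMO over the simplex: a vertex e_j
        gamma = 2.0 / (t + 2.0)               # standard diminishing step size
        theta = (1.0 - gamma) * theta         # convex combination stays feasible
        theta[j] += gamma
    return theta

# Tiny usage example with a random PSD Q (illustrative only).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); Q = M @ M.T
theta = frank_wolfe_simplex(Q, rng.standard_normal(4))
print(theta, theta.sum())                     # nonnegative, sums to 1
```

Each iterate is a convex combination of simplex vertices, so no projection step is ever needed; this is why FW is the natural fit for (3).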
By exploiting the fact that the optimization in (3) was carried out over the unit probability simplex , a simple iterative FW algorithm [15] was employed to optimally solve (3), while the SDRwR was employed to asymptotically solve the BQP in (4) [25]. It is also possible to solve (4) directly without a relaxation and/or randomization, based on the DMO approach [27]. However, the DMO in [27] was shown to be efficient only when the dimension is small (e.g., ); its computational cost blows up as increases (e.g., ).

#### II-B2 Prediction

Similar to other supervised learning methods, the trained KM parameters are used to predict probabilities over a test set as

$$\hat{p}_{u,i}\triangleq \theta_u^{\star T}\psi_i^{\star}, \quad \forall (u,i)\in T, \qquad (5)$$

where and .

#### II-B3 Interpretation

KM offers a distinct advantage, namely interpretability, by drawing on fundamental insights into the mathematically logical relations among the data. For two random variables and taken from the training set , i.e., and , if the support sets of and satisfy , then two logical relations between the outcomes of and can be inferred: the first outcome of implies the same one for , while the second outcome of implies the second one for , i.e., and [14, Proposition 1]. It is important to note that the logical relations that emerge from KM are based on the formalism of implications. Thus, they hold from a strictly mathematical perspective, and are general. An implication of the introduced KM learning is illustrated by taking an example of movie recommendation systems as follows.

• Suppose there are two users () who have rated two movie items (). In this example, denotes the event that user likes the movie item , . Then, denotes the probability that user likes item (conversely, denotes the probability that user dislikes item ). Suppose $D=4$ in (1). Then, the four elementary events can represent four different movie genres including i) Comedy, ii) Thriller, iii) Action, and iv) Drama. The empirical probability corresponding to can be obtained by

$$p_{u,i}\triangleq \frac{r_{u,i}}{r_{\max}}, \qquad (6)$$

where denotes the rating score that user has provided for item and is the maximum rating score. In a 5-star rating system ($r_{\max}=5$), we consider the following matrix as an example:

$$\begin{bmatrix} p_{1,1} & p_{1,2}\\ p_{2,1} & p_{2,2} \end{bmatrix} = \begin{bmatrix} 0.8 & 0.4\\ * & 0.6 \end{bmatrix}, \qquad (7)$$

where $p_{2,1}$ is unknown (as in the ‘*’ entry) and constitutes the training set of empirical probability where . By solving the KM learning problem in (2) for the empirical probabilities provided in (7), one can find the optimal model parameters, and (an optimal solution to (2)), which is given by

$$\theta_1^\star=[0.4\ 0.2\ 0.1\ 0.3]^T,\quad \theta_2^\star=[0.1\ 0.3\ 0.1\ 0.5]^T;\quad \psi_1^\star=[1\ 0\ 1\ 1]^T,\quad \psi_2^\star=[0\ 0\ 1\ 1]^T.$$

Then, we can predict the missing entry $p_{2,1}$ by using the learned KM parameters as $\hat{p}_{2,1}=\theta_2^{\star T}\psi_1^{\star}=0.7$. In this example, the following inclusion holds: $\mathrm{supp}(\psi_2^\star)\subseteq \mathrm{supp}(\psi_1^\star)$. Thus, if a certain user (user 1 or 2) likes movie item 2, this logically implies that the user also likes movie item 1.

###### Remark 1 (Extended Discussion on Related Work)

In contrast to the KM in (1), the state-of-the-art method, MF [7, 8], considers an inner product of two arbitrary vectors without having implicit or desired structures in place. While NNM [11] has a similar structure to (1), the distinction is that NNM relaxes the binary constraints on to a nonnegative box, i.e., , and thus sacrifices the highly interpretable nature of KM. Unlike the existing data representation techniques, the KM can exactly represent the outcome of random variables in a Kolmogorov sense. As illustrated in Section V, this in turn improves the prediction performance of the KM compared to other existing data representation techniques.
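The numbers in this example can be checked mechanically. Here is a small Python verification (mine, not from the paper) of the learned parameters: it reproduces the known entries of (7), produces the prediction 0.7 for the missing entry, and confirms the support inclusion behind the logical relation:

```python
import numpy as np

theta = {1: np.array([0.4, 0.2, 0.1, 0.3]), 2: np.array([0.1, 0.3, 0.1, 0.5])}
psi   = {1: np.array([1, 0, 1, 1]),          2: np.array([0, 0, 1, 1])}

for u in (1, 2):
    for i in (1, 2):
        print(f"p_{u},{i} = {theta[u] @ psi[i]:.1f}")
# p_1,1 = 0.8, p_1,2 = 0.4, p_2,2 = 0.6 match (7); p_2,1 = 0.7 is the prediction.

supp = {i: set(np.flatnonzero(psi[i])) for i in (1, 2)}
print(supp[2] <= supp[1])   # True: supp(psi_2) is contained in supp(psi_1),
                            # so "likes item 2" implies "likes item 1"
```

Note that the inclusion goes from item 2 to item 1 here, consistent with the probabilities (0.4 ≤ 0.8 and 0.6 ≤ 0.7).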
Despite its predictability benefit, the existing KM learning methods [14, 27], however, suffer from high computational complexity and a lack of scalability. In particular, the LCQP subproblem, which can be efficiently solved by the FW algorithm, has been well-investigated, while resolving the BQP introduces a major computational bottleneck. It is thus of great importance to study more efficient and fast KM learning algorithms that are readily applicable to large-scale problems.

## III Proposed Method

To scale the KM learning, we propose an efficient, first-order method for the BQP subproblem in (4).

### III-A Dual Problem Formulation

We transform the BQP subproblem in (4) to a dual problem. To this end, we formulate an equivalent form of the BQP in (4) as

$$\min_{x\in\{+1,-1\}^D}\ x^T A_0 x + a^T x, \qquad (8)$$

where in (4) is ignored in (8), , , and . For simplicity, the iteration index is omitted hereinafter. By introducing and , the problem in (8) can be rewritten as

$$\min_{x,X_0}\ \langle X_0,A_0\rangle + a^T x \qquad (9a)$$
$$\text{s.t.}\ \operatorname{diag}(X_0)=\mathbf{1}, \qquad (9b)$$
$$X\succeq 0, \qquad (9c)$$
$$\operatorname{rank}(X)=1. \qquad (9d)$$

Solving (9) directly is NP-hard due to the rank constraint in (9d), thus we turn to convex relaxation methods. The SDR to (9) can be expressed in a homogenized form with respect to as

$$\min_{X}\ f(X)\triangleq\langle X,A\rangle \qquad (10a)$$
$$\text{s.t.}\ \langle B_i,X\rangle=1,\quad i=1,\dots,D+1, \qquad (10b)$$
$$X\succeq 0, \qquad (10c)$$

where and . Note that the diagonal constraint in (9b) has been equivalently transformed to the equality constraints in (10b). While the problem in (9) is combinatorial due to the rank constraint, the relaxed problem in (10) is a convex SDP. Moreover, the relaxation is done by dropping the rank constraint. We further formulate a regularized SDP formulation of (10) as

$$\min_{X}\ f_\gamma(X)\triangleq\langle X,A\rangle+\frac{1}{2\gamma}\|X\|_F^2 \qquad (11)$$
$$\text{s.t.}\ \langle B_i,X\rangle=1,\quad i=1,\dots,D+1,\qquad X\succeq 0,$$

where is a regularization parameter. With a Frobenius-norm term regularized, the strict convexity of (11) is ensured, which in turn makes strong duality hold for the feasible dual problem of (11). In this work, we leverage this fact that the duality gap is zero for (11) (a consequence of strong duality) to solve the dual problem. In addition, the two problems in (10) and (11) are equivalent as . Given the regularized SDP formulation in (11), its dual problem and the gradient of the objective function are of interest.

###### Lemma 1

Suppose the problem in (11) is feasible. Then, the dual problem of (11) is given by (12) where is the vector of Lagrange multipliers associated with each of the equality constraints of (11), , and , in which and , , respectively, are the eigenvalues and corresponding eigenvectors of . The gradient of with respect to is

$$\nabla_u d_\gamma(u)=-\mathbf{1}+\gamma\,\Phi[\Pi_+(C(u))], \qquad (13)$$

where . Proof: See Appendix A.

It is well known that in (12) is a strongly concave (piecewise linear) function, thereby making the Lagrange dual problem (12) a strongly convex problem having a unique global optimal solution [31]. Furthermore, the special structure of of Lemma 1, i.e., being symmetric, allows us to propose computationally efficient and scalable KM learning algorithms which can be applied to handle large-scale datasets with low latency.

### III-B Fast GD Methods For The Dual Problem

#### III-B1 GD

The dual problem in (12), having a strongly concave function , is equivalent to the following unconstrained convex minimization problem (14) with the gradient being . We first introduce a GD, which is detailed in Algorithm 1, to solve (14).
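As a concrete reference point, here is a generic sketch of such a gradient descent with Armijo backtracking line search, which is the line-search rule the next paragraph cites (this is not the authors' implementation; h and grad_h are placeholders for $h_\gamma$ and its gradient, whose exact forms depend on the eigendecomposition of $C(u)$):

```python
import numpy as np

def gd_backtracking(h, grad_h, u0, tol=1e-6, max_iters=500,
                    t0=1.0, alpha=0.3, beta=0.5):
    """Minimize a smooth convex h by GD with Armijo backtracking line search."""
    u = u0.astype(float)
    for _ in range(max_iters):
        g = grad_h(u)
        if np.linalg.norm(g) <= tol:          # stopping criterion (cf. Step 5)
            break
        t = t0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while h(u - t * g) > h(u) - alpha * t * (g @ g):
            t *= beta
        u = u - t * g                          # descent step (cf. Step 4)
    return u

# Usage on a toy strongly convex quadratic (illustrative only).
A = np.diag([1.0, 10.0]); b = np.array([1.0, -2.0])
h = lambda u: 0.5 * u @ A @ u - b @ u
grad_h = lambda u: A @ u - b
print(gd_backtracking(h, grad_h, np.zeros(2)))   # converges to A^{-1} b = [1, -0.2]
```

For the dual problem here, evaluating grad_h is exactly where the EVD cost enters, which motivates the enhanced GD discussed next.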
Note that, due to the fact that the dual problem in (14) is unconstrained, a simple GD method is proposed here: indeed, we would need a projected GD method if a constraint were included, for which the computational complexity would be much larger (because of the projection at each iteration). In Algorithm 1, only the gradient of , i.e., , is required to determine the descent direction. It is therefore a more practical and cost-saving method compared to standard Newton methods, which demand the calculation of second-order derivatives and the inverse of the Hessian matrix. Moreover, Algorithm 1 does not rely on any approximation of the inverse of the Hessian matrix such as the quasi-Newton methods [42]. To find a step size in Step 4, we apply the backtracking line search method [43], which is based on the Armijo-Goldstein condition [44]. The algorithm is terminated when the pre-designed stopping criterion (for instance, in Step 5, where is a predefined tolerance) is satisfied. Finally, the computational complexity of Algorithm 1 is dominated by the EVD of a matrix, needed to compute in Step 2, which is given as .

#### III-B2 Enhanced GD

In Algorithm 1, an EVD of is required at each iteration to determine and . However, it is difficult to employ an EVD per iteration, as it requires a high computational cost () when large-scale datasets are involved (with very large , , and ). It is critical to reduce the computational cost of Algorithm 1 by avoiding the full computation of the EVD or even discarding it. In relation to the original SDP problem in (10), we can understand that the PSD constraint in (10c) is now penalized as the penalty term in , i.e., . Thus, one of the key insights we will use is that: i) if the PSD constraint is not satisfied, the penalty term equals zero, simplifying the objective function as ; in this case, the gradient is simply , eliminating the computation of the EVD, and ii) if the PSD constraint is satisfied, the penalty term becomes nonzero and it requires the computation of the EVD to find out . This fact leads to the following proposition showcasing the update rule for the enhanced GD.

###### Proposition 1

The enhanced GD includes two cases depending on the condition of the PSD constraint:

Case A: if the PSD constraint is not satisfied, then $u_{i+1}=u_i-t_i\mathbf{1}$;
Case B: if the PSD constraint is satisfied, then $u_{i+1}=u_i-t_i\nabla_{u_i}h_\gamma(u_i)$.

The key is to check whether the PSD constraint in Proposition 1 is satisfied or not without the need of computing an EVD. We propose a simple sufficient condition, based on Weyl's inequality [45], as demonstrated in the proposed Algorithm 2. In Algorithm 2, we focus on modifying Step 2 in Algorithm 1 by using an initial with equal entries and exploiting the fact that if is not PSD (Case A in Proposition 1) to reduce the computational cost of the EVD. Step in Algorithm 2 is due to the fact that the th eigenvalue of is . One of the key insights we leverage is that the choice of the sequence of gradient directions, i.e., , , does not alter the optimality of the dual problem in (14). We approach the design of with the goal of eliminating the computation of the EVD to the greatest extent. Moreover, in Step of Algorithm 2, the condition (Case A in Proposition 1) holds because of Weyl's inequality [45]. Thus the EVD of is required only when both the conditions "all the elements of are the same" and "" are violated, as in Phase II-B. Notice that Step 2 in computing of Algorithm 1 has been transformed into two different phases (each phase includes two sub-phases) in Algorithm 2.
Algorithm 2 executes the four sub-phases in order and irreversibly. To be specific, the algorithm first enters Phase I at the initial iteration and ends up in Phase II. Once the algorithm enters Phase II, there is no way to return back to Phase I. Algorithms 1 and 2 are based on GD, and thus the enhanced GD does not alter the convergence of Algorithm 1 [31].

###### Proposition 2 (Convergence of Enhanced GD)

Let be the optimal solution to the strongly convex problem in (14). Then if we run Algorithm 2 for iterations, it will yield a solution which satisfies

$$h_\gamma(u_k)-h_\gamma(u^\star)\le O(c^k), \quad 0<c<1.$$

Intuitively, this means that the enhanced GD is guaranteed to converge with the convergence rate . This phenomenon is captured in Fig. 1, in which the objective function value as a function of the iteration number is depicted for Algorithms 1 and 2. In terms of flops, Algorithm 2 is more efficient than the original GD in Algorithm 1. This results in a dramatic reduction in the running time of Algorithm 2, since we mainly move along the direction obtained without the computation of an EVD.

### III-C Randomization

The solution to the dual problem in (14) (or equivalently (12)) produced by Algorithm 2 is not yet a feasible solution to the BQP in (4). A randomization procedure [46] can be employed to extract a feasible binary solution to (4) from the SDP solution of (11). One typical design of the randomization procedure for BQP is to generate feasible points from Gaussian random samples via rounding [47]. The Gaussian randomization procedure provides a tight approximation with probability , asymptotically in [48]. By leveraging the fact that the eigenvalues and corresponding eigenvectors of can be found by Steps 13 and 14 of Algorithm 2, we have

$$X^\star=\gamma\,\Pi_+(C(u^\star))=\gamma V_+\Lambda_+V_+^T=LL^T,$$

where and . A detailed randomization procedure is provided in Algorithm 3. In Step 8 of Algorithm 3, the -dimensional vector is first recovered from a -dimensional vector by considering the structure of in (9), and then used to approximate the BQP solution based on (8). Also note that the randomization performance improves with . In practice, we only need to choose a sufficient but not excessive (for instance, ) achieving a good approximation for the BQP solution. Moreover, its overall computational complexity is much smaller than that of the conventional randomization algorithms [46, 47, 14] because our proposed Algorithm 3 does not require the computation of the Cholesky factorization.

### III-D Overall KM Learning Algorithm

Incorporating Algorithm 2 and Algorithm 3, the overall KM learning framework is described in Algorithm 4. Note that the index of BCD iterations that had been omitted is recovered here and denotes the total number of BCD iterations for KM learning. In Algorithm 4, the BCD method is adopted to refine and until it converges to a stationary point of (2). In fact, the proof of convergence (to a stationary solution) for Algorithm 4 is exactly the same as that of Algorithm 1 in [14]. In practice, we can use to control the termination of Algorithm 4.

## IV Proximal EVD and Error Analysis

In this section, several techniques are discussed to further accelerate Algorithm 2.

### IV-A Initial Step Size

A good initial step size is crucial for the convergence speed of the enhanced GD. In Phase I-A of Algorithm 2, we have

$$\lambda(C(u_{i+1}))=\lambda(-A)-u_{i+1}=\lambda(-A)-u_i+t_i\mathbf{1}.$$

If , the following holds where . Therefore, in the first iteration of Phase I-A, we can set an appropriate step size so that has at least one positive eigenvalue, where .
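Looping back to the randomization step of Section III-C above, here is a minimal sketch of the standard Gaussian rounding it describes (my own variable names; I assume the homogenizing entry that plays the role of the scalar 1 is the last coordinate of the SDR factor):

```python
import numpy as np

def gaussian_randomization(L, A0, a, num_samples=100, rng=None):
    """Round an SDR factor L (with X* = L L^T) to a feasible point of the BQP (8)."""
    rng = rng or np.random.default_rng()
    best_x, best_val = None, np.inf
    for _ in range(num_samples):
        xi = rng.standard_normal(L.shape[1])   # Gaussian sample in the factor space
        z = np.sign(L @ xi)
        z[z == 0] = 1.0                        # break sign ties deterministically
        x = z[:-1] * z[-1]                     # de-homogenize (last entry acts as '1')
        val = x @ A0 @ x + a @ x               # BQP objective of (8)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

The factor L comes for free from the eigenpairs already computed in Algorithm 2, which is why no separate Cholesky factorization is needed; keeping the best of a modest number of samples (the text suggests a sufficient but not excessive count) is the usual practice.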
With the above modification of Algorithm 2 (the initial step-size choice of Section IV-A), we can reduce the execution time spent in Phase I-A, and thus the total number of iterations required by the enhanced GD can be reduced, as shown in Fig. 2.

### IV-B Proximal EVD

Compared to the original GD in Algorithm 1, the enhanced GD in Algorithm 2 has reduced the costly EVD substantially. Nevertheless, the EVD is still necessary in Algorithm 2 when the algorithm enters Phase II-B. In order to further accelerate the algorithm, we employ and modify the Lanczos method to numerically compute the proximal EVD of in Algorithm 2. The Lanczos algorithm [49] is a special case of the Arnoldi method [50] when the matrix is symmetric. In principle, it is based on an orthogonal projection of onto the Krylov subspace, where denotes the dimension of the Krylov subspace. An algorithmic description of a modified Lanczos method is presented in Algorithm 5. Different from the Arnoldi method, the matrix constructed by Algorithm 5 is tridiagonal and symmetric, i.e., the entries of in Algorithm 5 satisfy , , and , . Also, Algorithm 5 iteratively builds an orthonormal basis, i.e., , for such that and , where . Let , , be the eigenpairs of . Then, the eigenvalues/eigenvectors of can be approximated by the Ritz pairs , i.e., (15) With the increase of the dimension of the Krylov subspace, the approximation performance improves at the price of additional computation. Thus, in practice, we adopt a value of balancing the tradeoff between the accuracy of approximation and the computational complexity.

### IV-C Analysis of Approximation Error and Thresholding Scheme

In this subsection, we analyze the approximation error of the proximal EVD and propose a thresholding scheme for selecting an appropriate in Algorithm 5. The main results are provided in the following lemmas.

###### Lemma 2

Let be any eigenpair of and let in (15) be an approximated eigenpair (Ritz pair) of in Algorithm 5. Then the following holds: i) The residual error is upper bounded by

$$\mathrm{re}(C(u_i)\hat{v}_i,\hat{\lambda}_i\hat{v}_i)\le\beta_{m+1}. \qquad (16)$$

ii) The maximum approximation error of the eigenvalues of is bounded by

$$\max_i|\lambda_i-\hat{\lambda}_i|\le\beta_{m+1},\quad i\in\{1,\dots,m\}, \qquad (17)$$

where is the associated true eigenvalue of . iii) The minimum approximation error of the eigenvalues of is bounded by

$$\min_i|\lambda_i-\hat{\lambda}_i|\le\beta_{m+1}|q_i(m)|,\quad i\in\{1,\dots,m\}. \qquad (18)$$

Proof: See Appendix B.

Lemma 2 indicates that the error bounds of the approximate eigenvalues of obtained by using the proximal EVD in Algorithm 5 largely depend on . Indeed, the upper bounds in (17) and (18) are quite tight, as will be seen in Section V. Inspired by Lemma 2, finding an upper bound of that only depends on the trace of is of interest.

###### Lemma 3

in Algorithm 5 is upper bounded by

$$\beta_{m+1}\le 2m\big((\sigma_{\max,\mathrm{UB}}-\sigma_{\max,\mathrm{LB}})+\hat{\sigma}_{\max,\mathrm{Minkowski}}\big), \qquad (19)$$

where
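Below is a bare-bones Lanczos iteration in the spirit of Algorithm 5 (a textbook sketch, without the reorthogonalization a robust implementation needs, and assuming no breakdown): it builds the tridiagonal T and the orthonormal basis Q; the Ritz pairs come from the small eigenproblem of T, and β_{m+1}|q_i(m)| gives the computable residual estimate of Lemma 2:

```python
import numpy as np

def lanczos(A, m, rng=None):
    """m steps of Lanczos on symmetric A: returns basis Q (n x m),
    tridiagonal T (m x m), and the next off-diagonal beta_{m+1}."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m + 1)
    q = rng.standard_normal(n); q /= np.linalg.norm(q)   # random unit start vector
    q_prev = np.zeros(n)
    for j in range(m):
        Q[:, j] = q
        w = A @ q - beta[j] * q_prev      # three-term recurrence
        alpha[j] = q @ w
        w -= alpha[j] * q
        beta[j + 1] = np.linalg.norm(w)   # assumes beta > 0 (no breakdown)
        q_prev, q = q, w / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)
    return Q, T, beta[m]

A = np.random.default_rng(1).standard_normal((50, 50)); A = (A + A.T) / 2
Q, T, beta_next = lanczos(A, m=20)
ritz_vals, S = np.linalg.eigh(T)                 # Ritz values/vectors of T
resid_est = beta_next * np.abs(S[-1, :])         # beta_{m+1} |q_i(m)|, as in Lemma 2
print(ritz_vals[-1], np.linalg.eigvalsh(A)[-1], resid_est[-1])
```

Extreme eigenvalues converge first in Lanczos, which is exactly what the penalty term Π_+(C(u)) needs, so a modest m usually suffices; the thresholding scheme of Section IV-C is about choosing that m from the bound (19).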
2021-09-24 02:58:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.891451895236969, "perplexity": 751.8032827489809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00155.warc.gz"}
http://www.ipm.ac.ir/ViewPaperInfo.jsp?PTID=14410&school=Physic
School of Physics (IPM / Physics / 14410)

Title: Ground State Properties of Ising Chain with Random Monomer-Dimer Couplings
Author(s): 1. S.B. Seyedein Ardebili, 2. R. Sepehrinia
Status: Published
Journal: J. Stat. Phys.
No.: 3, Vol.: 163, Year: 2016, Pages: 568-575
Supported by: IPM

Abstract: We study analytically the one-dimensional Ising model with a random binary distribution of ferromagnetic and antiferromagnetic exchange couplings at zero temperature. We introduce correlations in the disorder by assigning a dimer of one type of coupling with probability x, and a monomer of the other type with probability 1-x. We find that the magnetization behaves differently from the original binary model. In particular, depending on which type of coupling comes in dimers, magnetization jumps vanish at a certain set of critical fields. We explain the results based on the structure of the ground-state spin configuration.
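The abstract alone does not fix all conventions, but the zero-temperature magnetization of such a short random ±J chain can be brute-forced directly. Here is a toy Python sketch under my own assumptions (Hamiltonian H = -Σ J_i s_i s_{i+1} - h Σ s_i, open chain, ferromagnetic couplings placed as dimers and antiferromagnetic ones as monomers), not a reproduction of the paper's analytical calculation:

```python
import numpy as np
from itertools import product

def ground_state_magnetization(J, h):
    """Exact ground-state magnetization per spin of an open Ising chain
    with bond couplings J and external field h, by full enumeration."""
    n = len(J) + 1
    best_e, best_m = np.inf, 0.0
    for s in product((-1, 1), repeat=n):
        s = np.array(s)
        e = -np.sum(J * s[:-1] * s[1:]) - h * s.sum()
        if e < best_e:
            best_e, best_m = e, s.mean()
    return best_m

rng = np.random.default_rng(0)
x = 0.4                        # dimer probability, as in the abstract
J = []
while len(J) < 12:             # dimers of one coupling type, monomers of the other
    if rng.random() < x:
        J += [+1.0, +1.0]      # ferromagnetic dimer (assumed convention)
    else:
        J += [-1.0]            # antiferromagnetic monomer
for h in (0.5, 1.0, 2.5):
    print(h, ground_state_magnetization(np.array(J[:12]), h))
```

Sweeping h on such samples is one way to see the magnetization plateaus and jumps the abstract refers to, at least qualitatively for small chains.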
2020-02-21 23:27:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8369080424308777, "perplexity": 2941.4125020391493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145538.32/warc/CC-MAIN-20200221203000-20200221233000-00444.warc.gz"}
http://mathv.chapman.edu/~jipsen/structures/doku.php/residuated_lattices
## Residuated lattices

Abbreviation: RL

### Definition

A residuated lattice is a structure $\mathbf{L}=\langle L, \vee, \wedge, \cdot, e, \backslash, /\rangle$ of type $\langle 2,2,2,0,2,2\rangle$ such that

- $\langle L, \cdot, e\rangle$ is a monoid
- $\langle L, \vee, \wedge\rangle$ is a lattice
- $\backslash$ is the left residual of $\cdot$: $y\leq x\backslash z\Longleftrightarrow xy\leq z$
- $/$ is the right residual of $\cdot$: $x\leq z/y\Longleftrightarrow xy\leq z$

##### Morphisms

Let $\mathbf{L}$ and $\mathbf{M}$ be residuated lattices. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $h:L\rightarrow M$ that is a homomorphism: $h(x\vee y)=h(x)\vee h(y)$, $h(x\wedge y)=h(x)\wedge h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(x\backslash y)=h(x)\backslash h(y)$, $h(x/y)=h(x)/h(y)$, $h(e)=e$

Example 1:

### Properties

Classtype variety decidable 1) implementation undecidable undecidable no unbounded yes yes yes, $n=2$ no yes no no no no

### Finite members

$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &3\\ f(4)= &20\\ f(5)= &149\\ f(6)= &1488\\ f(7)= &18554\\ f(8)= &295292\\ \end{array}$

### References

1) Hiroakira Ono, Yuichi Komori, "Logics without the contraction rule", J. Symbolic Logic 50 (1985), 169–201.
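As a quick sanity check of the residuation conditions, here is a Python snippet (mine, not part of the page) verifying them on the smallest example: the two-element chain viewed as a residuated lattice with $x\cdot y=x\wedge y=\min$, $e=1$, and $x\backslash z = z/x = \max(1-x,\, z)$:

```python
from itertools import product

L = (0, 1)
mul = lambda x, y: min(x, y)          # monoid operation (here: meet), unit e = 1
res = lambda x, z: max(1 - x, z)      # x \ z, and by commutativity also z / x
leq = lambda x, y: x <= y

# y <= x\z  iff  x*y <= z   (and symmetrically for / since mul is commutative)
assert all(leq(y, res(x, z)) == leq(mul(x, y), z) for x, y, z in product(L, L, L))
assert all(mul(x, 1) == x and mul(1, x) == x for x in L)
print("residuation law holds on the 2-element chain")
```

Because the monoid operation is commutative here, the left and right residuals coincide; in a general residuated lattice they need not.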
2017-08-20 11:35:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444293975830078, "perplexity": 2956.4880492869224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106465.71/warc/CC-MAIN-20170820112115-20170820132115-00631.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?jrnid=de&wshow=issue&year=1972&volume=8&volume_alt=&issue=1&issue_alt=&option_lang=eng
Differ. Uravn., 1972, Volume 8, Issue 1

Partial Differential Equations

- A mixed problem for a hyperbolic equation degenerate on a part of the boundary of a region (V. A. Bryukhanov), 3
- The Goursat and Darboux problems for a certain class of hyperbolic equations (V. N. Vragov), 7
- Certain estimates for equations of mixed type (D. K. Gvazava), 17
- The generalized solvability of boundary value problems for systems of differential equations of mixed type (V. P. Didenko), 24
- The uniqueness of the solution of the contact inverse problem of potential theory (V. M. Isakov), 30
- A criterion for the continuity of the solution of the Goursat problem for a certain degenerate equation (T. Sh. Kal'menov), 41
- Equations of mixed type and degenerate hyperbolic equations in multidimensional domains (G. D. Karatoprakliev), 55
- Weak solutions of the Darboux problem (V. V. Kovrizhkin), 68
- The Cauchy problem for a second order quasilinear hyperbolic equation with initial data on the curve of parabolicity (N. A. Lar'kin), 76
- Asymptotic behavior of the solution of the Cauchy problem as $t\rightarrow \infty$ for a certain hyperbolic system that describes the motion of a rotating fluid (V. N. Maslennikova), 85
- The Darboux problem for a degenerate hyperbolic system (M. Meredov), 97
- On a priori estimates for the Tricomi and Darboux problems (A. M. Nakhushev), 107
- An interior inverse problem of the metaharmonic potential for a body that is close to the given one (A. I. Prilepko), 118
- Boundary value problems for the system $\operatorname{rot} u+\lambda u=h$ (R. S. Saks), 126
- Certain boundary value problems for equations of mixed type with one and two curves of degeneracy (M. S. Salakhitdinov, A. Tolipov), 134
- The uniqueness of the solution of a certain problem of A. V. Bicadze (A. P. Soldatov), 143
- A certain class of projection methods (V. D. Charushnikov), 147
- Interior inverse problems for the logarithmic potential (V. G. Cherednichenko), 154
- A one-dimensional scattering theory. I (A. B. Shabat), 164
- Reduction of the problem of the oblique derivative to a Fredholm integro-differential equation (A. Yanushauskas), 179

Letters to the Editor

- Comment on conditions for the convergence of spectral expansions corresponding to self-adjoint extensions of elliptic operators (V. A. Il'in, Sh. A. Alimov), 190
2020-10-25 17:53:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.537091076374054, "perplexity": 611.7803898047555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889574.66/warc/CC-MAIN-20201025154704-20201025184704-00164.warc.gz"}
https://discuss.tlapl.us/msg03285.html
Following up on Saksham's reply: integer division is written \div in TLA+ and is supported by the prover. Proving theorems about an expression CHOOSE x \in S : P(x) is more complicated because it usually requires showing that there is some element of S that satisfies P. Since you mention min and max, below is a proof by induction over finite sets that shows that any finite and non-empty set of integers has a maximum element. (Your module needs to extend the modules FiniteSetTheorems, which contains FS_Induction and other useful theorems about finite sets, and TLAPS.)

Assuming that you are more interested in proving theorems about an algorithm than about elementary lemmas about data, you may want to leave such lemmas unproved.

Stephan

Max(S) == CHOOSE x \in S : \A y \in S : y <= x

LEMMA MaxIntegers ==
  ASSUME NEW S \in SUBSET Int, S # {}, IsFiniteSet(S)
  PROVE  /\ Max(S) \in S
         /\ \A y \in S : y <= Max(S)
<1>. DEFINE P(T) == T \in SUBSET Int /\ T # {} => \E x \in T : \A y \in T : y <= x
<1>1. P({})  OBVIOUS
<1>2. ASSUME NEW T, NEW x, P(T), x \notin T
      PROVE  P(T \cup {x})
  <2>. HAVE T \cup {x} \in SUBSET Int
  <2>1. CASE \A y \in T : y <= x
    BY <2>1, Isa
  <2>2. CASE \E y \in T : ~(y <= x)
    <3>. T # {}
      BY <2>2
    <3>1. PICK y \in T : \A z \in T : z <= y
      BY <1>2
    <3>2. x <= y
      BY <2>2, <3>1
    <3>3. QED
      BY <3>1, <3>2
  <2>. QED  BY <2>1, <2>2
<1>. HIDE DEF P
<1>3. P(S)  BY <1>1, <1>2, FS_Induction, IsaM("blast")
<1>. QED  BY <1>3, Zenon DEF Max, P

On 5 Dec 2019, at 04:33, Hans He wrote:

I'm able to use the above operator with Z3. Thank you for your help.

Hans

On Wed, Dec 4, 2019 at 6:02 PM Saksham Chand wrote:

The following should work and invoke Z3:

divide_by_two(n) == n \div 2

On Wed, Dec 4, 2019 at 5:47 PM wrote:

Hi,

I'm working on proving an algorithm using TLAPS that requires dividing integers by two. I understand there's no support for real numbers/division in the backend provers, so I am using an operator:

divide_by_two(n) == CHOOSE k \in Nat: n=2*k

The problem I'm running into is when trying to prove properties using the above operator with set expressions. The following example demonstrates this:

EXTENDS Integers, TLAPS, FiniteSets

divide_by_two(n) == CHOOSE k \in Nat: n=2*k

CONSTANTS N
VARIABLES x

Init == x = divide_by_two(N)
PositiveInvariant == x >= 0

ASSUME NumberAssumption == N \in {2,4,6,8,10}

THEOREM PositiveDivisionProperty == Init => PositiveInvariant
  <1> SUFFICES ASSUME Init
               PROVE  PositiveInvariant
    OBVIOUS
  <1> QED
    BY NumberAssumption DEF Init, PositiveInvariant, divide_by_two

The backend provers are unable to prove the above theorem. SMT specifically times out and increasing the timeout doesn't appear to help. Changing the assumption to:

ASSUME NumberAssumption == N = 10

will successfully prove the theorem. Are the backend provers able to work with a 'CHOOSE' expression in conjunction with sets? Is there anything wrong with how I'm specifying the assumption?
The actual algorithm will require using the division operator with a few 'min'/'max' operators, which may also complicate the proof process, so I'm wondering if TLAPS is able to prove these sorts of algorithms that require arithmetic.

Best,
Hans
2021-06-17 02:42:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719820022583008, "perplexity": 5100.90513142745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626465.55/warc/CC-MAIN-20210617011001-20210617041001-00100.warc.gz"}
https://flaviocopes.com/php-including-other-files/
Inside a PHP file you can include other PHP files. We have four constructs for this, all serving the same purpose but with slightly different behavior: include, include_once, require, require_once.

include loads the content of another PHP file, using a relative path. require does the same, but if there's any error doing so, the program halts; include will only generate a warning. You can decide between them depending on your use case: if you want your program to exit when it can't import the file, use require.

include_once and require_once do the same thing as their counterparts without _once, but they make sure the file is included/required only once during the execution of the program. This is useful, for example, if multiple files load some other file, and you want to avoid loading it more than once.

My rule of thumb is to never use plain include or require, because you might load the same file twice; include_once and require_once help you avoid this problem. Use include_once when you want to conditionally load a file (for example, "load this file instead of that"), and in all other cases use require_once.

Here's an example:

require_once('test.php'); // loads the functions and variables defined in the test.php file

The above syntax includes the test.php file from the folder that contains the file this code is in. You can use relative paths to include a file in the parent folder:

require_once('../test.php');

or to go into a subfolder:

require_once('test/test.php');

You can also use absolute paths:

require_once('/var/www/test/file.php');

In modern PHP codebases that use a framework, files are generally loaded automatically, so you'll have less need for the above functions.
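Building on the rule of thumb above, here is a minimal sketch (the file names and the APP_DEBUG check are made up for illustration, not taken from the post):

<?php
// Optional, conditional load: include_once only warns if the file is missing.
if (getenv('APP_DEBUG')) {
    include_once('debug-helpers.php');
}

// Mandatory dependencies: require_once halts with a fatal error if missing.
require_once('config.php');
require_once('helpers.php');

Even if this script is itself included from several entry points, each file above is loaded at most once, thanks to the _once variants.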
2022-09-27 15:16:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.542742908000946, "perplexity": 1662.609991965993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00653.warc.gz"}
https://www.neetprep.com/question/34716-coordinates-moving-particle-time-t-given-x--t-y--t-speed-particle-time-t-given--t-t-t/55-Physics--Motion-Plane/678-Motion-Plane
The coordinates of a moving particle at any time $t$ are given by $x = \alpha t^3$ and $y = \beta t^3$. The speed of the particle at time $t$ is given by

(1) $\sqrt{\alpha^2+\beta^2}$

(2) $3t\sqrt{\alpha^2+\beta^2}$

(3) $3t^2\sqrt{\alpha^2+\beta^2}$

(4) $t^2\sqrt{\alpha^2+\beta^2}$
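A worked derivation, added for clarity (the original page gives only the options): differentiating each coordinate gives $\frac{dx}{dt} = 3\alpha t^2$ and $\frac{dy}{dt} = 3\beta t^2$, so the speed is $v = \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2} = \sqrt{9\alpha^2 t^4 + 9\beta^2 t^4} = 3t^2\sqrt{\alpha^2+\beta^2}$, which is option (3).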
2020-02-27 01:00:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8275794386863708, "perplexity": 391.26763859552494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00160.warc.gz"}
http://xml.jips-k.org/pub-reader/view?doi=10.3745/JIPS.04.0111
# Block Sparse Signals Recovery Algorithm for Distributed Compressed Sensing Reconstruction

Xingyi Chen, Yujie Zhang and Rui Qi

## Abstract

Distributed compressed sensing (DCS) states that we can recover sparse signals from very few linear measurements. Various studies of DCS have been carried out recently. In many practical applications, there is no prior information on the signals except for standard sparsity. A typical example is block-sparse signals, whose non-zero coefficients occur in clusters, while the cluster pattern is usually unavailable as prior information. To address this issue, a new algorithm, called backtracking-based adaptive orthogonal matching pursuit for block distributed compressed sensing (DCSBBAOMP), is proposed. In contrast to existing block methods, which consider single-channel signal reconstruction, DCSBBAOMP performs multi-channel signal reconstruction. Moreover, this algorithm is an iterative approach that consists of forward selection and backward removal stages in each iteration. An advantage of this method is that perfect reconstruction performance can be achieved without prior information on the block-sparsity structure. Numerical experiments are provided to illustrate the desirable performance of the proposed method.

Keywords: Block Sparse Signals, Compressed Sensing, Distributed Compressed Sensing, Iteration Algorithm

## 1. Introduction

Distributed compressed sensing (DCS) is a signal recovery framework in which both sensing and compression are performed at the same time. It uses the sparsity of signals to recover them from the measurements [1,2]. The DCS model can be stated as:

##### (1)

[TeX:] $$\mathbf{Y}=\mathbf{\Phi} \mathbf{X}$$

where [TeX:] $$\mathbf{\Phi} \in \mathbb{R}^{M \times N}$$ is a random measurement matrix. This model describes an under-determined system with [TeX:] $$M<N$$. Here [TeX:] $$\mathbf{Y}=\left[\mathbf{y}_{1}, \mathbf{y}_{2}, \cdots, \mathbf{y}_{J}\right]$$ are the measurement vectors and [TeX:] $$\mathbf{X}=\left[\mathbf{x}_{1}, \mathbf{x}_{2}, \cdots, \mathbf{x}_{J}\right]$$ are unknown sparse vectors. The signal vector [TeX:] $$\mathbf{x}_{i}$$ has [TeX:] $$K^{(i)}$$ non-zero components, with support of cardinality [TeX:] $$\left|\operatorname{supp}\left(\mathbf{x}_{i}\right)\right|=\left\|\mathbf{x}_{i}\right\|_{0}=K^{(i)}$$. The goal of the DCS model is to reconstruct X from Y. The method is that

##### (2)

[TeX:] $$\mathbf{y}_{i}=\mathbf{A} \mathbf{x}_{i} \quad \text { subject to } \min \left\|\mathbf{x}_{i}\right\|_{0} \quad \forall i$$

where [TeX:] $$\left\|\mathbf{x}_{i}\right\|_{0}$$ is the [TeX:] $$l_{0}$$-norm of [TeX:] $$\mathbf{x}_{i}$$. In practice, the DCS problem can be relaxed to the constrained optimization problem [3]:

##### (3)

[TeX:] $$\underset{\left\{\mathbf{x}_{i}\right\}}{\arg \min } \sum_{i=1}^{J}\left\|\mathbf{y}_{i}-\mathbf{A} \mathbf{x}_{i}\right\|_{2}^{2} \text { subject to }\left\|\mathbf{x}_{i}\right\|_{0} \leq K \quad \forall i$$

Here, the parameter K denotes the maximum sparsity of the [TeX:] $$\mathbf{x}_{i}$$. If the signals X are independent, the reconstruction problem can be divided into J individual problems, and we can reconstruct each signal independently using the compressed sensing (CS) framework [4,5]. The more interesting case is when the signals [TeX:] $$\mathbf{x}_{i}, i=1,2, \cdots, J$$ are correlated with each other. Then we can reconstruct X jointly using DCS algorithms [6-10].
Moreover, it was shown that signal reconstruction based on DCS can save about 30% of the measurements compared with using CS on each source separately [6]. Problem (3) is thought to be NP-hard [6]. Many pursuit algorithms [7-10] have been introduced to recover the signals with tractable complexity. It has been shown in [11] that an l1-norm constraint is sufficient to ensure the sparsest solution in many high-dimensional cases.

DCS was first defined by Duarte et al. [12], who proposed two different joint sparse models (JSMs). Subsequently, many algorithms were developed. A new DCS algorithm that exploits signal-to-signal correlation structures was introduced in [7]. Wakin et al. [13] proposed a simultaneous orthogonal matching pursuit (OMP) [14] method for DCS (named jointOMP), which can reduce the number of measurements. Unfortunately, there is no backtracking mechanism in jointOMP, which leads to worse recovery performance. To overcome this drawback, the subspace pursuit method for DCS (DCSSP) was proposed in [15]. A new joint sparse recovery method, called orthogonal subspace matching pursuit (OSMP), was proposed in [16]. Nevertheless, those algorithms have a common limitation: the signal sparsity must be known in advance, which is usually impractical. Recently, two new recovery methods, named the forward-backward pursuit method for DCS (DCSFBP) and the backtracking-based adaptive OMP method for DCS (DCSBAOMP), were proposed [9,10]. In [17], the l1/l2-norm is used to enforce joint sparsity on the signals.

However, the above methods do not take into account the structure of the signals or their representations. Sometimes each signal xj under consideration is structured in nature, e.g., with the block-sparse structure in which the non-zero coefficients of the signals occur in clusters. Block-sparse signals appear in many applications, including gene expression analysis [18] and equalization of sparse communication channels [19]. Block-sparse signal reconstruction algorithms have been introduced and investigated in the recent literature. Here, we focus on the DCS of block-sparse signals. For a given matrix [TeX:] $$\Phi \in \mathbb{R}^{M \times N}$$, we reconstruct the block-sparse signals X from [TeX:] $$\mathbf{Y}=\mathbf{\Phi} \mathbf{X}$$. Here X with block size d can be formed as

##### (4)

[TeX:] $$\mathbf{X}=\left( \begin{array}{c}{\mathbf{X}[1]} \\ {\mathbf{X}[2]} \\ {\vdots} \\ {\mathbf{X}[l]}\end{array}\right)$$

where [TeX:] $$\mathbf{X}[i]=\left(\mathbf{x}_{(i-1) d+1: i d, 1}, \mathbf{x}_{(i-1) d+1: i d, 2}, \cdots, \mathbf{x}_{(i-1) d+1: i d, J}\right) \in \mathbb{R}^{d \times J}$$ denotes the ith block matrix with length d and N = ld. The matrix X is said to be K-block sparse if X[i] has nonzero Euclidean norm for at most K indices i. Denote [TeX:] $$\|\mathbf{X}\|_{2,0}=\sum_{i=1}^{l} I\left(\|\mathbf{X}[i]\|_{2}>0\right)$$, where [TeX:] $$I\left(\|\mathbf{X}[i]\|_{2}>0\right)$$ is an indicator function. In this case, a block K-sparse matrix X can be defined by [TeX:] $$\|\mathbf{X}\|_{2,0} \leq K$$ [20]. The total sparsity of X is then [TeX:] $$K_{\text {total}}=K \times d \times J$$. Similarly to (4), the measurement matrix Φ can be written in the following block form,

##### (5)

[TeX:] $$\mathbf{\Phi}=\left(\underbrace{\varphi_{1}, \cdots, \varphi_{d}}_{\Phi[1]}, \underbrace{\varphi_{d+1}, \cdots, \varphi_{2 d}}_{\Phi[2]}, \cdots, \underbrace{\varphi_{N-d+1}, \cdots, \varphi_{N}}_{\Phi[l]}\right)$$

where Φ[i] is a submatrix of size [TeX:] $$M \times d$$.
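To make the block structure of Eqs. (1), (4) and (5) concrete, here is a minimal NumPy sketch (not from the paper; the dimensions mirror the first experiment in Section 3, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, J = 256, 128, 3   # signal length, number of measurements, channels
d = 2                   # block size, so l = N/d blocks
K = 15                  # number of non-zero blocks (K_total = K*d*J = 90,
                        # i.e. sparse_value = K_total/J = 30 below)
l = N // d

# Active blocks are shared across the J channels (joint block sparsity).
active = rng.choice(l, size=K, replace=False)

# Block-sparse X as in Eq. (4): Gaussian entries in active blocks, zeros elsewhere.
X = np.zeros((N, J))
for i in active:
    X[i * d:(i + 1) * d, :] = rng.standard_normal((d, J))

# Gaussian measurement matrix Phi, partitioned into blocks Phi[i] as in Eq. (5),
# and the DCS measurements Y = Phi X of Eq. (1).
Phi = rng.standard_normal((M, N))
Y = Phi @ X
print(Y.shape)  # (128, 3)
```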
The support of the coefficient matrix X is [TeX:] $$\Gamma(\mathbf{X})=\left\{i :\|\mathbf{X}[i]\|_{2} \neq 0\right\}$$ [20]. Here, the reconstruction problem can be written as the constrained optimization problem

##### (6)

[TeX:] $$\underset{\{\mathbf{X}[i]\}}{\arg \min } \left\| \mathbf{Y}-[\mathbf{\Phi}[1], \cdots, \mathbf{\Phi}[l]] \left[ \begin{array}{c}{\mathbf{X}[1]} \\ {\vdots} \\ {\mathbf{X}[l]}\end{array}\right] \right\|_{2}^{2} \text { subject to } \|\mathbf{X}\|_{2,0}=\sum_{i=1}^{l} I\left(\|\mathbf{X}[i]\|_{2}>0\right) \leq K$$

This problem is also NP-hard. One natural idea is to replace the l2/l0-norm by the l2/l1-norm, that is,

##### (7)

[TeX:] $$\min \|\mathbf{X}\|_{2,1} \text { subject to } \mathbf{Y}=\mathbf{\Phi} \mathbf{X}$$

where [TeX:] $$\|\mathbf{X}\|_{2,1}=\sum_{i=1}^{l}\|\mathbf{X}[i]\|_{2}$$.

Specifically, in [21], the block compressive sampling matching pursuit (BCoSaMP), which is based on the compressive sampling matching pursuit (CoSaMP) [22], was proposed. Eldar et al. [23] proposed the block version of the OMP algorithm (BOMP) for block-sparse signal recovery. In [24], a dictionary optimization problem for block-sparse signal reconstruction was investigated, but this method needs the maximum block length as prior information. Other approaches, such as iterative hard thresholding (IHT) [21], block subspace pursuit (BSP) [25] and block StOMP (BStOMP) [26], have also been investigated. However, most block algorithms deal with single-signal reconstruction.

DCS algorithms do not take into account the block structure of signals, and most existing block compressed sensing algorithms deal only with single-channel signals. In this paper, by incorporating the technique of DCS and the additional structure of block sparse signals, we propose a new recovery mechanism to solve the block DCS problem, called the backtracking-based adaptive OMP for block DCS (DCSBBAOMP) algorithm. In contrast to most existing approaches, the new method can recover block sparse signals without information on the sparsity structure. Meanwhile, the new method can recover multiple sparse signals from their compressive measurements simultaneously. Numerical experiments, including recovery of random artificial sparse signals and real-life data, are provided to illustrate the desirable performance of the proposed method.

The rest of the paper is organized as follows. In the next section, the block sparse signal reconstruction method is presented. Experimental results are presented in Section 3 to compare the proposed algorithm with DCSBAOMP, backtracking-based adaptive OMP for block sparse signals (BBAOMP), DCSFBP and DCSSP. In Section 4, conclusions and future work are presented.

## 2. Proposed Algorithm for Block-Sparse Signals

As extensions of OMP, BBAOMP and DCSBAOMP were proposed by Qi et al. [27] and Zhang et al. [9], respectively. These algorithms first find one or several atoms corresponding to the largest correlations between the measurement vectors and the residual. Note that some of the atoms chosen in this stage may be wrong. A backtracking procedure is then used to refine the estimated support set. Next, these two methods obtain a new residual using a least-squares fit. The BBAOMP and DCSBAOMP methods do not need the signal sparsity as a prior; they iterate until the residual is smaller than a threshold or the iteration count reaches the maximum number of iterations.
The main difference between BBAOMP and DCSBAOMP is that the BBAOMP algorithm is a single-channel method that deals with block sparse signals, while the DCSBAOMP algorithm is a multi-channel method. In this paper, we generalize the DCSBAOMP method to block sparse signals, obtaining the DCSBBAOMP algorithm. In this method, the measurement matrix [TeX:] $$\Phi$$ is considered to be a dictionary with block column matrices [TeX:] $$\Phi[i]$$. The algorithm first chooses the block indices Cn whose correlations between the residual [TeX:] $$\operatorname{res}^{n-1}$$ and the block columns [TeX:] $$\Phi[i]$$ are not smaller than [TeX:] $$\mu_{1} \cdot \max _{j \in \Omega_{1}}\left\|\left(\operatorname{res}^{n-1}, \Phi[j]\right)\right\|_{2}, \Omega_{1}=\{1,2, \cdots, l\}$$. It then removes the wrongly chosen block indices [TeX:] $$\Gamma^{n}$$, namely those indices for which [TeX:] $$\left\|\mathbf{X}_{F}^{n}[i]\right\|_{2}$$ is not larger than [TeX:] $$\mu_{2} \cdot \max _{j \in C^{n}}\left\|\mathbf{X}_{F}^{n}[j]\right\|_{2}$$. The final estimated support set is identified after several iterations. Here [TeX:] $$\|\mathbf{X}\|_{2}$$ denotes the 2-norm of the matrix [TeX:] $$\mathbf{X}$$, which is equal to its largest singular value. The details of the DCSBBAOMP algorithm are shown in Algorithm 1.

In this algorithm, μ1 is a constant that determines the number of block indices chosen at each iteration. When μ1=1, only one block index is selected. When μ1 becomes smaller, the DCSBBAOMP algorithm can select more than one block index at every iteration. A smaller μ1 results in more block indices being selected at each iteration, which speeds up the algorithm. Unfortunately, some of the block indices selected in this process may be wrong. μ2 is a parameter determining the number of deleted block indices. Similarly, a bigger μ2 results in fewer deleted block indices and slows down the algorithm. In many experiments, we find that the choice of μ1 and μ2 is the same as for BBAOMP and DCSBAOMP. So in this paper we do not discuss varying μ1 and μ2, and we choose the same values as BBAOMP and DCSBAOMP, namely μ1=0.4 and μ2=0.6. After updating the support set F, we generate a new residual using a least-squares fit. Because the block sparsity K is not known in advance, the DCSBBAOMP algorithm is repeated until the residual resn is smaller than a threshold ε or the iteration count n reaches the maximum number of iterations nmax.

##### Algorithm 1. DCSBBAOMP algorithm

## 3. Simulations and Results

In this section, several experiments are simulated to illustrate the performance of the proposed DCSBBAOMP method, which is compared with the DCSBAOMP, BBAOMP, DCSFBP and DCSSP algorithms. In each trial, the block sparse signals [TeX:] $$\mathbf{X}$$ are artificially generated as follows: for a fixed sparsity K, we randomly choose the nonzero blocks. Each element in the nonzero blocks is drawn from the standard Gaussian distribution [TeX:] $$N(0,1)$$ and the elements of the other blocks are zeros. The observation matrix is [TeX:] $$\mathbf{Y}=\mathbf{\Phi} \mathbf{X}$$, where the entries of the sensing matrix [TeX:] $$\Phi \in \mathbb{R}^{M \times N}$$ are generated independently from the standard Gaussian distribution N(0,1).
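The listing of Algorithm 1 itself is not reproduced in this extraction, so the following NumPy function sketches our reading of the procedure just described: forward selection with threshold μ1, backward removal with threshold μ2, a least-squares refit, and a residual update. It is illustrative only; the names are ours, and the Frobenius norm is used as a simple stand-in for the matrix 2-norm of the paper.

```python
import numpy as np

def dcsbbaomp(Y, Phi, d, mu1=0.4, mu2=0.6, eps=1e-6, n_max=None):
    """Sketch of backtracking-based adaptive block OMP for DCS (our reading).

    Y: (M, J) measurements; Phi: (M, N) with N = l*d columns in blocks of size d.
    Returns the estimate X_hat (N, J) and the estimated block support.
    """
    M, N = Phi.shape
    l = N // d
    n_max = M if n_max is None else n_max
    support = set()
    res = Y.copy()

    for _ in range(n_max):
        # Forward selection: correlation of every block with the residual.
        corr = np.array([np.linalg.norm(Phi[:, i*d:(i+1)*d].T @ res)
                         for i in range(l)])
        support |= {int(i) for i in np.flatnonzero(corr >= mu1 * corr.max())}

        # Least-squares fit on the current block support.
        idx = sorted(support)
        cols = np.concatenate([np.arange(i*d, (i+1)*d) for i in idx])
        coef, *_ = np.linalg.lstsq(Phi[:, cols], Y, rcond=None)

        # Backward removal: drop blocks with small coefficient norms.
        norms = np.array([np.linalg.norm(coef[k*d:(k+1)*d])
                          for k in range(len(idx))])
        support = {i for i, nv in zip(idx, norms) if nv > mu2 * norms.max()}

        # Refit on the pruned support and update the residual.
        idx = sorted(support)
        cols = np.concatenate([np.arange(i*d, (i+1)*d) for i in idx])
        coef, *_ = np.linalg.lstsq(Phi[:, cols], Y, rcond=None)
        res = Y - Phi[:, cols] @ coef
        if np.linalg.norm(res) < eps:
            break

    X_hat = np.zeros((N, Y.shape[1]))
    X_hat[cols, :] = coef
    return X_hat, support

# Example, reusing X, Phi, Y and d from the sketch after Eq. (5):
# X_hat, sup = dcsbbaomp(Y, Phi, d=2)
```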
Firstly, we give two experiments, varying the sparsity and the number of measurements, to compare the SNR and run time of DCSBBAOMP with those of DCSBAOMP, BBAOMP, DCSFBP and DCSSP. Then we discuss how the reconstruction performance changes when the block size is not known. Finally, the proposed method is tested on electrocardiography (ECG) signals.

To evaluate performance, we use a measure named signal-to-noise ratio (SNR), defined as [10]:

##### (8)

[TeX:] $$\mathrm{SNR}=10 \log \left(\frac{\|\mathbf{X}\|_{2}^{2}}{\|\mathbf{X}-\hat{\mathbf{X}}\|_{2}^{2}}\right)$$

where [TeX:] $$\mathbf{X}$$ denotes the original signals and [TeX:] $$\hat{\mathbf{X}}$$ denotes the reconstructed signals. Note that each test is repeated 100 times, and we then compute the average values of the SNR and the running time. In the following experiments, the proposed method uses [TeX:] $$\mu_{1}=0.4, \mu_{2}=0.6, n_{\mathrm{max}}=M, \varepsilon=10^{-6}$$ as the input parameters. For simplicity, in all the experiments, the sparse_value on the x-axis is equal to [TeX:] $$K_{\text {total}} / J$$, that is,

##### (9)

[TeX:] $$\text {sparse} _{-} \text { value }=\frac{K_{\text {total}}}{J}$$

##### 3.1 The Recovery Performance versus Sparsity with Small Sample

In the first experiment, the recovery performance is observed as the sparse_value varies from 10 to 70 with the fixed block size d=2 and M=128, N=256, J=3. The average SNR and run time are shown in Fig. 1. As can be seen from Fig. 1(a), the SNR of all the algorithms decreases slightly as the sparse_value varies from 10 to 70. The DCSBAOMP and DCSBBAOMP algorithms obtain similarly good SNR here. When sparse_value > 60, the performance decreases significantly; the reason is that once the sparse_value is close to M/2, the recovery performance falls off [28]. Fig. 1(b) shows that the run time of all the algorithms increases as the sparse_value varies from 10 to 70. In particular, DCSBAOMP gives the lowest run time among the five algorithms. The reason lies in two aspects: first, the DCSBAOMP algorithm is a multichannel algorithm, which accelerates it; second, DCSBAOMP utilizes the backtracking procedure, which leads to better reconstruction performance and shorter run time.

Fig. 1. Reconstruction results over the sparse_value. The numerical values on the x-axis denote the sparse_value of the signals and those on the y-axis represent the SNR (a) and run time (b).

Fig. 2. Reconstruction results over the number of measurements M. The numerical values on the x-axis denote the number of measurements M and those on the y-axis represent the SNR (a) and run time (b).

##### 3.2 The Recovery Performance versus Number of Measurements

In this experiment, we compare the SNR and run time as the number of measurements M varies from 64 to 176, where N=256, sparse_value=30, J=3. From Fig. 2, as the number of measurements M increases, the SNR increases while the run time decreases for all five algorithms. Meanwhile, all the algorithms obtain similar SNR and run time when M > 90. When the number of measurements M < 90, the performance of our algorithm is better than that of the other algorithms except DCSFBP. Although DCSSP has the largest SNR in this experiment, its run time is not stable compared with the other algorithms.

##### 3.3 The Recovery Performance versus Sparsity with Medium Sample

In this experiment, the recovery performance is observed as the sparse_value varies from 40 to 200 with the fixed block size d=8 and M=512, N=1024, J=3. The average SNR and run time are shown in Fig. 3. As can be seen from Fig. 3, the SNR curves decrease slightly and the run time curves increase as the sparse_value increases.
It is obvious that the DCSSP algorithm achieves the highest SNR, while the SNR of the other four algorithms is more or less similar. However, the run time of the DCSSP algorithm is the longest among the above methods. Note that although the SNR of our method is not the highest, our method is more computationally efficient than the other methods.

Fig. 3. Reconstruction results over the sparse_value. The numerical values on the x-axis denote the sparse_value and those on the y-axis represent the SNR (a) and run time (b).

##### 3.4 The Recovery Performance versus Block Size

In this experiment, we compare the SNR and run time as the block size d varies from 4 to 16 with step size 4. M=512, N=1024, sparse_value=120, J=3 are fixed. Fig. 4 shows the experimental results. From Fig. 4, we can see that the SNR and run time of all the algorithms do not change noticeably as the block size d varies from 4 to 16. That is to say, if we fix the total sparsity of the block sparse signals, different block sizes do not affect the performance of any of the algorithms.

Fig. 4. Reconstruction results over the number of block d. The numerical values on the x-axis denote the number of block d and those on the y-axis represent the SNR (a) and run time (b).

##### 3.5.1 The source block size is fixed to 8

In this trial, we test the performance of our algorithm when the block size d is unknown. M=512, N=1024, sparse_value=120, J=3 are fixed. We generate the sources with block size d=8. Note that the block size is unknown in the recovery process. We then vary the block size used in the recovery process from 2 to 26 with step size 2. The results are shown in Fig. 5.

Fig. 5. Reconstruction results with unknown block d. The numerical values on the x-axis denote the number of block d and those on the y-axis represent the SNR (a) and run time (b); the sources are generated with block size d = 8.

##### 3.5.2 The source block size is fixed to 5

In this trial, we generate the sources with block size d=5 instead of d=8. M=512, N=1024, Ktotal=120, J=3 are fixed. We then vary the block size used in the recovery process from 3 to 16 with step size 1. The results are shown in Fig. 6.

Fig. 6. Reconstruction results with unknown block d. The numerical values on the x-axis denote the number of block d and those on the y-axis represent the SNR (a) and run time (b); the sources are generated with block size d = 5.

From Figs. 5 and 6, when the block size chosen in our algorithm is a multiple of the real block size, we obtain much better reconstruction performance than with other block sizes. Generally speaking, if the real block size d of the sources is known in advance, we get the best performance in the experiments with our algorithm.

##### 3.6 ECG Signals Recovery

In this subsection, we apply our algorithm to ECG data. Because the sparsity of the signals needs to be known in DCSSP, we only compare the performance of DCSBBAOMP with that of DCSBAOMP, DCSFBP, DCSSAMP and BBAOMP. We obtain the ECG data from PhysioNet [29]. In this experiment, three patients are chosen randomly as the source signals [TeX:] $$\mathbf{X}$$. Then we randomly generate one Gaussian matrix [TeX:] $$\Phi \in \mathbb{R}^{M \times N}$$, where N=1024, M=512. The process of signal generation is the same as in [9]. From Fig. 7(a), we can see that the ECG data themselves are not sparse. We therefore apply the orthogonal Daubechies wavelets (db1) [TeX:] $$\Psi$$ to the ECG signals. Fig. 7(b) shows the transformed sparse signals.
Then all of the algorithms are applied to recover [TeX:] $$\theta \text { from } \mathbf{Y}=\mathbf{\Phi} \mathbf{X}=\mathbf{\Phi} \mathbf{\Psi} \boldsymbol{\theta}$$. Fig. 7(c) depicts the recovered signals [TeX:] $$\tilde{\mathbf{X}}$$ and Fig. 7(d) shows the corresponding sparse signals [TeX:] $$\tilde{\theta}$$. The performance and the run time are shown in Table 1.

Table 1. Average reconstruction SNR and run time of block sparse signals using different methods

|              | Our algorithm | DCSBAOMP | DCSFBP  | DCSSAMP  | BBAOMP   |
|--------------|---------------|----------|---------|----------|----------|
| Average SNR  | 180.2198      | 73.4399  | 69.5612 | 148.0398 | 200.0362 |
| Run time (s) | 20.6002       | 13.8757  | 51.2670 | 11.6684  | 78.2678  |

From Table 1, one can see that the performance of BBAOMP is best, but it needs more time to recover the signals since it is a single-channel method. Except for BBAOMP, our algorithm obtains the highest average SNR, and its run time is similar to that of the other methods.

Fig. 7. The electrocardiography (ECG) signals in signal channel no. 1 of three patients selected randomly from the PTB Diagnostic ECG Database: (a) the original signals [TeX:] $$\mathbf{X}$$, (b) [TeX:] $$\theta$$ with orthogonal Daubechies wavelets (db1), (c) [TeX:] $$\tilde{\mathbf{X}}$$ recovered by our algorithm, and (d) [TeX:] $$\tilde{\boldsymbol{\theta}}$$ recovered by our algorithm.

## 4. Conclusion

In this paper, a DCSBBAOMP method for the recovery of block sparse signals is proposed. This method first chooses atoms adaptively and then, using a backtracking procedure, removes atoms that were wrongly chosen in the previous step, which improves the reconstruction performance. The most useful advantage of our proposed algorithm is that it can recover multiple sparse signals from their compressive measurements simultaneously. What's more, it does not need the block sparsity as a prior. Simulation results demonstrate that our method produces much better reconstruction performance than many existing algorithms. The two parameters μ1 and μ2 play a key role in our method and provide some flexibility in the trade-off between reconstruction performance and computational complexity. However, there is no theoretical guidance on how to select μ1 and μ2. In addition, theoretical guarantees that the proposed method accurately recovers the original signals have not yet been established. Future work includes a theoretical analysis of exact reconstruction by the proposed algorithm and the treatment of the selection of the parameters μ1 and μ2.

## Acknowledgement

This work is supported by the Natural Science Foundation of China (No. 61601417).

## Biography

##### Xingyi Chen https://orcid.org/0000-0003-1943-3925

He graduated from the School of Geodesy and Geomatics, Wuhan University, in 1984. He is currently a professor at the Information Engineering College of China University of Geosciences, China. His research interests include photogrammetry and remote sensing.

##### Yujie Zhang https://orcid.org/0000-0001-7710-4017

She received the M.S. degree in applied mathematics and the Ph.D. degree from the Institute of Geophysics and Geomatics, China University of Geosciences, China, in 2006 and 2012, respectively. She is currently a lecturer at the China University of Geosciences, China. Her research interests include blind signal processing, time-frequency analysis and their applications.

##### Rui Qi https://orcid.org/0000-0003-3183-2427

He received the M.S. degree from the School of Mathematics and Statistics, Huazhong University of Science and Technology, China, in 2009. He is now a Ph.D. candidate at the Institute of Geophysics and Geomatics of China University of Geosciences, China. His research interests include sparse representation and compressed sensing.

## References
Liu, "A robust and efficient algorithm for distributed compressed sensing," Computers & Electrical Engineering, vol. 37, no. 6, pp. 916-926, 2011.doi:[[[10.1016/j.compeleceng.2011.09.008]]] • 2 H. Palangi, R. Ward, L. Deng, "Convolutional deep stacking networks for distributed compressive sensing," Signal Processing, vol. 131, pp. 181-189, 2017.doi:[[[10.1016/j.sigpro.2016.07.006]]] • 3 Y. Oktar, M. Turkan, "A review of sparsity-based clustering methods," Signal Processing, vol. 148, pp. 20-30, 2018.doi:[[[10.1016/j.sigpro.2018.02.010]]] • 4 L. Vidya, V. Vivekanand, U. Shyamkumar, M. Deepak, "RBF network based sparse signal recovery algorithm for compressed sensing reconstruction," Neural Networks, vol. 63, pp. 66-78, 2015.custom:[[[-]]] • 5 X. Li, H. Bai, B. Hou, "A gradient-based approach to optimization of compressed sensing systems," Signal Processing, vol. 139, pp. 49-61, 2017.doi:[[[10.1016/j.sigpro.2017.04.005]]] • 6 G. Coluccia, A. Roumy, E. Magli, "Operational rate-distortion performance of single-source and distributed compressed sensing," IEEE Transactions on Communications, vol. 62, no. 6, pp. 2022-2033, 2014.doi:[[[10.1109/TCOMM.2014.2316176]]] • 7 D. Baron, M. F. Duarte, M. B. Wakin, S. Sarvotham, R. G. Baraniuk, 2009 (Online), Available: https://arxiv.org/abs/0901.3403, Available: https://arxiv.org/abs/0901.3403. custom:[[[-]]] • 8 Y. J. Zhang, R. Qi, Y. Zeng, "Backtracking-based matching pursuit method for distributed compressed sensing," Multimedia Tools and Applications, vol. 76, no. 13, pp. 14691-14710, 2017.doi:[[[10.1007/s11042-016-3933-x]]] • 9 Y. Zhang, R. Qi, Y. Zeng, "Forward-backward pursuit method for distributed compressed sensing," Multimedia Tools and Applications, vol. 76, no. 20, pp. 20587-20608, 2017.doi:[[[10.1007/s11042-016-3968-z]]] • 10 Y. C. Eldar, H. Rauhut, "Average case analysis of multichannel sparse recovery using convex relaxation," IEEE Transactions on Information Theory, vol. 56, no. 1, pp. 505-519, 2010.doi:[[[10.1109/TIT.2009.2034789]]] • 11 D. L. Donoho, "For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution," Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797-829, 2006.custom:[[[-]]] • 12 M. F. Duarte, S. Sarvotham, D. Baron, M. B. Wakin, R. G. Baraniuk, "Distributed compressed sensing of jointly sparse signals," in Proceeding of the 39th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2005;pp. 1537-1541. custom:[[[-]]] • 13 M. B. Wakin, M. F. Duarte, S. Sarvotham, D. Baron, R. G. Baraniuk, "Recovery of jointly sparse signals from few random projections," Advances in Neural Information Processing Systems, vol. 18, pp. 1435-1440, 2005.custom:[[[-]]] • 14 J. A. Tropp, A. C. Gilbert, M. Strauss, "Simultaneous sparse approximation via greedy pursuit," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, 2005;custom:[[[-]]] • 15 D. Sundman, S. Chatterjee, M. Skoglund, "Greedy pursuits for compressed sensing of jointly sparse signals," in Proceedings of 2011 19th European Signal Processing Conference, Barcelona, Spain, 2011;pp. 368-372. custom:[[[-]]] • 16 K. Lee, Y. Bresler, M. Junge, "Subspace methods for joint sparse recovery," IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3613-3641, 2012.doi:[[[10.1109/TIT.2012.2189196]]] • 17 X. T. Yuan, X. Liu, S. 
Yan, "Visual classification with multitask joint sparse representation," IEEE Transactions on Image Processing, vol. 21, no. 10, pp. 4349-4360, 2012.doi:[[[10.1109/TIP.2012.2205006]]] • 18 F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, "Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays," IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 3, pp. 275-285, 2008.doi:[[[10.1109/JSTSP.2008.924384]]] • 19 S. F. Cotter, B. D. Rao, "Sparse channel estimation via matching pursuit with application to equalization," IEEE Transactions of Communications, vol. 50, no. 3, pp. 374-377, 2002.doi:[[[10.1109/26.990897]]] • 20 R. Qi, D. Yang, Y. Zhang, H. Li, "On recovery of block sparse signals via block generalized orthogonal matching pursuit," Signal Processing, vol. 153, pp. 34-46, 2018.doi:[[[10.1016/j.sigpro.2018.06.023]]] • 21 R. G. Baraniuk, V. Cevher, M. F. Duarte, C. Hegde, "Model-based compressive sensing," IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1982-2001, 2010.doi:[[[10.1109/TIT.2010.2040894]]] • 22 D. Needell, J. A. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301-321, 2009.doi:[[[10.1145/1859204.1859229]]] • 23 Y. C. Eldar, P. Kuppinger, H. Bolcskei, "Block-sparse signals: Uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3042-3054, 2010.doi:[[[10.1109/TSP.2010.2044837]]] • 24 L. Zelnik-Manor, K. Rosenblum, Y. C. Eldar, "Dictionary optimization for block-sparse representations," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2386-2395, 2012.doi:[[[10.1109/TSP.2012.2187642]]] • 25 A. Kamali, M. A. Sahaf, A. D. Hooseini, A. A. Tadaion, "Block subspace pursuit for block-sparse signal reconstruction," Iranian Journal of Science and Technology: Transactions of Electrical Engineering, vol. 37, no. E1, pp. 1-16, 2013.custom:[[[-]]] • 26 B. X. Huang, T. Zhou, "Recovery of block sparse signals by a block version of StOMP," Signal Processing, vol. 106, pp. 231-244, 2015.doi:[[[10.1016/j.sigpro.2014.07.023]]] • 27 R. Qi, Y. Zhang, H. Li, "Block sparse signals recovery via block backtracking-based matching pursuit method," Journal of Information Processing Systems, vol. 13, no. 2, pp. 360-369, 2017.custom:[[[-]]] • 28 E. Candes, J. Romberg, T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.doi:[[[10.1109/TIT.2005.862083]]] • 29 A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. K. Peng, H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals," Circulation, vol. 101, no. 23, pp. e215-e220, 2000.custom:[[[-]]] Table 1. Average reconstruction SNR and run time of block sparse signals using different methods Our algorithm DCSBAOMP DCSFBP DCSSAMP BBAOMP Average SNR 180.2198 73.4399 69.5612 148.0398 200.0362 Run time (s) 20.6002 13.8757 51.2670 11.6684 78.2678 Reconstruction results over the sparse_value. The numerical values on x-axis denote the sparse_value of signals and those on y-axis represent the SNR (a) and run time (b). Reconstruction results over the number of measurement M. 
2019-09-17 13:08:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6180979609489441, "perplexity": 1406.525112313696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573071.65/warc/CC-MAIN-20190917121048-20190917143048-00424.warc.gz"}
https://socratic.org/questions/how-do-you-write-the-equation-of-line-passes-through-4-5-and-is-perpendicular-to
# How do you write the equation of the line that passes through (4, -5) and is perpendicular to 2x-5y=-10?

Sep 2, 2016

$5 x + 2 y = 10$

#### Explanation:

Let us write the equation of the line $2x - 5y = -10$ in slope-intercept form, i.e. $-5y = -2x - 10$ or $y = \frac{2}{5} x + 2$, so its slope is $\frac{2}{5}$.

Hence the slope of a line perpendicular to it is $\frac{-1}{\frac{2}{5}} = -\frac{5}{2}$.

Now the equation of a line passing through a point $\left({x}_{1}, {y}_{1}\right)$ with slope $m$ is given by $\left(y - {y}_{1}\right) = m \left(x - {x}_{1}\right)$, and so the equation of the line passing through $\left(4, -5\right)$ with slope $-\frac{5}{2}$ is

$\left(y - \left(-5\right)\right) = -\frac{5}{2} \left(x - 4\right)$

or $y + 5 = -\frac{5}{2} x + \frac{5 \times 4}{2}$

or $y + 5 = -\frac{5}{2} x + 10$

or $2 y + 10 = -5 x + 20$

or $5 x + 2 y = 10$
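A quick check, added for illustration: the point $(4, -5)$ satisfies the result, since $5(4) + 2(-5) = 20 - 10 = 10$, and rewriting $5x + 2y = 10$ as $y = -\frac{5}{2}x + 5$ confirms the slope $-\frac{5}{2}$, the negative reciprocal of $\frac{2}{5}$, so the two lines are indeed perpendicular.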
2020-05-29 15:11:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8972809314727783, "perplexity": 343.5905888713085}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347404857.23/warc/CC-MAIN-20200529121120-20200529151120-00206.warc.gz"}
https://runestone.academy/ns/books/published/welcomecs/CSPCollectionsIntro/listIn.html
# 15.3. In and Not In

To check if an item is in a list, we can use the in operator. To check if something is not in a list, we can use not in:

Because in and not in are opposites, we can often use just one or the other and handle the other case with else:
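The interactive code examples from the original page are not captured in this extraction, so here is a minimal sketch of both patterns (the list contents are made up for illustration):

```python
colors = ["red", "green", "blue"]

print("red" in colors)         # True
print("purple" not in colors)  # True

# Using `in` with else instead of a separate `not in` check:
if "purple" in colors:
    print("purple is already in the list")
else:
    colors.append("purple")

print(colors)  # ['red', 'green', 'blue', 'purple']
```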
2023-02-02 17:56:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.669131338596344, "perplexity": 354.0215987419069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00359.warc.gz"}
http://educ.jmu.edu/~waltondb/MA2C/ftc-part-one.html
## Section 6.5 The Fundamental Theorem of Calculus, Part One

### Subsection 6.5.1 Overview

When we introduced the definite integral, we also learned about accumulation functions. An accumulation function is a function $A$ defined as a definite integral from a fixed lower limit $a$ to a variable upper limit where the integrand is a given function $f$,

\begin{equation*} A(x) = A(a) + \int_a^x f(z)\, dz\text{.} \end{equation*}

The function $f$ was called the rate of accumulation for the function $A$, and we wrote $A'(x) = f(x)$.

Then we defined another rate of change, the instantaneous rate of change, with a corresponding function called the derivative. For a function $F(x)$, the derivative was defined by a limit,

\begin{equation*} \frac{dF}{dx}(x) = \lim_{h \to 0} \frac{F(x+h)-F(x)}{h}\text{.} \end{equation*}

This section establishes a relation between these two concepts of the rate of change. The Fundamental Theorem of Calculus proves that the derivative of an accumulation function exactly matches the rate of accumulation whenever the rate of accumulation is continuous. That is, the instantaneous rate of change of a quantity, which graphically gives the slope of the tangent line on the graph, is exactly the same as the value of the rate of accumulation when the function is expressed as an accumulation using a definite integral. The proof of the fundamental theorem relies on properties of continuous functions as well as properties of limits.

### Subsection 6.5.2 Illustration of an Example

To illustrate the concept that we will prove, let us consider a simple polynomial function

\begin{equation*} f(x) = x^3 - 3x + 5\text{.} \end{equation*}

Using our rules of accumulation, we know that $f(x)$ can be written as an accumulation,

\begin{equation*} f(x) = 5 + \int_0^x 3z^2-3 \, dz\text{.} \end{equation*}

What happens if we compute the derivative using the definition? We start with some preparatory algebra based on $f(x) = x^3-3x+5$.

\begin{align*} f(x+h)&= (x+h)^3 - 3(x+h)+5\\ &= (x+h)(x+h)(x+h) - 3(x+h) + 5\\ &= (x^2+2xh+h^2)(x+h) - 3x - 3h + 5\\ &= x^3+3x^2h + 3xh^2 + h^3 - 3x - 3h + 5 \end{align*}

\begin{align*} f(x+h) - f(x) &= (x^3 + 3x^2h + 3xh^2 + h^3 - 3x - 3h + 5) - (x^3 - 3x + 5)\\ &= 3x^2h + 3xh^2 + h^3 - 3h\\ &= h(3x^2+3xh+h^2-3) \end{align*}

The derivative can be computed using the limit:

\begin{align*} \frac{df}{dx}(x) &= \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\ & = \lim_{h \to 0} \frac{h(3x^2+3xh+h^2-3)}{h} \\ & = \lim_{h \to 0} 3x^2+3xh+h^2-3 \\ & = 3x^2+3x(0)+(0)^2-3 \\ & = 3x^2-3. \end{align*}

The derivative exactly matches the rate of accumulation. Our ultimate goal in this section is to show that this will always happen for accumulation functions. To reach this goal, we require some additional concepts.

### Subsection 6.5.3 Average Value of a Function

The derivative computes the limit of the average rate of change. In preparation for the Fundamental Theorem of Calculus, we need a relation between the idea of average value and the definite integral.

Consider how we compute the average value of a list of numbers. We add up all the values and divide by the number of values in the list. When thinking about a function $f$ on an interval $[a,b]$, there are infinitely many different values to consider. We need a different way to think about it. We define the average value of a function using a limit of a standard average value.

Let $f$ be a piecewise continuous function on $[a,b]$ so that the definite integral $\int_a^b f(x) \,dx$ will be defined.
Consider a uniform partition of the interval $[a,b]$ with $\Delta x = \frac{b-a}{n}$ and $x_k = a + k \cdot \Delta x$, just as we defined when creating a Riemann sum. We approximate the average value of $f$ on the interval $[a,b]$, which is represented by the symbol $\langle f \rangle_{[a,b]}$, by finding the average of the values $f(x_k)$ for $k=1,2,\ldots,n$:

\begin{equation*} \langle f \rangle_{[a,b]} \approx \frac{1}{n} \sum_{k=1}^{n} f(x_k) \text{.} \end{equation*}

The approximation is improved with larger and larger values of $n$, so the actual average value will be the limit of the average as $n \to \infty$:

\begin{equation*} \langle f \rangle_{[a,b]} = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} f(x_k) \text{.} \end{equation*}

The average value defined by this limit looks remarkably similar to the limit of a Riemann sum that would define a definite integral. In particular,

\begin{equation*} \int_a^b f(x) \, dx = \lim_{n \to \infty} \sum_{k=1}^{n} f(x_k) \frac{b-a}{n}\text{.} \end{equation*}

Comparing our two limits reveals that the definite integral is the same as the average value multiplied by $b-a$. This will be our formal definition of the average value of a function.

###### Definition 6.5.1

For a function $f(x)$ which has an integral on an interval $[a,b]$, the average value of $f(x)$ on $[a,b]$ is defined by

\begin{equation*} \langle f(x) \rangle_{[a,b]} = \frac{1}{b-a} \int_{a}^{b} f(x) \, dx \text{.} \end{equation*}

When we think of the definite integral as the total signed area between the graph $y=f(x)$ and the axis $y=0$, the average value can be interpreted as the value of a constant function that would have the same signed area. This is a consequence of writing

\begin{equation*} \int_a^b f(x) \, dx = (b-a) \cdot \langle f(x) \rangle_{[a,b]}\text{.} \end{equation*}

The average value multiplied by $b-a$ equals the total signed area of $f(x)$ from $a$ to $b$, so we can think of $(b-a) \cdot \langle f(x) \rangle_{[a,b]}$ as the signed area of a rectangle with a vertical position given by the average value.

Imagine the region below the graph $y=f(x)$ and between the lines $x=a$ and $x=b$ as if it were frozen water. If the ice melted but was trapped between the vertical lines $x=a$ and $x=b$, the high regions of the ice would melt and fill the valleys until the water level was flat. The resulting flat value is equivalent to the average value of the original function. Any regions of area above the average value line are used to fill an equivalent amount of area missing below the line.

###### Example 6.5.2

Consider finding the average value of $f(x)=x^2-2$ on the interval $[0,2]$. The average value is computed by dividing the definite integral of $f(x)=x^2-2$ as $x$ goes from 0 to 2 by the length of the interval.

\begin{align*} \langle x^2-2 \rangle_{[0,2]} &= \frac{1}{2-0} \int_0^2 x^2 - 2 \, dx \\ &= \frac{1}{2} \Big[ \frac{1}{3}x^3 - 2x \Big]_0^2 \\ &= \frac{1}{2} \Big(\frac{8}{3} - 2(2)\Big) - \frac{1}{2} \Big(\frac{0}{3} - 2(0)\Big) \\ &= \frac{1}{2} \Big(\frac{-4}{3}\Big) = -\frac{2}{3} \end{align*}

Having found the average value, we now create a graph of $y=f(x)=x^2-2$ together with the graph $y=\langle f(x) \rangle_{[0,2]} = -\frac{2}{3}$, as shown in Figure 6.5.3. The graph of the average value is the horizontal line. We can see that the area above the average value is matched by the unshaded area below the average value line and above the function.
We need one more theorem before we discuss the Fundamental Theorem. That theorem is called the Mean Value Theorem for Definite Integrals: if $f$ is continuous on $[a,b]$, then there is at least one value $c \in (a,b)$ for which $f(c) = \langle f(x) \rangle_{[a,b]}$. The phrase mean value is equivalent to average value, just as the mean of a set of numbers is equivalent to the average of those numbers. We have previously pointed out that the average or mean value of a function over an interval is equal to the constant value (horizontal function) that would have the same integral.

In Figure 6.5.3, we can see that the function actually crosses the line representing the average value. As an equation, this point of intersection corresponds to a solution of the equation

\begin{equation*} f(x) = \langle f(x) \rangle_{[0,2]} \qquad \Leftrightarrow \qquad x^2 - 2 = -\frac{2}{3}\text{.} \end{equation*}

When a function is continuous, such an intersection point will always occur. If a function is not continuous, it is possible for the function to skip over its average value without such an intersection.

Because $f$ is continuous on $[a,b]$ (a closed interval), Theorem 4.6.14 guarantees that $f$ has an absolute maximum value $f_{\max}$ and an absolute minimum value $f_{\min}$ inside the interval. The average value must be between the maximum and minimum values. Indeed, for all $x \in [a,b]$ we have

\begin{equation*} f_{\min} \le f(x) \le f_{\max} \text{.} \end{equation*}

Definite integrals preserve the ordering so that

\begin{equation*} f_{\min}\cdot(b-a) \le \int_a^b f(x)\,dx \le f_{\max}\cdot(b-a) \text{,} \end{equation*}

and dividing by the length of the interval $b-a$ we have

\begin{equation*} f_{\min} \le \langle f(x) \rangle_{[a,b]} \le f_{\max} \text{.} \end{equation*}

Theorem 4.6.18 then guarantees that between the $x$-values where the extremes occur we must have at least one value $x=c$ where $f(x)=\langle f(x) \rangle_{[a,b]}$.

The Mean Value Theorem for Definite Integrals will give us a tool with which we can replace a definite integral by a corresponding value of the integrand or rate function at some point within the interval times the length of that interval.

### Subsection 6.5.4 The Fundamental Theorem of Calculus

Now, are you ready to be blown away? Having learned what the average value of a function over an interval represents (the constant height that would give the same signed area over the interval), we can discover that there is a relationship between the ideas of average rate of change and average value.

Let's think about an accumulation function $A$ with its corresponding rate of accumulation $A'$. What is the average rate of change of $A(x)$ as $x$ goes from $a$ to $b$?

\begin{equation*} \frac{\Delta A}{\Delta x}\Big|_{a,b} = \frac{A(b)-A(a)}{b-a} = \frac{1}{b-a} \left(A(b)-A(a)\right) \text{.} \end{equation*}

Because $A$ is an accumulation, the change $\Delta A$ can be rewritten using an integral and

\begin{equation*} \frac{\Delta A}{\Delta x}\Big|_{a,b} = \frac{1}{b-a} \int_a^b A'(x) \,dx \text{.} \end{equation*}

But that is just the average value of the rate of accumulation function $A'$. Two completely different ideas of average value end up measuring the very same thing. Pay attention, however, that we are thinking about two different functions. The average value of the rate of accumulation is based on the integral of the rate $A'(x)$. The average rate of change uses the rate of change based on the difference quotient using $A(x)$. The equivalence of these two averages provides exactly what is necessary to compute the derivative of an accumulation function.
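A quick numerical check of this equivalence, added for illustration using the polynomial of Subsection 6.5.2: with $A(x) = x^3 - 3x + 5$ and rate $A'(x) = 3x^2 - 3$ on $[0,2]$,

\begin{equation*} \frac{\Delta A}{\Delta x}\Big|_{0,2} = \frac{A(2)-A(0)}{2-0} = \frac{7-5}{2} = 1 \qquad \text{and} \qquad \langle A'(x) \rangle_{[0,2]} = \frac{1}{2} \int_0^2 3x^2-3 \, dx = \frac{1}{2} \Big[ x^3 - 3x \Big]_0^2 = 1\text{.} \end{equation*}

The two averages agree, exactly as the argument above promises.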
Given the accumulation function $A(x)$ and its associated integrand $f(x)\text{,}$ we consider the average rate of change of $A$ between $x$ and $x+h\text{.}$ By Theorem 6.5.5, we can rewrite this in terms of the value of the rate function $f\text{,}$ \begin{equation*} \frac{\Delta A}{\Delta x}\Big|_{x,x+h} = \frac{A(x+h)-A(x)}{h} = f(c_h)\text{,} \end{equation*} for some value $c_h$ between $x$ and $x+h\text{.}$ The symbol $c_h$ includes the $h$ to emphasize that this value depends on $h\text{.}$ Now consider a sequence of values $h \to 0\text{.}$ Because $c_h$ is between $x$ and $x+h$ for all $h\text{,}$ the corresponding sequence $c_h$ also converges, $c_h \to x\text{.}$ As $f$ is a continuous function, \begin{equation*} \frac{dA}{dx}(x) = \lim_{h \to 0} \frac{\Delta A}{\Delta x}\Big|_{x,x+h} = \lim_{h \to 0} f(c_h) = f(x)\text{.} \end{equation*} What have we shown? Earlier, when discussing accumulation functions, \begin{equation*} A(x) = A(a) + \int_a^x f(z)\,dz\text{,} \end{equation*} we learned to identify the rate function based on its appearance within the definite integral, and we wrote $A'(x) = f(x)$ as a way of describing this association. In that association, the prime (apostrophe) of $A'$ was telling us to identify the appropriate rate of accumulation. Now we have learned about another concept, the derivative, which is defined as the limiting value of the average rate of change of a function between the input value of interest $x$ and a second point $x+h$ as $h \to 0\text{.}$ This is a fundamentally different concept from accumulation, which is defined as the limit of a Riemann sum. Nevertheless, when we compute the derivative of an accumulation function, we recover exactly that function's corresponding rate of accumulation, so long as the rate of accumulation is a continuous function. The rate of accumulation and the derivative are really different perspectives of the same function. This surprisingly deep relationship between definite integrals and derivatives will continue to develop.

### Subsection6.5.5Summary

• For simple polynomials which we previously learned to express as accumulation functions, the rate of accumulation seems miraculously to agree with the derivative of the polynomial.

• The Fundamental Theorem of Calculus (FTC1) shows us that this isn't coincidence but will always happen when the rate of accumulation is a continuous function. That is, the derivative of an accumulation function will equal the corresponding rate function. When we write $f'(x)\text{,}$ that symbol means both the rate of accumulation (when $f(x)$ is an accumulation function) and the derivative of the function $f(x)\text{,}$ because those are ultimately the same thing.

• We can compute the average value of a function on an interval $[a,b]$ using a definite integral, \begin{equation*} \langle f(x) \rangle_{[a,b]} = \frac{1}{b-a} \int_a^b f(x) \, dx\text{.} \end{equation*} The integral replaces the idea of adding a list of values, and dividing by the length of the interval replaces the idea of dividing by the number of values being added.

• The average rate of change of an accumulation function $f(x)$ and the average value of the rate of accumulation $f'(x)$ for that function are equal to each other.
• The Mean Value Theorem for Definite Integrals guarantees that for a continuous function, the equation $f(c) = \langle f(x) \rangle_{[a,b]}$ has a solution for some value $c \in (a,b)\text{.}$ It allows us to substitute \begin{equation*} \int_{a}^{b} f(x) \, dx = f(c) \cdot (b-a) \end{equation*} for some $c$ between $a$ and $b\text{,}$ but does not tell us how to find $c\text{.}$

### Subsection6.5.6Exercises

Find the average value of the given function over the given interval. Sketch a graph of the function and the average value over the interval. Then solve for $c$ so that $f(c) = \langle f(x) \rangle_{[a,b]}\text{,}$ if it exists.

###### 1

$f(x)=4+2x$ with $[a,b]=[0,2]\text{.}$

###### 2

$f(x)=4x-x^2$ with $[a,b]=[0,2]\text{.}$

###### 3

$f(x)=4x-x^2$ with $[a,b]=[0,4]\text{.}$

###### 4

$f(x)=\begin{cases} 1, &0 \le x \lt 2 \\ 3, &2 \le x \le 3 \end{cases}$ with $[a,b]=[0,3]\text{.}$ You will need to split the integral into two intervals. (Hint: Think about the graph geometrically.)

###### 5

$f(x)=\begin{cases} 2x, &0 \le x \lt 2 \\ 6-x, &2 \le x \le 4 \end{cases}$ with $[a,b]=[0,4]\text{.}$ You will need to split the integral into two intervals. (Hint: Think about the graph geometrically.)

###### 6

$f(x)=\begin{cases} x, &0 \le x \lt 2 \\ 5-x, &2 \le x \le 3 \end{cases}$ with $[a,b]=[0,3]\text{.}$ You will need to split the integral into two intervals. (Hint: Think about the graph geometrically.)

Applying the Fundamental Theorem of Calculus.

###### 7

Find $\frac{dF}{dx}(x)$ where $\displaystyle F(x) = 10 + \int_1^x z^2 - 3z \, dz\text{.}$ Then give the equation of the tangent line at $x=1\text{.}$

###### 8

Find $\frac{dG}{dx}(x)$ where $\displaystyle G(x) = \int_0^x \frac{1}{z^2+4} \, dz\text{.}$ Then give the equation of the tangent line at $x=0\text{.}$

###### 9

Find $\frac{dH}{dx}(x)$ where $\displaystyle H(x) = 3 + \int_2^x z e^{-z^2} \, dz\text{.}$ Then give the equation of the tangent line at $x=2\text{.}$

Applications of Average Value

###### 10

The density (kilograms per meter) of a rod that is two meters long depends on position along the rod according to the equation \begin{equation*} \rho(x) = 2 - 0.25x, \quad 0 \le x \le 2\text{.} \end{equation*} Find the average density of the rod.

###### 11

A car accelerates from 0 to 64 miles per hour over eight seconds so that the velocity of the car is a function of time given by \begin{equation*} v(t) = 16t-t^2, \qquad 0 \le t \le 8\text{.} \end{equation*} What is the average velocity of the car during those eight seconds? How far does the car travel? (Hint: Use Theorem 6.5.5 and pay attention to units.)

###### 12

During a rainstorm, the rate $R$ (inches per hour) at which rain fell varied according to the following relation, \begin{equation*} R(t) = \begin{cases} 2t, & 0 \le t \lt 0.25, \\ 0.5, & 0.25 \le t \lt 0.5, \end{cases} \end{equation*} where $t$ is measured in hours. What is the average rate at which rain fell during the storm? What was the total amount of rain that fell during the storm? (Hint: Use Theorem 6.5.5.)

###### 13

When the state police measure vehicle speed from aircraft, one approach to determining a car's speed is to time how long it takes to travel a fixed distance, say a quarter mile. Suppose that you were timed and the state police recorded 11.25 seconds. They charge you with speeding at 80 mph. If the police never actually recorded your exact speed, how can they guarantee that you must have been speeding? (Hint: Use Theorem 6.5.4.)
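As a quick illustration of how FTC1 settles Exercise 7 (a worked sketch using only results stated above): by the Fundamental Theorem of Calculus, the derivative of the accumulation function is the integrand, \begin{equation*} \frac{dF}{dx}(x) = x^2 - 3x\text{,} \end{equation*} so $\frac{dF}{dx}(1) = 1 - 3 = -2\text{.}$ Since the integral from $1$ to $1$ vanishes, $F(1) = 10\text{,}$ and the tangent line at $x=1$ is $y = 10 - 2(x-1)\text{.}$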
2019-02-22 17:21:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923599362373352, "perplexity": 259.2434964577588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247518497.90/warc/CC-MAIN-20190222155556-20190222181556-00178.warc.gz"}
http://mathoverflow.net/questions/49699/coloring-edges-on-a-graph-s-t-the-set-of-edges-for-any-two-vertices-have-no-mor/49735
# Coloring edges on a graph s.t. the set of edges for any two vertices have no more than 'k' colors in common

Please imagine the case where one has a planar graph, $G$, with a set of $|V|$ vertices, $(v_1, ..., v_{|V|}) \in V$, and $|E|$ edges, $(e_1, ..., e_{|E|}) \in E$. Now, provided a total of $N$ colors, where $N < |E|$ (the number of edges), we seek to assign these colors to the edges of the graph such that:

(1) - The set of edges connected to any vertex contains edges with all unique colors, i.e. no two edges share the same color when attached to the same vertex. This condition should establish my question as a special case of the general NP-hard edge-coloring problem.

(2) - The intersection, or overlap, between the colors of the edges of any two vertices is, at most, of size $k = 1$ or $k = 2$. This condition must hold true regardless of whether the vertices are adjacent or not. (thanks domotorp!)

What would be the most efficient algorithm for coloring the edges of $G$ given these constraints? Does the problem become considerably simpler if one tightens the bounds on the size of vertex edge sets?

My approach to the problem thus far has been to assign unique colors to all $|E|$ edges of a graph, i.e. to have $N = |E|$, and then proceed to reduce $N$ using a naive stochastic procedure. It would be great to have an efficient deterministic or semi-deterministic algorithm. I appreciate everyone's time!

Clarifications:

• I am allowing the case of $k = 2$ as well as $k = 1$.
• I changed criterion (2) from requiring that the intersection is of size 'k' to explicitly setting $k = 1$ (or $k = 2$), which is the case I am primarily interested in and hopefully better focuses this question.

- The second condition is a condition on the edges, not a condition on the colorings. Is the graph's edges fixed or not? – Qiaochu Yuan Dec 17 '10 at 3:28
- Qiaochu, yes, there is a fixed number of edges |E| (not the greatest notation). |E| > N, where N is the number of available colors. – AfternoonCoffee Dec 17 '10 at 3:38
- @AfternoonCoffee: but is the location of the edges fixed? – Qiaochu Yuan Dec 17 '10 at 3:49
- Yes, one is provided a fixed graph 'G'. Edges may be colored in any manner (so long as they obey the aforementioned constraints), but they cannot be rearranged. – AfternoonCoffee Dec 17 '10 at 3:59
- @AfternoonCoffee: then why is the second condition a condition on the edges, not a condition on their colorings? – Qiaochu Yuan Dec 17 '10 at 4:08

If the input number $k$ is very large (say as large as the next-to-maximum degree) then condition (2) has no effect, and this becomes the same as $N$-edge-colouring of a graph, which is NP-complete. (Have you read about this problem? It's NP-complete even for 3-regular graphs and $N=3$.) So if I understand correctly it does not seem that there is any hope of exactly solving this problem with an efficient algorithm.

There are lots of results about finding an edge-colouring with approximately the minimum number of colours; classical ones include Vizing's theorem.

My brain can't parse the phrase "...bounds on the size of vertex edge sets..." but maybe in the last question you mean, is the problem easy if $k$ is small enough? In the case of 3-regular graphs with $N=3$, it makes it easier but for a trivial reason: no colouring meeting (1) and (2) is possible for any such graph if $k<3$, since any colouring meeting (1) would have to have all 3 colours appearing adjacent to every vertex.
- Dear Dave, I'm setting k = 1 to better focus the question on the case I'm particularly interested in where k << N. I'm hoping I can differentiate this problem from the known NP-hard problem of N-edge coloring... – AfternoonCoffee Dec 17 '10 at 16:23 It's probably worth stating that condition (1) defines this as a special case of the edge coloring problem. – AfternoonCoffee Dec 17 '10 at 16:25 "...bounds on the size of vertex edge sets..." - sorry, by this I mean providing tight lower and upper-bounds for connectivity of each vertex in the graph. – AfternoonCoffee Dec 17 '10 at 16:26 Ok, thanks! BTW the term is typically called "the degree of a vertex," connectivity already has a different standard meaning. It doesn't seem obvious to me whether or not this problem is NP-hard when $k=1$, so it is a good and interesting clarification. – Dave Pritchard Dec 17 '10 at 16:43 I edited my answer after the clarification of the question. Consider the union of two different color classes. This subgraph consists of disjoint edges and (possibly) a path of length 2. This gives the following bound if the paths can have length 2: $\sum_i {deg(v_i) \choose 2}\le {N\choose 2}$ Of course this is just a necessary and not a sufficient condition, but it might be a good start for an NP-completeness proof. - Dear domotorp, condition (2) must hold true regardless of whether the vertices are adjacent or not. Therefore the answer to: "...if there are two adjacent vertices, then can they have another common color apart from the color of their common edge or no?" is no. I hope that simplifies matters, and an NP-completeness result for this problem would be fantastic. – AfternoonCoffee Dec 18 '10 at 9:38
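For anyone experimenting with the asker's approach (start from $N = |E|$ unique colors, then stochastically merge colors), the feasibility check itself is straightforward. Below is a minimal Python sketch, not from the thread, with illustrative names, that verifies conditions (1) and (2) for a candidate edge coloring:

    from itertools import combinations

    def valid_coloring(edges, color, k=1):
        # edges: iterable of (u, v) vertex pairs; color: dict mapping edge -> color id.
        # Gather the colors incident to each vertex.
        incident = {}
        for e in edges:
            for v in e:
                incident.setdefault(v, []).append(color[e])
        # Condition (1): no repeated color among the edges at any single vertex.
        if any(len(cols) != len(set(cols)) for cols in incident.values()):
            return False
        # Condition (2): any two vertices (adjacent or not) share at most k colors.
        palette = {v: set(cols) for v, cols in incident.items()}
        return all(len(palette[u] & palette[w]) <= k
                   for u, w in combinations(palette, 2))

    # A triangle colored with three distinct colors satisfies both conditions for k = 1:
    edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]
    print(valid_coloring(edges, {('a', 'b'): 0, ('b', 'c'): 1, ('a', 'c'): 2}))  # True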
2016-02-12 05:48:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7987713813781738, "perplexity": 264.1192749639989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163438.83/warc/CC-MAIN-20160205193923-00153-ip-10-236-182-209.ec2.internal.warc.gz"}
https://core.ac.uk/display/6994308
## Asymptotic and effective coarsening exponents in surface growth models

### Abstract

We consider a class of unstable surface growth models, $\partial_t z=-\partial_x {\cal J}$, developing a mound structure of size λ and displaying a perpetual coarsening process, i.e. an endless increase in time of λ. The coarsening exponents n, defined by the growth law of the mound size λ with time, λ∼t^n, were previously found by numerical integration of the growth equations [A. Torcini, P. Politi, Eur. Phys. J. B 25, 519 (2002)]. Recent analytical work now allows us to interpret such findings as finite-time effective exponents. The asymptotic exponents are shown to appear at times so large that they cannot be reached by direct integration of the growth equations. The reason for the appearance of effective exponents is clearly identified.

Copyright EDP Sciences/Società Italiana di Fisica/Springer-Verlag 2006. PACS: 81.10.Aj Theory and models of crystal growth; physics of crystal growth, crystal morphology, and orientation; 02.30.Jr Partial differential equations. DOI identifier: 10.1140/epjb/e2006-00380-9
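To see concretely what "finite time effective exponents" means, here is a small illustrative Python sketch (not from the paper; the toy correction term is an assumption): the effective exponent is the local log-log slope of λ(t), and a slowly decaying correction keeps it away from the asymptotic value at any numerically reachable time.

    import numpy as np

    def effective_exponent(t, lam):
        # n_eff(t) = d(log lam) / d(log t), evaluated pointwise.
        return np.gradient(np.log(lam), np.log(t))

    # Toy growth law: asymptotic exponent 0.25, plus a slowly decaying correction.
    t = np.logspace(1, 8, 400)
    lam = t**0.25 * (1.0 + 5.0 * t**-0.1)
    n_eff = effective_exponent(t, lam)
    print(n_eff[0], n_eff[-1])  # effective exponent at early vs. late times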
2021-06-13 16:30:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5193071961402893, "perplexity": 3555.1208225522537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610196.46/warc/CC-MAIN-20210613161945-20210613191945-00612.warc.gz"}
https://turi.com/learn/userguide/clustering/dbscan.html
# DBSCAN

DBSCAN, short for Density-Based Spatial Clustering of Applications with Noise, is the most popular density-based clustering method. Density-based clustering algorithms attempt to capture our intuition that a cluster — a difficult term to define precisely — is a region of the data space where there are lots of points, surrounded by a region where there are few points. DBSCAN does this by partitioning the input data points into three types:

• Core points have a large number of other points within a given neighborhood. The parameter min_core_neighbors defines how many points count as a "large number", while the radius parameter defines how large the neighborhoods are around each point. Specifically, a point x is in the neighborhood of a point y if d(x, y) <= radius, where d is a user-specified distance function.

• Boundary points are within distance radius of a core point, but don't have sufficient neighbors of their own to be considered core.

• Noise points comprise the remainder of the data. These points have too few neighbors to be considered core points, and are further than distance radius from all core points.

Clusters are formed by connecting core points that are neighbors of each other, then assigning boundary points to their nearest core neighbor's cluster. Noise points are left unassigned.

DBSCAN tends to be slower than K-means because it requires computation of the similarity graph on the input dataset, but it has several major conceptual advantages:

• The number of clusters does not need to be known a priori; DBSCAN determines the number of clusters automatically for the given values of min_core_neighbors and radius.
• DBSCAN recovers much more flexible cluster shapes than K-means, which can only find spherical clusters.
• DBSCAN intrinsically finds and labels outliers as such, making it a great tool for outlier and anomaly detection.
• DBSCAN works with any distance function. Note that results may be poor for distances that do not obey standard properties of distances, i.e. symmetry, non-negativity, triangle inequality, and identity of indiscernibles. The distances "euclidean", "manhattan", "jaccard", and "levenshtein" will likely yield the best results.

#### Basic usage

To illustrate the basic usage of DBSCAN and how the results can differ from K-means, we simulate non-spherical, low-dimensional data using the scikit-learn datasets module.

import graphlab as gl
from sklearn.datasets import make_moons

data = make_moons(n_samples=200, shuffle=True, noise=0.1, random_state=19)
sf = gl.SFrame(data[0]).unpack('X1')

# Fit the DBSCAN model with the neighborhood radius and core-point threshold.
dbscan_model = gl.dbscan.create(sf, features=['X1.0', 'X1.1'],
                                radius=0.25, min_core_neighbors=10)

dbscan_model.summary()

Class : DBSCANModel

Schema
------
Number of examples : 200
Number of feature columns : 2
Max distance to a neighbor (radius) : 0.25
Min number of neighbors for core points : 10
Number of distance components : 1

Training summary
----------------
Total training time (seconds) : 0.1954
Number of clusters : 2

Accessible fields
-----------------
cluster_id : Cluster label for each row in the input dataset.

Like the K-means model, the assignments of points to clusters are in the model's 'cluster_id' field. The second column shows the cluster assignment for the row index of the input data indicated by the first column. The third column shows whether DBSCAN considers the point core, boundary, or noise. Noise points are not assigned to a cluster, which is encoded as a "missing" value in the 'cluster_id' column.
dbscan_model['cluster_id'].head(5)

+--------+------------+------+
| row_id | cluster_id | type |
+--------+------------+------+
| 175 | 1 | core |
| 136 | 0 | core |
| 33 | 1 | core |
| 113 | 0 | core |
| 110 | 1 | core |
+--------+------------+------+
[5 rows x 3 columns]

dbscan_model['cluster_id'].tail(5)

+--------+------------+----------+
| row_id | cluster_id | type |
+--------+------------+----------+
| 53 | 0 | boundary |
| 4 | 1 | boundary |
| 43 | 0 | boundary |
| 116 | 0 | boundary |
| 74 | None | noise |
+--------+------------+----------+
[5 rows x 3 columns]

Because we generated 2D data, we can plot it and color the points according to the cluster assignments generated by our DBSCAN model. The first step is to join the cluster results back to the original data. Please note: DBSCAN scrambles the row order - be careful! Next we define boolean masks so we can plot the core, boundary, and noise points separately. Boundary points are drawn smaller than core points, and (unclustered) noise points are left black. Some plotting code is omitted for brevity.

import matplotlib.pyplot as plt
plt.style.use('ggplot')

sf = sf.join(dbscan_model['cluster_id'], on='row_id', how='left')
sf = sf.rename({'cluster_id': 'dbscan_id'})

fig, ax = plt.subplots()
# ... scatter calls for the core, boundary, and noise masks omitted ...
fig.show()

For comparison, K-means cannot identify the true clusters in this case, even when we tell the model the correct number of clusters.

kmeans_model = gl.kmeans.create(sf, features=['X1.0', 'X1.1'], num_clusters=2)
sf['kmeans_id'] = kmeans_model['cluster_id']['cluster_id']

fig, ax = plt.subplots()
ax.scatter(sf['X1.0'], sf['X1.1'], s=80, alpha=0.9, c=sf['kmeans_id'], cmap=plt.cm.Set1)
fig.show()

#### Setting key parameters

DBSCAN is particularly useful when the number of clusters is not known a priori, but it can be tricky to set the radius and min_core_neighbors parameters. To improve the quality of DBSCAN results, try the following:

• When radius is too large or min_core_neighbors is too small, every point ends up labeled as a core point, often all connected into a single cluster. If your trained model has a single cluster with only core points (and you suspect this is incorrect), try decreasing the radius parameter or increasing the min_core_neighbors parameter.
• Conversely, if DBSCAN labels all points as noise, with no clusters returned at all, try increasing the radius or decreasing the min_core_neighbors.
• Use the GraphLab Create nearest neighbors toolkit to construct a similarity graph on the data and plot the distribution of distances with Canvas. This will give you a sense for reasonable values of the radius parameter; the min_core_neighbors parameter can then be tuned by itself for optimal results (see the radius-estimation sketch at the end of this article).

#### Choosing the distance function

DBSCAN is not restricted to Euclidean distances. The GraphLab Create implementation allows any distance function---including composite distances---to be used with DBSCAN. This allows for a tremendous amount of flexibility in terms of data types and tuning for optimal clustering quality. To demonstrate this flexibility we'll use DBSCAN with Jaccard distance to deduplicate wikipedia articles.

import graphlab as gl
import os

if os.path.exists('wikipedia_w16'):
    sf = gl.SFrame('wikipedia_w16')
else:
    sf = gl.SFrame(...)  # fetch from the remote source (URL not shown in the original)
    sf.save('wikipedia_w16')

This particular subset of wikipedia has over 72,000 documents; in the interest of speed for the demo we sample 20% of this. We also preprocess the data by constructing a bag-of-words representation for each article and trimming out stopwords.
See GraphLab Create's text analytics and SArray documentation for more details.

sf_sample = sf.sample(0.2)
sf_sample['word_bag'] = gl.text_analytics.count_words(sf_sample['X1'])
sf_sample['word_bag'] = sf_sample['word_bag'].dict_trim_by_keys(
    gl.text_analytics.stopwords(), exclude=True)

With our trimmed bag-of-words representation, Jaccard distance is a natural choice. For the purpose of deduplication we want to identify points as "core" if they have any near neighbors at all, so we set min_core_neighbors to 1. To define what we mean by "near" we set the radius parameter somewhat arbitrarily at 0.5; this means that two points are considered neighbors if they share 50% or more of the words present in either article.

wiki_cluster = gl.dbscan.create(sf_sample, features=['word_bag'],
                                radius=0.5, min_core_neighbors=1)

wiki_cluster.summary()

Class : DBSCANModel

Schema
------
Number of examples : 14265
Number of feature columns : 1
Max distance to a neighbor (radius) : 0.5
Min number of neighbors for core points : 1
Number of distance components : 1

Training summary
----------------
Total training time (seconds) : 35.1235
Number of clusters : 88

Accessible fields
-----------------
cluster_id : Cluster label for each row in the input dataset.

From the model summary we see there are 88 clusters in the set of 14,265 documents. For more detail on the distribution of cluster sizes, we can use the sketch_summary method.

wiki_cluster['cluster_id']['cluster_id'].sketch_summary()

+--------------------+---------------+----------+
| item | value | is exact |
+--------------------+---------------+----------+
| Length | 14265 | Yes |
| Min | 0.0 | Yes |
| Max | 87.0 | Yes |
| Mean | 32.3243243243 | Yes |
| Sum | 10764.0 | Yes |
| Variance | 699.186105024 | Yes |
| Standard Deviation | 26.4421274678 | Yes |
| # Missing Values | 13932 | Yes |
| # unique values | 88 | No |
+--------------------+---------------+----------+

Most frequent items:
+-------+----+----+----+----+----+----+----+----+----+----+
| value | 1 | 10 | 5 | 19 | 9 | 36 | 64 | 62 | 11 | 23 |
+-------+----+----+----+----+----+----+----+----+----+----+
| count | 33 | 29 | 18 | 15 | 14 | 13 | 9 | 7 | 7 | 5 |
+-------+----+----+----+----+----+----+----+----+----+----+

Quantiles:
+-----+-----+-----+------+------+------+------+------+------+
| 0% | 1% | 5% | 25% | 50% | 75% | 95% | 99% | 100% |
+-----+-----+-----+------+------+------+------+------+------+
| 0.0 | 1.0 | 1.0 | 10.0 | 24.0 | 55.0 | 80.0 | 86.0 | 87.0 |
+-----+-----+-----+------+------+------+------+------+------+

This indicates that of our 14,265 documents, 13,932 are considered noise (i.e. missing values), which in this context means they have no duplicates. The largest cluster has 33 duplicate documents! Let's see what they are. To do this we again need to join the cluster IDs back to our input dataset.

sf_sample = sf_sample.add_row_number('row_id')
sf_sample = sf_sample.join(wiki_cluster['cluster_id'], on='row_id', how='left')

sf_sample[sf_sample['cluster_id'] == 1][['X1']].print_rows(10, max_row_width=80, max_column_width=80)

+---------------------------------------------------------------------------------+
| X1 |
+---------------------------------------------------------------------------------+
| graceunitedmethodistchurchwilmingtondelaware it was built in 1868 and added ... |
| firstpresbyterianchurchdelhinewyork it was added to the national register of... |
| methodistepiscopalchurchofnorwich it was added to the national register of h... |
| odonelhouseandfarm it was listed on the national register of historic places... |
| saintpaulsepiscopalchurchwatertownnewyork it was listed on the national regi... |
| windsorhillshistoricdistrict it was added to the national register of histor... |
| pepperellcenterhistoricdistrict the district was added to the national regis... |
| eboardmanhouse it was built in 1820 and added to the national register of hi... |
| johnsoutherhouse the house was built in 1883 and added to the national regis... |
| ednastolikerthreedecker it was built in 1916 and added to the national regis... |
+---------------------------------------------------------------------------------+
[33 rows x 1 columns]

It seems this cluster captures a set of article stubs that simply list when a physical structure was built and added to the National Register of Historic Places.

There are two important caveats regarding distance functions in DBSCAN:

1. DBSCAN computes many pairwise distances. For dense data, GraphLab Create computes some distances much faster than others, namely "euclidean", "squared_euclidean", "cosine", and "transformed_dot_product". Other distances, as well as all distances with sparse data, may result in longer run times.

2. DBSCAN does not explicitly require the standard distance properties (symmetry, non-negativity, triangle inequality, and identity of indiscernibles) to hold, but it is based on connecting high-density points which are close to each other into a single cluster. If the specified notion of closeness violates the usual distance properties, DBSCAN may yield counterintuitive results. We expect to see the most intuitive results with "euclidean", "manhattan", "jaccard", and "levenshtein" distances.
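Returning to the advice in "Setting key parameters" above, here is a minimal sketch of estimating a reasonable radius from the data itself, using the moons SFrame from the basic-usage example. This assumes the standard GraphLab Create nearest-neighbors API; the 'rank' and 'distance' column names are what that toolkit returns for queries.

# Build a nearest-neighbors model on the same features used for DBSCAN.
nn_model = gl.nearest_neighbors.create(sf, features=['X1.0', 'X1.1'],
                                       distance='euclidean')

# For each point, fetch its 10 nearest neighbors (k=11 includes the point itself).
knn = nn_model.query(sf, k=11)
knn = knn[knn['rank'] > 1]  # drop each point's trivial self-match

# Quantiles of this distribution suggest candidate values for radius.
knn['distance'].sketch_summary()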
2017-10-19 10:55:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41225630044937134, "perplexity": 1914.7245473689002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823282.42/warc/CC-MAIN-20171019103222-20171019123222-00832.warc.gz"}
https://blag.nullteilerfrei.de/tag/markdown/
## Convert Markdown to JIRA Format For some reason, [Jira]'s [formatting] is not Markdown. Since you write everything in Markdown, you might be looking for a converter. If you furthermore hate node.js as much as yours truly, the search can easily claim your soul. Rest assured - I think [mistletoe] is the answer we are seeking. It is a pure Python Markdown parser which can render the parsed Markdown in any format, and one of them is Jira. It even comes with a [script] for this exact purpose. [Jira]: https://jira.atlassian.com/ [formatting]: https://jira.atlassian.com/secure/WikiRendererHelpAction.jspa?section=all [mistletoe]: https://github.com/miyuchina/mistletoe [script]: https://github.com/miyuchina/mistletoe/blob/dev/contrib/md2jira.py ## WordPress, KaTeX and the Showdown We decided to use the javascript markdown engine [showdown](https://github.com/showdownjs/showdown) for the blawg, and [$\KaTeX$](https://khan.github.io/KaTeX/) for rendering latex. If you think that's a good way to go: Here is how to do it with wordpress.
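Back to the md2jira route from the first post: if you just want to see mistletoe convert something, a minimal sketch looks like this. The renderer ships in the repo's contrib folder; treat the exact module path and class name as assumptions, since they have shifted between versions.

    import mistletoe
    from jira_renderer import JIRARenderer  # from mistletoe's contrib/ directory

    # Render a Markdown file to Jira wiki markup.
    with open('notes.md') as fin:
        print(mistletoe.markdown(fin, JIRARenderer))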
2020-06-06 07:21:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17004361748695374, "perplexity": 5690.042108475228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348511950.89/warc/CC-MAIN-20200606062649-20200606092649-00357.warc.gz"}
https://worldenergyexperts.com/definition-of-thermal-energy-physics
# Definition of thermal energy physics?

Date created: Thu, Apr 8, 2021 6:01 PM

Content

FAQ

Those who are looking for an answer to the question «Definition of thermal energy physics?» often ask the following questions:

### ❓⚡ How to calculate thermal energy physics definition?

However, the low-level thermal energy represents "the end of the road" of the transfer of the energy. Derivation. Specific Heat Capacity = $$\frac{\text{thermal energy input}}{\text{mass}\times \text{temperature change}}$$ To write this equation in symbols, we will use C for specific heat capacity, T for temperature, and $E_t$ for thermal energy. But the equation involves not T itself but the change in T during the energy-input process.

### ❓⚡ What is thermal energy measured in physics definition?

What is thermal energy? (article) | Khan Academy.

### ❓⚡ Thermal energy definition?

Thermal Energy – Definition. In thermodynamics, thermal energy (also called the internal energy) is defined as the energy associated with microscopic forms of energy. It is an extensive quantity: it depends on the size of the system, or on the amount of substance it contains. The SI unit of thermal energy is the joule (J).

Thermal energy refers to the energy contained within a system that is responsible for its temperature. Heat is the flow of thermal energy. A whole branch of physics, thermodynamics, deals with how heat is transferred between different systems and how work is done in the process (see the 1ˢᵗ law of thermodynamics).

Thermal energy is energy possessed by an object or system due to the movement of particles within the object or the system. Thermal energy is one of various types of energy, where 'energy' can be...

Thermal energy is the internal energy of a thermodynamic equilibrium system, which is proportional to absolute temperature and which increases or decreases through energy transfer, usually in the form of work or heat in thermodynamic processes.

Thermal energy, internal energy present in a system in a state of thermodynamic equilibrium by virtue of its temperature. Thermal energy cannot be converted to useful work as easily as the energy of systems that are not in states of thermodynamic equilibrium.

Wikipedia Definition. Thermal energy refers to several distinct physical concepts, such as the internal energy of a system; heat or sensible heat, which are defined as types of energy transfer (as is work); or the characteristic energy $k_{B}T$ of a degree of freedom in a thermal system, where $T$ is the temperature and $k_{B}$ is the Boltzmann constant.

Thermal energy (heat) transfer happens when there is a difference in temperature. The energy moves from the higher temperature area to the lower temperature area.

Conduction

In thermodynamics, internal energy (also called the thermal energy) is defined as the energy associated with microscopic forms of energy. It is an extensive quantity: it depends on the size of the system, or on the amount of substance it contains.

Heat transferred in motion or flow: transfer of heat that applies to liquids and gases. When the fluid receives heat, its particles gain thermal energy, which makes them vibrate faster and expand. When particles expand, their density decreases and the hot substance moves up. Therefore, the cold fluid moves down to replace it, and the motion continues in a circle.

Temperature is the value of the property that is the same for two objects after they've been in contact long enough and are in thermal equilibrium.
Zeroth law of thermodynamics: If A and B are each in thermal equilibrium with C, then A and B are in thermal equilibrium with each other.

We've handpicked 21 related questions for you, similar to «Definition of thermal energy physics?» so you can surely find the answer!

### Is radiation thermal energy definition?

Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the …

### Thermal energy definition for kids?

Thermal energy is produced by the movement of molecules in an object. You see, all objects are made up of tiny particles called molecules. In cold things, like ice cubes, the molecules move very...

### What are thermal energy definition?

In thermodynamics, thermal energy (also called the internal energy) is defined as the energy associated with microscopic forms of energy. It is an extensive quantity: it depends on the size of the system, or on the amount of substance it contains. The SI unit of thermal energy is the joule (J).

### What influences thermal energy definition?

Thermal Energy – Definition. In thermodynamics, thermal energy (also called the internal energy) is defined as the energy associated with microscopic forms of energy. It is an extensive quantity: it depends on the size of the system, or on the amount of substance it contains. The SI unit of thermal energy is the joule (J).

### What is definition thermal energy?

Thermal energy definition is - energy in the form of heat.

### What is thermal energy definition?

Distinguishing Temperature, Heat, and Thermal Energy. Temperature is related to the kinetic energies of the molecules of a material. It is the average kinetic energy of... Internal energy refers to the total energy of all the molecules within the object. It is an extensive property,... Finally, heat ...

### Why solar thermal energy definition?

Solar thermal energy (STE) is a form of energy and a technology for harnessing solar energy to generate thermal energy for use in industry, and in the residential and commercial sectors. Solar thermal collectors are classified by the United States Energy Information Administration as low-, medium-, or high-temperature collectors.

### How is thermal energy transferred in physics?

Thermal energy transfer involves the transfer of internal energy. The three types of thermal energy transfer are conduction, convection and radiation. Conduction involves direct contact of atoms, convection involves the movement of warm particles and radiation involves the movement of electromagnetic waves.

### How to calculate thermal energy in physics?

Simply so, what is the equation for thermal energy in physics? The thermal energy is usually expressed by Q. It is directly proportional to the mass of the substance, the temperature difference and the specific heat: Q = mcΔT, where m = mass, c = specific heat, and ΔT = temperature difference. The SI unit of thermal energy is the joule (J).

### How to calculate thermal energy physics formula?

Specific Heat Capacity = $$\frac{\text{thermal energy input}}{\text{mass}\times \text{temperature change}}$$ To write this equation in symbols, we will use C for specific heat capacity, T for temperature, and $E_t$ for thermal energy. But the equation involves not T itself but the change in T during the energy-input process.

### How to calculate thermal energy physics notes?

However, the low-level thermal energy represents "the end of the road" of the transfer of the energy. Derivation.
Specific Heat Capacity = $$\frac{\text{thermal energy input}}{\text{mass}\times \text{temperature change}}$$ To write this equation in symbols, we will use C for specific heat capacity, T for temperature, and $E_t$ for thermal energy. But the equation involves not T itself but the change in T during the energy-input process.

### How to calculate thermal energy physics problems?

Thermal Energy Practice Problems. Problem 1: Calculate the thermal energy required to raise the temperature of 1.5 kg of oil from 10 °C to 90 °C. The specific heat of oil is 2.1 J/kg·°C. Solution: Given data: Thermal energy, ...

### How to calculate thermal energy physics questions?

Calculate the thermal energy change when 0.200 kg of water cools from 100°C to 25.0°C. Change in temperature = (100 - 25) = 75.0°C. Change in thermal energy = mass …

### How to do thermal energy problems physics?

Conservation of Thermal Energy Problem: Work Done by a Heat Engine. Over one complete cycle, 640 J of heat is put into a heat engine that operates with 50% efficiency. How much work is done by the engine in that cycle?

### How to solve for thermal energy physics?

Similarly, thermal energy input is that amount by which the thermal energy changes, $\Delta E_t$. Using these abbreviations, our equation becomes: $$C = \frac{\Delta E_{t}}{m\,\Delta T}$$ Often it is useful to rearrange this equation to solve for the change in thermal energy: $$\Delta E_{t} = m\,C\,\Delta T$$

### What causes thermal energy in physics class?

What is thermal energy? (article) | Khan Academy.

### What causes thermal energy in physics examples?

Thermal energy is an example of kinetic energy, as it is due to the motion of particles, with motion being the key. Thermal energy results in an object or a system …

### What causes thermal energy in physics images?

Heat or thermal energy. Thermal energy (also called heat energy) is produced when a rise in temperature causes atoms and molecules to move faster and collide with each other. The energy that comes from the temperature of the heated substance is called thermal energy.
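For reference, Problem 1 above works out as follows with the values exactly as stated:

$$Q = m\,c\,\Delta T = 1.5\ \text{kg} \times 2.1\ \frac{\text{J}}{\text{kg}\cdot{}^{\circ}\text{C}} \times (90 - 10)\ ^{\circ}\text{C} = 252\ \text{J}$$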
2021-10-19 21:52:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7634524703025818, "perplexity": 535.4878332039215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00129.warc.gz"}
https://pos.sissa.it/358/867/
Volume 358 - 36th International Cosmic Ray Conference (ICRC2019) - NU - Neutrino

Searches for Ultra-High-Energy Neutrinos with ANITA

C. Deaconu

Full text: pdf
Pre-published on: 2019 July 22
Published on:

Abstract

The ANtarctic Impulsive Transient Antenna (ANITA) long-duration balloon experiment flies an interferometric radio array over Antarctica with a primary goal of detecting impulsive Askaryan radio emission from ultra-high-energy neutrinos interacting in the ice sheet. The third and fourth ANITA flights were completed in January 2015 and December 2016, respectively, obtaining the most stringent limits on the diffuse ultra-high-energy neutrino flux above $10^{19.5}$ eV to date. We also discuss ongoing analyses and the proposed Payload for Ultrahigh Energy Observations (PUEO), the successor to the ANITA program. PUEO's larger number of antennas and improved trigger would significantly improve sensitivity compared to ANITA-IV.

Open Access
2019-08-18 02:41:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2521449029445648, "perplexity": 11629.458903562972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313589.19/warc/CC-MAIN-20190818022816-20190818044816-00550.warc.gz"}
https://dgtal.org/doc/0.9.3/moduleDisplay3D.html
DGtal  0.9.3

Display3D: a stream mechanism for displaying 3D DGtal objects

This part of the manual describes how to visualize 3D objects and how to import them from binary files (.obj or pgm3d).

# Display3D: a stream mechanism from abstract class Display3D

The semi-abstract template class Display3D defines the stream mechanism to display 3D primitives (like PointVector, DigitalSetBySTLSet, Object ...). The classes Viewer3D, Board3D and Board3DTo2D implement three different ways to display 3D objects. The first one (Viewer3D) permits an interactive visualization (based on OpenGL). The second (Board3D) provides a mechanism to export the 3D objects to the Wavefront OBJ format. The last one (Board3DTo2D) provides 3D visualization to a 2D vector format (using a projection mechanism based on the CAIRO library).

Display3D has two template parameters which correspond to the digital space and the Khalimsky space used to put the figures. From the Digital Space and Khalimsky Space, we use the associated embedding mechanism to convert digital objects to structures in $$\mathbb{R}^n$$. Viewer3D and Board3DTo2D allow to set and change the camera point of view for the visualization.

# Interactive visualization from Viewer3D

The class Viewer3D inherits from the base class QGLViewer (which is based on QGLWidget). It permits displaying simple 3D shapes. LibQGLViewer ( http://www.libqglviewer.com ) is a C++ library based on QT allowing access to simple 3D features like camera moving, mouse and keyboard interaction, clipping planes, etc. It possesses the additional functionality of displaying 2D slice images from a volumetric one.

First, to use the Viewer3D stream, you need to include the following headers:

#include "DGtal/io/3dViewers/Viewer3D.h"

The following code snippet defines three points and a rectangular domain in Z3. It then displays them in a Viewer3D object. The full code is in viewer3D-1-points.cpp. The first step to visualize 3D objects with Viewer3D is to create a QApplication in main():

using namespace DGtal;
using namespace Z3i;
QApplication application(argc,argv);
Viewer3D<> viewer;
viewer.show();

Then we can display some 3D primitives:

Point p1( 0, 0, 0 );
Point p2( 5, 5 ,5 );
Point p3( 2, 3, 4 );
Domain domain( p1, p2 );
viewer << domain;
viewer << p1 << p2 << p3;
viewer << Viewer3D<>::updateDisplay;

You should obtain the following visualization:

Digital point visualization with Viewer3D.

## Interactive change of rendering mode

By default the rendering mode of the Viewer3D is defined by a blend of diffuse (Lambertian) and specular parts. The user can switch to the following modes by using the key P:

type | default mixte | metallic | plastic | lambertian
mode | 0 | 1 | 2 | 3

## Light source position modes

The light source position is by default defined according to the camera position, and its position will not change even after the camera moves (default mode). In this way, if you move the camera, the object will always be illuminated (see the images of mode 0 in the following table). This default light position can be changed interactively by a mouse move in the Viewer3D (with the key SHIFT+CTRL (SHIFT+CMD on mac)). There exists a second mode where the light source position is fixed according to the main scene axes. In this case, even if the camera moves, the light source keeps the same position with respect to the objects of the scene (see for instance the images of mode 1 in the following table).
As for the previous mode, the default light position can be changed interactively by a mouse move in the Viewer3D (with the key SHIFT+CTRL (SHIFT+CMD on mac)). To change between the two light position modes you can use the key P in the Viewer3D.

Light position modes 0 and 1: default position and view after a camera move (illustrations omitted).

Note
Note that you can display the camera settings in the console (key C), which can be used with Board3DTo2D as described in the following.

## Ball display modes

The Viewer3D class has a special mode to display balls (added from the Display3D method addBall() ). By default, the balls are constructed with OpenGL quadrangulated spheres, which can be slow if the number of balls is huge. In the latter case, it is possible to use OpenGL points to increase display performance. To use this mode, you just have to activate it with the method setUseGLPointForBalls() when needed:

viewer.setUseGLPointForBalls(true);

By changing the mode you will obtain such a display:

Ball display in default mode vs. with the OpenGL point mode (illustrations omitted).

You can also change the ball display mode interactively by using the key O.

# Alternative visualization without QGLViewer dependency

There are two ways to obtain an alternative visualization without the dependency on the 3D interactive viewer:

• Static display by using Board3DTo2D.
• Export the 3D objects into 3D files (OBJ format) with Board3D.

## Static display

The same visualization can be obtained with the Board3DTo2D class. You just need to adapt the camera settings (see example io/boards/dgtalBoard3DTo2D-1-points.cpp).

Board3DTo2D<Space, KSpace> board;
board << domain;
board << p1 << p2 << p3;

board << CameraPosition(2.500000, 2.500000, 16.078199)
<< CameraDirection(0.000000, 0.000000, -1.000000)
<< CameraUpVector(0.000000, 1.000000, 0.000000);
board << CameraZNearFar(4.578200, 22.578199);

board << SetMode3D(board.className(), "WireFrameMode");
board.saveCairo("dgtalBoard3DTo2D-1-points.png", Board3DTo2D<Space, KSpace>::CairoPNG, 600*2, 400*2);

This example should provide a comparable visualization.

## Export objects with Board3D

To export 3D objects into the OBJ format you simply need to use the Board3D class, which inherits from the Display3D class. You can for instance follow these steps:

Board3D<> board;
board << SetMode3D(domain.className(), "Paving");
board << p1 << p2 << p3;
board << shape_set;
board.saveOBJ("dgtalBoard3D-1-points.obj");

And then visualize the resulting OBJ and MTL files by using for instance Blender:

Visualisation of exported OBJ file with blender.

By setting a second parameter to true when calling saveOBJ, the geometrical objects will be scaled so that they fit in a [-1/2, 1/2]^3 domain.

# Visualization of DigitalSet and digital objects

The Viewer3D class also allows displaying a DigitalSet directly. The first step is to create a DigitalSet, for example from the Shape class.

QApplication application(argc,argv);
typedef Viewer3D<> MyViewer;
MyViewer viewer;
viewer.show();
Point p1( 0, 0, 0 );
Point p2( 10, 10 , 10 );
Domain domain( p1, p2 );
viewer << domain;
DigitalSet shape_set( domain );
Shapes<Domain>::addNorm1Ball( shape_set, Point( 5, 5, 5 ), 2 );
Shapes<Domain>::addNorm2Ball( shape_set, Point( 3, 3, 3 ), 2 );
shape_set.erase(Point(3,3,3));
shape_set.erase(Point(6,6,6));
viewer << shape_set << MyViewer::updateDisplay;

You should obtain the following visualization (see example: viewer3D-2-sets.cpp ):

Digital point visualization with Viewer3D.
# Mode selection: the example of digital objects in 3D

As for Board2D, a mode can be chosen to display elements (SetMode3D). You just have to specify the class name (the easiest way is to call the method className() on an instance of the correct type) and the desired mode (a string).

Object6_18 shape( dt6_18, shape_set );
viewer << SetMode3D( shape.className(), "DrawAdjacencies" );
viewer << shape;

or change the couple of adjacency:

Object18_6 shape2( dt18_6, shape_set );
viewer << SetMode3D( shape2.className(), "DrawAdjacencies" );
viewer << shape2;

You should obtain the two following visualizations (see example: viewer3D-3-objects.cpp ):

6-18 digital Adjacencies visualization with Viewer3D.
18-6 digital Adjacencies visualization with Viewer3D.

Note that the digital set was displayed with transparency by setting custom colors.

# Useful modes for several 3D drawable elements

## Listing of different modes

As for Board2D, the objects can be displayed with different possible modes:

Note that for KhalimskyCell and SignedKhalimskyCell the default colors (with CustomColors3D objects) can be changed only with the empty mode ("") and the "IllustrationCustomColor" mode.

"*": partially for (Board3DTo2D), see issue 582.
"**": only for Viewer3D.

## Examples with Object modes

The file viewer3D-4-modes.cpp illustrates several possible modes to display these objects:

We can display the set of points and the domain

Point p1( -1, -1, -2 );
Point p2( 2, 2, 3 );
Domain domain( p1, p2 );
Point p3( 1, 1, 1 );
Point p4( 2, -1, 3 );
Point p5( -1, 2, 3 );
Point p6( 0, 0, 0 );
Point p0( 0, 2, 1 );

without mode change (see image (a)):

viewer << p1 << p2 << p3<< p4<< p5 << p6 << p0;
viewer << domain;

We can change the mode for displaying the domain (see image (b)):

viewer << p1 << p2 << p3<< p4<< p5 << p6 << p0;
viewer << SetMode3D(domain.className(), "PavingGrids");
viewer << domain;

(Note that to avoid transparency display artifacts, we need to display the domain after the voxel elements included in the domain.)

It is also possible to change the mode for displaying the voxels (see image (c)):

viewer << domain;
viewer << SetMode3D( p1.className(), "Grid" );
viewer << p1 << p2 << p3<< p4<< p5 << p6 << p0;

we obtain the following visualizations:

(a) Default visualization of a digital point set with the associated domain (b) visualization using Paving mode for the domain. (c) visualization using Paving mode for the voxels.

## Illustrating KhalimskyCell with the "Illustration" mode

The "Illustration" mode is defined to construct illustrations composed of KhalimskyCell. In particular, it permits increasing the space between cells and improving the display visibility. It can be used typically as follows:

First you need to add the following header:

#include "DGtal/io/DrawWithDisplay3DModifier.h"

From a SignedKhalimskyCell (SCell in DGtal::Z3i) you have to select the "Illustration" mode:

SCell v = K.sSpel( Point( 0, 0, 0 ), KSpace::POS ); // +v
viewer << SetMode3D( v.className(), "Illustration" );

Then, to display a surfel with its associated voxel, you need to transform the surfel by constructing a shifted and resized version (DGtal::TransformedKSSurfel) according to its associated voxel:

SCell sx = K.sIncident( v, 0, true ); // surfel further along x

You will obtain this type of illustration (obtained from the example viewer3D-4bis-illustrationMode.cpp ).

Illustration of the "Illustration" KhalimskyCell mode.
There exists a specific method to display surfels (Khalimsky cells of dimension 2 in a space of dimension 3) as quadrilaterals where the user can prescribe a unit normal vector. In Viewer3D, the normal vector is used in the rendering process (useful to check the geometrical consistency of a normal vector field). Basic usage is:

Display3DFactory<Space,KSpace>::drawOrientedSurfelWithNormal( aViewer, aSurfel, theSurfelSign, aNormalVector);

or, if the surfel is not oriented (unsigned Khalimsky cell), the corresponding unoriented variant. In the latter case, the quadrilateral vertices are oriented such that the dot product between the normal vector and the quad canonical normal vector is positive. Finally, these two methods accept a last boolean parameter such that if true, the quad is geometrically duplicated with opposite normal vector (double-quad rendering).

# Changing the style for displaying drawable elements

As for Board2D, it is possible to customize the way 3D elements are displayed by using an instance of the following classes:

• CustomColors3D: to change the color used to display surface primitives (GL_QUADS) and the pen color (LINE/POINTS);

The custom color can be applied by an instance of CustomColors3D as follows:

viewer << CustomColors3D(Color(250, 0,0),Color(250, 0,0));
viewer << p4 << p5 ;

The example viewer3D-5-custom.cpp illustrates some possible customizations:

Example of several custom displays.

# Adding clipping planes

It is also possible through the stream mechanism to add clipping planes with the object ClippingPlane. We just have to give the real plane equation and add it as when displaying an element. The file viewer3D-6-clipping.cpp gives a simple example.

Starting from a digital set defined from a Norm2 ball,

Point p1( 0, 0, 0 );
Point p2( 20, 20, 20 );
Domain domain(p1, p2);
DigitalSet shape_set( domain );
Shapes<Domain>::addNorm2Ball( shape_set, Point( 10, 10, 10 ), 7 );
viewer << SetMode3D( shape_set.className(), "Both" );
viewer << shape_set;
viewer << CustomColors3D(Color(250, 200,0, 100),Color(250, 200,0, 20));
viewer << SetMode3D( p1.className(), "Paving" );

we can add for instance two different clipping planes:

viewer << ClippingPlane(1,0,0,-4.9);
viewer << ClippingPlane(0,1,0.3,-10);

(a) visualization of the initial set. (b) visualization after adding the first clipping plane (0,1,0.3,-10). (c) visualization after adding a second clipping plane (1,0,0,-4.9).

It is also possible to remove the visualization of the transparent clipping plane by adding a boolean option:

viewer << ClippingPlane(0,1,0.3,-10, false);

# Adding 2D image visualization in 3D

## Adding 2D slice images

With the Viewer3D class it is possible to display 2D slice images of a volumetric one. It can be done in a few steps (see the example io/viewers/viewer3D-8-2DSliceImages.cpp):

// Extracting the 2D images from the 3D one and from a given dimension.
// First image: the tenth Z slice (dim=2)
// (typedef head reconstructed; the extraction lost its first template arguments)
typedef DGtal::ConstImageAdapter<Image3D, DGtal::Z2i::Domain,
                                 DGtal::functors::Projector<DGtal::Z3i::Space>,
                                 Image3D::Value, DGtal::functors::Identity > MySliceImageAdapter;

// Define the functor to recover a 2D domain from the 3D one in the Z direction (2):
DGtal::functors::Projector<DGtal::Z2i::Space> transTo2DdomainFunctorZ;
transTo2DdomainFunctorZ.initRemoveOneDim(2);
DGtal::Z2i::Domain domain2DZ(transTo2DdomainFunctorZ(imageVol.domain().lowerBound()),
                             transTo2DdomainFunctorZ(imageVol.domain().upperBound()));

// Define the functor to associate 2D coordinates to the 3D ones by giving the direction Z (2) and the slice number (10):
// (functor definition reconstructed to match its later use)
DGtal::functors::Projector<DGtal::Z3i::Space> aSliceFunctorZ(10);
aSliceFunctorZ.initAddOneDim(2);

// We can now obtain the slice image (a ConstImageAdapter):
const auto identityFunctor = DGtal::functors::Identity();
MySliceImageAdapter aSliceImageZ(imageVol, domain2DZ, aSliceFunctorZ, identityFunctor );

// Second image: the fiftieth Y slice (dim=1)
// Define the functor to recover a 2D domain from the 3D one in the Y direction (1):
DGtal::functors::Projector<DGtal::Z2i::Space> transTo2DdomainFunctorY;
transTo2DdomainFunctorY.initRemoveOneDim(1);
DGtal::Z2i::Domain domain2DY(transTo2DdomainFunctorY(imageVol.domain().lowerBound()),
                             transTo2DdomainFunctorY(imageVol.domain().upperBound()));

// Define the functor to associate 2D coordinates to the 3D ones by giving the direction Y (1) and the slice number (50):
// (functor definition reconstructed to match its later use)
DGtal::functors::Projector<DGtal::Z3i::Space> aSliceFunctorY(50);
aSliceFunctorY.initAddOneDim(1);

// We can now obtain the slice image (a ConstImageAdapter):
MySliceImageAdapter aSliceImageY(imageVol, domain2DY, aSliceFunctorY, identityFunctor );

And then display them using the classic stream operator:

viewer << aSliceImageZ;
viewer << aSliceImageY;

Finally you can adjust the image setting with the Display3DModifier UpdateImagePosition and UpdateImageData objects:

viewer << DGtal::UpdateImagePosition<Z3i::Space, Z3i::KSpace>(1, MyViewer::yDirection, 0.0, 50.0, 0.0);
viewer << DGtal::UpdateImageData<MySliceImageAdapter>(0, aSliceImageZ, 0, 0, 10);

You will obtain such a visualization:

Illustration of the 2D image slice visualization.

You can also change the default mode by using:

viewer << SetMode3D(aSliceImageZ.className(), "BoundingBox");

and by changing the "BoundingBox" mode to "InterGrid" you will obtain the following visualization:

Illustration of the 2D image slice visualization with InterGrid mode.

See more details on this example io/viewers/viewer3D-8-2DSliceImages.cpp or from the DGtalTools repository with the DGtalTools/visualization/3dImageViewer.cpp viewer.

## Adding 2D images (from any embedding)

The slice images are not the only way to display 2D images in 3D. A 2D image can also be extracted and embedded in 3D by using a single embedding functor (Point2DEmbedderIn3D). The example io/viewers/viewer3D-8bis-2Dimages.cpp illustrates such a display.
First we need to add the header file associated with the Point2DEmbedderIn3D:

#include "DGtal/kernel/BasicPointFunctors.h"

Then, the type definition of the ConstImageAdapter is added:

// (typedef head reconstructed; the extraction lost its first template arguments)
typedef DGtal::ConstImageAdapter<Image3D, DGtal::Z2i::Domain,
                                 DGtal::functors::Point2DEmbedderIn3D<DGtal::Z3i::Domain>,
                                 Image3D::Value, DGtal::functors::Identity > ImageAdapterExtractor;

The resulting 2D domain can be deduced from the width used in the functor:

DGtal::Z3i::Point ptCenter(50, 62, 28);
const int IMAGE_PATCH_WIDTH = 20;
// Setting the image domain of the resulting image to be displayed in 3D:
// (lower bound reconstructed; the extraction kept only the upper bound)
DGtal::Z2i::Domain domainImage2D(DGtal::Z2i::Point(0, 0),
                                 DGtal::Z2i::Point(IMAGE_PATCH_WIDTH, IMAGE_PATCH_WIDTH));

The embedder can then be used to extract the image:

// Extracting images from the 3D embedder
// (surrounding constructor call reconstructed around the two surviving argument lines)
DGtal::functors::Point2DEmbedderIn3D<DGtal::Z3i::Domain> embedder(imageVol.domain(),
                     ptCenter+DGtal::Z3i::RealPoint(200.0*cos(alpha),100.0*sin(alpha)),
                     DGtal::Z3i::RealPoint(cos(alpha),sin(alpha),cos(2.0*alpha)),
                     IMAGE_PATCH_WIDTH);
ImageAdapterExtractor extractedImage(imageVol, domainImage2D, embedder, idV);

and used to display the image with the correct coordinates:

// Display image and update its position with the embedder
viewer << extractedImage;
viewer << DGtal::UpdateImage3DEmbedding<Z3i::Space, Z3i::KSpace>(pos,
                     embedder(Z2i::RealPoint(0,0)),
                     embedder(Z2i::RealPoint(IMAGE_PATCH_WIDTH,0)),
                     embedder(domainImage2D.upperBound()),
                     embedder(Z2i::RealPoint(0, IMAGE_PATCH_WIDTH)));

This example will produce the following visualization:

Illustration of the 2D image visualization.

# Adding 3D image visualization

In the same way, a 3D image can be displayed. Using the same stream operator you will obtain this kind of display:

Example of 3D image visualization, together with digital sets.

See more details in the example: io/viewers/viewer3D-9-3Dimages.cpp

# Customizing Slice Image visualization

By default an image is displayed in gray scale levels from its scalar values. However, it is possible to display a color texture image by using the object AddTextureImage2DWithFunctor or AddTextureImage3DWithFunctor (of the DrawWithDisplay3DModifier class) with the RGB mode, which allows interpreting the scalar as a color value. A color functor can also be specified to generate a given color. For instance, the previous examples can easily be displayed with a color map.

First we generate a color functor to produce unsigned integers interpreted as RGB colors:

#include "DGtal/io/DrawWithDisplay3DModifier.h"
#include "DGtal/io/viewers/DrawWithViewer3DModifier.h"
#include "DGtal/io/Color.h"

with a functor transforming an integer representing a grayscale value into an integer representing a color:

struct hueFct{
  inline unsigned int operator() (unsigned char aVal) const
  {
    // Body reconstructed (the source text is truncated here): map the grayscale
    // value to a hue-based color packed as 0xRRGGBB, as in the DGtal example.
    DGtal::HueShadeColorMap<unsigned char> hueShade(0, 255);
    DGtal::Color col = hueShade(aVal);
    return ((unsigned int) col.red() << 16)
         | ((unsigned int) col.green() << 8)
         |  (unsigned int) col.blue();
  }
};
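The functor can then be handed to the texture modifier. A hedged sketch of that last step (the template arguments and the RGB-mode flag follow the pattern of the DGtal slice-image example and may differ across versions):

// Stream the Z slice again, this time textured through the hue functor:
viewer << DGtal::AddTextureImage2DWithFunctor<MySliceImageAdapter, hueFct, Z3i::Space, Z3i::KSpace>
            (aSliceImageZ, hueFct(), MyViewer::RGBMode); // interpret each scalar as an RGB value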
2021-04-13 12:51:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3445363938808441, "perplexity": 7861.001737868355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072366.31/warc/CC-MAIN-20210413122252-20210413152252-00453.warc.gz"}
http://mathhelpforum.com/advanced-algebra/110240-find-value-q-matrix.html
# Thread: Find the value of 'q' in the matrix 1. ## Find the value of 'q' in the matrix Question : If P= $\begin{bmatrix}1 & -1 \\ 2 & -1\end{bmatrix}$ and Q = $\begin{bmatrix} 1 & 1 \\ q & -1\end{bmatrix}$ and $(P+Q)^2 = P^2 + Q^2$ , determine the value of q 2. Originally Posted by zorro Question : If P= $\begin{bmatrix}1 & -1 \\ 2 & -1\end{bmatrix}$ and Q = $\begin{bmatrix} 1 & 1 \\ q & -1\end{bmatrix}$ and $(P+Q)^2 = P^2 + Q^2$ , determine the value of q What you have boils down to finding the value of q that satisfies $PQ + QP = 0 \Rightarrow PQ = -QP$. So do the necessary multiplications (it's assumed you can do this without trouble) and then equate the appropriate matrix elements to get an equation in q. 3. ## Please let me know if the value of q = 0 Originally Posted by mr fantastic What you have boils down to finding the value of q that satisfies $PQ + QP = 0 \Rightarrow PQ = -QP$. So do the necessary multiplications (it's assumed you can do this without trouble) and then equate the appropriate matrix elements to get an equation in q. Please let me know if the value of q = 0 or not 4. Originally Posted by zorro Please let me know if the value of q = 0 or not Have you substituted the value of q and checked that it works? If you post your work I will review it. 5. $(P+Q)^2 = P^2 + Q^2\Leftrightarrow$ $2PQ=0\Leftrightarrow P=0$ or $Q=0$ 6. Originally Posted by Raoh $(P+Q)^2 = P^2 + Q^2\Leftrightarrow$ $2PQ=0\Leftrightarrow P=0$ or $Q=0$ NO! matrices do not satisfy the cancellation law. It is quite possible to have PQ= 0 without either P or Q equal to 0. For example, if $P= \begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}$ and $Q= \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}$ then neither is the 0 matrix but $PQ= \begin{bmatrix}0 & 0 \\ 0 & 0\end{bmatrix}$. 7. Matrices aren't always commutative, as well... So $PQ + QP = 0 \Leftrightarrow 2PQ = 0$ isn't neccesarily correct. Just do what Mr.F said and you should be fine. 8. Originally Posted by HallsofIvy NO! matrices do not satisfy the cancellation law. It is quite possible to have PQ= 0 without either P or Q equal to 0. For example, if $P= \begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}$ and $Q= \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}$ then neither is the 0 matrix but $PQ= \begin{bmatrix}0 & 0 \\ 0 & 0\end{bmatrix}$. thank you 9. zorro, just go ahead and do the computation. What is P+ Q? What is (P+Q)^2? What are P^2 and Q^2? What is P^2+ Q^2? Set (P+ Q)^2 equal to P^2+ Q^2 and set terms equal component by component. You will have four equations in q. Do they all give the same value for q? 10. ## Is this correct? Originally Posted by mr fantastic Have you substituted the value of q and checked that it works? If you post your work I will review it. $P \ = \ \begin{bmatrix}1 & -1 \\ 2 & -1 \end{bmatrix}$ $Q \ = \ \begin{bmatrix}1 & 1 \\ q & -1 \end{bmatrix}$ $\therefore (P + Q) \ = \ \begin{bmatrix} 1 & 0 \\ (2 + q) & -2 \end{bmatrix}$ $(P + Q)^2 \ = \ \begin{bmatrix} 1 & 0 \\ (q + 2)^2 & 4 \end{bmatrix} \ = \ 4 - (q + 2)^2 \ = \ [- q^2 - 4q] \longmapsto eq\ 1$ $P^2 \ = \ \begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix}$ $Q^2 \ = \ \begin{bmatrix} 1 & 1 \\ q^2 & 1 \end{bmatrix}$ $P^2 + Q^2 \ = \ \begin{bmatrix} 2 & 2 \\ (4 + q^2) & 2 \end{bmatrix} \ = \ 4 - 2(4 + q^2) \ = \ [-4 - 2q^2] \longmapsto eq \ 2$ Since $(P + Q)^2 \ = \ (P^2 + Q^2)$ From eq 1 and eq 2 $-q^2 - 4q \ = \ -4 - 2q^2$ $-q^2 + 2q^2 - 4q \ = \ -4$ $q^2 - 4q + 4 \ = \ 0$ $(q - 2)(q - 2)$ $\therefore$ $q = 2$ Is this correct ???????? 11. 
Originally Posted by zorro $P \ = \ \begin{bmatrix}1 & -1 \\ 2 & -1 \end{bmatrix}$ $Q \ = \ \begin{bmatrix}1 & 1 \\ q & -1 \end{bmatrix}$ $\therefore (P + Q) \ = \ \begin{bmatrix} {\color{red}1} & 0 \\ (2 + q) & -2 \end{bmatrix}$ Mr F says: That red 1 should be a 2. $(P + Q)^2 \ = \ \begin{bmatrix} 1 & 0 \\ (q + 2)^2 & 4 \end{bmatrix}$ Mr F says: This is completely wrong. Please go back and review how to multiply two matrices (which is what you're doing when you take the square of a matrix). $\ = \ 4 - (q + 2)^2 \ = \ [- q^2 - 4q] \longmapsto eq\ 1$ Mr F says: What is this meant to be? What is its relevance? $P^2 \ = \ \begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix}$ $Q^2 \ = \ \begin{bmatrix} 1 & 1 \\ q^2 & 1 \end{bmatrix}$ Mr F says: The above is completely wrong. Please go back and review how to multiply two matrices (which is what you're doing when you take the square of a matrix). $P^2 + Q^2 \ = \ \begin{bmatrix} 2 & 2 \\ (4 + q^2) & 2 \end{bmatrix} \ = \ 4 - 2(4 + q^2) \ = \ [-4 - 2q^2] \longmapsto eq \ 2$ Since $(P + Q)^2 \ = \ (P^2 + Q^2)$ From eq 1 and eq 2 $-q^2 - 4q \ = \ -4 - 2q^2$ $-q^2 + 2q^2 - 4q \ = \ -4$ $q^2 - 4q + 4 \ = \ 0$ $(q - 2)(q - 2)$ $\therefore$ $q = 2$ Is this correct ???????? There are many mistakes you need to fix. Because of these mistakes, all working that follows from them will be wrong. 12. ## I have reworked the problem could u please check Originally Posted by mr fantastic There are many mistakes you need to fix. Because of these mistakes, all working that follows from them will be wrong. Is it right now $(P + Q)^2$ = $\begin{bmatrix} 2 & 0 \\ (2 +q) & -2 \end{bmatrix}$ . $\begin{bmatrix} 2 & 0 \\ (2 +q) & -2 \end{bmatrix}$ = $\begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$ $P^2$ = $\begin{bmatrix} 1 & -1 \\ 2 & -1 \end{bmatrix}$ . $\begin{bmatrix} 1 & -1 \\ 2 & -1 \end{bmatrix}$ = $\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$ $; \qquad \qquad Q^2$ = $\begin{bmatrix} 1 & 1 \\ q & -1 \end{bmatrix}$ . $\begin{bmatrix} 1 & 1 \\ q & -1 \end{bmatrix}$ = $\begin{bmatrix} (1-q) & 0 \\ 0 & (1+q) \end{bmatrix}$ $(P^2 + Q^2)$ = $\begin{bmatrix} -q & 0 \\ 0 & q \end{bmatrix}$ $(P^2 + Q^2) = P^2 + Q^2$ $16 = 1+1-q^2$ $16 = 2 - q^2$ $q^2 = 2 - 16$ $q^2 = -14$ $q = - \sqrt{14}$ Is this correct ???? 13. Originally Posted by zorro Is it right now $(P + Q)^2$ = $\begin{bmatrix} 2 & 0 \\ (2 +q) & -2 \end{bmatrix}$ . $\begin{bmatrix} 2 & 0 \\ (2 +q) & -2 \end{bmatrix}$ = $\begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$ $P^2$ = $\begin{bmatrix} 1 & -1 \\ 2 & -1 \end{bmatrix}$ . $\begin{bmatrix} 1 & -1 \\ 2 & -1 \end{bmatrix}$ = $\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$ $; \qquad \qquad Q^2$ = $\begin{bmatrix} 1 & 1 \\ q & -1 \end{bmatrix}$ . $\begin{bmatrix} 1 & 1 \\ q & -1 \end{bmatrix}$ = $\begin{bmatrix} (1 {\color{red}+}q) & 0 \\ 0 & (1+q) \end{bmatrix}$ $(P^2 + Q^2)$ = $\begin{bmatrix} -q & 0 \\ 0 & q \end{bmatrix}$ $(P^2 + Q^2) = P^2 + Q^2$ $16 = 1+1-q^2$ $16 = 2 - q^2$ $q^2 = 2 - 16$ $q^2 = -14$ $q = - \sqrt{14}$ Is this correct ???? Look carefully - there is a correction in red to your expression for Q^2. That will change everything that follows.
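For reference, a worked check with the corrected $Q^2$ (a sketch of the remaining computation):

$P+Q \ = \ \begin{bmatrix} 2 & 0 \\ 2+q & -2 \end{bmatrix}, \qquad (P+Q)^2 \ = \ \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$

$P^2 \ = \ \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, \qquad Q^2 \ = \ \begin{bmatrix} 1+q & 0 \\ 0 & 1+q \end{bmatrix}, \qquad P^2 + Q^2 \ = \ \begin{bmatrix} q & 0 \\ 0 & q \end{bmatrix}$

Equating $(P+Q)^2$ with $P^2 + Q^2$ entry by entry gives $q = 4$.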
2013-12-19 08:14:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 90, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8530645966529846, "perplexity": 596.8252162169765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762220/warc/CC-MAIN-20131218054922-00094-ip-10-33-133-15.ec2.internal.warc.gz"}
https://goldbook.iupac.org/terms/view/B00629
## bending of energy bands

Also contains definition of: flat bands

https://doi.org/10.1351/goldbook.B00629

The distribution of potential in the space charge region of a semiconductor results in a change in the electron energy levels with distance from the interface. This is usually described as 'bending of the energy bands'. Thus the bands are bent upwards if $$\sigma >0$$ and downwards if $$\sigma <0$$, where $$\sigma$$ is the free surface charge. When $$\sigma =0$$ the condition of flat bands is met, provided no surface states are present.

Source: PAC, 1986, 58, 437. (Interphases in systems of conducting phases (Recommendations 1985)) on page 443
2022-12-09 02:43:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7973854541778564, "perplexity": 1521.2358384477857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711376.47/warc/CC-MAIN-20221209011720-20221209041720-00517.warc.gz"}
https://diginole.lib.fsu.edu/islandora/object/fsu:183319
# Nuclear Magnetic Resonance in Optimally-Doped YBCO and the Electronic Phases of Bilayer Graphene

Throckmorton, R. E. (2012). Nuclear Magnetic Resonance in Optimally-Doped YBCO and the Electronic Phases of Bilayer Graphene. Retrieved from http://purl.flvc.org/fsu/fd/FSU_migr_etd-5448
2023-03-25 01:02:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9979372024536133, "perplexity": 13504.849353204749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00198.warc.gz"}
https://codegolf.stackexchange.com/questions/116477/make-me-an-easter-egg/183440
# Make me an Easter egg!

No, not the ^^vv<><>BA kind of Easter eggs, real Easter eggs that we paint. Here is an (awfully drawn) egg.

  __
 /  \
/    \
|    |
\____/

In Easter, we paint them with patterns. Like these:

  __
 /--\
/----\
|----|
\____/

  __
 /%%\
/%%%%\
|%%%%|
\____/

  __
 /~~\
/~~~~\
|~~~~|
\____/

# The challenge

Given a character (printable ascii) to paint the egg, print the painted egg. Examples:

&:

  __
 /&&\
/&&&&\
|&&&&|
\____/

#:

  __
 /##\
/####\
|####|
\____/

# Specs

• Trailing newlines/spaces are allowed.

• Why the downvote? If you do not like this question, downvote then leave a reason please. Apr 14, 2017 at 14:41
• One potential reason might be that they don't think this task is clear enough or hard enough. I'd say it is clear enough, and it's not literally trivial either. That said, I'm not particularly excited either. Apr 14, 2017 at 14:43
• The challenge will be very trivial in most languages. The egg is too short to allow much originality in the golfing. In my opinion, it's an uninteresting challenge (that hasn't been sandboxed, because you seem to be boycotting the sandbox for I don't know what reasons), therefore, I downvote it. Apr 14, 2017 at 14:53
• Upvoted because simple challenges like this are great for beginners like me to ease into golf. Apr 14, 2017 at 15:40
• POOF! You're an easter egg. (Sorry, couldn't resist) Apr 14, 2017 at 20:16

# Pyth, 36 bytes

jm:s@L"-/|\\_ "jCd6\-+QQ"ԈџÛƲ獩

Try it online!

# Swift - 88 bytes

var f={x in"  __\n /&&\\\n/&&&&\\\n|&&&&|\n\\____/".replacingOccurrences(of:"&",with:x)}

Lambda-like function with usage: print(f("x")). Try it out here!

NOTE: In some environments, as in the one linked above, this requires import Foundation, but on a standard project it doesn't. That's because replacingOccurrences() belongs to Foundation, but Xcode projects have that optimised, and it is not required.

• The replacingOccurrences() really kills the score :(( Apr 14, 2017 at 16:19

## c(gcc) 81 78 bytes

*s;f(c){for(s=LR"(  __
 /00\
/0000\
|0000|
\____/)";*s;s++)printf(*s%6?s:&c);}

Try it online

## Batch, 76 bytes

@for %%l in ("  __" " /%1%1\" "/%1%1%1%1\" "|%1%1%1%1|" \____/)do @echo %%~l

Note: To run this with special characters such as &, prefix the character with a ^.

# Bash, 50 bytes

echo "  __
 /$1$1\\
/$1$1$1$1\\
|$1$1$1$1|
\____/"

I've also tried to do pattern substitution during parameter expansion, but it seems like the most straightforward way to do this is also the shortest.

Try it online!

• Neither did I manage to reduce a pure Bash solution, but Bash + coreutils would be shorter: tr . "$1"<<<'  __␤ /..\␤/....\␤|....|␤\____/'. Apr 15, 2017 at 15:45

# JavaScript (ES6), 55 bytes

x=>`  __
 /0\\
/00\\
|00|
\\____/`.replace(/0/g,_=>x+x)

• This won't work if x === '$', 3 more bytes are needed: .replace(/0/g,_=>x+x) – tsh Apr 15, 2017 at 7:11
• @tsh Ah dang it. I actually had it like that originally, but I thought I could remove the _=>... Apr 14, 2017 at 13:59

# Vim, 31 bytes

Expects the input character to be the only thing in the buffer to start. Ends in insert mode with the egg displayed in the buffer. I have replaced ^M and ^O with <CR> and <C-O> for readability. All whitespace is significant.

s  __<CR> /<C-O>2p\<CR>/<C-O>4p\<CR>|<C-O>4p|<CR>\____/

Since V is backwards-compatible, you can Try it online!

Explanation:

s         " Delete input (storing it to the default register) and enter insert mode
 __<CR>   " Type "  __\n"
 /        " Type " /"
<C-O>2p   " Paste twice from the default register without leaving insert mode (e.g.
          " put "%%")
\<CR>     " Type "\\\n"
/         " Type "/"
<C-O>4p   " This time paste 4 times
\<CR>     " Type "\<CR>"
|<C-O>4p|<CR>  " Same drill, but with | before and after
\____/    " The bottom of the egg

# C#, 140 Bytes

class x{static void Main(){System.Console.WriteLine(("  __\n"+@" /%%\"+'\n'+@"/%%%%\"+'\n'+@"|%%%%|"+'\n'+@"\____/ ").Replace("%","A"));}}

It's my first time codegolfing :) Anyway, this works by replacing '%' from the example with any character you want. This prints out:

  __
 /AA\
/AAAA\
|AAAA|
\____/

• The egg should be painted with the character given in the input line. – Toto Apr 16, 2017 at 10:08

# JS (ES6), 60 bytes

x=>`  __
 /~\\
/~~\\
|~~|
\\____/`.replace(/~/g,x.repeat(2))

Basically the same thing as the python answer.

# Java 7, 79 bytes

String f(String c){return"  __\n /x\\\n/xx\\\n|xx|\n\\____/".replace("x",c+c);}

Explanation:

String f(String c){                          // Method with String(-character) parameter and String return-type
  return "  __\n /x\\\n/xx\\\n|xx|\n\\____/" // Return this String
    .replace("x",c+c);                       // And replace every 'x' with two times the input String-character
}                                            // End of method

Test code: Try it here.

class Main{
  static String f(String c){return"  __\n /x\\\n/xx\\\n|xx|\n\\____/".replace("x",c+c);}
  public static void main(String[] a){
    System.out.println(f("^"));
    System.out.println(f("|"));
    System.out.println(f("o"));
  }
}

Output:

  __
 /^^\
/^^^^\
|^^^^|
\____/

  __
 /||\
/||||\
||||||
\____/

  __
 /oo\
/oooo\
|oooo|
\____/

# C, 49 bytes

f(){puts(" __\n /~~\\n/~~~~\\n|~~~~|\n\____/");}

Output Live

  __
 /~~\
/~~~~\
|~~~~|
\____/

Unicode, 77 bytes

g(){printf(" .\u2322.\n\n\u239bHAPPY\u239e\n EASTER\n\u239d.___.\u23a0\n");}

Output

 .⌢.

⎛HAPPY⎞
 EASTER
⎝.___.⎠

# Common Lisp, SBCL, 73 bytes

(format t"  __
 /~@?\\
/~@?~@?\\
|~@?~@?|

## ><>, 58 bytes

|v" __ /":{"\ /"}::::{"\|"}::::i"|\____/"
o>ooooool?!;a

Try it online, or at the fish playground! Takes input from STDIN. If you're allowed to exit with an error, you can get rid of the l?!; in the second line.

Explanation: The fish swims through the first line backwards, so it reads "/____\|"i::::}"|\"{::::}"/ \"{:"/ __ ". This builds up the string in reverse, keeping a copy of the input at the back of the stack for when it's needed again. The fish then gets to row 2, which prints six characters and a newline and then loops until the stack is empty.
=SUBSTITUTE(" __ /~~\ /~~~~\ |~~~~| \____/","~","#") # Whitespace, 240 229 bytes [S S S T S T T T T N _Push_47][S S S T S T T T T T N _Push_95][S N S _Duplicate_95][S N S _Duplicate_95][S N S _Duplicate_95][S S S T S T T T S S N _Push_92][S S S T S T S N _Push_10][S S S T T T T T S S N _Push_124][S T S S T S N _Copy_0-based_2nd_(92)][S N S _Duplicate_92][T N T S _Read_STDIN_as_character][T T T _Retrieve][S N S _Duplicate_input][S N S _Duplicate_input][S N S _Duplicate_input][S S S T T T T T S S N _Push_124][S S S T S T S N _Push_10][S T S S T S S S N _copy_0-based_8th_(92)][S N S _Duplicate_92][T T T _Retrieve][S N S _Duplicate_input][S N S _Duplicate_input][S N S _Duplicate_input][S S S T S T T T T N _Push_47][S S S T S T S N _Push_10][S S S T S T T T S S N _Push_92][S N S _Duplicate_92][T T T _Retrieve][S N S _Duplicate_input][S T S S T S S N _Copy-0-based-4th-(47)][S S S T S S S S S N _Push_32][S S S T S T S N _Push_10][S S S T S T T T T T N _Push_95][S N S _Duplicate_95][S T S S T T N _Copy-0-based-3rd-(32)][S N S _Duplicate_32][N S S N _Create_Label_LOOP][T N S S _Print_as_character][N S N N _Jump_to_Label_LOOP] Letters S (space), T (tab), and N (new-line) added as highlighting only. [..._some_action] added as explanation only. Explanation: Since we have to incorporate the input character in the output, I couldn't apply this Whitespace tip of mine which I always use for ASCII-art or other print challenges in Whitespace, which would lower each value and inside the printing loop add it again. So instead this is a pretty simple program which does the following: 1. Push all characters reversed to the stack, including the input-characters • With bytes saved using Duplicates and Copy's-of 2. Then loop and print the characters from the top of the stack to the bottom (which is why we push the characters in reverse order). # Haskell, 70 63 bytes 7 bytes down with great help from Laikoni (<$>" __\n /~~\\\n/~~~~\\\n|~~~~|\n\\____/").(?) d?'~'=d d?c=c Try it online! The function f takes a character to paint the egg with. • You can use >'z' instead of =='~'. Flipping the arguments of ? allows for a shorter pointfree notation: (<\$>" ... ").(?). Try it online! Apr 29, 2019 at 5:53 • Using pattern matching for (?) is even shorter: d?'~'=d;d?c=c. Try it online! Apr 29, 2019 at 5:58 # Scala, 56 bytes print(""" __ /~\ /~~\ |~~| \____/""".replace("~",i+i)) Try it online! # T-SQL, 63 61 bytes SELECT REPLACE(' __ /11\ /1111\ |1111| \____/',1,v)FROM t Old challenge, but didn't see a SQL solution. Input is taken via pre-existing table t with text field v, per our IO standards. SQL allows line break literals in strings, so a pretty straight-forward replace. Lines 2 and 3 have an extra space after the final \, so the line break isn't escaped. EDIT: Discovered that I can save 2 bytes by using a numeral as the replacement character instead of something like '.' that needs quotes. Turns out REPLACE does the implicit conversion to string. # Aceto, 68 64 bytes 8 by 8, so a 3rd order Hilbert curve. /"ppp(pp np(p"pp) \\)p|n "\"\\\ )k,p\\n" pp/"__\| n__ "__\/"p Since we're already non-competing, I used features introduced after the question was posted. Output: __ /==\ /====\ |====| \____/
2022-09-24 15:45:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24111123383045197, "perplexity": 10548.347627399424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00150.warc.gz"}
http://www.oujanime.org/journal/stqt0z.php?id=another-cinderella-story-soundtrack-apple-music-bfff5d
Assume x : T → R and fix t ∈ T; define x^Δ(t) to be the number (provided it exists) with the property that, given any ε > 0, there is a neighborhood U of t such that |x(σ(t)) − x(s) − x^Δ(t)[σ(t) − s]| ≤ ε|σ(t) − s| for all s in U. We assume throughout that the time scale T has the topology it inherits from the standard topology on R. We also assume p, q : T → R are continuous and p(t) > 0 on T. (D. Anderson and A. Peterson)

• The simplest of these approximation results is the continuity theorem, which states that plims share an important property of ordinary limits.

Define asymptotic: (mathematics) pertaining to values or properties approached at infinity.

Def: Asymptote: a line that draws increasingly nearer to a curve without ever meeting it. There are basically three types of asymptotes: horizontal, vertical and oblique.

Big-Oh (O) notation gives an upper bound for a function f(n) to within a constant factor. Asymptotic notations are used to represent the complexities of algorithms for asymptotic analysis. This analysis helps to standardize the performance of the algorithm for machine-independent calculations.

Properties of Asymptotic Notations: having gone through the definitions of these notations, let's now discuss some important properties. General property: if f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant. Example: f(n) = 2n²+5 is O(n²); then 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²).

We will prove that the MLE satisfies (usually) the following two properties, called consistency and asymptotic normality. To prove asymptotic normality of MLEs, define the normalized log-likelihood function and its first and second derivatives with respect to $\theta$ as follows.
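A sketch of the missing display, using the standard definitions and assuming n i.i.d. observations with density $f(x \mid \theta)$:

$$\hat\ell(\theta) = \frac{1}{n}\sum_{i=1}^{n}\log f(x_i \mid \theta),\qquad \hat\ell'(\theta) = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f(x_i \mid \theta),\qquad \hat\ell''(\theta) = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial^2}{\partial\theta^2}\log f(x_i \mid \theta).$$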
We say that an estimate ϕ̂ is consistent if ϕ̂ → ϕ0 in probability as n → ∞, where ϕ0 is the 'true' unknown parameter of the distribution of the sample.

• Asymptotic theory uses smoothness properties of those functions (i.e., continuity and differentiability) to approximate those functions by polynomials, usually constant or linear functions.

Asymptotic Notations: there are three notations that are commonly used.

2011, Soon-Mo Jung, Hyers–Ulam–Rassias Stability of Functional Equations in Nonlinear Analysis, Springer →ISBN, page 130: F. Skof investigated an interesting asymptotic property of the additive functions (see Theorem 2.34).
2021-09-18 04:55:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6739089488983154, "perplexity": 1876.2743842624482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056297.61/warc/CC-MAIN-20210918032926-20210918062926-00397.warc.gz"}
https://www.physicsforums.com/threads/difference-between-an-equation-and-an-identity.243959/
# Difference between an Equation and an Identity?

#### preekap

Can you guys tell me the difference between an Equation and an Identity? Thanks

#### CompuChip
Homework Helper

Re: Identity

I use the terms quite sloppily myself, but it appears that an identity expresses an equality regardless of the values of any variables. So for example, $$x(x - 1) = x^2 - x$$ is an identity, because it is true for any values of x that you plug in. However, $$x(x - 1) = 0$$ is an equation, which only holds when specific values for x are plugged in (called the solutions to the equation).

#### matt grime
Homework Helper

Re: Identity

A more interesting identity than one which is just multiplying out a bracket would be something like $$\sin^2 x + \cos^2 x \equiv 1$$ Note the three-lined symbol which one is supposed to use for identities, rather than the = symbol. Of course, this is something that most of us (me included) would use only if it was really necessary to clarify such a point.

#### kts123

Re: Identity

^ That's odd, I've covered lots of identities and I've never once seen that in any text book (nor during the bajillion trig identities I was forced to prove in high school.)

#### matt grime
Homework Helper

Re: Identity

Surely it's the first one you prove/meet, and is merely Pythagoras's theorem.

#### HallsofIvy
Homework Helper

Re: Identity

It's not clear whether you are talking about CompuChip's $x(x-1) = x^2 - x$ or matt grime's $\sin^2 x + \cos^2 x = 1$, but you will find the first in any elementary algebra text and the second in any trigonometry text.

Re: Identity

wooosh

#### Integral
Staff Emeritus, Gold Member

Re: Identity

I thought he was talking about the 3-line identical-equal-to symbol.
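As a one-line check of the identity discussed above (a sketch using the unit-circle picture): the point $(\cos x, \sin x)$ lies on the unit circle for every x, so Pythagoras's theorem gives $\cos^2 x + \sin^2 x = 1$ for all x at once, which is exactly what the $\equiv$ symbol asserts.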
2019-07-24 04:17:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6715701222419739, "perplexity": 2892.2904472699133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00375.warc.gz"}
https://www.gamedev.net/forums/topic/366579-managing-multiple-projects-under-visual-studio/
# Managing multiple projects under Visual Studio

## Recommended Posts

Greets guys, I was just wondering if anyone had any experience with working with more than one project using Microsoft Visual Studio (v. 2003+), and could offer any advice about what pitfalls to look out for and the like.

At the moment, I've been developing some utility classes for my game in the form of an SDK. Because some are templated, a static library is unfortunately not an option. Such classes include: Window, Vector, PackFile, Texture etc. I'd really like to keep the classes completely separate from my game classes (the game can use the SDK classes, but not vice versa). Could setting up a new project to manage my SDK files be the answer? Is that its purpose? What I'm doing now is simply placing files in their respective folders on disk, but come time to start a new game project, I'd have to copy the SDK folder to the new game's folder, which seems a little awkward to me.

Folder structure:

-P02
  P04.ncb
  P04.sln
  P04.vcproj
  +Debug
  -Source
    -GAME
      Main.cpp
    -SDK
      Window.cpp
      Window.h
      Vector.inl
      Vector.h

##### Share on other sites

Two changes I would make:

1. Get rid of the Source folder and put GAME and SDK directly in the P04 folder.
2. Move the P04.vcproj to the GAME folder. The SDK.vcproj will go in the SDK folder.

Your files will look like this:

P04
  P04.sln
  GAME
    GAME.vcproj
    GAME files
  SDK
    SDK.vcproj
    SDK files

You might also do it like this (this is what Visual Studio prefers):

P04
  P04.sln
  P04.vcproj
  main.cpp
  SDK
    SDK.vcproj
    SDK files

For projects that are shared by other solutions, this is how I do it:

P04
  P04.sln
  P04.vcproj
  main.cpp
Libraries
  SDK
    SDK.vcproj
    SDK files

##### Share on other sites

Excellent! Thanks for your opinion :) r++
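To illustrate the first layout above, a minimal sketch of how the game project would consume the SDK headers (file names follow the thread's example; the SDK types are the hypothetical ones listed there):

// GAME/Main.cpp
#include "../SDK/Window.h"   // works because GAME and SDK are sibling folders
#include "../SDK/Vector.h"   // templated SDK classes stay header-only, as discussed

int main()
{
    Window window;     // hypothetical SDK class from the thread
    Vector<float> v;   // hypothetical templated SDK class
    return 0;
}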
2018-10-19 20:25:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21150363981723785, "perplexity": 5493.133510923297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512434.71/warc/CC-MAIN-20181019191802-20181019213302-00226.warc.gz"}
https://www.lmfdb.org/HigherGenus/C/Aut/?genus=2&group=%5B10,2%5D
| Label | Genus | Group | Group order | Dimension | Signature |
|---|---|---|---|---|---|
| 2.10-2.0.2-5-10.1 | 2 | $C_{10}$ | 10 | 0 | $[ 0; 2, 5, 10 ]$ |
| 2.10-2.0.2-5-10.3 | 2 | $C_{10}$ | 10 | 0 | $[ 0; 2, 5, 10 ]$ |
| 2.10-2.0.2-5-10.4 | 2 | $C_{10}$ | 10 | 0 | $[ 0; 2, 5, 10 ]$ |
| 2.10-2.0.2-5-10.2 | 2 | $C_{10}$ | 10 | 0 | $[ 0; 2, 5, 10 ]$ |
2020-10-01 02:06:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24120110273361206, "perplexity": 10258.5646600154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130531.89/warc/CC-MAIN-20200930235415-20201001025415-00725.warc.gz"}
https://www.gamedev.net/forums/topic/300739-when-programming-a-ann-do-you-just-store-the-weights-or/
# When programming an ANN, do you just store the weights or...

## Recommended Posts

Hi, I'm learning about neural networks from the book "C++ Neural Networks and Fuzzy Logic" by Valluru B. Rao. Anyway, there was an example to program a Hopfield network with 4 neurons. I thought I would program one to make sure I had a good understanding of the example. When I made it, I stored the weights in a 4 by 4 matrix. Later on, the author actually writes a program on the same example, but instead of storing the weights in a matrix he makes a Neuron class that stores 4 connections (including one to itself, which is 0). So basically I was wondering what people usually do: do you store the weights in a matrix or individually in a Neuron class? Thanks.

btw, if anyone's read this book, I think the author's C++ coding style is atrocious!

##### Share on other sites

I prefer to have an actual Neuron class which stores weights of inputs. Something like this is appropriate:

class Connection {
public:
    float weight;
    Neuron * source;
};

class Neuron {
private:
    std::vector<Connection> inputs;
    bool fireState;
public:
    Neuron(std::vector<Connection> &);
    bool getState();
    void processInput();
    void addInput(Neuron *, float);
    void dropInput(Neuron *);
    void adjustWeight(Neuron *, float);
};

##### Share on other sites

Actually, I guess that makes a lot more sense than just storing the weights in a matrix, because later on when I'm going to be doing feedforward etc. networks, it isn't as simple as the Hopfield network. thx

##### Share on other sites

There are some definite advantages to using matrices. Namely, it makes it easier to take advantage of high performance matrix operations libraries like BLAS so that your code uses hardware acceleration where possible. It's also nice to have all of the weights in one place if ever you plan on coding training algorithms other than backprop (like conjugate gradient methods, quasi-Newton methods) that use global information. The big advantage of using classes for everything is that it lets you construct arbitrary topologies and use the same data structures for all kinds of different networks.

##### Share on other sites

That is exactly why using a Neuron class is such a good idea. If you have reason to prefer a grid-matrix later on, you can store the matrices in a static matrix member of the class, and just rewrite the accessors to use the matrix, so none of the code actually using the Neurons needs to change.

##### Share on other sites

That is a good point, but unless I misunderstand you, you then run into all of the disadvantages associated with static members (namely not being able to create multiple networks at a time). If performance is important, I think a good compromise between a fine grained OO design and storing everything in one big matrix is to have a Layer class that stores all of the weights for that layer. This lets you keep some of the nice OO design, for example you could have subclasses for output / hidden layers, but it also lets you do fast feeding forward (through matrix multiplication). I should point out, though, that it really all depends on your requirements. In my implementation I used a Neuron class and a Link class, just because it seemed more natural to me and I didn't care about performance at the time.
Something else that occurred to me: if you use the Neuron/Link class design, one way to implement conventional optimization techniques would be to have a method in your Network class that dumps pointers to all of the Links into one big matrix, which you could then manipulate.

##### Share on other sites

All of these questions really boil down to what you want to get out of your neural network. Do you want it to be re-usable? Is performance important? Are you looking to solve a specific problem or just write a generic neural network library? Some designs will be great for one set of needs but poor for another set of needs.

##### Share on other sites

I guess right now I'm just writing pretty much generic ANNs, so I may just go with the weights stored in the Neuron class for now.
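A minimal sketch of the Layer-based compromise described in the thread (names are hypothetical, not from the book): the layer stores all of its weights in one contiguous block, so the feed-forward pass is a plain matrix-vector product.

#include <cstddef>
#include <vector>

// Hypothetical Layer: 'weights' is an outputs-by-inputs matrix stored row-major.
class Layer {
public:
    Layer(std::size_t inputs, std::size_t outputs)
        : nIn(inputs), nOut(outputs), weights(inputs * outputs, 0.0f) {}

    // y = step(W x); a real network would typically add a bias term as well.
    std::vector<float> feedForward(const std::vector<float>& x) const {
        std::vector<float> y(nOut, 0.0f);
        for (std::size_t o = 0; o < nOut; ++o) {
            float sum = 0.0f;
            for (std::size_t i = 0; i < nIn; ++i)
                sum += weights[o * nIn + i] * x[i];   // one row of W times x
            y[o] = sum > 0.0f ? 1.0f : 0.0f;          // threshold activation
        }
        return y;
    }

    // Direct access for training code that wants all weights in one place.
    float& weight(std::size_t out, std::size_t in) { return weights[out * nIn + in]; }

private:
    std::size_t nIn, nOut;
    std::vector<float> weights;  // all of this layer's weights in one block
};

The contiguous storage is also what makes it easy to hand the whole weight block to a BLAS routine later, as suggested above.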
2018-02-19 10:50:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2539505362510681, "perplexity": 976.2590149791322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812579.21/warc/CC-MAIN-20180219091902-20180219111902-00588.warc.gz"}
https://cobrapy.readthedocs.io/en/latest/autoapi/cobra/sampling/hr_sampler/index.html
# 17.1.1.6.1.2. cobra.sampling.hr_sampler

Provide the base class and associated functions for Hit-and-Run samplers.

## 17.1.1.6.1.2.1. Module Contents

### 17.1.1.6.1.2.1.1. Classes

HRSampler: The abstract base class for hit-and-run samplers.

### 17.1.1.6.1.2.1.2. Functions

shared_np_array(shape: Tuple[int, int], data: Optional[np.ndarray] = None, integer: bool = False) → np.ndarray
Create a new numpy array that resides in shared memory.

step(sampler: HRSampler, x: np.ndarray, delta: np.ndarray, fraction: Optional[float] = None, tries: int = 0) → np.ndarray
Sample a new feasible point from the point x in direction delta.

cobra.sampling.hr_sampler.logger

cobra.sampling.hr_sampler.MAX_TRIES = 100

cobra.sampling.hr_sampler.Problem
Define the matrix representation of a sampling problem. A named tuple consisting of 6 arrays, 1 matrix and 1 boolean.

cobra.sampling.hr_sampler.equalities
All equality constraints in the model.
Type: numpy.array

cobra.sampling.hr_sampler.b
The right side of the equality constraints.
Type: numpy.array

cobra.sampling.hr_sampler.inequalities
All inequality constraints in the model.
Type: numpy.array

cobra.sampling.hr_sampler.bounds
The lower and upper bounds for the inequality constraints.
Type: numpy.array

cobra.sampling.hr_sampler.variable_fixed
A boolean vector indicating whether the variable at that index is fixed, i.e., whether variable.lower_bound == variable.upper_bound.
Type: numpy.array

cobra.sampling.hr_sampler.variable_bounds
The lower and upper bounds for the variables.
Type: numpy.array

cobra.sampling.hr_sampler.nullspace
A matrix containing the nullspace of the equality constraints. Each column is one basis vector.
Type: numpy.matrix

cobra.sampling.hr_sampler.homogeneous
Indicates whether the sampling problem is homogeneous, e.g. whether there exist no non-zero fixed variables or constraints.
Type: bool

cobra.sampling.hr_sampler.shared_np_array(shape: Tuple[int, int], data: Optional[np.ndarray] = None, integer: bool = False) → np.ndarray
Create a new numpy array that resides in shared memory.

Parameters
• shape (tuple of int) – The shape of the new array.
• data (numpy.array, optional) – Data to copy to the new array. Has to have the same shape (default None).
• integer (bool, optional) – Whether to use an integer array. By default, a float array is used (default False).

Returns: The newly created shared numpy array.
Return type: numpy.array
Raises: ValueError – If the size of the input data (if provided) does not match that of the created array.

class cobra.sampling.hr_sampler.HRSampler(model: Model, thinning: int, nproj: Optional[int] = None, seed: Optional[int] = None, **kwargs)

Bases: abc.ABC

The abstract base class for hit-and-run samplers. New samplers should derive from this class where possible to provide a uniform interface.

Parameters
• model (cobra.Model) – The cobra model from which to generate samples.
• thinning (int) – The thinning factor of the generated sampling chain. A thinning of 10 means samples are returned every 10 steps.
• nproj (int > 0, optional) – How often to reproject the sampling point into the feasibility space. Avoids numerical issues at the cost of lower sampling speed. If you observe many equality constraint violations with sampler.validate you should lower this number (default None).
• seed (int > 0, optional) – Sets the random number seed. Initialized to the current time stamp if None (default None).

feasibility_tol
The tolerance used for checking equalities feasibility.
Type: float

bounds_tol
The tolerance used for checking bounds feasibility.
Type: float

n_samples
The total number of samples that have been generated by this sampler instance.
Type: int

retries
The overall number of sampling retries the sampler has observed. Larger values indicate numerical instabilities.
Type: int

problem
A NamedTuple whose attributes define the entire sampling problem in matrix form.
Type: Problem

warmup
A numpy matrix with as many columns as reactions in the model and more than 3 rows containing a warmup sample in each row. None if no warmup points have been generated yet.
Type: numpy.matrix

fwd_idx
A numpy array having one entry for each reaction in the model, containing the index of the respective forward variable.
Type: numpy.array

rev_idx
A numpy array having one entry for each reaction in the model, containing the index of the respective reverse variable.
Type: numpy.array

__build_problem(self) → Problem
Build the matrix representation of the sampling problem.
Returns: The matrix representation in the form of a NamedTuple.
Return type: Problem

generate_fva_warmup(self) → None
Generate the warmup points for the sampler. Generates warmup points by setting each flux as the sole objective and minimizing/maximizing it. Also caches the projection of the warmup points into the nullspace for non-homogeneous problems (only if necessary).
Raises: ValueError – If the flux cone contains a single point or the problem is inhomogeneous.

_reproject(self, p: np.ndarray) → np.ndarray
Reproject a point into the feasibility region. This function is guaranteed to return a new feasible point. However, no guarantee can be made in terms of proximity to the original point.
Parameters: p (numpy.array) – The current sample point.
Returns: A new feasible point. If p is feasible, it will return p.
Return type: numpy.array

_random_point(self) → np.ndarray
Find an approximately random point in the flux cone.

_is_redundant(self, matrix: np.matrix, cutoff: Optional[float] = None) → bool
Identify redundant rows in a matrix that can be removed.

_bounds_dist(self, p: np.ndarray) → np.ndarray
Get the lower and upper bound distances. Negative is bad.

abstract sample(self, n: int, fluxes: bool = True) → pd.DataFrame
Abstract sampling function. Should be overwritten by child classes.
Parameters
• n (int) – The number of samples that are generated at once.
• fluxes (bool, optional) – Whether to return fluxes or the internal solver variables. If set to False, will return a variable for each forward and backward flux as well as all additional variables you might have defined in the model (default True).
Returns: Returns a pandas DataFrame with n rows, each containing a flux sample.
Return type: pandas.DataFrame

batch(self, batch_size: int, batch_num: int, fluxes: bool = True) → pd.DataFrame
Create a batch generator. This is useful to generate batch_num batches of batch_size samples each.
Parameters
• batch_size (int) – The number of samples contained in each batch.
• batch_num (int) – The number of batches in the generator.
• fluxes (bool, optional) – Whether to return fluxes or the internal solver variables. If set to False, will return a variable for each forward and backward flux as well as all additional variables you might have defined in the model (default True).
Yields: pandas.DataFrame – A DataFrame with dimensions (batch_size x n_r) containing a valid flux sample for a total of n_r reactions (or variables if fluxes=False) in each row.
validate(self, samples: np.matrix) → np.ndarray
Validate a set of samples for equality and inequality feasibility. Can be used to check whether the generated samples and warmup points are feasible.
Parameters: samples (numpy.matrix) – Must be of dimension (samples x n_reactions). Contains the samples to be validated. Samples must be from fluxes.
Returns: A one-dimensional numpy array containing a code of 1 to 3 letters denoting the validation result:
- 'v' means feasible in bounds and equality constraints
- 'l' means a lower bound violation
- 'u' means an upper bound violation
- 'e' means an equality constraint violation
Return type: numpy.array
Raises: ValueError – If the samples have the wrong number of columns.

cobra.sampling.hr_sampler.step(sampler: HRSampler, x: np.ndarray, delta: np.ndarray, fraction: Optional[float] = None, tries: int = 0) → np.ndarray
Sample a new feasible point from the point x in direction delta.
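As a usage sketch (not part of the API reference above): ACHRSampler is one concrete subclass of HRSampler in cobra.sampling; the model name and parameter values below are illustrative assumptions.

```python
from cobra.io import load_model
from cobra.sampling import ACHRSampler

model = load_model("textbook")                # assumes the bundled E. coli textbook model
sampler = ACHRSampler(model, thinning=10, seed=42)

samples = sampler.sample(100)                 # pandas DataFrame: one flux sample per row
codes = sampler.validate(samples.to_numpy())  # 'v' marks rows feasible in bounds/equalities
print((codes == "v").mean())                  # fraction of feasible samples
```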
2021-04-12 18:31:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20529158413410187, "perplexity": 3751.8109327943066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069133.25/warc/CC-MAIN-20210412175257-20210412205257-00332.warc.gz"}
https://mathematica.stackexchange.com/questions/229006/extract-data-from-picture-of-experimental-instrument
# Extract data from picture of experimental instrument

I need to transform many pictures of an analog clock with only one pointer (the hour hand) into the corresponding numerical data. Is it possible to do this with Mathematica? I would like to find some documentation about this. Thanks in advance.

Edit: You suggest giving a sample image (it is not a clock, it is a dynamometer, but it is the same problem; I talk about a clock to simplify). About the code, I have no idea how to do it, which is why I am asking for documentation. Sorry.

• Can you give an example of the picture, and show what you have tried? Aug 25, 2020 at 9:51
• There is no clock and no code in the post; how are we going to start? Aug 25, 2020 at 10:11
• It is worth providing more details, because the answer is most likely yes. Aug 25, 2020 at 11:18
• I edited the post to give you an idea of the image that I have. About the code, I have the code to analyze the data, but I have no idea how to take data from the picture. Aug 25, 2020 at 11:48
• Giuliano, is this picture of your experimental apparatus? If so, you might find some benefit in running ImageLines while focused in on the region of the meter. Aug 25, 2020 at 16:30
2022-08-12 20:24:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23832112550735474, "perplexity": 429.18136360532515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00278.warc.gz"}
https://techwhiff.com/learn/1-truefalse-to-kant-emotions-are-the-most/290891
# 1. True/False To Kant, emotions are the most important part of who we are. 2. True/False...

###### Question:

1. True/False: To Kant, emotions are the most important part of who we are.
2. True/False: To Kant, an action is right if it follows the civil law in a given society.
3. True/False: To Kant, all popular maxims are moral.
4. True/False: To Kant, it is always wrong to treat a person merely as a means to your ends.
5. Give an example of a way that it would always be wrong to treat someone, according to Kant.
6. True/False: To Mill, happiness is the sole criterion of morality.
7. True/False: To Mill, one action is more moral than another insofar as it produces the greater total amount of happiness for all concerned.
8. True/False: To Mill, social contracts define what is moral.
9. True/False: According to social contract theorists, such as Thomas Hobbes, human societal arrangements are designed so that humans avoid a "war of all against all." (Lindemann, Hilde, Invitation to Feminist Ethics, p. 60)
10. True/False: According to the social contract tradition in ethics, as represented by Lindemann, ethics are human-made.
11. True/False: According to Lindemann, John Rawls may be seen as a social contract theorist.
12. True/False: According to Lindemann, John Rawls criticizes a "veil of ignorance."
2022-11-30 21:41:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28986990451812744, "perplexity": 5415.628667836486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00737.warc.gz"}
https://goldbook.iupac.org/terms/view/P04711/plain
## polarizability

https://doi.org/10.1351/goldbook.P04711

The ease of distortion of the electron cloud of a molecular entity by an electric field (such as that due to the proximity of a charged reagent). It is experimentally measured as the ratio of the induced dipole moment ($\mu_{\text{ind}}$) to the field $E$ which induces it: $\alpha =\frac{\mu _{\text{ind}}}{E}$ The units of $\alpha$ are $\mathrm{C\,m^2\,V^{-1}}$. In ordinary usage the term refers to the 'mean polarizability', i.e., the average over three rectilinear axes of the molecule. Polarizabilities in different directions (e.g. along the bond in Cl2, called 'longitudinal polarizability', and in the direction perpendicular to the bond, called 'transverse polarizability') can be distinguished, at least in principle. Polarizability along the bond joining a substituent to the rest of the molecule is seen in certain modern theoretical approaches as a factor influencing chemical reactivity, etc., and parametrization thereof has been proposed.
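A quick dimensional check of those units (added here for clarity; the derivation is standard, not part of the Gold Book entry): with $\mu_{\text{ind}}$ in C m and $E$ in V m$^{-1}$,

$[\alpha] = \dfrac{\mathrm{C\,m}}{\mathrm{V\,m^{-1}}} = \mathrm{C\,m^2\,V^{-1}} = \mathrm{C^2\,m^2\,J^{-1}}$

since $1\ \mathrm{V} = 1\ \mathrm{J\,C^{-1}}$.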
2023-03-25 07:00:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6391955018043518, "perplexity": 1601.0823289093503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945317.85/warc/CC-MAIN-20230325064253-20230325094253-00120.warc.gz"}
https://bendertechconsultants.com/025asy/the-general-electronic-configuration-of-transition-elements-is-exhibited-by-bf1432
# The general electronic configuration of transition elements is exhibited by

To read more about the general characteristics of transition elements for the JEE Main and Advanced exam, visit Vedantu.com.

Question: The general electronic configuration of transition elements is
(a) (n−1)d⁵
(b) (n−1)d¹⁻¹⁰ ns⁰⁻² (i.e. ns⁰, ns¹ or ns²)
(c) (n−1)d¹⁻¹⁰ ns¹
(d) None of these

The general electronic configuration of transition elements is (n−1)d¹⁻¹⁰ ns¹⁻², or more generally (n−1)d¹⁻¹⁰ ns⁰⁻², so option (b) fits. Equivalent multiple-choice forms of the same question read: "The electronic configuration of transition elements is exhibited by (a) ns¹ (b) ns² np⁵ (c) ns² (n−1)d¹⁻¹⁰ (d) ns² (n−1)d¹⁰", with correct answer (c) ns² (n−1)d¹⁻¹⁰; and "The transition elements have a general electronic configuration: (A) ns² np⁶ nd¹⁻¹⁰ (B) (n−1)d¹⁻¹⁰ ns⁰⁻² np⁰⁻⁶ (C) (n−1)d¹⁻¹⁰ ns¹⁻² (D) none".

Definition. A transition element is defined as one which has incompletely filled d orbitals in its ground state or in any one of its oxidation states; transition metals have a partially filled d-orbital, while non-transition elements either do not have a d-orbital or have a fully filled one. D-block elements, in which the last electron enters the d-subshell, are also referred to as transition elements or transition metals, and their d-orbitals are filled successively across each series. Transition elements have incomplete penultimate d-orbitals, whereas the penultimate orbitals of representative elements (s- and p-block elements) are completely filled.

The general valence-shell configurations by block are:
- s-block (alkali metals): ns¹⁻², where n = 2–7
- p-block (metals and non-metals): ns² np¹⁻⁶, where n = 2–6
- d-block (transition elements): (n−1)d¹⁻¹⁰ ns⁰⁻², where n = 4–7
- f-block (inner transition elements): (n−2)f¹⁻¹⁴ (n−1)d⁰⁻¹⁰ ns², where n = 6–7

Here "(noble gas)" in a written configuration stands for the configuration of the last noble gas preceding the atom in question, and n is the highest principal quantum number of an occupied orbital in that atom.

Group 12. Zn, Cd and Hg, the end members of the first three series, have the general electronic configuration (n−1)d¹⁰ ns². Their atoms and all the ions they form (Zn²⁺, Cd²⁺, Hg²⁺) have completely filled d-orbitals, so they do not show the properties of transition elements to any appreciable extent and are called non-typical transition elements; nevertheless they are considered along with the d-block elements.

First transition series. Calcium, the s-block element preceding the first row of transition elements, has the electronic structure Ca 1s² 2s² 2p⁶ 3s² 3p⁶ 4s². The elements of the first transition series are located in the fourth period after calcium (₂₀Ca, [₁₈Ar] 4s²); after calcium the five orbitals of the 3d sublevel fill gradually, one electron per orbital (Hund's rule) up to manganese (3d⁵), after which pairing of electrons takes place in each orbital up to zinc (3d¹⁰). The outer configuration of the first-series transition elements can thus be summed up as 4s² 3dⁿ, although this generalisation has exceptions: writing the electronic configuration of Cr (Z = 24) strictly by the Aufbau principle would give 4 electrons in the 3d orbitals and 2 in the 4s orbital, whereas the observed configuration is 3d⁵ 4s¹. Iron is an element of the first transition series.

Inner transition elements. Elements whose f orbitals are being filled by electrons are called f-block elements, also known as inner transition elements: the lanthanides and actinides. Their configurations are 4f¹⁻¹⁴ 5p⁶ 5d⁰⁻¹ 6s² for the lanthanons, beginning at cerium and ending at lutetium (Z = 71), and 5f¹⁻¹⁴ 6s² 6p⁶ 6d⁰⁻¹ 7s² for the actinides, beginning with thorium (Z = 90) and ending with lawrencium (Z = 103).

Oxidation states. With the exception of a few elements, most transition elements show variable oxidation states, which are related to their electronic configuration. Free elements (elements that are not combined with other elements) have an oxidation state of zero, e.g. the oxidation state of Cr (chromium) is 0. For ions, the oxidation state is equal to the charge of the ion, e.g. the ion Fe³⁺ (ferric ion) has an oxidation state of +3. A related exam question (MP PET 1993; MP PMT 1995; RPMT 1999) asks which transition metal ion, given its outer electronic configuration, shows the highest magnetic moment; option A) 3d² is among the choices given.

Properties and trends. The elements of the second and third rows of the periodic table show gradual changes in properties across the table from left to right, as expected.
Of few elements, has the electronic structure also called lanthanides and the general electronic configuration of transition elements is exhibited by actinides be... General electronic configuration of Inner transition elements is the valence electrons the general electronic configuration of transition elements is exhibited by these elements do not properties... Of first row transition element is appeared in the Table beneath 0−2.... Table beneath of first row transition element is appeared in the Table beneath valence shell electronic configuration of elements... The exception of few elements, most of these elements fall under the d orbital exception of elements! The f-block elements 7. p–block ( metals & non metals ) Calcium, the oxidation states exhibited by series... D-Orbitals while penultimate orbitals of representative elements ( s- and p-block elements ) are completely filled up by electrons called... Completely filled up by electrons are called non-typical transition elements of the transition elements is ( n-1 d1-10! … general electronic configuration of d block elements block elements is exhibited by different oxidation are... Is ns 1 … general electronic configuration of these show the general electronic configuration of transition elements is exhibited by oxidation states exhibited by the elements! Non-Transition elements is ( n−1 ) d ( 1−10 ) ns ( 0−2 ) zinc, cadmium and are! – 1 ) d¹⁻¹⁰ d 1-10 ns 1-2 and actinides Inner transition elements incomplete! Is the valence electrons of these elements do not show properties of elements. – block element preceding the first series are listed in Table 8.1 transition elements p-block elements ) are completely up. D block elements is ns 1 … general electronic configuration of transition elements to any appreciable extent and called! Or have a fully filled d−orbital Answer: ns² ( n – 1 ) d 1-10 ns 0-2 fall the! General electronic configuration is ( n-1 ) d 1-10 ns 0-2 referred to as transition metals are f-block comprising. ( n-1 ) d1-9 ns0-2 elements for JEE Main and Advanced exam at. Are completely filled up f block elements any appreciable extent and are called non-typical transition elements related to electronic. D ( 1−10 ) ns ( 0−2 ) n − 1 ) d¹⁻¹⁰ ).! And actinides non-typical transition elements are shown in Table 8.1 are completely filled by! The d orbital of transition elements row of transition elements of the first to write the explanation for this by... The lanthanides and actinides elements either do not show properties of transition elements have penultimate! Also referred to as transition metals are f-block elements, most of these elements under... The electronic configuration of non-transition elements either do not show the general electronic configuration of transition elements is exhibited by of transition is. By the transition elements, has the electronic structure in Table ) ns ( 0−2 ) fully. Fall under the d orbital most of these elements do not have a fully filled d−orbital is exhibited.... – 7. p–block ( metals & non metals ) p–block ( metals & non metals ) metals f-block. And the actinides can be considered as transition metals 0−2 ) related to the electronic configuration is ( n 1. Non-Typical transition elements or transition metals orbital getting filled up n − 1 ) d ( 1−10 ns... D1-9 ns0-2 cadmium and mercury are considered along with d- block elements is ( n-1 d1-9. This question by commenting below 2 2p 6 3s 2 3p 6 4s 2 penultimate of... 
– block element preceding the first series are listed in Table elements ) are completely filled up by are... The exception of few elements, has the electronic configuration of transition elements the... Row of transition elements is ns 1 … general electronic configuration of Inner elements.
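For concreteness, here is an added illustration (standard textbook values, not from the original page) of how the 3d subshell fills across the first transition series, including the two well-known anomalies:

Sc [Ar]3d¹4s², Ti [Ar]3d²4s², V [Ar]3d³4s², Cr [Ar]3d⁵4s¹ (anomalous), Mn [Ar]3d⁵4s², Fe [Ar]3d⁶4s², Co [Ar]3d⁷4s², Ni [Ar]3d⁸4s², Cu [Ar]3d¹⁰4s¹ (anomalous), Zn [Ar]3d¹⁰4s².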
2021-04-11 00:49:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6770048141479492, "perplexity": 2857.4107426507717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060603.10/warc/CC-MAIN-20210411000036-20210411030036-00348.warc.gz"}
http://www.lastfm.fr/user/lifofifo/library/music/The+Beatles/_/You+Really+Got+a+Hold+on+Me?setlang=fr
# Music Library » The Beatles »

## You Really Got a Hold on Me

21 plays | Go to the track page

Tracks (21): Title / Album / Duration / Date

You Really Got a Hold on Me 2:59 2 Jul 2012, 21:58
You Really Got a Hold on Me 2:59 1 Jun 2011, 14:05
You Really Got a Hold on Me 2:59 31 May 2011, 04:46
You Really Got a Hold on Me 2:59 21 Oct 2010, 14:29
You Really Got a Hold on Me 2:59 19 Oct 2010, 16:48
You Really Got a Hold on Me 2:59 17 Oct 2010, 20:38
You Really Got a Hold on Me 2:59 18 Sep 2009, 16:36
You Really Got a Hold on Me 2:59 1 Sep 2009, 16:37
You Really Got a Hold on Me 2:59 21 Nov 2008, 04:06
You Really Got a Hold on Me 2:59 6 Oct 2008, 20:54
You Really Got a Hold on Me 2:59 5 Sep 2008, 11:09
You Really Got a Hold on Me 2:59 13 Aug 2008, 06:00
You Really Got a Hold on Me 2:59 5 Aug 2008, 02:03
You Really Got a Hold on Me 2:59 4 Jun 2008, 19:16
You Really Got a Hold on Me 2:59 26 May 2008, 06:47
You Really Got a Hold on Me 2:59 18 May 2008, 03:51
You Really Got a Hold on Me 2:59 10 May 2008, 04:38
You Really Got a Hold on Me 2:59 2 May 2008, 23:53
You Really Got a Hold on Me 2:59 21 Apr 2008, 03:30
You Really Got a Hold on Me 2:59 20 Apr 2008, 21:12
You Really Got a Hold on Me 2:59 17 Apr 2008, 21:32
2015-03-05 12:08:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9215606451034546, "perplexity": 9085.301617553725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464123.82/warc/CC-MAIN-20150226074104-00307-ip-10-28-5-156.ec2.internal.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-453/topics/Topic-8399/subtopics/Subtopic-111074/?activeTab=theory
UK Secondary (7-11)

# Constant of Proportionality Lesson

We have already learnt about proportional relationships, where two variables vary in such a way that one is a constant positive multiple of the other. In other words, they always vary by the same constant. We call this constant the constant of proportionality.

Proportional relationships are always in the form $y=kx$. We know that $k$ represents the multiplicative factor. However, it also represents the constant of proportionality. When we graph these relationships, they produce straight lines with positive gradients that always pass through the origin $(0,0)$.

Apples: 5 for $3

For example, let's say a shop is selling apples at 5 for $3. We know that five apples will cost $3, ten apples will cost $6, fifteen apples will cost $9 and so on. We know that the price will increase at a constant rate. We could record these two variables in a table.

| Number of apples ($x$) | 0 | 5 | 10 | 15 |
|---|---|---|---|---|
| Cost ($y$) | 0 | 3 | 6 | 9 |

Since we know 5 apples cost $3, we can work out how much one apple costs: $3\div5=0.6$. This means that each apple costs 60 cents, and we can say that this is the constant of proportionality. Further, we can write this as an equation: $y=0.6x$.

Remember! The constant of proportionality is always positive.

Since proportional relationships are in the form $y=kx$, we can also calculate the constant of proportionality ($k$) by rearranging this equation, and we find: $k=\frac{y}{x}$

#### Examples

##### Question 1

Consider the equation $y=8x$.

a) What is the constant of proportionality for the given equation?

b) How do you know that this equation is directly proportional? There may be more than one correct option.

##### Question 2

In the following proportionality table, the second row is obtained by multiplying the top row by the constant of proportionality. Complete the table and find that constant.

| 7 | 8 | 10 | ___ |
|---|---|---|---|
| ___ | 72 | ___ | 117 |

a) Complete the table.

b) What is the constant of proportionality?

##### Question 3

Fred is making batches of bread rolls. He knows he can make 60 bread rolls in 10 hours, and 120 bread rolls in 20 hours. What is the constant of proportionality?
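As an added worked check (an illustration, not part of the lesson's answer key), applying $k=\frac{y}{x}$ to the data in Question 3 gives the same constant from either pair of values:

$k = \dfrac{60}{10} = \dfrac{120}{20} = 6$

so Fred makes 6 bread rolls per hour.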
2021-09-21 17:20:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7253551483154297, "perplexity": 741.8642506482449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00089.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-sum-of-the-geometric-sequence-2-4-8-if-there-are-20-terms
How do you find the sum of the geometric sequence 2, 4, 8, ... if there are 20 terms?

Jun 19, 2018

$S_{20} = \dfrac{a(r^n - 1)}{r - 1} = 2097150$

Explanation:

The sum of $n$ terms of a geometric sequence is $S_n = \dfrac{a(r^n - 1)}{r - 1}$, where $a$ is the first term, $n$ the number of terms, and $r$ the common ratio.

$a = 2,\quad n = 20,\quad r = \dfrac{a_2}{a_1} = \dfrac{a_3}{a_2} = \dfrac{4}{2} = \dfrac{8}{4} = 2$

$S_{20} = \dfrac{2 \cdot \left(2^{20} - 1\right)}{2 - 1} = 2 \cdot \left(2^{20} - 1\right) = 2097150$
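A one-line numeric check of that result (a Python sketch, not part of the original answer):

```python
print(sum(2 * 2**k for k in range(20)))  # terms 2, 4, 8, ..., 2*2^19 -> 2097150
```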
2020-03-31 17:10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49797523021698, "perplexity": 1509.7522834436231}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370502513.35/warc/CC-MAIN-20200331150854-20200331180854-00529.warc.gz"}
https://wiki.seg.org/wiki/Magnitudes_of_seismic_wave_parameters
# Magnitudes of seismic wave parameters

Geophysical References Series: Problems in Exploration Seismology and their Solutions, Lloyd P. Geldart and Robert E. Sheriff, chapter 2, pp. 7-46. DOI: http://dx.doi.org/10.1190/1.9781560801733. ISBN 9781560801153. SEG Online Store.

## Problem 2.8

The magnitudes of period $T$, wavelength $\lambda$, wavenumber $\kappa$, frequency $f$, and angular frequency $\omega$ are important in practical situations. Calculate $T$, $\lambda$, and $\kappa$ for 15 and 60 Hz for the following velocity situations:

1. weathering, 100 and 500 m/s (minimum and average values);
2. water, 1500 m/s;
3. sands and shales, 2000 (poorly consolidated) and 3300 m/s;
4. limestone, 4300 (porous) and 5500 m/s;
5. salt, 4600 m/s;
6. anhydrite, 6100 m/s.

### Solution

The period $T$ equals $1/f$, so $T = 0.067$ s for $f = 15$ Hz and $T = 0.017$ s for $f = 60$ Hz. Also, $\lambda = V/f$ and $\kappa = 2\pi/\lambda = 2\pi f/V$. Using these equations we get the values of $\lambda$ and $\kappa$ in Table 2.8a.

Table 2.8a. Magnitudes of $T$, $\lambda$, and $\kappa$ (left group: $f = 15$ Hz; right group: $f = 60$ Hz).

| | $V$ (km/s) | $T$ (s) | $\lambda$ (m) | $\kappa$ (m⁻¹) | $T$ (s) | $\lambda$ (m) | $\kappa$ (m⁻¹) |
|---|---|---|---|---|---|---|---|
| Weathering (min.) | 0.1 | 0.067 | 7 | 0.7 | 0.017 | 2 | 4 |
| Weathering (avg.) | 0.5 | 0.067 | 30 | 0.2 | 0.017 | 8 | 0.8 |
| Water | 1.5 | 0.067 | 100 | 0.063 | 0.017 | 25 | 0.25 |
| Poorly consolidated sandstone-shales at 0.75 km | 2.0 | 0.067 | 130 | 0.047 | 0.017 | 33 | 0.19 |
| Tertiary clastics at 3.00 km | 3.3 | 0.067 | 220 | 0.029 | 0.017 | 55 | 0.11 |
| Porous limestone | 4.3 | 0.067 | 290 | 0.022 | 0.017 | 72 | 0.088 |
| Dense limestone | 5.5 | 0.067 | 370 | 0.017 | 0.017 | 92 | 0.069 |
| Salt | 4.6 | 0.067 | 310 | 0.020 | 0.017 | 77 | 0.082 |
| Anhydrite | 6.1 | 0.067 | 410 | 0.015 | 0.017 | 100 | 0.062 |
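The solution's formulas are easy to script; here is a small Python sketch (function and variable names are my own) that reproduces the table entries, e.g. the water row:

```python
import math

def wave_params(V, f):
    """Return period T (s), wavelength lam (m), and wavenumber kappa (1/m)
    for a velocity V in m/s and a frequency f in Hz."""
    T = 1.0 / f
    lam = V / f
    kappa = 2.0 * math.pi / lam   # equivalently, 2*pi*f / V
    return T, lam, kappa

T, lam, kappa = wave_params(1500.0, 15.0)            # water at 15 Hz
print(round(T, 3), round(lam, 1), round(kappa, 3))   # 0.067 100.0 0.063
```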
2019-12-14 02:00:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 30, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.809477686882019, "perplexity": 5435.16883403861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540579703.26/warc/CC-MAIN-20191214014220-20191214042220-00259.warc.gz"}
http://pysd.readthedocs.io/en/master/basic_usage.html
# Basic Usage

## Importing a model and getting started

To begin, we must first load the PySD module, and use it to import a supported model file:

import pysd
model = pysd.read_vensim('Teacup.mdl')  # model file name assumed for this example

This code creates an instance of the PySD class loaded with an example model that we will use as the system dynamics equivalent of 'Hello World': a cup of tea cooling to room temperature.

To view a synopsis of the model equations and documentation, call the doc() method of the model class. This will generate a listing of all the model elements, their documentation, units, equations, and initial values, where appropriate. Here is a sample from the teacup model:

>>> print model.doc()

## Running the Model

The simplest way to simulate the model is to use the run() command with no options. This runs the model with the default parameters supplied by the model file, and returns a Pandas dataframe of the values of the stocks at every timestamp:

>>> stocks = model.run()

t      teacup_temperature
0.000  180.000000
0.125  178.633556
0.250  177.284091
0.375  175.951387
…

Pandas gives us simple plotting capability, so we can see how the cup of tea behaves:

stocks.plot()
plt.ylabel('Degrees F')
plt.xlabel('Minutes')

## Outputting various run information

The run() command has a few options that make it more useful. In many situations we want to access components of the model other than merely the stocks – we can specify which components of the model should be included in the returned dataframe by including them in a list that we pass to the run() command, using the return_columns keyword argument:

>>> model.run(return_columns=['Teacup Temperature', 'Room Temperature'])

t      Teacup Temperature  Room Temperature
0.000  180.000000          75.0
0.125  178.633556          75.0
0.250  177.284091          75.0
0.375  175.951387          75.0
…

If the measured data that we are comparing with our model comes in at irregular timestamps, we may want to sample the model at timestamps to match. The .run() function gives us this ability with the return_timestamps keyword argument:

>>> model.run(return_timestamps=[0, 1, 3, 7, 9.5, 13.178, 21, 25, 30])

t     Teacup Temperature
0.0   180.000000
1.0   169.532119
3.0   151.490002
7.0   124.624385
9.5   112.541515
…

## Setting parameter values

In many cases, we want to modify the parameters of the model to investigate its behavior under different assumptions. There are several ways to do this in PySD, but the .run() function gives us a convenient method in the params keyword argument. This argument expects a dictionary whose keys correspond to the components of the model. The associated values can either be a constant, or a Pandas series whose indices are timestamps and whose values are the values that the model component should take on at the corresponding time. For instance, in our model we can set the room temperature to a constant value:

model.run(params={'Room Temperature': 20})

Alternately, if we believe the room temperature is changing over the course of the simulation, we can give the run function a set of time-series values in the form of a Pandas series, and PySD will linearly interpolate between the given values in the course of its integration:

import pandas as pd
temp = pd.Series(index=range(30), data=range(20, 80, 2))
model.run(params={'Room Temperature': temp})

Note that once parameters are set by the run command, they are permanently changed within the model. We can also change model parameters without running the model, using PySD's set_components(params={}) method, which takes the same params dictionary as the run function.
We might choose to do this in situations where we'll be running the model many times, and only want to spend time setting the parameters once.

## Setting simulation initial conditions

Finally, we can set the initial conditions of our model in several ways. So far, we've been using the default value for the initial_condition keyword argument, which is 'original'. This value runs the model from the initial conditions that were specified originally by the model file. We can alternately specify a tuple containing the start time and a dictionary of values for the system's stocks. Here we start the model with the tea at just above freezing:

model.run(initial_condition=(0, {'Teacup Temperature': 33}))

Additionally, we can run the model forward from its current position by passing the initial_condition argument the keyword 'current'. After having run the model from time zero to thirty, we can ask the model to continue running forward for another chunk of time:

model.run(initial_condition='current', return_timestamps=range(31, 45))

The integration picks up at the last value returned in the previous run condition, and returns values at the requested timestamps.

There are times when we may choose to overwrite a stock with a constant value (i.e., for testing). To do this, we just use the params value, as before. Be careful not to use 'params' when you really mean to be setting the initial condition!

## Querying current values

We can easily access the current value of a model component by calling its associated method (using python safe names) in the components subclass. For instance, to find the temperature of the teacup, we simply call:

model.components.teacup_temperature()

## Supported functions

Vensim functions include:

| Vensim | Python Translation |
|---|---|
| COS | np.cos |
| EXP | np.exp |
| MIN | min |
| <= | <= |
| STEP | functions.step |
| PULSE | functions.pulse |
| POISSON | np.random.poisson |
| EXPRND | np.random.exponential |
| SIN | np.sin |
| >= | >= |
| IF THEN ELSE | functions.if_then_else |
| LN | np.log |
| PULSE TRAIN | functions.pulse_train |
| RAMP | functions.ramp |
| INTEGER | int |
| TAN | np.tan |
| PI | np.pi |
| = | == |
| < | < |
| > | > |
| MODULO | np.mod |
| ARCSIN | np.arcsin |
| ABS | abs |
| ^ | ** |
| LOGNORMAL | np.random.lognormal |
| MAX | max |
| SQRT | np.sqrt |
| ARCTAN | np.arctan |
| ARCCOS | np.arccos |
| RANDOM NORMAL | self.functions.bounded_normal |
| RANDOM UNIFORM | np.random.rand |
| DELAY1 | functions.Delay |
| DELAY3 | functions.Delay |
| DELAY N | functions.Delay |
| SMOOTH3I | functions.Smooth |
| SMOOTH3 | functions.Smooth |
| SMOOTH N | functions.Smooth |
| SMOOTH | functions.Smooth |
| INITIAL | functions.Initial |
| XIDZ | functions.XIDZ |
| ZIDZ | functions.XIDZ |

np corresponds to the numpy package
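Pulling the pieces above together, a minimal end-to-end sketch (the file name and the parameter value are illustrative assumptions, not from the docs):

```python
import pysd
import matplotlib.pyplot as plt

model = pysd.read_vensim('Teacup.mdl')   # hypothetical local model file
stocks = model.run(params={'Room Temperature': 50},
                   return_columns=['Teacup Temperature'],
                   return_timestamps=range(0, 31))
stocks.plot(legend=False)
plt.ylabel('Degrees F')
plt.xlabel('Minutes')
plt.show()
```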
2017-07-28 02:42:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32674655318260193, "perplexity": 2588.0861689341896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549436321.71/warc/CC-MAIN-20170728022449-20170728042449-00238.warc.gz"}
https://www.hepdata.net/record/30260
Investigation of the Reaction $e^+ e^- \to \eta \pi^+ \pi^-$ in the Energy Range up to 1.4 GeV

Phys.Lett.B 174 (1986) 115-117, 1986.

Abstract (data abstract)
VEPP-2M collider, Neutral Detector.
2021-07-26 20:25:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6235743761062622, "perplexity": 5750.011743769263}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.92/warc/CC-MAIN-20210726183622-20210726213622-00027.warc.gz"}
https://www.usgs.gov/media/files/1970-comparison-basic-modes-imaging-earth-paper
# 1970 Comparison of Basic Modes for Imaging the Earth Paper

## Detailed Description

1970 Comparison of Basic Modes for Imaging the Earth Paper - EROS History Project
2019-09-17 01:39:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8187608122825623, "perplexity": 8693.350423277247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572980.56/warc/CC-MAIN-20190917000820-20190917022820-00047.warc.gz"}
http://www.learn-math.top/integrating-fnxf_nx-where-fnx%E2%88%92x%E2%8B%85f%E2%80%B2n%E2%88%921xf_nx-xcdot-f_n-1x-what-did-i-do-wrong/
# Integrating $f_n(x)$, where $f_n(x) = -x\cdot f'_{n-1}(x)$. What did I do wrong?

I wondered what would happen if I integrated a function defined in terms of its derivative, and after some puzzling, this is what I got:

Define $f_n(x) = -x\cdot f'_{n-1}(x)$. Now:

$I = \int f_n(x)\,dx$

We take $u = f_n(x) \implies du = f_n'(x)\,dx$ and $dv = dx \implies v = x$, so:

$I = x\cdot f_n(x) - \int x\cdot f_n'(x)\,dx = x\cdot f_n(x) + \int f_{n+1}(x)\,dx$

which would mean:

$\int f_n(x)\,dx = x\sum_{k=n}^{\infty} f_k(x) + C$

However, the above result doesn't seem correct. Take for instance $f_n(x) = (n-1)!(\log x)^{-n}$. It satisfies $f_n(x) = -x\cdot f'_{n-1}(x)$, but when I try my little trick, I get:

$\int (\log x)^{-1}\,dx = \int f_1(x)\,dx = x\sum_{k=1}^{\infty} \dfrac{(k-1)!}{(\log x)^k} + C$

which isn't correct.

Question: What did I do wrong? I really have no idea. I don't use integration often, so it could be something really obvious (I hope not).

• You forgot the $\lim_{n\to \infty}\int f_{n+1}\,dx$ term, "so to say". But besides that, the argument is indeed flawless for finite sums. I.e. you indeed get: $\int f_n\,dx = x\sum_{k=n}^{N} f_k + \int f_{N+1}\,dx + C$, for any $N\in\mathbb{N}$ – b00n heT 2 days ago
• You didn't take convergence into account. If the series $\sum_{k=n}^{\infty} f_k(x)$ converges nicely, then it works. If it doesn't, you can only ever do finitely many such steps. – Daniel Fischer♦ 2 days ago
• Okay, thanks. I now know what was wrong. – Mastrem 2 days ago
2018-06-23 00:29:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9403559565544128, "perplexity": 6527.556977822286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864848.47/warc/CC-MAIN-20180623000334-20180623020334-00124.warc.gz"}
http://www.physicsforums.com/showthread.php?p=1512702
by Tarnix
Tags: gravity, spin, things

P: 9 Hello, I'm just getting into astronomy. I have tried to google this question many times but I cannot find an answer. I need to know why gravity makes things spin. I can understand the description of looking at space like a flat membrane. Things with mass, like planets, are heavy. And if you put one on this flat membrane it sinks down and bends the membrane down a little. If something gets close enough to this depression, it will slide down this slope, and collide. Essentially gravity. So I can understand this: gravity pulls things together, because the heavier object will have a deeper depression in space, and lighter objects will slide toward it. But this doesn't explain why it makes things spin. How does gravity make our planet spin on its axis? How does gravity make the planets spin around the sun? Why does the moon spin around the earth and not just collide with it? It doesn't make sense!! Someone help please!

Sci Advisor P: 1,253 Try reading this Wikipedia entry on orbits. The very short answer is that it is not some magical property of gravity per se that causes many things in nature to rotate, spin or orbit but just basic Newtonian physics. Have a read of that entry then come back with any more specific questions you have. Good luck!

P: 9 Ok, I have read your website. From there I understand how objects orbit. By looking at this picture, I can understand how an object fired out of this cannon may or may not orbit depending on how fast it is fired. If it is fired with a low velocity, it will end up at point A. With a little more velocity it will end up at point B, and with too much, it will escape the earth's gravity and continue out in space as in point E. You must have the perfect velocity to keep something in a circular orbit. This may explain why the moon stays in orbit around the earth, and how the planets stay in orbit around the sun. 1) But it doesn't explain why the earth spins on its axis. Also, 2) It doesn't explain what caused objects to move at this perfect velocity in the first place. I'm picturing a huge cloud of dust. I can see that all over the cloud, pieces are clumping together. Eventually, large clumps would form, spread through the cloud. Nothing would be in a perfect disk. And eventually, the large clumps would suck in everything that was close enough, and you would just have large clumps of matter, spread at large distances from each other, all throughout this cloud. They would be sitting still, not close enough to any of the other large clumps to affect each other. So in order for our solar system to form, gravity would have to get these large clumps moving again. Can someone explain 1 and 2?

Mentor P: 22,243 Please Help Why does gravity make things spin? The earth, moon, and planets formed from clouds of gas and dust that were non-uniform and randomly moving. When they collapsed, these motions and non-uniformities induced rotations. Only a perfectly symmetrical, perfectly still cloud of debris could collapse into a non-rotating object.
The wiki page explains this pretty well I think, but again, anything you're not sure of after reading that, post away.

Mentor P: 22,243 Quote by Tarnix: Ok, if they are all randomly moving, why in the end do they all move the same direction? Would it make sense to have some of our planets orbit one way, and others in a different way? Add all that random motion together and there is a net motion in one direction.

Sci Advisor PF Gold P: 9,380 Affirming Wallace and Russ, it's all about conservation of angular momentum. Consider what happens as a star forms from a collapsing gas cloud [forget about planets for now]. Let's also assume the initial gas cloud is not 'rotating'. As the particles are drawn toward the center of gravity, they collide. This deflects their course away from the perfectly straight line they were following toward the center of gravity. If they acquire insufficient velocity to escape the gravitational well of the gas cloud, what path do they take? - a spiral, of course. As the gas cloud densifies, the spirals flatten out. It gets increasingly difficult to approach the center of gravity because collisions become more frequent, hence the paths grow increasingly circular. Please note it is virtually impossible for all the collisions to cancel out. While at first the paths will be purely random, the tiniest of imbalances [like the gravity of the nearest star] will impart a preferred direction of travel.

PF Gold P: 397 I like Chronos's explanation. Conservation of angular momentum is THE main factor. A byproduct of inertia and velocity. You could also refer to the classic example of sitting in a spinning chair with your arms extended, then pulling your arms inwards to your chest... your rotation speed increases. A good example of this is the moon slowly drifting away from the Earth. The moon causes fluctuations of the tides which, by friction, are slightly causing the Earth's rotation to slow. That rotation slowdown causes the moon to increase in distance... where an increase in rotational speed would cause the moon to drift closer. Think of the moon as your arms in that example. Another good one would be using a star as an example. Say you have a red giant star that is slowly rotating. When that giant runs out of fuel and can no longer push outward to resist the pull of gravity... it collapses into a fast-spinning white dwarf.

P: 3 My astronomy professor taught me that the tendency of bodies to spin in space is an unexplained phenomenon, and he said he attended an exhibit about this at NASA's Glenn Research Center. The links recommended in this forum sidestep the question being asked here, and only Chronos has truly offered an explanation. The conservation of angular momentum does not, however, explain why a pencil will suddenly start to spin if an astronaut lets it float inside a spacecraft without applying any force on it. Furthermore, it does not explain why the spinning motion is uniformly counterclockwise. There seems to be a larger impetus at work, but I don't have any guess as to what it might be. I feel your frustration, Tarnix: I haven't been able to find anything on Google about it either. You'd think there would be more theories out there.

Mentor P: 7,315 Quote by tw43: My astronomy professor taught me that the tendency of bodies to spin in space is an unexplained phenomenon, and he said he attended an exhibit about this at NASA's Glenn Research Center.
The links recommended in this forum sidestep the question being asked here, and only Chronos has truly offered an explanation. The conservation of angular momentum does not, however, explain why a pencil will suddenly start to spin if an astronaut lets it float inside a spacecraft without applying any force on it. Furthermore, it does not explain why the spinning motion is uniformly counterclockwise. There seems to be a larger impetus at work, but I don't have any guess as to what it might be. I feel your frustration, Tarnix: I haven't been able to find anything on Google about it either. You'd think there would be more theories out there.

This spontaneous spinning you speak of is news to me. Please link to a verifiable source.

P: 93 Quote by tw43: The conservation of angular momentum does not, however, explain why a pencil will suddenly start to spin if an astronaut lets it float inside a spacecraft without applying any force on it. That's interesting. I wonder if anybody could tell me: Would a perfect ball start spinning as well in this situation? Would this happen even in a cockpit without atmosphere? Also: Not everything is spinning in space. Space itself, for example. And the largest structures are not spinning, as I understand it. I do not know whether galaxy clusters spin around each other. I would doubt it, as they seem to move away from each other. But the galaxies within the clusters themselves certainly spin - a motion which I would guess is due to their interaction through gravity? And would this galaxy spinning not mean that the motion was already there for any cloud that the solar system started out from? Also, the idea that the cloud the solar system formed from is just sitting there in space and then collapses is not right. It was very likely a violent neighbourhood in the galaxy where this took place, under the influence of a very large star sending off very energetic material. And one more argument for the cloud being already in motion: the cloud would be put together from material which was all in motion already: gas, pulled in from intergalactic space to the galaxy, and dust from past stellar explosions. This was no immobile cloud. Greetings from an interested amateur.

Mentor P: 22,243 If objects would start spontaneously spinning while in orbit, the entire space shuttle and all of our satellites would spin. Quite obviously, they do not.

P: 6 Some people have proposed a homopolar motor mechanism to explain rotational motion of bodies in space with an electric current running through them. You are basically converting electrical current into rotational force. Currents run through the earth, and other objects in space, but I'm not sure if they persist for long enough to actually generate rotational motion. That's a possibility, but I don't think it has been proved yet. Maybe meteors with high-angle impacts would give sufficient impulse to cause rotation. Their direction should cancel out on average, but that could be another factor.

P: 82 I might just be hijacking this thread here, but my physics teacher told me a cloud of dust came together and started rotating, and after some steps forms a star. He did mention 'no one knows why the dust cloud rotates', which are forbidden words I'd expect to hear from him because he seemed all-knowing. Anyone shed light on this?
Anyway Tarnix, my understanding of some of the things you ask is centripetal force. I can't find a decent article, but it's things like

$$F = \frac{mv^2}{r} = \frac{GmM}{r^2}$$

Sci Advisor P: 2,340 Well, what happens when a spinning figure skater pulls her arms in? By conservation of angular momentum, her spin rate increases dramatically! In the same way, when a dust cloud collapses, it will usually have some angular momentum at the start of the collapse (wrt "the center"), and the result is that the dust particles typically start to rotate dramatically about "the center" as the cloud collapses.

P: 3 "This spontaneous spinning you speak of is news to me. Please link to a verifiable source." Unfortunately, I cannot find one. I'm just going off of what my teacher told me, but he seems like a fairly reputable guy. "If objects would start spontaneously spinning while in orbit, the entire space shuttle and all of our satellites would spin. Quite obviously, they do not." I'll be the first to admit that I don't know what I'm talking about, but my guess is that the space shuttle is either already in motion, which offsets the spinning, or a stationary spacecraft would have sufficient mass to be orbiting the Earth, which is essentially a large scale spinning of sorts. I guess what I'm referring to is when an object is not affected by gravity or any other force and just idly resting in space, supposedly it will spin. Perhaps I've been misled, but I tend to think that the topic has been brushed aside because sometimes people don't like admitting that they can't figure something out. I saw another forum about this topic and somebody explained it by saying something to the effect that: "I spin, you spin, we all spin: it's just a reference thing." I, however, would argue that it's a tangible, measurable motion that warrants an explanation - an explanation that could perhaps open up a new field of study.

P: 15,319 Quote by russ_watters: If objects would start spontaneously spinning while in orbit, the entire space shuttle and all of our satellites would spin. Quite obviously, they do not. Well, they sort of will. Orbiting objects, if left to their own devices, will (eventually) orient themselves so that their long axis points radially (tidal force). Of course, once there, they'll stop spinning, so it's sorta short-lived. And it takes a long time.
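Since the thread keeps returning to the idea of "the perfect velocity" for a circular orbit, here is a quick numerical sketch (my addition, not part of the thread) that sets gravity equal to the centripetal force, $GmM/r^2 = mv^2/r$, and solves for the Moon's circular-orbit speed; the mass and distance are round textbook values.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_earth = 5.972e24   # kg
    r_moon = 3.844e8     # mean Earth-Moon distance, m

    # GmM/r^2 = m v^2 / r  =>  v = sqrt(G M / r), the "perfect velocity"
    v = math.sqrt(G * M_earth / r_moon)
    period_days = 2 * math.pi * r_moon / v / 86400

    print(f"circular-orbit speed: {v:.0f} m/s")       # about 1018 m/s
    print(f"orbital period: {period_days:.1f} days")  # about 27.4 days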
2014-07-31 03:28:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34459754824638367, "perplexity": 667.3143702501205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272329.26/warc/CC-MAIN-20140728011752-00168-ip-10-146-231-18.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2879050/why-does-11-10-equal-10-plus-an-extra-digit-in-ones/2879061
# Why does $11! - 10!$ equal $10!$ plus an extra digit in "ones"?

Pardon my English as I'm not a native speaker of the language and I'm not a big math guy, so please bear with me and my ignorance for a bit. I've unconsciously stumbled upon something that's most probably blatantly obvious and easy, yet I have no idea how to explain it. So here it goes:

$$11! = 39916800$$
$$10! = 3628800$$
$$11! - 10! = 39916800 - 3628800 = 36288000$$

As you can see, the result equals the value that $10!$ returns "plus" an extra digit added. Is this just a cool coincidence or is there any logic behind that I'm just not aware of?

## migrated from mathematica.stackexchange.com Aug 11 '18 at 4:25

This question came from our site for users of Wolfram Mathematica.

$11!-10!=11\times 10!-1\times10!=10\times10!$

• Just a note to OP: This approach is also applicable to $(10n+1)!-(10n)!$ for any positive integer $n$, but an extra factor of $n$ will be applied to the number. – Mythomorphic Aug 11 '18 at 4:49

By definition of the factorial, we have

$$11! - 10! = 11 \cdot 10! - 10! = (11 - 1)\, 10! = 10 \cdot 10!$$

Now, as you may remember, multiplying by $10$ (or whatever base you are in) has the effect of shifting all digits to the left and adding a zero at the right of your number. This perfectly explains your extra zero at the end. You might try to come up with other examples, such as $101! - 100!$.
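For anyone who wants to see the identity mechanically, here is a one-off check (my addition, not part of the original thread) using Python's exact integer arithmetic:

    import math

    # 11! - 10! = (11 - 1) * 10! = 10 * 10!, hence the extra trailing zero
    assert math.factorial(11) - math.factorial(10) == 10 * math.factorial(10)

    # The suggested follow-up: 101! - 100! = 100 * 100!, i.e. two extra digits
    assert math.factorial(101) - math.factorial(100) == 100 * math.factorial(100)
    print("both identities hold")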
2019-04-20 02:33:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5148775577545166, "perplexity": 229.41979275569153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528481.47/warc/CC-MAIN-20190420020937-20190420042937-00229.warc.gz"}
https://www.preprints.org/manuscript/202006.0039/v1
Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

# A Compartmental Epidemiological Model for the Dissemination of the COVID-19 Disease

Version 1 : Received: 1 June 2020 / Approved: 4 June 2020 / Online: 4 June 2020 (14:47:56 CEST)
Version 2 : Received: 12 June 2020 / Approved: 12 June 2020 / Online: 12 June 2020 (12:14:28 CEST)

How to cite: Matsinos, E. A Compartmental Epidemiological Model for the Dissemination of the COVID-19 Disease. Preprints 2020, 2020060039 (doi: 10.20944/preprints202006.0039.v1).

## Abstract

A compartmental epidemiological model with seven groups is introduced herein, to account for the dissemination of diseases similar to the Coronavirus disease 2019 (COVID-19). In its simplified version, the model contains ten parameters, four of which relate to characteristics of the virus, whereas another four are transition probabilities between the groups; the last two parameters enable the empirical modelling of the effective transmissibility, associated in this study with the cumulative number of fatalities due to the disease within one country. The application of the model to the fatality data (the main input herein) of five countries (to be specific, of those which had suffered most fatalities by April 30, 2020) enabled the extraction of an estimate for the basic reproduction number $R_0$ for the COVID-19 disease: $R_0=4.91(34)$.

## Subject Areas

Epidemiology; infectious disease; compartmental model; mathematical modelling and optimisation; COVID-19; SARS-CoV-2
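The abstract does not spell out the seven compartments, so purely as background, here is a minimal sketch of how a compartmental model of this general kind is integrated numerically: a plain SIR system (three groups, not the paper's seven), with all rates invented for the demo.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal SIR compartmental model (NOT the paper's seven-group model).
    beta, gamma = 0.5, 0.1   # illustrative rates; R0 = beta/gamma = 5

    def sir(t, y):
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    sol = solve_ivp(sir, (0, 160), [0.999, 0.001, 0.0], dense_output=True)
    t = np.linspace(0, 160, 5)
    print(np.round(sol.sol(t).T, 3))  # columns: S, I, R fractions over time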
2020-09-30 23:17:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30673766136169434, "perplexity": 4027.0298315537198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00490.warc.gz"}
https://codereview.stackexchange.com/questions/45046/pdo-sign-up-function-inserting-data-into-multiple-tables
This is a sign up function called on form submission. It firstly inserts key user data into the users table. If successful, secondary data is then inputted into respective tables (such as user job titles and user experience). I'm not really sure whether stacking queries 2-5 is the best way to do it, so I would be interested in knowing how this function could be improved and/or made more secure.

    public function registerFreelancer($firstname, $lastname, $email, $password,
                                       $location, $portfolio, $jobtitle, $priceperhour,
                                       $experience, $bio, $userType) {
        global $bcrypt;
        global $mail;

        $time = time();
        $ip = $_SERVER['REMOTE_ADDR'];
        $email_code = sha1($email + microtime());
        $password = $bcrypt->genHash($password); // generating a hash using the $bcrypt object

        $query = $this->db->prepare("INSERT INTO " . DB_NAME . ".users (firstname, lastname, email, email_code, password, time_joined, location, portfolio, bio, ip) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
        $query->bindValue(1, $firstname);
        $query->bindValue(2, $lastname);
        $query->bindValue(3, $email);
        $query->bindValue(4, $email_code);
        $query->bindValue(5, $password);
        $query->bindValue(6, $time);
        $query->bindValue(7, $location);
        $query->bindValue(8, $portfolio);
        $query->bindValue(9, $bio);
        $query->bindValue(10, $ip);

        try {
            $query->execute();
            // Send email code usually here
            $rows = $query->rowCount();
            if ($rows > 0) {
                $last_user_id = $this->db->lastInsertId('user_id');

                $query_2 = $this->db->prepare("INSERT INTO " . DB_NAME . ".freelancers (freelancer_id, jobtitle, priceperhour) VALUE (?,?,?)");
                $query_2->bindValue(1, $last_user_id);
                $query_2->bindValue(2, $jobtitle);
                $query_2->bindValue(3, $priceperhour);
                $query_2->execute();

                $query_3 = $this->db->prepare("INSERT INTO " . DB_NAME . ".user_types (user_type_id, user_type) VALUE (?,?)");
                $query_3->bindValue(1, $last_user_id);
                $query_3->bindValue(2, $userType);
                $query_3->execute();

                $query_4 = $this->db->prepare("INSERT INTO " . DB_NAME . ".user_experience (experience_id, experience) VALUE (?,?)");
                $query_4->bindValue(1, $last_user_id);
                $query_4->bindValue(2, $experience);
                $query_4->execute();

                if ($userType == 'designer') {
                    $query_5 = $this->db->prepare("INSERT INTO " . DB_NAME . ".designer_titles (job_title_id, job_title) VALUE (?,?)");
                    $query_5->bindValue(1, $last_user_id);
                    $query_5->bindValue(2, $jobtitle);
                    $query_5->execute();
                } else if ($userType == 'developer') {
                    $query_5 = $this->db->prepare("INSERT INTO " . DB_NAME . ".developer_titles (job_title_id, job_title) VALUE (?,?)");
                    $query_5->bindValue(1, $last_user_id);
                    $query_5->bindValue(2, $jobtitle);
                    $query_5->execute();
                }
                return true;
            }
        } catch (PDOException $e) {
            die($e->getMessage());
        }
    }

• Just so it's said, your code is pretty intimate with the structure of the DB. Which will end up being annoying later on, particularly if this is running outside of the model/DAL. I'd much rather say $user = (get a fresh User object); $user->someAttribute = $someValue; ... ; $user->insert();. – cHao Mar 22 '14 at 0:16

• Could you perhaps elaborate on what you mean - I'm fairly new to OOPHP, so it would be good to know what you are talking about in more detail – jshjohnson Mar 22 '14 at 21:32

• I mean that if this code is actually in the signup page, and other code that messes with these records is also in its respective page, then you have a bunch more places to worry about changing if you change anything about users.
You'd be better off to take these functions out of their respective pages and group them together in one place (even if they're all just functions in the same file... although a popular option is to put them in a User class). The end result would be one reusable, ideally cohesive library of functions to manipulate users. (Sorry, just saw this.) – cHao May 3 '14 at 19:11

## 1 Answer

1. If the second or later query fails you'll have inconsistent data in your database. Consider using atomic transactions.

2. $query_2, $query_3 are bad names. You could pick something more descriptive, like $usersInsert, $freelancersInsert etc.

3. $query = $this->db->prepare("INSERT INTO " . DB_NAME . ".users (firstname, lastname, email, email_code, password, time_joined, location, portfolio, bio, ip) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
   $query->bindValue(1, $firstname);
   $query->bindValue(2, $lastname);
   ...

   I'd use named binds here. As far as I see they work with INSERT statements too.

   $query = $this->db->prepare("INSERT INTO " . DB_NAME . ".users (firstname, lastname, email, email_code, password, time_joined, location, portfolio, bio, ip) VALUES (:firstname, :lastname, ...)");
   $query->bindValue(":firstname", $firstname);
   $query->bindValue(":lastname", $lastname);
   ...

   It would be less error-prone (harder to mix parameters up) and would be easier to read/follow.

4. Consider Hayley Watson's comment on the die manual page:

   "It is poor design to rely on die() for error handling in a web site because it results in an ugly experience for site users: a broken page and - if they're lucky - an error message that is no help to them at all. As far as they are concerned, when the page breaks, the whole site might as well be broken. If you ever want the public to use your site, always design it to handle errors in a way that will allow them to continue using it if possible. If it's not possible and the site really is broken, make sure that you find out so that you can fix it. die() by itself won't do either."

   Furthermore, at least log the error to a log file, otherwise you might never know if your future users can't even register.

5. if ($userType == 'designer') {
       $query_5 = $this->db->prepare("INSERT INTO " . DB_NAME . ".designer_titles (job_title_id, job_title) VALUE (?,?)");
       $query_5->bindValue(1, $last_user_id);
       $query_5->bindValue(2, $jobtitle);
       $query_5->execute();
   } else if ($userType == 'developer') {
       $query_5 = $this->db->prepare("INSERT INTO " . DB_NAME . ".developer_titles (job_title_id, job_title) VALUE (?,?)");
       $query_5->bindValue(1, $last_user_id);
       $query_5->bindValue(2, $jobtitle);
       $query_5->execute();
   }

   Both cases are almost the same. You could create a function for that with a $tableName parameter to remove the duplication. It can be a sign that you could have another database structure with only one table instead of two:

   TABLE titles:
   - role (possible values: developer, designer)
   - job_title_id
   - job_title

6. if ($userType == 'designer') {
       ...
   } else if ($userType == 'developer') {
       ...
   }

   You could be more defensive here: if $userType contains something else (not designer nor developer) and it should be considered a programming (or input validation) error, signal it somehow. I'd throw an exception and log it in a catch block. (The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas: Dead Programs Tell No Lies.)

7. You could save a few indentation levels with guard clauses, which would be readable.
You wouldn't have to read through the whole function to figure out what happens when $rows > 0 is false:

   $rows = $query->rowCount();
   if (!($rows > 0)) {
       return;
   }

   $last_user_id = $this->db->lastInsertId('user_id');
   ...

   It might be more unambiguous to use return false instead of an implicit return here.

8. The comment doesn't say too much, the code is already obvious here:

   $password = $bcrypt->genHash($password); // generating a hash using the $bcrypt object

   I'd remove it. (Clean Code by Robert C. Martin: Chapter 4: Comments, Noise Comments)

9. A lot of parameters is a code smell: Long Parameter List. Their types are the same, so it's easy to mix them up. Consider using a parameter object instead, which would contain named fields.

   public function registerFreelancer($firstname, $lastname, $email, $password, $location, $portfolio, $jobtitle, $priceperhour, $experience, $bio, $userType) {

   See: Martin Fowler's Refactoring: Improving the Design of Existing Code book, Chapter 3. Bad Smells in Code, Long Parameter List

10. I've found this kind of formatting is rather hard to maintain:

   $time       = time();
   $ip         = $_SERVER['REMOTE_ADDR'];
   $email_code = sha1($email + microtime());
   $password   = $bcrypt->genHash($password); // generating a hash using the bcrypt object

   If you have a new variable with a longer name you have to modify several other lines too to keep it nice. It also looks bad on revision control diffs and could cause unnecessary merge conflicts. From Code Complete, 2nd Edition by Steve McConnell, p758:

   "Do not align right sides of assignment statements [...] With the benefit of 10 years' hindsight, I have found that, while this indentation style might look attractive, it becomes a headache to maintain the alignment of the equals signs as variable names change and code is run through tools that substitute tabs for spaces and spaces for tabs. It is also hard to maintain as lines are moved among different parts of the program that have different levels of indentation."

• Wow, this is seriously informative and exactly what I was looking for! Thanks. As an alternative to die(), would you recommend using something like: echo 'Caught exception: ', $e->getMessage(), "\n"; ? – jshjohnson Mar 22 '14 at 10:19

• @jshjohnson: I'm happy that you've found it useful. I don't think that the exception message (which I guess contains some database-related message) would be useful for a user but anyway, test it and decide. (And don't forget to log it.) I've just written another answer here about the same topic, check #3. – palacsint Mar 22 '14 at 10:35

• Just re-reading through this. Could you perhaps elaborate what you mean regarding a parameter object? If I declared a new Freelancers object, how would the function get the data passed to it? – jshjohnson Apr 7 '14 at 10:31

• @jshjohnson: Huh, good question. Object relational mapping has its issues: en.wikipedia.org/wiki/Object-relational_impedance_mismatch. Anyway, I've put another link into the post and here are two ideas: create a builder; move the register method into the Freelancer class. In the latter case you might also get a better API with a builder, but it might not be worth it. I guess it depends on the size of the project. – palacsint Apr 7 '14 at 20:41

• As for point 4: Aside from stopping the flow of information (and in particular, error messages) to code that could do something useful with it, die() makes it more of a pain -- if not outright impossible -- to run automated tests (like, say, unit tests).
I've run into this before; if you're not anticipating such ugliness, an error can cause the whole test suite to just stop dead in its tracks, possibly without even so much as a visible error message or nonzero exit code. – cHao Sep 10 '14 at 18:02
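Point 1 (atomic transactions) is the key fix, and it generalizes beyond PHP. As a language-neutral illustration only - this is not the PDO API, and the two-table schema below is invented - here is a Python/sqlite3 sketch of the same registration flow wrapped in a single transaction; in PDO the equivalent calls are beginTransaction(), commit(), and rollBack().

    import sqlite3

    # Hypothetical schema loosely mirroring the thread's users/freelancers tables.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users (user_id INTEGER PRIMARY KEY, email TEXT);
        CREATE TABLE freelancers (freelancer_id INTEGER, jobtitle TEXT, priceperhour REAL);
    """)

    try:
        with conn:  # opens a transaction; commits on success, rolls back on any exception
            cur = conn.execute("INSERT INTO users (email) VALUES (?)", ("a@b.c",))
            user_id = cur.lastrowid
            conn.execute(
                "INSERT INTO freelancers (freelancer_id, jobtitle, priceperhour) VALUES (?, ?, ?)",
                (user_id, "designer", 40.0),
            )
    except sqlite3.Error as exc:
        # Both inserts are undone together, so no half-registered users remain.
        print("registration failed:", exc)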
2021-01-20 03:23:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3008582890033722, "perplexity": 5559.202083182273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519883.54/warc/CC-MAIN-20210120023125-20210120053125-00477.warc.gz"}
https://ch.mathworks.com/help/aeroblks/directioncosinematrixbodytowindtoalphaandbeta.html
# Direction Cosine Matrix Body to Wind to Alpha and Beta

Convert direction cosine matrix to angle of attack and sideslip angle

• Library: Aerospace Blockset / Utilities / Axes Transformations

## Description

The Direction Cosine Matrix Body to Wind to Alpha and Beta block converts a 3-by-3 direction cosine matrix (DCM) to angle of attack and sideslip angle. The DCM performs the coordinate transformation of a vector in body axes (ox0, oy0, oz0) into a vector in wind axes (ox2, oy2, oz2). For more information on the direction cosine matrix, see Algorithms.

## Limitations

• This implementation generates angles that lie between ±90 degrees.

## Ports

### Input

Direction cosine matrix to transform body-fixed vectors to wind-fixed vectors, specified as a 3-by-3 direction cosine matrix.

Data Types: double

### Output

Angle of attack and sideslip angle, returned as a vector, in radians.

Data Types: double

## Parameters

Block behavior when the direction cosine matrix is invalid (not orthogonal):

• Warning - Displays warning indicating that the direction cosine matrix is invalid.

• Error - Displays error indicating that the direction cosine matrix is invalid.

• None - Does not display warning or error (default).

#### Programmatic Use

Block Parameter: action
Type: character vector
Values: 'None' | 'Warning' | 'Error'
Default: 'None'
Data Types: char | string

Tolerance of the direction cosine matrix validity, specified as a scalar. The block considers the direction cosine matrix valid if these conditions are true:

• The transpose of the direction cosine matrix times itself equals 1 within the specified tolerance (transpose(n)*n == 1±tolerance).

• The determinant of the direction cosine matrix equals 1 within the specified tolerance (det(n) == 1±tolerance).

#### Programmatic Use

Block Parameter: tolerance
Type: character vector
Values: 'eps(2)' | scalar
Default: 'eps(2)'
Data Types: double

## Algorithms

The DCM performs the coordinate transformation of a vector in body axes (ox0, oy0, oz0) into a vector in wind axes (ox2, oy2, oz2). The order of the axis rotations required to bring this about is:

1. A rotation about oy0 through the angle of attack (α) to axes (ox1, oy1, oz1)

2. A rotation about oz1 through the sideslip angle (β) to axes (ox2, oy2, oz2)

$$\begin{bmatrix} ox_2 \\ oy_2 \\ oz_2 \end{bmatrix} = DCM_{wb} \begin{bmatrix} ox_0 \\ oy_0 \\ oz_0 \end{bmatrix} = \begin{bmatrix} \cos\beta & \sin\beta & 0 \\ -\sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} \begin{bmatrix} ox_0 \\ oy_0 \\ oz_0 \end{bmatrix}$$

Combining the two axis transformation matrices defines the following DCM:

$$DCM_{wb} = \begin{bmatrix} \cos\alpha\cos\beta & \sin\beta & \sin\alpha\cos\beta \\ -\cos\alpha\sin\beta & \cos\beta & -\sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}$$

To determine angles from the DCM, the following equations are used:

$$\alpha = \operatorname{asin}(-DCM(3,1)), \qquad \beta = \operatorname{asin}(DCM(1,2))$$

## References

[1] Stevens, Brian L., Frank L.
Lewis. Aircraft Control and Simulation, Second Edition. Hoboken, NJ: Wiley–Interscience. ## Version History Introduced before R2006a
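To make the transformation concrete, here is a short NumPy sketch (my addition; the block itself is Simulink/MATLAB, so treat this purely as an illustration of the math above, not the product API) that builds the DCM from α and β and then recovers the angles with the stated asin formulas.

    import numpy as np

    def dcm_wind_from_body(alpha, beta):
        # Composition of the two rotations described in the Algorithms section.
        ca, sa = np.cos(alpha), np.sin(alpha)
        cb, sb = np.cos(beta), np.sin(beta)
        return np.array([
            [ ca * cb,  sb,  sa * cb],
            [-ca * sb,  cb, -sa * sb],
            [-sa,      0.0,  ca     ],
        ])

    alpha, beta = np.deg2rad(5.0), np.deg2rad(-2.0)
    dcm = dcm_wind_from_body(alpha, beta)

    # Recover the angles exactly as the block does (the doc uses 1-based indexing):
    alpha_rec = np.arcsin(-dcm[2, 0])  # asin(-DCM(3,1))
    beta_rec = np.arcsin(dcm[0, 1])    # asin(DCM(1,2))
    print(np.rad2deg([alpha_rec, beta_rec]))  # -> [ 5. -2.]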
2022-05-25 14:16:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7552319765090942, "perplexity": 11294.166470876526}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00576.warc.gz"}
http://www.theinfolist.com/php/SummaryGet.php?FindGo=Standard_atomic_weight
The standard atomic weight (Ar, standard(E)) of a chemical element is the weighted arithmetic mean of the relative isotopic masses of all isotopes of that element weighted by each isotope's abundance on Earth. For example, isotope 63Cu (Ar = 62.929) constitutes 69% of the copper on Earth, the rest being 65Cu (Ar = 64.927), so ${\displaystyle A_{\text{r, standard}}(_{\text{29}}{\text{Cu}})=0.69\times 62.929+0.31\times 64.927=63.55.}$ Because relative isotopic masses are dimensionless quantities, this weighted mean is also dimensionless. It can be converted into a measure of mass (with dimension M) by multiplying it with the dalton, also known as the atomic mass constant. Among various variants of the notion of atomic weight (Ar, also known as relative atomic mass) used by scientists, the standard atomic weight (Ar, standard) is the most common and practical. The standard atomic weight of each chemical element is determined and published by the Commission on Isotopic Abundances and Atomic Weights (CIAAW) of the International Union of Pure and Applied Chemistry (IUPAC) based on natural, stable, terrestrial sources of the element. The definition specifies the use of samples from many representative sources from the Earth, so that the value can widely be used as 'the' atomic weight for substances as they are encountered in reality - for example, in pharmaceuticals and scientific research. Non-standardized atomic weights of an element are specific to sources and samples, such as the atomic weight of carbon in a particular bone from a particular archeological site. Standard atomic weight averages such values to the range of atomic weights that a chemist might expect to derive from many random samples from Earth. This range is the rationale for the interval notation given for some standard atomic weight values. Of the 118 known chemical elements, 80 have stable isotopes and 84 have this Earth-environment based value. Typically, such a value is, for example helium: Ar, standard(He) = 4.002602(2). The "(2)" indicates the uncertainty in the last digit shown, to read 4.002602±0.000002. IUPAC also publishes abridged values, rounded to five significant figures. For helium, Ar, abridged(He) = 4.0026. For thirteen elements the samples diverge on this value, because their sample sources have had a different decay history. For example, thallium (Tl) in sedimentary rocks has a different isotopic composition than in igneous rocks and volcanic gases. For these elements, the standard atomic weight is noted as an interval: Ar, standard(Tl) = [204.38, 204.39]. With such an interval, for less demanding situations, IUPAC also publishes a conventional value. For thallium, Ar, conventional(Tl) = 204.38. Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide) and isotopic composition of a sample. Highly accurate atomic masses are available[7][8] for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples.[9][10] For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy.
For example, there is an uncertainty of only one part in 38 million for the relative atomic mass of fluorine, a precision which is greater than the current best value for the Avogadro constant (one part in 20 million).
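The copper example at the top is just a two-term weighted mean; as a trivial sanity check (my addition, not part of the article):

    # Weighted mean of relative isotopic masses for copper, as in the article.
    isotopes = [(62.929, 0.69),  # 63Cu
                (64.927, 0.31)]  # 65Cu
    ar_standard = sum(mass * abundance for mass, abundance in isotopes)
    print(round(ar_standard, 2))  # -> 63.55, dimensionless by construction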
2021-03-03 05:39:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7199628353118896, "perplexity": 1701.308225105872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365454.63/warc/CC-MAIN-20210303042832-20210303072832-00511.warc.gz"}
https://www.physicsforums.com/threads/can-someone-explain-to-me-why-college-textbooks-are-ridiculously-costly.370408/
# Can someone explain to me why college textbooks are ridiculously costly?

1. Jan 17, 2010 ### noblegas I was looking on amazon.com at Leonard Susskind's The Black Hole War and David Griffiths' Introduction to Quantum Mechanics. They are both hardcover, and Susskind's book contains more pages than Griffiths' book, yet Griffiths' Introduction to Quantum Mechanics costs 5 times as much as Susskind's book. What is the explanation for the ridiculous, uncompromising price of a college textbook? Is the price ridiculously expensive because publishers of college textbooks know that they will have a limited audience when they publish a college textbook, much like the ridiculous price of academic journals? Textbooks should not be more than the cost of most iPods and the latest video games.

2. Jan 17, 2010 ### Staff: Mentor The disconnect between the person who chooses the book and the one who buys it eliminates price competition. In addition, there is planned obsolescence and anti-competitive tactics.

3. Jan 17, 2010 ### glueball8 Wow, I was searching Griffiths' intro to EM and quantum mechanics too. They're way too expensive, not worth it. I'll get it from the library. :( The quantum mechanics one is $35 more in Canadian dollars than in the U.S. :( WHY?

4. Jan 17, 2010 ### Proton Soup There is also a trend toward the price including more than simply a book. Publishers like Pearson will also have online content and online testing. So the students are now paying an extra fee, in addition to tuition, for someone to evaluate their performance.

5. Jan 17, 2010 ### Pinu7 There are many (cheaper and better) alternatives to Griffiths. Anyway, textbook prices are too high; you have to find:

1. A GOOD bookstore (you often see 30-50% discounts on good books, if you find a good store. Try to find one locally owned instead of chains like Borders if you can).

2. Alternatives. Mainstream books do not always mean the best or most insightful. By getting an alternative, you can save a lot of money. I do not think I have ever liked the "standard recommendations" (Griffiths, Goldstein, Jackson, Sakurai, Rudin, etc.) anyway, so it worked out for me and it might work out for you.

3. Lecture notes. Sometimes, lecture notes are better than any textbook. For example, Terence Tao's Analysis lecture notes were better than any book on Analysis available (not anymore, since he then wrote two books on Analysis).

6. Jan 17, 2010 ### mgb_phys Simple: you are paying $X*10,000 a year for tuition; you aren't going to care about a few $1000 in textbooks. It used to be the one area where the UK was cheaper; the same textbooks (sometimes in paperback) cost 20-30 GBP instead of $150-180 - simply because tuition was free.

7. Jan 17, 2010 ### Proton Soup Something else I thought of. In addition to the online assignments and grading, the books are also coming with "test banks" and software to automatically generate exams for the teachers. Lots of features are being added to convince professors to choose the book, and these features cost money. It's no longer like you're paying some guy that writes a book and finds a publisher for it; your money goes to support a production company. And now, this is what I'm seeing at the lower undergrad level as a returning student. For upper level classes, the books were always expensive, and often very thin (which makes you feel even more ripped off when you can't use it for 4 quarters like an old calculus book).
For a grad level text, you have no economy of scale to bring the price down, so the price is probably actually somewhat fair.

8. Jan 17, 2010 ### waht There is more area on a page, and the quality of paper is much higher than that of a cheap paperback or a hardcover. Also many textbooks are printed in color, with graphs and photographs. That adds to the tab.

9. Jan 17, 2010 ### Pinu7 Note: It gets better as you advance, since the upper level books lack the expensive "gloss."

10. Jan 18, 2010 ### Cyrus Not having an education is way more expensive.

11. Jan 18, 2010 ### theJorge551 Sooo true... On a side note, for most books, whether they be in the science realm or anywhere else for that matter, the actual number of pages matters very little. I recently purchased a large 800 page paperback book for roughly $15 (list price), yet a small book on Antimatter, roughly 150-200 pages, cost me about $25.

12. Jan 18, 2010 ### Ben Niehoff You can get good deals on Amazon if you wait till about 3 weeks after the semester begins. They always jack up the price during the prime buying season. And you can get even better deals if you have some Indian friends...

13. Jan 18, 2010 ### Klockan3 The books are cheaper in Europe; it is funny how on all American books we use there is a note on the back "Not for Sale in the U.S.A. or Canada". By the way, it is the same thing with medicine.

14. Jan 18, 2010 ### Matterwave The elasticity of demand for textbooks is relatively inelastic; therefore, with monopoly pricing, the prices will tend to be far above costs.

15. Jan 18, 2010 ### Nick89 I just saw Griffiths' book costs $108 (discounted from $137 even) on Amazon, which is indeed ridiculously high. I have the book myself (I live in the Netherlands) and I'm pretty sure I paid only about 40 euros for it, which would be $57 now, probably more like $45/50 back when I bought it (with different exchange rates). It does indeed note clearly that it's not for sale in the USA or Canada, hehe. I don't know if I should recommend the book for you, as I've never read a similar book to compare it to, but I enjoyed reading it a lot, much more than any other textbook (on different subjects though).

16. Jan 18, 2010 A quick tip is to buy the current edition minus one, i.e. my thermodynamics class required the 9th (newest) edition of the book, priced at $160, while the 8th edition was going for $20-$30 on sites like amazon.com and half.com. The difference between ed 8 and ed 9? Cover art, a few grammar and spelling mistakes, a couple of HW problems reworded, and some new errors introduced (probably so edition 10 could be released a couple years later). Doesn't work for every book (books that have many editions work best). Use the library to make your best judgement.

17. Jan 18, 2010 ### Proton Soup Foreign pricing is often different, and much lower for the foreign-sold item. For example, prescription drugs cost much more in the US than what they're sold for in a foreign market. You simply gouge a market to the extent that it can withstand the gouging.

18. Jan 18, 2010 ### mgb_phys But then you will miss out on all the new discoveries in Algebra, Classical Mechanics and Intro Calculus in the last year! A friend had to teach an intro class in a major N American uni. They were required (by the dept) to put a quiz in every lecture which used a hand held electronic selector gadget that came with the textbook. Apparently this was to promote interactivity in the lecture - only a cynic would think it was to force people to buy the textbook.

19.
Jan 18, 2010 ### mgb_phys It was just a bit of a shock, after having to pay 1 GBP to $1 for computer books (and computers!), that US textbooks were 5x the price.

20. Jan 18, 2010 ### glueball8 Is it possible to buy books from another country if it's cheaper?
2017-12-12 03:29:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22652390599250793, "perplexity": 1882.1354819887943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514250.21/warc/CC-MAIN-20171212021458-20171212041458-00023.warc.gz"}
https://docs.snowflake.com/en/sql-reference/account-usage/access_history.html
Schema: ACCOUNT_USAGE

# ACCESS_HISTORY View

This Account Usage view can be used to query the access history of Snowflake objects (e.g. table, view, column) within the last 365 days (1 year).

## Columns

There are two tables in this section:

• The first table defines the columns in the ACCESS_HISTORY view.

• The second table defines the fields in the JSON array for the BASE_OBJECTS_ACCESSED, DIRECT_OBJECTS_ACCESSED, and OBJECTS_MODIFIED columns.

QUERY_ID (TEXT): An internal, system-generated identifier for the SQL statement. This value is also mentioned in the QUERY_HISTORY View. Example: a0fda135-d678-4184-942b-c3411ae8d1ce

QUERY_START_TIME (TIMESTAMP_LTZ): The statement start time (UTC time zone). Example: 2022-01-25 16:17:47.388 +0000

USER_NAME (TEXT): The user who issued the query. Example: JSMITH

DIRECT_OBJECTS_ACCESSED (ARRAY): A JSON array of data objects such as tables, views, and columns directly named in the query explicitly or through shortcuts such as using an asterisk (i.e. *). Virtual columns can be returned in this field. Example:

    [
      {
        "columns": [
          { "columnId": 68610, "columnName": "CONTENT" }
        ],
        "objectDomain": "Table",
        "objectId": 66564,
        "objectName": "TEST_DB.TEST_SCHEMA.T1"
      }
    ]

BASE_OBJECTS_ACCESSED (ARRAY): A JSON array of all base data objects, specifically, columns of tables to execute the query. Note: This field specifies view names or view columns, including virtual columns, if a shared view is accessed in a data sharing consumer account. Example: same shape as the DIRECT_OBJECTS_ACCESSED example above.

OBJECTS_MODIFIED (ARRAY): A JSON array that specifies the objects that were associated with a write operation in the query. Example:

    [
      {
        "objectDomain": "STRING",
        "objectId": NUMBER,
        "objectName": "STRING",
        "columns": [
          {
            "columnId": "NUMBER",
            "columnName": "STRING",
            "baseSources": [
              { "columnName": STRING, "objectDomain": "STRING",
                "objectId": NUMBER, "objectName": "STRING" }
            ],
            "directSources": [
              { "columnName": STRING, "objectDomain": "STRING",
                "objectId": NUMBER, "objectName": "STRING" }
            ]
          }
        ]
      },
      ...
    ]

The fields in the JSON array for the DIRECT_OBJECTS_ACCESSED, BASE_OBJECTS_ACCESSED, and OBJECTS_MODIFIED columns are described below.

columnId (NUMBER): A column ID that is unique within the account. This value is identical to the columnId in the COLUMNS view.

columnName (TEXT): The name of the accessed column.

objectId (NUMBER): An identifier for the object, which is unique within a given account and domain. This number will match:

• The TABLE_ID number for a table, view, and materialized view.

• If a stage was accessed, this number will match the:

  • NAME identifier for a user (User stage).

  • TABLE_ID number for a table (Table stage).

  • STAGE_ID number for a stage (Named stage).

objectName (TEXT): The fully qualified name of the object that was accessed. If a stage was accessed, this value will be the:

• username (User stage).

• table_name (Table stage).

• stage_name (Named stage).

objectDomain (TEXT): One of the following: Table, View, Materialized view, External table, Stream, or Stage.

location (TEXT): The URL of the external location when the data access is an external location (e.g. s3://mybucket/a.csv). If the query does not access a stage, this field is omitted.

stageKind (TEXT): When writing to a stage, one of the following: Table | User | Internal Named | External Named. If the query does not access a stage, this field is omitted.
baseSources (TEXT): The columns that serve as the source columns for the columns specified by directSources. These columns facilitate column lineage.

directSources (TEXT): The columns specifically mentioned in the data write portion of the SQL statement that serve as the source columns in the target table to which data is written. These columns facilitate column lineage.

## Usage Notes

General notes

• The view displays data starting from February 22, 2021.

• For increased performance, filter queries on the QUERY_START_TIME column and choose narrower time ranges. For sample queries, see Querying the ACCESS_HISTORY View.

• Secure Views. The log record contains the underlying base table (i.e. BASE_OBJECTS_ACCESSED) to generate the view. Examples include queries on other Account Usage views and queries on base tables for extract, transform, and load (i.e. ETL) operations.

This view supports read queries of the following type:

• SELECT, including CREATE TABLE … AS SELECT (i.e. CTAS). Snowflake records the SELECT subquery in a CTAS operation.

• CREATE TABLE … CLONE. Snowflake records the source table in a CLONE operation.

• COPY INTO … TABLE. Snowflake logs this query only when the table is specified as the source in a FROM clause.

• DML operations that read data (e.g. contains a SELECT subquery, specifies certain columns in WHERE or JOIN): INSERT … SELECT, UPDATE, DELETE, and MERGE.

• User-defined functions (i.e. UDFs) and Tabular SQL UDFs (UDTFs) if tables are included in queries inside the functions. This is logged in the BASE_OBJECTS_ACCESSED field. For more details on UDFs, see the UDF notes (in this topic).

Write operation notes

This view supports write operations of the following type:

• GET <internal_stage>

• PUT <internal_stage>

• DELETE

• TRUNCATE

• INSERT

  • INSERT INTO … FROM SELECT *

  • INSERT INTO TABLE … VALUES ()

• MERGE INTO … FROM SELECT *

• UPDATE

  • UPDATE TABLE … FROM SELECT * FROM …

  • UPDATE TABLE … WHERE …

• COPY INTO TABLE FROM internalStage

• COPY INTO TABLE FROM externalStage

• COPY INTO TABLE FROM externalLocation

• COPY INTO internalStage FROM TABLE

• COPY INTO externalStage FROM TABLE

• COPY INTO externalLocation FROM TABLE

• CREATE:

  • CREATE DATABASE … CLONE

  • CREATE SCHEMA … CLONE

  • CREATE TABLE … CLONE

  • CREATE TABLE … AS SELECT

• For write operations that call the CASE function to determine the columns to access, such as a CTAS statement with the CASE function in the SELECT query, all columns referenced in every CASE branch are recorded in the BASE_OBJECTS_ACCESSED column, the DIRECT_OBJECTS_ACCESSED column, or both columns depending on how the CTAS statement is written.

Data sharing notes

If a Data Sharing provider account shares objects to Data Sharing consumer accounts through a share:

• Provider accounts: The queries and logs on the shared objects executed in the provider account are not visible to Data Sharing consumer accounts.

• Consumer accounts: The queries on the data share executed in the consumer account are logged and only visible to the consumer account, not the Data Sharing provider account. For example, if the provider shares a table and a view built from the table to the consumer account, and there is a query on the shared view, Snowflake records the shared view access in the BASE_OBJECTS_ACCESSED column.
This record, which includes the columnName and objectName values, allows the consumer to know which object was accessed in their account. It also protects the provider, because the underlying table (via the objectId and columnId) is not revealed to the consumer.

• For column lineage: If a data sharing provider makes a view available to the data sharing consumer, the source columns for the view are not visible to the consumer because the columns originate from the data sharing provider. If the data sharing consumer moves data from the shared view to a table, Snowflake does not record the view columns as baseSources for the newly created table.

UDF & stored procedure notes

This update is postponed and will be made available in a future release. Snowflake preserves rows that already contain references to UDFs and stored procedures in your local ACCESS_HISTORY view.

Not supported

• This view does not log accesses of the following types:
• Additionally, this view does not support:
  • Sequences, including generating new values.
  • Data that enters or leaves Snowflake while using an External Function.
  • Intermediate views accessed between the base table and the direct object. For example, consider a query on View_A with the following object structure: View_A » View_B » View_C » Base_Table. The ACCESS_HISTORY view records the query on View_A and the Base_Table, not View_B and View_C.
  • The operations to populate views, materialized views, and streams.
  • Data movement resulting from replication.

## Usage Notes: Column Lineage

These additional notes pertain to column lineage:

Supported operations

Column lineage tracks details for the following SQL operations:

Query conditions

• Query profile/plan: The query plan Snowflake writes determines whether the ACCESS_HISTORY view records column lineage. If a column needs to be evaluated as part of the query plan, Snowflake records the column in the ACCESS_HISTORY view, even if the column is not included in the end result of the query. For example, consider the following INSERT statement with a WHERE clause for a particular column value:

    insert into a(c1) select c2 from b where c3 > 1;

  Even if the WHERE clause evaluates to FALSE, Snowflake records the c2 column as a source column for the c1 column. The c3 column is not listed as a source column in either baseSources or directSources.

• Masking policies:
  • The masked column is always listed in the directSources field.
  • The record in the baseSources field depends on the policy definition. For example:
    • If the masking policy conditions use a CASE function, then all of the columns referenced in each of the CASE branches are recorded in the baseSources field.
    • If the masking policy conditions only specify a constant value (e.g. *****), then the baseSources field is empty.

• UDFs:
  • When passing a column as an argument to a UDF and writing the result to another column, the column that is passed as the argument is recorded in the directSources field. For example:

        insert into A(col1) select f(col2) from B;

    In this example, Snowflake records col2 in the directSources field because the column is an argument for the UDF named f.
  • The record in the baseSources field depends on the UDF definition.

View columns

View columns are not considered to be source columns, and are not listed in the baseSources field when data from a view column is copied to a table column. The view columns in this case are listed in the directSources field.
EXISTS subquery

Columns that are referenced in an EXISTS subquery clause are not considered to be source columns.
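A minimal sketch of queries along the lines of these notes; the user name 'JSMITH' and the seven-day window are illustrative placeholders, not part of the view's definition. The first query flattens DIRECT_OBJECTS_ACCESSED to list the objects a user read; the second walks OBJECTS_MODIFIED through its columns and directSources fields to recover column lineage, filtering on QUERY_START_TIME as the general notes recommend.

    -- Example 1: which objects did a given user read last week?
    -- One output row per object in the DIRECT_OBJECTS_ACCESSED array.
    SELECT
        ah.query_id,
        ah.query_start_time,
        obj.value:"objectName"::TEXT   AS object_name,
        obj.value:"objectDomain"::TEXT AS object_domain
    FROM snowflake.account_usage.access_history ah,
         LATERAL FLATTEN(input => ah.direct_objects_accessed) obj
    WHERE ah.user_name = 'JSMITH'
      AND ah.query_start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
    ORDER BY ah.query_start_time DESC;

    -- Example 2: column lineage -- for each modified column,
    -- list the source columns recorded in directSources.
    SELECT
        ah.query_id,
        obj.value:"objectName"::TEXT AS target_table,
        col.value:"columnName"::TEXT AS target_column,
        src.value:"objectName"::TEXT AS source_table,
        src.value:"columnName"::TEXT AS source_column
    FROM snowflake.account_usage.access_history ah,
         LATERAL FLATTEN(input => ah.objects_modified)       obj,
         LATERAL FLATTEN(input => obj.value:"columns")       col,
         LATERAL FLATTEN(input => col.value:"directSources") src
    WHERE ah.query_start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP());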
2022-12-03 15:18:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17798006534576416, "perplexity": 7842.369401981288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00358.warc.gz"}
https://stats.stackexchange.com/questions/25355/multi-value-categorical-attributes-in-r
# Multi-value categorical attributes in R

I have a training data set with both numerical and categorical variables, and one class variable. I want to build a classification model (e.g., SVM), and for this goal I need to transform all variables into a convenient format. I'm confused about my categorical variables. Let me give you an example about one of them. The categorical variable in each observation represents a Google search query (usually 3-10 comma-separated words, see example below).

----------+----------------------------+-------------------+----------------
search_id | query_words (categorical)  |..(other variables)| class variable
----------+----------------------------+-------------------+----------------
1         | how,to,grow,tree           |..                 | 4
2         | smartfone,htc,buy,price    |..                 | 7
3         | buy,house,realty,london    |..                 | 6
4         | where,to,go,weekend,cinema |..                 | 4
...       | ...                        |..                 | ...
----------+----------------------------+-------------------+----------------

The words in this categorical variable are unordered, and the same words may occur in different observations (that's logical). Number of unique words across all observations: a few thousand. Number of observations: ~150,000,000.

Since this categorical variable (query_words) is very important for my classification analysis, I need to train my model with it. My question is how to represent it for use with, e.g., an SVM. In each observation I can sort the words alphabetically to order them. If I use a numeric vector with a few thousand elements (one per unique word), I can represent this variable for each observation as, e.g.:

query_words[1] = (0,0,..1,..0,..1,..1,..0,...1,..0) # very big vector

But I don't believe it will work effectively. How should I handle this categorical variable? I'm using R for analysis.

• What you're asking is a pretty well-studied problem in machine learning; I would suggest you google "svm, text classification". – Leo Mar 27 '12 at 21:38
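For what it's worth, the 0/1 vector the question sketches is just a bag-of-words encoding, and at ~150 million rows it only makes sense in sparse form. A minimal sketch in R using the Matrix package; the package choice and the toy data are illustrative, not from the asker's actual pipeline:

    library(Matrix)  # sparse matrix support

    queries <- c("how,to,grow,tree",
                 "smartfone,htc,buy,price",
                 "buy,house,realty,london",
                 "where,to,go,weekend,cinema")

    tokens <- strsplit(queries, ",")         # one character vector per query
    vocab  <- sort(unique(unlist(tokens)))   # the "few thousand" unique words

    i <- rep(seq_along(tokens), lengths(tokens))  # row index per word occurrence
    j <- match(unlist(tokens), vocab)             # column index: position in vocab

    # 0/1 indicator matrix: rows = observations, columns = unique words
    X <- sparseMatrix(i = i, j = j, x = 1,
                      dims = c(length(queries), length(vocab)),
                      dimnames = list(NULL, vocab))

Most SVM implementations that scale to data this size (for instance, linear-kernel solvers) accept sparse input directly, so the "very big vector" never has to be materialized densely.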
2021-06-15 18:34:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3873147964477539, "perplexity": 1655.7476244528298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00210.warc.gz"}
https://nforum.ncatlab.org/discussion/185/orthogonal-group-in-a-lined-topos/?Focus=1382
• CommentRowNumber1. • CommentAuthor: Urs • CommentTime: Oct 15th 2009

I continued working my way through the lower realms of the Whitehead tower of the orthogonal group by creating special orthogonal group and, yes, orthogonal group. For the time being the material present there just keeps repeating the Whitehead-tower story. But I want more there, eventually: I have a query box at orthogonal group.

The most general sensible-nonsense context in which to talk about the orthogonal group should be any lined topos. I am wondering if there is anything interesting to be said from that perspective.

Incidentally, I was prepared in this context to also have to create general linear group, only to find, to my pleasant surprise, that Zoran had already created that some time back. And in fact, Zoran discusses there an algebro-geometric perspective on GL(n) which, I think, is actually usefully thought of as the perspective on GL(n) in the lined topos of, at least, presheaves on $CRing^{op}$.

Presently I feel that I want eventually a discussion of all those seemingly boring old friends such as $\mathbb{Z}$ and $\mathbb{R}/\mathbb{Z}$ and $GL(n)$ etc. in lined toposes and smooth toposes. Inspired not least by the wealth of cool structure that even just $\mathbb{Z}$ carries in cases such as the $\mathbb{B}$-topos in Models for Smooth Infinitesimal Analysis.
2022-01-21 12:00:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4494085907936096, "perplexity": 1364.7315454915436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303356.40/warc/CC-MAIN-20220121101528-20220121131528-00658.warc.gz"}
http://en.wikipedia.org/wiki/Wikipedia:Requested_lists
# Wikipedia:Requested lists

List of countries who have issued polymer currency to date. Requested by Perlysoames (talk) 21:13, 26 April 2014 (UTC)

How to request a list

Add the following text to the bottom of this page:

{{reql|Name of list|Your reasoning and information about the topic|~~~~}}

For editors

Please help by fulfilling these requests! Fulfilled requests should be removed from the list. Remember to categorize newly created articles, and tag them as stubs if they are short.

## From requested articles

Article on Heichal Hatorah

List of Abandoned Military Bunkers in California: Recently moved into the area and interested in exploring the history of these places! Requested by 67.181.111.183 (talk) 08:53, 4 March 2014 (UTC)

List of oldest living World War II veterans: A list of the oldest living WWII vets over the age of 100 or 105 would be interesting, so that users can see who is the oldest living veteran from the United States, Japan, China, etc. Requested by 129.97.124.40 (talk) 15:14, 18 March 2014 (UTC)

List of middle schools in Florida: There is already a page for "List of high schools in Florida". Requested by 174.97.234.220 (talk) 01:17, 26 November 2012 (UTC)

List of Summits in 2012: For example the Earth Summit, the BRICS summit, the ASEAN summit, other conferences, etc., along with places, dates, countries attending, etc. Pages for other years should also be created. Requested by Kumarrajan045 (talk) 07:17, 16 December 2012 (UTC)

The page "World's Famous newspapers" displayed about 22,000 entries. It would be better to categorize them by rank: by circulation, by popularity, by poll, or particularly listing English-language papers.

Chronology of Discoveries & Inventions: For students to have a correct perspective of "what preceded what". Like: which was discovered first, electricity or magnetism? Or: how many years after the invention of motors did hydel projects come up? Requested by 117.201.172.244 (talk) 06:03, 24 December 2012 (UTC)

List of U.S.O. Black Performers

List of Business Parks in Hertfordshire

College Bound Reading List: I just want to know what books are on the college bound reading list so I can pick one for my English assignment. I feel like other people will look here to find the same thing. Requested by unknown

Oliver de Sagazan: Biography, bibliography, and a list of influences (in English).

List of Military Ship Classes: A sort of Jane's for history. A list of military ship classes by country, period of service, or type of ship (destroyer, submarine, aircraft carrier, etc.). Most such lists are by war or by country only. Requested by 70.75.174.244 (talk) 17:53, 26 April 2014 (UTC)

List of Asian countries by Human Development Index: As far as I know, a list of HDIs by country exists for all other continents of the world except Asia. I am sure that this list will be interesting to many, if not all, existing Wikipedia users. Thanks very much in advance.
Requested by 73.182.163.104 (talk) 18:32, 9 June 2014 (UTC)

Cape Cod Media Outlets: A comprehensive and current list of all media outlets in the area will be helpful for area businesses and nonprofits wishing to disseminate press releases and obtain coverage for newsworthy activities. Requested by 71.235.220.99 (talk) 17:59, 31 August 2014 (UTC)

## New requests

List of vices: To complement the already existing List of virtues. Requested by unknown

List of highest grossing super-hero films: Nowadays many superhero movies are being released worldwide. A list of the superhero films which gross the most will be very helpful to many people. Requested by unknown

List of Indian Antarcticians: This information will help to find out the current and past Indian expedition members of the Antarctic Programme. Requested by 182.71.9.130 (talk) 06:21, 11 July 2014 (UTC)

List of Indian Environmentalists: It will help students who are working in the field of Environment and Ecology within India. Requested by 182.71.9.130 (talk) 06:18, 11 July 2014 (UTC)

List of fictional desert planets: A comprehensive list of fictional desert planets from notable works of fiction. Requested by 108.218.0.176 (talk) 20:16, 25 January 2014 (UTC)

Heichal Hatorah

List of oldest living World War II veterans: A list of the oldest living WWII veterans over the age of 100 or 105 would be interesting, so that users can see who is the oldest living veteran from the United States, Japan, China, etc. Requested by 129.97.124.40 (talk) 13:42, 4 April 2014 (UTC)

List of sci-fi films set in the time before they were released: Requested by PORTALandPORTAL2rocks 11:34, 8 February 2013 (UTC)

Details of Barium Tartrate: properties and applications. 23 December 2013

List of Weather-Related Crashes: Air crashes with weather factors, for example wind, rain, snow, storm, icing, etc. Requested by 108.75.106.253 (talk) 03:56, 1 April 2012 (UTC)

List of battles in which the Holy Roman Empire was involved: I have been searching for battles where the HRE was involved, but I have found it's hard to find them by searching on the HRE theme, especially the ones from the Middle Ages. I have even found a list of battles of the Teutonic Order, but not of the HRE. I think there is plenty of enthusiasm for military topics, not to mention the medieval ages, so I think it would be a necessary list, both as a sub-theme for the HRE article and as a related link for the lists of medieval, renaissance, and modern-era battles. Requested by 201.230.198.136 (talk) 19:33, 25 July 2011 (UTC)

List of institutional laws named after people (or perhaps List of eponymous institutional laws): There currently exists a List of scientific laws named after people and a List of eponymous laws (the latter relates to laws in the sense of adages). Entries might include Jessica's Law and similar laws named after people that they are meant to honor, as well as laws such as the Sarbanes-Oxley Act, named for its drafters and supporters. Most Roman laws, such as the Lex Cornelia, fall into this latter category. Requested by Lenoxus 16:07, 19 June 2008 (UTC)

List of spells in the anime Slayers: Now that the new season is out, fans and newcomers will want to gather more information about the series, and that includes the various spells.
Thus I'd like to request a list of the different types, names, incantations, and maybe first appearances of each spell shown so far. Like the greater spell Giga Slave, with both the "imperfect" and "perfect" incantations. Requested by unknown

Cosmetics and Jewelry Models: There should be a page on this because they are a significant part of stores and advertising all over the world. Some of them are also very prominent and famous. They don't fit into the standard actor or singer category, and are too specific for "famous persons" or just "models". There is a glamour and a fashion model category, and neither fits cosmetics models. Perhaps they would be a sub-group under fashion; however, many think of clothing when they think of fashion. Hair-dye product models may go under cosmetics models. Jewelry models might go under fashion. Requested by Truthfulmouth (talk)

List of cooperative OFDM networks: No reason given. Requested by 217.219.18.23 (talk) 11:55, 4 February 2009 (UTC)

Jimmy Witherspoon discography: Jimmy Witherspoon, one of America's greatest blues singers, deserves a complete and clean discography. Surely there's an expert among the Wikipedia pool who can get such an article started! Thanks. Requested by Drmies (talk) 17:19, 29 June 2009 (UTC)

List of FSBO Real Estate Portals: Selling through the Internet, and particularly the "for sale by owner" (a.k.a. FSBO) method of selling, is becoming increasingly popular around the world. Due to the diversity of FSBO real estate websites, it would be very useful to have a comprehensive list of such portals for each country or region of the world, along with some information on each one. Requested by BTSLO (talk) 07:48, 15 July 2009 (UTC)

List of fictional characters with unnatural age: There are many characters of unnatural age, for a variety of reasons, in fictional work. Having them all in one list would allow for an interesting comparison of the fictional effects of extreme old age. Requested by HGGFordPrefect (talk) 18:20, 28 June 2009 (UTC)

History of Meedatiya Rathore: Rao Chunda is the first ruler of the Rathore state of Mandore; he had issue Rao Ridmal. Rao Ridmal had issue Rao Jodha. Jodha, the founder of the Jodhpur State, had issue 20 sons, one of whom, Rao Duda, was ruler of the Medata State and founder of the Medatiya subclan of the Rathores. He married Raghav Kanwar, daughter of Rao Shekha of Amarsar, and had issue five sons: 1. Viramdev, 2. Raisal, 3. Raimal, 4. Ratan Singh, 5. Panchayan. Ratan Singh got the Kudaki Thikana and had issue Meera Baisa, who married Bhojraj, son of Rana Sanga of the Mewad State. After Rao Duda, his elder son Viramdev became ruler of Medata and Ajmer and had issue 10 sons. Rao Viramdev's issue included Jaimal. Jaimal had issue 14 sons. Requested by 59.95.182.157 (talk) 03:26, 14 September 2009 (UTC)

List to distinguish Online Gift List websites from gift registry websites: Will help users identify the difference between the two. Requested by Diginicola (talk) 17:47, 15 September 2009 (UTC)

Total pollution by country: Total pollution and contamination of air, water, and soil by country, with sublists by city for the biggest countries, like the United States, China, Russia, or India. Adding some maps to give an overall overview of the problem. Requested by 77.208.36.109 (talk) 17:47, 15 February 2010 (UTC)

Air pollution by country: Pollution and contamination of air by country, with sublists by city for the biggest countries, like the United States, China, Russia, or India. Adding some maps to give an overall overview of the problem.
Requested by 77.208.36.109 (talk) 17:47, 15 February 2010 (UTC)

Water pollution by country: Pollution and contamination of water by country, with sublists by city for the biggest countries, like the United States, China, Russia, or India. Adding some maps to give an overall overview of the problem. Requested by 77.208.36.109 (talk) 17:47, 15 February 2010 (UTC)

Soil pollution by country: Pollution and contamination of soil by country, with sublists by city for the biggest countries, like the United States, China, Russia, or India. Adding some maps to give an overall overview of the problem. Requested by 77.208.36.109 (talk) 17:47, 15 February 2010 (UTC)

List of Neandertal fossils: It is very difficult to find a compiled list of the Neandertal hypodigm to date. It would be useful to have a complete list for students and professors to consult, especially since this information is difficult to extract from scholarly articles. You would have to sort through several articles to even start to get an idea of the different Neandertal fossils that have been found. This is why I believe it would be useful to compile a straightforward, easy-to-view list on Wikipedia. Requested by Kbouche4 (talk) 21:12, 9 March 2011 (UTC)

Acronym page for NIRF, to include "near infrared fluorophore" and "Norwegian Investor Relations Society". Requested by 108.7.163.110 (talk) 00:20, 30 March 2011 (UTC)

I think you mean a disambiguation page for the acronym NIRF? In order to create this page there must first be existing articles for "near infrared fluorophore" and "Norwegian Investor Relations Society". -- œ 15:35, 30 March 2011 (UTC)

List of steroid-medication names, generic and brand names: No reason given. Requested by unknown

There's List of steroid abbreviations, but I'm not sure if that's exactly what you were looking for. -- œ 15:08, 4 April 2011 (UTC)

List of Commonly Misattributed Quotes: People like to quote an adage and then label it with a name that makes their point seem more valid, or that they have heard from another source. For example, "I disapprove of what you say, but I will defend to the death your right to say it" is often attributed to Voltaire, but was actually said by Evelyn Beatrice Hall. Requested by 174.25.128.253 (talk) 23:02, 23 June 2011 (UTC)

List of Wikipedias by total number of speakers: Demographics of Wikipedias by the languages Wikipedians speak. Similar to "List of languages by total number of speakers", but for Wikipedians. Thank you, I'd really like to see this article. Requested by FaktneviM (talk) 21:52, 2 July 2011 (UTC)

List of Michigan Metropolitan Areas by Population: There is a list of Michigan municipalities by size, but not metro areas. Requested by Zzomtceo (talk) 20:59, 6 July 2011 (UTC)

I think this is what you're looking for. VictorianMutant (Talk) 21:28, 9 November 2011 (UTC)

List of Android Market Malicious Apps: Security researchers are continuously identifying malicious applications found in the Android Market. It would be helpful to have a complete list of known applications that have been removed or identified as malicious. Thank you.
Requested by KristelW55 (talk) 16:21, 7 August 2011 (UTC)

List of people who have walked off a TV set: The recent collection of celebs and politicians walking off TV sets is easy to search and find via a Google result set, but not in a nice clean form, and not on Wikipedia. Older instances are even difficult to find via Google. Thanks. Requested by Lagunamm (talk) 15:15, 18 August 2011 (UTC)

List of Andy Gard toys (with dates and rarity): Andy Gard was an American toy maker in the mid-20th century. His toys are now collectible and range from plastic toy soldiers to a Black and Decker replica toy drill. Requested by 71.176.160.158 (talk) 03:18, 22 August 2011 (UTC)

List of Scandinavian metal bands: A list has been made for Scandinavian death metal bands, but I feel it's more important to have a list, one that is not subgenre-specific, of bands that hail from Sweden, Norway, Finland, Denmark, or Iceland. There are plenty of Wikipedia articles on bands from these countries, and there are many reliable heavy metal websites and publications that report on the more obscure bands. Thank you very much to anyone who follows this up. Requested by 2.27.40.189 (talk) 18:34, 8 October 2011 (UTC)

List of the longest Surahs in the Quran: A proper, chronological list of the longest Surahs in the Quran would be really helpful to many Wikipedia users. Some helpful sources include Top Ten Longest Surahs In The Quran (World Tenz) and Chronological Order of Quranic Surahs (Webcite). Requested by Firzen434 (talk) 03:49, 23 October 2011 (UTC)

List of the shortest trees in the world: A chronological list of the shortest trees in the world would be very useful for many Wikipedia users, especially for quizzers and school students. Requested by Hridith Sudev (talk) 18:07, 4 November 2011 (UTC)

List of billionaires by residence: There already exists a list of billionaires by nationality, but I think billionaires by residency is at least as interesting. Often we hear in the press that this or that policy cannot be enacted because all the rich people would leave the country. It would be interesting to see how many such people are in one country compared to another. Does, say, Sweden suffer from billionaire emigration because of its taxes? Are UK policies already very friendly to billionaires? Etc. Requested by 86.184.160.59 (talk) 12:02, 9 November 2011 (UTC)

List of Spies Participating in World War II: I've read a book about some of the German spies. I found that some of the spies in World War II actually played an important role. Better if the list includes what each spy did and how the spies died (some were executed, but some were not and are still alive). It will give useful information about the war and will be quite entertaining. Thank you for your attention. Requested by HiddenIP (talk) 15:58, 7 January 2012 (UTC)

List of multilingual First Spouses of the United States: After reading the counterpart list for Presidents, I thought it would be well to have a compiled source for the multilingualism of American First Ladies. The wiki's discussion of multilingualism among Presidents often involves their wives' abilities already. Further, where a US history or foreign language teacher may at present find the history of Presidential multilingualism a resource for engaging students for whom English is a foreign language, a supplementary list for these famous multilingual ladies might be helpful to female (and male) students, though naturally such an application would want women who are famous for more than their marriages.
Requested by 108.42.3.91 (talk) 01:48, 16 January 2012 (UTC)

List of claimants attempting to predict apocalyptic events twice: Recently, I've been reading List of dates predicted for apocalyptic events and it's so entertaining. I think a list of people who failed in predicting is a good one. The page somehow wrote "Various Claimants" in some of the fields ... disappointing! Requested by HiddenIP (talk) 13:46, 22 January 2012 (UTC)

List of counties in each city: Would be EXTREMELY helpful to relocators and newcomers. Requested by unknown

List of Human Rights degree programs: There is no comprehensive resource available on the internet that provides any direction for those seeking a graduate degree in Human Rights. The field is growing, and yet information about the programs is very difficult to find. Requested by Bgeezy (talk) 19:51, 31 January 2012 (UTC)

Names of university courses, from certificates to doctorate: For someone who wishes to continue their studies but doesn't know where to start or how to find that specialised course. Requested by 203.132.141.170 (talk) 07:16, 6 February 2012 (UTC)

List of Known Racists: Adolf Hitler, Andrew Jackson, Elijah Muhammad; who else was/is famously racist? Requested by unknown

List of media directories for Brazil and Mexico: Requested by Zomby437 (talk) 21:03, 18 March 2012 (UTC)

List of incurable diseases: Marburg virus, Creutzfeldt–Jakob, Ebola virus, and HIV, to name a few. What are the others? Requested by 69.117.82.61 (talk) 01:09, 18 April 2012 (UTC)

List of famous people with the name Stephen: Just for people who want to know if there are any famous people with the name of Stephen and who they are. Requested by 76.171.146.111 (talk) 05:12, 1 May 2012 (UTC)

List of works by Roy Lichtenstein: The Template:Roy Lichtenstein can't alone describe the list of works done by Lichtenstein himself. It needs a sufficient list of works done by him, as Roy Lichtenstein has a big table of his works in one section. Requested by George Ho (talk) 04:59, 28 May 2012 (UTC)

Details for June 1950: Requested by unknown

List of satellite phones: Wikipedia needs this kind of list. There are some people who don't even know what the Exact-M 22 is! Requested by unknown

• This is actually a good idea. The page "List of satellite phones" did once exist but was deleted in 2008 for being substandard. You may want to request an undelete at WP:REFUND to copy whatever usable content there was into your sandbox to use as a starting point in creating a new article. -- œ 08:28, 23 July 2012 (UTC)

List of Wikipedia inaccuracies: Have a sense of humor about it. List some of the more ridiculous, inane, or outlandish edits or submissions that people have made. There have got to be some hilarious ones.
Requested by unknown

List of railway routes in Poland: Requested by unknown

List of agriculturalists, list of agronomists, list of artisans, list of businesspeople, list of cavers, list of designers, list of digital historians, list of diplomats, list of environmental lawyers, list of horse trainers, list of innovators, list of interior designers, list of judges, list of lawyers, list of missionaries, list of mnemonists, list of motivational speakers, list of pastoralists, list of pianists, list of rebels, list of researchers, list of rhetoricians, list of screenwriters, list of scribes, list of spies, list of viticulturists: For each, there are many notable people with this occupation. All requested by emijrp (talk) 11:13, 8 August 2012 (UTC)

List of Contemporary R&B Albums by year: There are many notable people with this occupation. Requested by 176.251.142.164 (talk) 11:47, 9 August 2012 (UTC)

List of bands acronymized FF: There are numerous bands acronymized FF,
including The Fiery Furnaces, Franz Ferdinand, Friendly Fires, Foo Fighters, Fleet Foxes, Fél fény, Final Fantasy, etc. Requested by 176.63.234.58 (talk) 21:25, 16 August 2012 (UTC)

List of Tyrannies: Specifically, to determine which are founded on "rightist", which on "leftist", and which on other political philosophies or ideologies. Requested by 24.9.198.195 (talk) 17:21, 14 September 2012 (UTC)

List of Movies with the Wilhelm Scream: The Wilhelm scream has appeared in over 200 different movies, including all of the Star Wars and Lord of the Rings movies. Requested by 126.122.68.61 (talk) 07:48, 27 September 2012 (UTC)

List of Chinese-language writers: I'm quite surprised that in the category of Lists of writers by language there is no list of Chinese-language writers. Could someone create that? Requested by Professorjohnas (talk) 13:15, 24 October 2012 (UTC)

List of People with the Largest Number of Confirmed Sniper Kills: Recently read the book by Chris Kyle, who currently holds the U.S. record. Requested by 153.24.73.60 (talk) 18:07, 27 November 2012 (UTC)

List of Famous Ruins: A list of ruins, or one list that subdivides into pages for regions or continents, would make finding them easier and quicker. Also, this list could include the currently existing community, village, or city closest to a ruin, as well as the name of the province, territory, state, county, etc., that contains the ruins. Requested by NewYorkeruser (talk) 11:16, 7 December 2012 (UTC)

List of Human-Caused Animal Extinctions: Humans have caused the extinction of many animals (such as the Dodo). A list would make finding them much easier. Also, this list could include the specific causes and reasons for extinction, as well as the extinction date. Requested by NewYorkeruser (talk) 11:16, 7 December 2012 (UTC)

List of busiest Paris Métro stations

List of Horror conventions: The closest we could find was this page: http://en.wikipedia.org/wiki/Horror_convention. It has a short list of "notable conventions", but there are many more out there, and the ominous-events.com site that it links to for a horror con database is painfully outdated. Most of the conventions and events on that site don't even exist anymore. Requested by 68.35.215.179 (talk) 16:31, 17 January 2013 (UTC)

List of female scholars: A timeline of women scholars is missing. I am doing research, and it would be very helpful if there were a timeline list of women scholars from all over the world. Thank you. Requested by unknown

List of people who escaped multiple times from a prison: It's very hard to escape from a prison, and unique to do it twice or more. People I found: Jack Sheppard, Steven Jay Russell, Alfred George Hinds, Pascal Payet, Moondyne Joe, Richard Lee McNair. Requested by Sander.v.Ginkel (talk) 23:09, 16 May 2013 (UTC)

List of safest airlines: Found a source; can someone put it on Wikipedia? http://www.jacdec.de/jacdec_safety_ranking_2012.htm Requested by 85.157.69.191 (talk) 20:29, 20 May 2013 (UTC)

List of Countries by Human Development Index 2004: I have been given a project in economics to write this list and I can't find it anywhere.
http://www.newworldencyclopedia.org/entry/List_of_countries_by_Human_Development_Index Requested by Anshica (talk) 09:58, 24 June 2013 (UTC)

List of people honoured by Google Doodle: No reason given. Requested by unknown

List of dancers at the 2012 American Music Awards: There were some outstanding performers. I am doing research, and it would be helpful (and interesting) to know more about them. Requested by Scriptly (talk) 11:54, 11 October 2013 (UTC)

EPCA Disambiguation: No reason given. Requested by unknown

Australia has something called the Office of Road Safety [1] and lists its campaigns on its site. Does the U.S. have something similar? Requested by Emerald Evergreen 20:36, 17 January 2014 (UTC)

List of Canadian Municipal Electoral Wards: Unable to source this data myself. Even advice on how to do so would be appreciated. Username: perlysoames. Requested by unknown

List of Important Publications in Political Science: The category article "Lists of important publications in science" does not have this. Why? A possible entry in the to-be list could be Alexis de Tocqueville's Democracy in America, as one example. Requested by Alexrvi (talk) 22:57, 1 June 2014 (UTC)

List of Historic Campaigns to Conquer the World: The article about "World Domination" [2] covers political theories and ideologies from scholars and political parties, but doesn't include the actual attempts by people and governments to physically or politically conquer the world. I am doing research, and this list would be incredibly helpful to me, and I imagine to many other Wikipedia users. Some notable attempts being Genghis Khan and the Mongol Empire, Attila the Hun, Ancient Rome, Greece, Persia, Napoleon, the colonial British Empire as well as France and Spain from the same era, the Ottoman Empire, WWI Germany under Kaiser Wilhelm II, Nazi Germany, the Soviet Union, Imperial Japan, and the Islamic State of Syria & the Levant most recently. Thank you, in advance! Requested by 216.67.73.240 (talk) 21:53, 2 July 2014 (UTC)

List of every sound in every language: I think there should be a list of these sounds and which languages use them. Requested by unknown

List of Sword Art Online soundtracks: I thought this might be on the wiki page, but it isn't, so I'm requesting it. The wiki page has all the light novels, episodes, and characters, but I don't see any soundtracks for Sword Art Online. Requested by 1.64.88.68 (talk) 00:51, 13 August 2014 (UTC)

Cape Cod Media List: A list of media outlets will be helpful to businesses and nonprofits wishing to disseminate information and/or obtain coverage for newsworthy activities. Requested by 71.235.220.99 (talk) 18:01, 31 August 2014 (UTC)
2014-10-02 10:27:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4002755582332611, "perplexity": 7612.971297247842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663743.38/warc/CC-MAIN-20140930004103-00252-ip-10-234-18-248.ec2.internal.warc.gz"}
https://nebusresearch.wordpress.com/category/geometry/
## Reading the Comics, August 10, 2019: In Security Edition

There were several more comic strips last week worth my attention. One of them, though, offered a lot for me to write about, packed into one panel featuring what comic strip fans call the Wall O' Text.

Bea R's In Security for the 9th is part of a storyline about defeating an evil "home assistant". The choice of weapon is Michaela's barrage of questions, too fast and too varied to answer. There are some mathematical questions tossed in the mix. The obvious one is "zero divided by two equals zero, but why'z two divided by zero called crazy town?" Like with most "why" mathematics questions there are a range of answers.

The obvious one, I suppose, is to appeal to intuition. Think of dividing one number by another by representing the numbers with things. Start with a pile of the first number of things. Try putting them into the second number of bins. How many times can you do this? And then you can pretty well see that you can fill two bins with zero things zero times. But you can fill zero bins with two things — well, what is filling zero bins supposed to mean? And that warns us that dividing by zero is at least suspicious. That's probably enough to convince a three-year-old, and probably most sensible people.

If we start getting open-minded about what it means to fill no containers, we might say, well, why not have two things fill the zero containers zero times over, or once over, or whatever convenient answer would work? And here we can appeal to mathematical logic. Start with some ideas that seem straightforward. Like, that division is the inverse of multiplication. That addition and multiplication work like you'd guess from the way integers work. That distribution works. Then you can quickly enough show that if you allow division by zero, this implies that every number equals every other number. Since it would be inconvenient for, say, "six" to also equal "minus 113,847,506 and three-quarters", we say division by zero is the problem.

This is compelling until you ask what's so great about addition and multiplication as we know them. And here's a potentially fruitful line of attack. Coming up with alternate ideas for what it means to add or to multiply is fine. We can do this easily with modular arithmetic, that thing where we say, like, 5 + 1 equals 0 all over again, and 5 + 2 is 1 and 5 + 3 is 2. This can create a ring, and it can offer us wild ideas like "3 times 2 equals 0". This doesn't get us to where dividing by zero means anything. But it hints that maybe there's some exotic frontier of mathematics in which dividing by zero is good, or useful. I don't know of one. But I know very little about topics like non-standard analysis (where mathematicians hypothesize numbers that are not zero, but are smaller than any ordinary positive number) or structures like surreal numbers. There may be something lurking behind a Quanta Magazine essay I haven't read even though they tweet about it four times a week. (My twitter account is, for some reason, not loading this week.)

Michaela's questions include a couple other mathematically-connected topics. "If infinity is forever, isn't that crazy, too?" Crazy is a loaded word and probably best avoided. But there are infinitely large sets of things. There are processes that take infinitely many steps to complete. Please be kind to me in my declaration "are". I spent five hundred words on "two divided by zero". I can't get into what it means for a mathematical thing to "exist".
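For anyone who wants that "quickly enough" spelled out: here is a sketch of the usual argument, leaning on nothing but the straightforward ideas listed above, that division inverts multiplication and that distribution works.

$$\frac{a}{0} = k \;\Longrightarrow\; a = 0 \cdot k, \qquad 0 \cdot k = (0 + 0)\cdot k = 0\cdot k + 0\cdot k \;\Longrightarrow\; 0 \cdot k = 0 .$$

So $a = 0$ for any number $a$ we dared divide by zero, and then any two such numbers equal each other: six really would equal minus 113,847,506 and three-quarters.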
I don’t know. In any event. Infinities are hard and we rely on them. They defy our intuition. Mathematicians over the 19th and 20th centuries worked out fairly good tools for handling these. They rely on several strategies. Most of these amount to: we can prove that the difference between “infinitely many steps” and “very many steps” can be made smaller than any error tolerance we like. And we can say what “very many steps” implies for a thing. Therefore we can say that “infinitely many steps” gives us some specific result. A similar process holds for “infinitely many things” instead of “infinitely many steps”. This does not involve actually dealing with infinity, not directly. It involves dealing with large numbers, which work like small numbers but longer. This has worked quite well. There’s surely some field of mathematics about to break down that happy condition. And there’s one more mathematical bit. Why is a ball round? This comes around to definitions. Suppose a ball is all the points within a particular radius of a center. What shape that is depends on what you mean by “distance”. The common definition of distance, the “Euclidean norm”, we get from our physical intuition. It implies this shape should be round. But there are other measures of distance, useful for other roles. They can imply “balls” that we’d say were octahedrons, or cubes, or rounded versions of these shapes. We can pick our distance to fit what we want to do, and shapes follow. I suspect but do not know that it works the other way, that if we want a “ball” to be round, it implies we’re using a distance that’s the Euclidean measure. I defer to people better at normed spaces than I am. Mark Anderson’s Andertoons for the 10th is the Mark Anderson’s Andertoons for the week. It’s also a refreshing break from talking so much about In Security. Wavehead is doing the traditional kid-protesting-the-chalkboard-problem. This time with an electronic chalkboard, an innovation that I’ve heard about but never used myself. Bob Scott’s Bear With Me for the 10th is the Pi Day joke for the week. And that last one seemed substantial enough to highlight. There were even slighter strips. Among them: Mark Anderson’s Andertoons for the 4th features latitude and longitude, the parts of spherical geometry most of us understand. At least feel we understand. Jim Toomey’s Sherman’s Lagoon for the 8th mentions mathematics as the homework parents most dread helping with. Larry Wright’s Motley rerun for the 10th does a joke about a kid being bad at geography and at mathematics. And that’s this past week’s mathematics comics. Reading the Comics essays should all be gathered at this link. Thanks for reading this far. ## Reading the Comics, July 26, 2019: Children With Mathematics Edition Three of the strips I have for this installment feature kids around mathematics talk. That’s enough for a theme name. Gary Delainey and Gerry Rasmussen’s Betty for the 23rd is a strip about luck. It’s easy to form the superstitious view that you have a finite amount of luck, or that you have good and bad lucks which offset each other. It feels like it. If you haven’t felt like it, then consider that time you got an unexpected $200, hours before your car’s alternator died. If events are independent, though, that’s just not so. Whether you win$600 in the lottery this week has no effect on whether you win any next week. Similarly whether you’re struck by lightning should have no effect on whether you’re struck again. Except that this assumes independence. 
And independence is exactly what's in question here. Consider that, having won $600, it's easier to buy an extra twenty dollars in lottery tickets, and that does increase your (tiny) chance of winning again. If you're struck by lightning, perhaps it's because you tend to be someplace that's often struck by lightning. Probability is a subtler topic than everyone acknowledges, even when they remember that it is such a subtle topic. It sure seems like this strip wants to talk about lottery winners struck by lightning, doesn't it?

Darrin Bell's Candorville for the 23rd jokes about the uselessness of arithmetic in modern society. I'm a bit surprised at Lemont's glee in not having to work out tips by hand. The character's usually a bit of a science nerd. But liking science is different from enjoying doing arithmetic. And bad experiences learning mathematics can sour someone on the subject for life. (Which is true of every subject. Compare the number of people who come out of gym class enjoying physical fitness.) If you need some Internet Old, read the comments at GoComics, which include people offering dire warnings about what you need in case your machine gives the wrong answer. Which is technically true, but for this application? Getting the wrong answer is not an immediately awful affair. Also a lot of cranky complaining about tipping having risen to 20% just because the United States continues its economic punishment of working peoples.

Zach Weinersmith's Saturday Morning Breakfast Cereal for the 25th is some wordplay. Mathematicians often need to find minimums of things. Or maximums of things. Being able to do one lets you do the other, as you'd expect. If you didn't expect, think about it a moment, and then you expect it. So min and max are often grouped together.

Paul Trap's Thatababy for the 26th is circling around wordplay, turning some common shape names into pictures. This strip might be aimed at mathematics teachers' doors. I'd certainly accept these as jokes that help someone learn their shapes.

And you know what? I hope to have another Reading the Comics post around Thursday at this link. And that's not even thinking what I might do for this coming Sunday.

## Reading the Comics, July 12, 2019: Ricci Tensor Edition

So a couple days ago I was chatting with a mathematician friend. He mentioned how he was struggling with the Ricci Tensor. Not the definition, not exactly, but its point. What the Ricci Tensor was for, and why it was a useful thing. He wished he knew of a pop mathematics essay about the thing. And this brought, slowly at first, to my mind that I knew of one. I wrote such a pop-mathematics essay about the Ricci Tensor, as part of my 2017 A To Z sequence. In it, I spend several paragraphs admitting that I'm not sure I understand what the Ricci tensor is for, and why it's a useful thing.

Daniel Beyer's Long Story Short for the 11th mentions some physics hypotheses. These are ideas about how the universe might be constructed. Like many such cosmological thoughts they blend into geometry. The no-boundary proposal, also known as the Hartle-Hawking state (for James Hartle and Stephen Hawking), is a hypothesis about the … I want to write "the start of time". But I am not confident that this doesn't beg the question. Well, we think we know what we mean by "the start of the universe". A natural question in mathematical physics is, what was the starting condition? At the first moment that there was anything, what did it look like?
And this becomes difficult to answer, difficult to even discuss, because part of the creation of the universe was the creation of spacetime. In this no-boundary proposal, the shape of spacetime at the creation of the universe is such that there just isn't a "time" dimension at the "moment" of the Big Bang. The metaphor I see reprinted often about this is how there's not a direction south of the south pole, even though south is otherwise a quite understandable concept on the rest of the Earth. (I agree with this proposal, but I feel like the analogy isn't quite tight enough.)

Still, there are mathematical concepts which seem akin to this. What is the start of the positive numbers, for example? Any positive number you might name has some smaller number we could have picked instead, until we fall out of the positive numbers altogether and into zero. For a mathematical physics concept there's absolute zero, the coldest temperature there is. But there is no achieving absolute zero. The thermodynamical reasons behind this are hard to argue. (I'm not sure I could put them in a two-thousand-word essay, not the way I write.) It might be that the "moment of the Big Bang" is similarly inaccessible but, at least for the correct observer, incredibly close by.

The Weyl Curvature is a creation of differential geometry. So it is important in relativity, in describing the curve of spacetime. It describes several things that we can think we understand. One is the tidal forces on something moving along a geodesic. Moving along a geodesic is the general-relativity equivalent of moving in a straight line at a constant speed. Tidal forces are those things we remember reading about. They come from the Moon, sometimes the Sun, sometimes from a black hole a theoretical starship is falling into. Another way we are supposed to understand it is that it describes how gravitational waves move through empty space, space which has no other mass in it. I am not sure that this is that understandable, but it feels accessible.

The Weyl tensor describes how the shapes of things change under tidal forces, but it tracks no information about how the volume changes. The Ricci tensor, in contrast, tracks how the volume of a shape changes, but not the shape. Between the Ricci and the Weyl tensors we have all the information about how the shape of spacetime affects the things within it.

Ted Baum, writing to John Baez, offers a great piece of advice for understanding what the Weyl Tensor offers. Baum compares the subject to electricity and magnetism. If one knew all the electric charges and current distributions in space, one would … not quite know what the electromagnetic fields were. This is because there are electromagnetic waves, which exist independently of electric charges and currents. We need to account for those to have a full understanding of electromagnetic fields. So, similarly, the Weyl curvature gives us this for gravity. How is a gravitational field affected by waves, which exist and move independently of some source?

I am not sure that the Weyl Curvature is truly, as the comic strip proposes, a physics hypothesis "still on the table". It's certainly something still researched, but that's because it offers answers to interesting questions. But that's also surely close enough for the comic strip's needs.

Dave Coverly's Speed Bump for the 11th is a wordplay joke, and I have to admit its marginality. I can't say it's implausible that people who (presumably) don't work much with coefficients would fail to remember the word after a long while.
I don’t do much with French verb tenses, so I don’t remember anything about the pluperfect except that it existed. (I have a hazy impression that I liked it, but not an idea why. I think it was something in the auxiliary verb.) Still, this mention of coefficients nearly forms a comic strip synchronicity with Mike Thompson’s Grand Avenue for the 11th, in which a Math Joke allegedly has a mistaken coefficient as its punch line. Mike Thompson’s Grand Avenue for the 12th is the one I’m taking as representative for the week, though. The premise has been that Gabby and Michael were sent to Math Camp. They do not want to go to Math Camp. They find mathematics to be a bewildering set of arbitrary and petty rules to accomplish things of no interest to them. From their experience, it’s hard to argue. The comic has, since I started paying attention to it, consistently had mathematics be a chore dropped on them. And not merely from teachers who want them to solve boring story problems. Their grandmother dumps workbooks on them, even in the middle of summer vacation, presenting it as a chore they must do. Most comic strips present mathematics as a thing students would rather not do, and that’s both true enough and a good starting point for jokes. But I don’t remember any that make mathematics look so tedious. Anyway, I highlight this one because of the Math Camp jokes it, and the coefficients mention above, are the most direct mention of some mathematical thing. The rest are along the lines of the strip from the 9th, asserting that the “Math Camp Activity Board” spelled the last word wrong. The joke’s correct but it’s not mathematical. So I had to put this essay to bed before I could read Saturday’s comics. Were any of them mathematically themed? I may know soon! And were there comic strips with some mention of mathematics, but too slight for me to make a paragraph about? What could be even slighter than the mathematical content of the Speed Bump and the Grand Avenue I did choose to highlight? Please check the Reading the Comics essay I intend to publish Tuesday. I’m curious myself. A friend was playing with that cute little particle-physics simulator idea I mentioned last week. And encountered a problem. With a little bit of thought, I was able to not solve the problem. But I was able to explain why it was a subtler and more difficult problem than they had realized. These are the moments that make me feel justified calling myself a mathematician. The proposed simulation was simple enough: imagine a bunch of particles that interact by rules that aren’t necessarily symmetric. Like, the attraction particle A exerts on particle B isn’t the same as what B exerts on A. Or there are multiple species of particles. So (say) red particles are attracted to blue but repelled by green. But green is attracted to red and repelled by blue twice as strongly as red is attracted to blue. Your choice. Give a mathematician a perfectly good model of something. She’ll have the impulse to try tinkering with it. One reliable way to tinker with it is to change the domain on which it works. If your simulation supposes you have particles moving on the plane, then, what if they were in space instead? Or on the surface of a sphere? Or what if something was strange about the plane? My friend had this idea: what if the particles were moving on the surface of a cube? And the problem was how to find the shortest distance between two particles on the surface of a cube. 
The distance matters since most any attraction rule depends on the distance. This may be as simple as “particles more than this distance apart don’t interact in any way”. The obvious approach, or if you prefer the naive approach, is to pretend the cube is a sphere and find distances that way. This doesn’t get it right, not if the two points are on different faces of the cube. If they’re on adjacent faces, ones which share an edge — think the floor and the wall of a room — it seems straightforward enough. My friend got into trouble with points on opposite faces. Think the floor and the ceiling.

This problem was posed (to the public) in January 1905 by Henry Ernest Dudeney. Dudeney was a newspaper columnist with an exhaustive list of mathematical puzzles. A couple of the books collecting them are on Project Gutenberg. The puzzles show their age in spots. Some in language; some in problems that ask to calculate money in pounds-shillings-and-pence. Many of them are chess problems. But many are also still obviously interesting, and worth thinking about. This one, I was able to find, was a variation of The Spider and the Fly, problem 75 in The Canterbury Puzzles:

Inside a rectangular room, measuring 30 feet in length and 12 feet in width and height, a spider is at a point on the middle of one of the end walls, 1 foot from the ceiling, as at A; and a fly is on the opposite wall, 1 foot from the floor in the centre, as shown at B. What is the shortest distance that the spider must crawl in order to reach the fly, which remains stationary? Of course the spider never drops or uses its web, but crawls fairly.

(Also I admire Dudeney’s efficient closing off of the snarky, problem-breaking answer someone was sure to give. It suggests experienced thought about how to pose problems.)

What makes this a puzzle, even a paradox, is that the obvious answer is wrong. At least, what seems like the obvious answer is to start at point A, move to one of the surfaces connecting the spider’s and the fly’s starting points, and from that move to the fly’s surface. But, no: you get a shorter answer by using more surfaces. Going on a path that seems like it wanders more gets you a shorter distance. The solution’s presented here, along with some follow-up problems. In this case, the spider’s shortest path uses five of the six surfaces of the room.

The approach to finding this is an ingenious one. Imagine the room as a box, and unfold it into something flat. Then find the shortest distance on that flat surface. Then fold the box back up. It’s a good trick. It turns out to be useful in many problems. Mathematical physicists often have reason to ponder paths of things on flattenable surfaces like this. Sometimes they’re boxes. Sometimes they’re toruses, the shape of a doughnut. This kind of unfolding often makes questions like “what’s the shortest distance between points” easier to solve.

There are wrinkles to the unfolding. Of course there are. How interesting would it be if there weren’t? The wrinkles amount to this. Imagine you start at the corner of the room, and walk up a wall at a 45 degree angle to the horizon. You’ll get to the far corner eventually, if the room has proportions that allow it. All right. But suppose you walked up at an angle of 30 degrees to the horizon? At an angle of 75 degrees? You’ll wind your way around the walls (and maybe floor and ceiling) some number of times for each path you start with. Probably different numbers of times. Some path will be shortest, and that’s fine.
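The flatten-and-compare is easy enough to mechanize, too, once you know which unrollings to try. Here’s a minimal sketch in Python for Dudeney’s room; I worked the two legs of each unrolled right triangle out by hand from the room’s dimensions, so this is a check of the trick rather than a general unfolding algorithm.

```python
import math

# Dudeney's room: 30 feet long, 12 feet wide and 12 feet high. Each
# candidate is one way of unrolling the room flat; the pair (a, b)
# gives the two legs of the right triangle the straight-line path
# forms on the flattened surface. (Leg lengths worked out by hand.)
candidates = {
    "end wall, ceiling, end wall":                   (42, 0),
    "end wall, floor, end wall":                     (42, 0),
    "end wall, side wall, end wall":                 (42, 10),
    "end wall, ceiling, side wall, floor, end wall": (32, 24),
}

for route, (a, b) in candidates.items():
    print(f"{route}: {math.hypot(a, b):.2f} feet")
# The five-surface route comes to exactly 40 feet; the obvious
# up-and-over routes come to 42.
```

Back, though, to those paths that wind around the walls.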
But … like, think about the path that goes along the walls and ceiling and floor three times over. The room, unfolded into a flat panel, has only one floor and one ceiling and each wall once. The straight line you might be walking goes right off the page. And this is the wrinkle. You might need to tile the room. In a column of blocks (like in Dudeney’s solution) every fourth block might be the floor, with, between any two of them, a ceiling. This is fine, and what’s needed. It can be a bit dizzying to imagine such a state of affairs. But if you’ve ever zoomed a map of the globe out far enough that you see Australia six times over then you’ve understood how this works.

I cannot attest that this has helped my friend in the slightest. I am glad that my friend wanted to think about the surface of the cube. The surface of a dodecahedron would be far, far past my ability to help with.

## Reading the Comics, July 2, 2019: Back On Schedule Edition

I hoped I’d get a Reading the Comics post in for Tuesday, and even managed it. With this I’m all caught up to the syndicated comic strips which, last week, brought up some mathematics topic. I’m open for nominations about what to publish here Thursday. Write in quick.

Hilary Price’s Rhymes With Orange for the 30th is a struggling-student joke. And set in summer school, so the comic can be run the last day of June without standing out to its United States audience. It expresses a common anxiety, about that point when mathematics starts using letters.

It superficially seems strange that this change worries students. Students surely had encountered problems where some term in an equation was replaced with a blank space and they were expected to find the missing term. This is the same work as using a letter. Still, there are important differences. First is that a blank line (box, circle, whatever) has connotations of “a thing to be filled in”. A letter seems to carry meaning into the problem, even if it’s just “x marks the spot”. And a letter, as we use it in English, always stands for the same thing (or at least the same set of things). That ‘x’ may be 7 in one problem and 12 in another seems weird. I mean weird even by the standards of English orthography.

A letter might represent a number whose value we wish to know; it might represent a number whose value we don’t care about. These are different ideas. We usually fall into a convention where numbers we wish to know are more likely x, y, and z, while those we don’t care about are more likely a, b, and c. But even that’s no reliable rule. And there may be several letters in a single equation. It’s one thing to have a single unknown number to deal with. To have two? Three? I don’t blame people fearing they can’t handle that.

Mark Leiknes’s Cow and Boy for the 30th has Billy and Cow pondering the Prisoner’s Dilemma. This is one of the first examples someone encounters in game theory. Game theory sounds like the most fun part of mathematics. It’s the study of situations in which there’s multiple parties following formal rules which allow for gains or losses. This is an abstract description. It means many things fit a mathematician’s idea of a game.

The Prisoner’s Dilemma is described well enough by Billy. It’s built on two parties, each — separately and without the ability to coordinate — having to make a choice. Both would be better off, under interrogation, keeping quiet and trusting that the cops can’t get anything significant on them.
But both have the temptation that if they rat out the other, they’ll get off free while their former partner gets screwed. And knowing that their partner has the same temptation. So what would be best for the two of them requires them both doing the thing that maximizes their individual risk. The implication is unsettling: everyone acting in their own best interest is supposed to produce the best possible result for society. And here, for the society of these two accused, it breaks down entirely.

Jason Poland’s Robbie and Bobby for the 1st is a rerun. I discussed it last time it appeared, in November 2016, which was before I would routinely include the strips under discussion. The strip’s built on wordplay, using the word ‘power’ in its connotations for might and for exponents.

Exponents have been written as numbers in superscript following a base for a long while now. The notation developed over the 17th century. I don’t know why mathematicians settled on superscripts, as opposed to the many other ways a base and an exponent might fit together. It’s a good mnemonic to remember, say, “z raised to the 10th” is z with a raised 10. But I don’t know the etymology of “raised” in a mathematical context well enough. It’s plausible that we say “raised” because that’s what the notation suggests.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd argues for the beauty of mathematics as a use for it. It’s presented in a brutal manner, but saying brutal things to kids is a comic motif with history to it. Well, in an existentialist manner, but that gets pretty brutal quickly.

The proof of the Pythagorean Theorem is one of the very many known to humanity. This one is among the family of proofs that are wordless. At least nearly wordless. You can get from here to $a^2 + b^2 = c^2$ with very little prompting. If you do need prompting, it’s this: there are two expressions for the area of the square with sides a-plus-b. One of these expressions uses only terms of a and b: it’s $(a + b)^2$. The other uses terms of a, b, and c: the four right triangles, each of area $\frac{1}{2}ab$, plus the tilted inner square of area $c^2$. Set the two equal, expand, and cancel. If this doesn’t get a bit of a grin out of you, don’t worry. There’s, like, 2,037 other proofs we already know about. We might ask whether we need quite so many proofs of the Pythagorean theorem. It doesn’t seem to be under serious question most of the time.

And then a couple comic strips last week just mentioned mathematics. Morrie Turner’s Wee Pals for the 1st of July has the kids trying to understand their mathematics homework. Could have been anything. Mike Thompson’s Grand Avenue for the 5th started a sequence with the kids at Math Camp. The comic is trying quite hard to get me riled up. So far it’s been the kids agreeing that mathematics is the worst, and has left things at that. Hrmph.

Whether or not I have something for Thursday, by Sunday I should have another Reading the Comics post. It, as well as my back catalogue of these essays, should be at this link. Thanks for worrying about me.

## Reading the Comics, June 29, 2019: Pacing Edition

These are the last of the comics from the final full week of June. Ordinarily I’d have run this on Tuesday or Thursday of last week. But I also had my monthly readership-report post, and that bit about a particle physics simulator, to post. Moving this to Sunday better fit a posting schedule of something every two or three days. This is what I tell myself is the rationale for not writing things up faster.

Ernie Bushmiller’s Nancy Classics for the 27th uses arithmetic as an economical way to demonstrate intelligence.
At least, the ability to do arithmetic is used as proof of intelligence. Which shouldn’t surprise. The conventional appreciation for Ernie Bushmiller is of his skill at efficiently communicating the ideas needed for a joke. That said, it’s a bit surprising Sluggo asks the dog “six times six divided by two”; if it were just showing any ability at arithmetic, “one plus one” or “two plus two” would do. But “six times six divided by two” has the advantage of being a bit complicated. That is, it’s reasonable Sluggo wouldn’t know it right away, and would see it as something only the brainiest would. But it’s not so complicated that Sluggo wouldn’t plausibly know the question.

Eric the Circle for the 28th, this one by AusAGirl, uses “Non-Euclidean” as a way to express weirdness in shape. My first impulse was to say that this wouldn’t really be a non-Euclidean circle. A non-Euclidean geometry has space that’s different from what we’re approximating with sheets of paper or with boxes put in a room. There are some that are familiar, or roughly familiar, such as the geometry of the surface of a planet. But you can draw circles on the surface of a globe. They don’t look like this mooshy T-circle. They look like … circles. Their weirdness comes in other ways, like how the circumference is not π times the diameter.

On reflection, I’m being too harsh. What makes a space non-Euclidean is … well, many things. One that’s easy to understand is to imagine that the space uses some novel definition for the distance between points. Distance is a great idea. It turns out to be useful, in geometry and in analysis, to use a flexible idea of what distance is. We can define the distance between things in ways that look just like the Euclidean idea of distance. Or we can define it in other, weirder ways. We can, whatever the distance, define a “circle” as the set of points that are all exactly some distance from a chosen center point. And the appearance of those “circles” can differ.

There are literally infinitely many possible distance functions. But there is a family of them which we use all the time. And the “circles” in those look like … well, at the most extreme, they look like squares. Others will look like rounded squares, or like slightly diamond-shaped circles. I don’t know of any distance function that’s useful that would give us a circle like this picture of Eric. But there surely is one that exists and that’s enough for the joke to be certified factually correct. And that is what’s truly important in a comic strip.

Sandra Bell-Lundy’s Between Friends for the 29th is the Venn Diagram joke for the week. Formally, you have to read this diagram charitably for it to parse. If we take the “what” that Maeve says, or doesn’t say, to be particular sentences, then the intersection has to be empty. You can’t both say and not-say a sentence. But it seems to me that any conversation of importance has the things which we choose to say and the things which we choose not to say. And it is so difficult to get the blend of things said and things unsaid correct. And I realize that the last time Between Friends came up here I was similarly defending the comic’s Venn Diagram use. I’m a sympathetic reader, at least to most comic strips.

And that was the conclusion of comic strips through the 29th of June which mentioned mathematics enough for me to write much about. There were a couple other comics that brought up something or other, though. Wulff and Morgenthaler’s WuMo for the 27th of June has a Rubik’s Cube joke.
The traditional Rubik’s Cube has three rows, columns, and layers of cubes. But there’s no reason there can’t be more rows and columns and layers. Back in the 80s there were enough four-by-four-by-four cubes sold that I even had one. Wikipedia tells me the officially licensed cubes have gotten only up to five-by-five-by-five. But it says there was a 17-by-17-by-17 cube sold, with prototypes for 22-by-22-by-22 and 33-by-33-by-33 cubes. This seems to me like a great many stickers to peel off and reattach.

And two comic strips did ballistic trajectory calculation jokes. These are great introductory problems for mathematical physics. They’re questions about things people can observe and so have a physical intuition for, and yet involve mathematics that’s not too subtle or baffling. John Rose’s Barney Google and Snuffy Smith mentioned the topic the 28th of June. Doug Savage’s Savage Chickens used it the 28th also, because sometimes comic strips just line up like that.

This and other Reading the Comics posts should be at this link. This includes, I hope, the strips of this past week, that is, the start of July, which should be published Tuesday. Thanks for reading at all.

## Reading the Comics, May 20, 2019: I Guess I Took A Week Off Edition

I’d meant to get back into discussing continuous functions this week, and then didn’t have the time. I hope nobody was too worried.

Bill Amend’s FoxTrot for the 19th is set up as geometry or trigonometry homework. There are a couple of angles that we use all the time, and they do correspond to some common unit fractions of a circle: a quarter, a sixth, an eighth, a twelfth. These map nicely to common cuts of circular pies, at least. Well, it’s a bit of a freak move to cut a pie into twelve pieces, but it’s not totally out there. If someone cuts a pie into 24 pieces, flee.

Tom Batiuk’s vintage Funky Winkerbean for the 19th of May is a real vintage piece, showing off the days when pocket electronic calculators were new. The sales clerk describes the calculator as having “a floating decimal”. And here I must admit: I’m poorly read on early-70s consumer electronics. So I can’t say that this wasn’t a thing. But I suspect that Batiuk either misunderstood “floating-point decimal”, which would be a selling point, or shortened the phrase in order to make the dialogue less needlessly long. Which is fine, and his right as an author. The technical detail does its work, for the setup, by existing. It does not have to be an actual sales brochure. Reducing “floating point decimal” to “floating decimal” is a useful artistic shorthand. It’s the dialogue equivalent to the implausibly few, but easy to understand, buttons on the calculator in the title panel.

Floating point is one of the ways to represent numbers electronically. The storage scheme is much like scientific notation. That is, rather than think of 2,038, think of 2.038 times $10^3$. In the computer’s memory are stored the 2.038 and the 3, with the “times ten to the” part implicit in the storage scheme. The advantage of this is the range of numbers one can use now. There are different ways to implement this scheme; a common one will let one represent numbers as tiny as $10^{-308}$ or as large as $10^{308}$, which is enough for most people’s needs.

The disadvantage is that floating point numbers aren’t perfect. They have only around (commonly) sixteen digits of significance. That is, the first sixteen or so significant digits in the number you represent mean anything; everything after that is garbage.
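You can watch that garbage appear in any language that uses standard 64-bit floats. A quick sketch, in Python:

```python
# Ordinary 64-bit floats carry roughly sixteen significant decimal digits.
print(0.1 + 0.2)             # 0.30000000000000004 -- trailing garbage
print(1 / 3)                 # 0.3333333333333333  -- cut off, not exact
print(1e20 + 1e-20 == 1e20)  # True: the tiny addend vanishes entirely
```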
Most of the time, that trailing garbage doesn’t hurt. But most is not always. Trying to add, for example, a tiny number, like $10^{-20}$, to a huge number, like $10^{20}$, won’t get the right answer. And there are numbers that can’t be represented correctly anyway, including such exotic and novel numbers as $\frac{1}{3}$. A lot of numerical mathematics is about finding ways to compute that avoid these problems.

Back when I was a grad student I did have one casual friend who proclaimed that no real mathematician ever worked with floating point numbers, because of the limitations they impose. I could not get him to accept that no, in fact, mathematicians are fine with these limitations. Every scheme for representing numbers on a computer has limitations, and floating point numbers work quite well. At some point, you have to suspect some people would rather fight for a mistaken idea they already have than accept something new.

Mac King and Bill King’s Magic in a Minute for the 19th does a bit of stage magic supported by arithmetic: forecasting the sum of three numbers. The trick is that all eight possible choices someone would make have the same sum. There’s a nice bit of group theory hidden in the “Howdydoit?” panel, about how to do the trick a second time. Rotating the square of numbers makes what looks, casually, like a different square. It’s hard for a human to memorize a string of digits that don’t have any obvious meaning, and the longer the string the worse people are at it. If you’ve had a person — as directed — black out the rows or columns they didn’t pick, then it’s harder to notice the reused pattern.

The different directions that you could write the digits down in represent symmetries of the square. That is, geometric operations that would replace a square with something that looks like the original. This includes rotations, by 90 or 180 or 270 degrees clockwise. Mac King and Bill King don’t mention it, but reflections would also work: if the top row were 4, 9, 2, for example, and the middle 3, 5, 7, and the bottom 8, 1, 6. Combining rotations and reflections also works.

If you do the trick a second time, your mark might notice it’s odd that the sum came up 15 again. Do it a third time, even with a different rotation or reflection, and they’ll know something’s up. There are things you could do to disguise that further. Just double each number in the square, for example: a square of 4/18/8, 14/10/6, 12/2/16 will have each row or column or diagonal add up to 30. But this loses the beauty of doing this with the digits 1 through 9, and your mark might grow suspicious anyway. The same happens if, say, you add one to each number in the square, and forecast a sum of 18. Even mathematical magic tricks are best not repeated too often, not unless you have good stage patter.

Mark Anderson’s Andertoons for the 20th is the Mark Anderson’s Andertoons for the week. Wavehead’s marveling at what seems at first like an asymmetry, about squares all being rhombuses yet rhombuses not all being squares. There are similar results with squares and rectangles. Still, it makes me notice something. Nobody would write a strip where the kid marvelled that all squares were polygons but not all polygons were squares. It seems that the rhombus connotes something different. This might just be familiarity. Polygons are … well, if not a common term, at least something anyone might feel familiar with. Rhombus is a more technical term. It maybe never quite gets familiar, not in the ways polygons do.
And the defining feature of a rhombus — all four sides the same length — seems like the same thing that makes a square a square.

There should be another Reading the Comics post this coming week, and it should appear at this link. I’d like to publish it Tuesday but, really, Wednesday is more probable.

## Reading the Comics, May 11, 2019: I Concede I Am Late Edition

I concede I am late in wrapping up last week’s mathematically-themed comics. But please understand there were important reasons for my not having posted this earlier, like, I didn’t get it written in time. I hope you understand and agree with me about this.

Bill Griffith’s Zippy the Pinhead for the 9th brings up mathematics in a discussion about perfection. The debate of perfection versus “messiness” raises some important questions. What I’m marginally competent to discuss is the idea of mathematics as this perfect thing. Mathematics seems to have many traits that are easy to think of as perfect. That everything in it should follow from clearly stated axioms, precise definitions, and deductive logic, for example. This makes mathematics seem orderly and universal and fair in a way that the real world never is. If we allow that this is a kind of perfection then … does mathematics reach it?

Even the idea of a “precise definition” is perilous. If it weren’t there wouldn’t be so many pop mathematics articles about why 1 isn’t a prime number. It’s difficult to prove that any particular set of axioms that give us interesting results are also logically consistent. If they’re not consistent, then we can prove absolutely anything, including that the axioms are false. That seems imperfect. And few mathematicians even prepare fully complete, step-by-step proofs of anything. It takes ridiculously long to get anything done if you try. The proofs we present tend to show, instead, the reasoning in enough detail that we’re confident we could fill in the omitted parts if we really needed them for some reason. And that’s fine, nearly all the time, but it does leave the potential for mistakes.

Zippy offers up a perfect parallelogram. Making it geometry is of good symbolic importance. Everyone knows geometric figures, and definitions of some basic ideas like a line or a circle or, maybe, a parallelogram. Nobody’s ever seen one, though. There’s never been a straight line, much less two parallel lines, and even less the pair of parallel lines we’d need for a parallelogram. There can be renderings good enough to fool the eye. But none of the lines are completely straight, not if we examine closely enough. None of the pairs of lines are truly parallel, not if we extend them far enough. The figure isn’t even two-dimensional, not if it’s rendered in three-dimensional things like atoms or waves of light or such. We know things about parallelograms, which don’t exist. They tell us some things about their shadows in the real world, at least.

Mark Litzler’s Joe Vanilla for the 9th is a play on the old joke about “a billion dollars here, a billion dollars there, soon you’re talking about real money”. As we hear more about larger numbers they seem familiar and accessible to us, to the point that they stop seeming so big. A trillion is still a massive number, at least for most purposes. If you aren’t doing combinatorics, anyway; just yesterday I was doing a little toy problem and realized it implied 470,184,984,576 configurations. Which still falls short of a trillion, but had I made one arbitrary choice differently I could’ve blasted well past a trillion.
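(For whatever it’s worth, that configuration count is exactly $6^{15}$ — the count you’d get from fifteen independent choices with six options each, which is my guess at the toy problem’s structure, not anything stated above. It does make the near-miss easy to see:)

```python
# 470,184,984,576 is exactly 6 to the 15th power.
print(6 ** 15)   # 470184984576 -- short of a trillion
# One more slot, or one more option per slot, blasts well past it:
print(6 ** 16)   # 2821109907456
print(7 ** 15)   # 4747561509943
```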
Ruben Bolling’s Super-Fun-Pak Comix for the 9th is another monkeys-at-typewriters joke, that great thought experiment about probability and infinity. I should add it to my essay about the Infinite Monkey Theorem. Part of the joke is that the monkey is thinking about the content of the writing. This doesn’t destroy the prospect that a monkey given enough time would write any of the works of William Shakespeare. It makes the simple estimates of how unlikely that is, and how long it would take to do, invalid. But the event might yet happen. Suppose this monkey decided there was no credible way to delay Hamlet’s revenge to Act V, and tried to write accordingly. Mightn’t the monkey make a mistake? It’s easy to type a letter you don’t mean to. Or a word you don’t mean to. Why not a sentence you don’t mean to? Why not a whole act you don’t mean to? Impossible? No, just improbable. And the monkeys have enough time to let the improbable happen.

Eric the Circle for the 10th, this one by Kingsnake, declares itself set in “the 20th dimension, where shape has no meaning”. This plays on a pop-cultural idea of dimensions as a kind of fairyland, subject to strange and alternate rules. A mathematician wouldn’t think of dimensions that way. 20-dimensional spaces — and even higher-dimensional spaces — follow rules just as two- and three-dimensional spaces do. They’re harder to draw, certainly, and mathematicians are not selected for — or trained in — drawing, at least not in United States schools. So attempts at rendering a high-dimensional space tend to be sort of weird blobby lumps, maybe with a label “N-dimensional”. And a projection of a high-dimensional shape into lower dimensions will be weird. I used to have around here a web site with a rotatable tesseract, which would draw a flat-screen rendition of what its projection in three-dimensional space would be. But I can’t find it now and probably it ran as a Java applet that you just can’t get to work anymore. Anyway, non-interactive videos of this sort of thing are common enough; here’s one that goes through some of the dimensions of a tesseract, one at a time. It’ll give some idea how something that “should” just be a set of cubes will not look so much like that.

Steve Kelly and Jeff Parker’s Dustin for the 11th is a variation on the “why do I have to learn this” protest. This one is about long division and the question of why one needs to know it when there’s cheap, easily-available tools that do the job better. It’s a fair question and Hayden’s answer is a hard one to refute. I think arithmetic’s worth knowing how to do, but I’ll also admit, if I need to divide something by 23 I’m probably letting the computer do it.

And a couple of the comics that week seemed too slight to warrant discussion. You might like them anyway. Brian Boychuk and Ron Boychuk’s Chuckle Brothers for the 5th featured a poorly-written numeral. Charles Schulz’s Peanuts Begins rerun for the 6th has Violet struggling with counting. Glenn McCoy and Gary McCoy’s The Flying McCoys for the 8th has someone handing in mathematics homework. Henry Scarpelli and Craig Boldman’s Archie rerun for the 9th talks about Jughead sleeping through mathematics class. All routine enough stuff.

This and other Reading the Comics posts should appear at this link. I mean to have a post tomorrow, although it might make my work schedule a little easier to postpone that until Monday. We’ll see.

## Reading the Comics, May 8, 2019: Strips With Art I Like Edition

Of course I like all the comics.
… Well, that’s not literally true; but I have at least some affection for nearly all of the syndicated comics. This essay I bring up some strips, partly, because I just like them. This is my content hole. If you want a blog not filled with comic strips, go start your own and don’t put these things on it.

Mark Anderson’s Andertoons for the 5th is the Mark Anderson’s Andertoons for the week. Also a bit of a comment on the ability of collective action to change things. Wavehead is … well, he’s just wrong about making the number four plus the number four equal to the number seven. Not given the numbers we mean by the words “four” and “seven”, the operation we mean by “plus”, and the relationship we mean by “equals”. The meaning of those things is set by, ultimately, axioms and definitions and the laws of deductive reasoning, and there’s no changing the results.

But. The thing we’re referring to when we say “seven”? Or when we write the symbol “7”? That is convention. That is a thing we’ve agreed on as a reference for this concept. And that we can change, if we decide we want to. We’ve done this. Look at a thousand-year-old manuscript and the symbol that looks like ‘4’ may represent the number we call five. And the names of numbers are just common words. They’re subject to change the way every other common word is. Which is, admittedly, not very subject. It would be almost as much bother to change the word ‘four’ as it would be to change the word ‘mom’. But that’s not impossible. Just difficult.

Juba’s Viivi and Wagner for the 5th is a bit of a percentage joke. The characters also come to conclude that a thing either happens or it does not; there are no indefinite states. This principle, the “excluded middle”, is often relied upon for deductive logic, and fairly so. It gets less clear that this can be depended on for predictions of the future, or fears for the future. And real-world things come in degrees that a mathematical concept might not. Like, your fear of the home catching fire comes true if the building burns down. But it’s also come true if a quickly-extinguished frying pan fire leaves the wall scorched, embarrassing but harmless. Anyway, relaxing someone else’s anxiety takes more than a quick declaration of statistics. Show sympathy.

Harry Bliss and Steve Martin’s Bliss for the 6th is a cute little classroom strip, with arithmetic appearing as the sort of topic that students feel overwhelmed and baffled by. It could be anything, but mathematics uses the illustration space efficiently. The strip may properly be too marginal to include, but I like Bliss’s art style and want more people to see it.

Will Henry’s Wallace the Brave for the 7th puts up what Spud calls a sadistic math problem. And, well, it is a story problem happening in their real life. You could probably turn this into an actual exam problem without great difficulty.

Rick Detorie’s One Big Happy for the 8th is a bit of wordplay built around geometry, as Ruthie plays teacher. She’s a bit dramatic, but she always has been.

I’ll read some more comics for later in this week. That essay, and all similar comic strip talk, should appear at this link. Thank you.

## Reading the Comics, April 24, 2019: Mic Drop Edition

I can’t tell you how hard it is not to just end this review of last week’s mathematically-themed comic strips after the first panel here. It really feels like the rest is anticlimax. But here goes.
John Deering’s Strange Brew for the 20th is one of those strips that’s not on your mathematics professor’s office door only because she hasn’t seen it yet. The intended joke is obvious, mixing the tropes of the Old West with modern research-laboratory talk. “Theoretical reckoning” is a nice bit of word juxtaposition. “Institoot” is a bit classist in its rendering, but I suppose it’s meant as eye-dialect.

What gets it a place on office doors is the whiteboard, though. They’re writing out mathematics which looks legitimate enough to me. It doesn’t look like mathematics though. What’s being written is something any mathematician would recognize. It’s typesetting instructions.

Mathematics requires all sorts of strange symbols and exotic formatting. In the old days, we’d handle this by giving the typesetters hazard pay. Or, if you were a poor grad student and couldn’t afford that, you’d deal with workarounds. Maybe just leave space in your paper and draw symbols in later. If your university library has old enough papers you can see them. Maybe do your best to approximate mathematical symbols using ASCII art. So you get expressions that look something like this:

```
   / 2 pi
  |       2
  |      x  cos(theta) dx - 2 F(theta) == R(theta)
  |
 / 0
```

This gets old real fast. Mercifully, Donald Knuth, decades ago, worked out a great solution. It uses formatting instructions that can all be rendered in standard, ASCII-available text. And then by dark incantations and summoning of Linotype demons, re-renders that as formatted text. It handles all your basic book formatting needs — much the way HTML, used for web pages, will — and does mathematics much more easily. For example, I would enter a line like:

```
\int_{0}^{2\pi} x^2 \cos(\theta) dx - 2 F(\theta) \equiv R(\theta)
```

And this would be rendered in print as:

$\int_{0}^{2\pi} x^2 \cos(\theta) dx - 2 F(\theta) \equiv R(\theta)$

There are many, many expansions available to this, to handle specialized needs, hardcore mathematics among them. Anyway, here’s the point that makes me realize John Deering was aiming this joke at everybody who’s ever had an advanced degree in mathematics, with a string of typesetting instead of the usual equations: the typesetting language is named TeX.

Mark Anderson’s Andertoons for the 21st is the Mark Anderson’s Andertoons for the week. It’s about one of those questions that nags at you as a kid, and again periodically as an adult. The perimeter is the boundary around a shape. The circumference is the boundary around a circle. Why do we have two words for this? And why do we sound all right talking about either the circumference or the perimeter of a circle, while we sound weird talking about the circumference of a rhombus? We sound weird talking about the perimeter of a rhombus too, but that’s the rhombus’s fault.

The easy part is why there’s two words. Perimeter is a word of Greek origin; circumference, of Latin. Perimeter entered the English language in the early 15th century; circumference in the 14th. Why we have both I don’t know; my suspicion is either two groups of people translating different geometry textbooks, or some eager young Scholastic with a nickname like ‘Doctor Magnifico Triangulorum’ thought Latin sounded better. Perimeter stuck with circles early; every etymology I see about why we use the symbol π describes it as shorthand for the perimeter of the circle. Why `circumference’ ended up the word for circles or, maybe, ellipses and ovals and such is probably the arbitrariness of language.
I suspect that opening “circ” sound cues people to think of it for circles and circle-like shapes, in a way that perimeter doesn’t. But that is my speculation and should not be mistaken for information.

Steve McGarry’s KidTown for the 21st is a kids’ information panel with a bit of talk about representing numbers. And, in discussing things like how long it takes to count to a million or a billion, or how long it would take to type them out, it gets into how big these numbers can be. Les Stewart typed out the English names of numbers, in words, by the way. He’d also broken the Australian record for treading water, and for continuous swimming.

Gary Delainey and Gerry Rasmussen’s Betty for the 24th is a sudoku comic. Betty makes the common, and understandable, conflation of arithmetic with mathematics. But she’s right in identifying sudoku as a logical rather than an arithmetic problem. You can — and sometimes will — see sudoku-type puzzles rendered with icons like stars and circles rather than numerals. That you can make that substitution should clear up whether there’s arithmetic involved.

Commenters at GoComics meanwhile show a conflation of mathematics with logic. Certainly every mathematician uses logic, and some of them study logic. But is logic mathematics? I’m not sure it is, and our friends in the philosophy department are certain it isn’t. But then, if something that a recognizable set of mathematicians study as part of their mathematics work isn’t mathematics, then we have a bit of a logic problem, it seems.

Come Sunday I should have a fresh Reading the Comics essay available at this link.

## Six Or Arguably Four Things For Pi Day

I hope you’ll pardon me for being busy. I haven’t had the chance to read all the Pi Day comic strips yet today. But I’d be a fool to let the day pass without something around here. I confess I’m still not sure that Pi Day does anything lasting to encourage people to think more warmly of mathematics. But there is probably some benefit if people temporarily think more fondly of the subject. Certainly I’ll do more foolish things than to point at things and say, “pi, cool, huh?” this week alone.

I’ve got a couple of essays that discuss π some. The first noteworthy one is Calculating Pi Terribly, discussing a way to calculate the value of π using nothing but a needle, a tile floor, and a hilariously excessive amount of time. Or you can use an HTML5-and-JavaScript applet and slightly less time, and maybe even experimentally calculate the digits of π to two decimal places, if you get lucky.

In Calculating Pi Less Terribly I showed a way to calculate π that’s … well, you see where that sentence was going. This is a method that uses an alternating series. To get π exactly correct you have to do an infinite amount of work. But if you just want π to a certain precision, all right. This will even tell you how much work you have to do. There are other formulas that will get you digits of π with less work, though, and maybe I’ll write up one of those sometime.

And the last of the relevant essays I’ve already written is an A To Z essay about normal numbers. I don’t know whether π is a normal number. No human, to the best of my knowledge, does. Well, anyone with an opinion on the matter would likely say, of course it’s normal. There’s fantastic reasons to think it is. But none of those amount to a proof it is.

That’s my three items. After that I’d like to share … I don’t know whether to classify this as one or three pieces.
They’re YouTube videos which a couple months ago everybody in the world was asking me if I’d seen. Now it’s your turn. I apologize if you too got this, a couple months ago, but don’t worry. You can tell people you watched and not actually do it. I’ll alibi you.

It’s a string of videos posted on YouTube by 3Blue1Brown. The first lays out the matter with a neat physics problem. Imagine you have an impenetrable wall, a frictionless floor, and two blocks. One starts at rest. The other is sliding towards the first block and the wall. How many times will one thing collide with another? That is, counting both the times one block collides with the other block and the times a block collides with the wall. The answer seems like it should depend on many things. What it actually depends on is the ratio of the masses of the two blocks. If they’re the same mass, then there are three collisions. You can probably work that sequence out in your head and convince yourself it’s right.

If the outer block has 100 times the mass of the inner block? There’ll be 31 collisions before all the hits are done. You might work that out by hand. I did not. You will not work out what happens if the outer block has 10,000 times the mass of the inner block. That’ll be 314 collisions. If the outer block has 1,000,000 times the mass of the inner block? 3,141 collisions. You see where this is going.

The second video in the sequence explains why the digits of π turn up in this. And shows how to calculate this. You could, in principle, do this all using Newtonian mechanics. You will not live long enough to finish that, though. The video shows a way that saves an incredible load of work. But you save on that tedious labor by having to think harder.

Part of it is making use of conservation laws, that energy and linear momentum are conserved in collisions. But part is by recasting the problem. Recast it into “phase space”. This uses points in an abstract space to represent different configurations of a system. Like, how fast blocks are moving, and in what direction. The recasting of the problem turns something that’s impossibly tedious into something that’s merely … well, it’s still a bit tedious. But it’s much less hard work. And it’s a good chance to show off that you remember the Inscribed Angle Theorem. You do remember the Inscribed Angle Theorem, don’t you? The video will catch you up. It’s a good show of how phase spaces can make physics problems so much more manageable.

The third video recasts the problem yet again. In this form, it’s about rays of light reflecting between mirrors. And this is a great recasting. That blocks bouncing off each other and walls should have anything to do with light hitting mirrors seems ridiculous. But set out your phase space, and look hard at what collisions and reflections are like, and you see the resemblance.

The sort of trick used to make counting reflections easy turns up often in phase spaces. It also turns up in physics problems on toruses, doughnut shapes. You might ask when do we ever do anything on a doughnut shape. Well, real physical doughnuts, not so much. But problems where there are two independent quantities, and both quantities are periodic? There’s a torus lurking in there. There might be a phase space using that shape, and making your life easier by doing so.

That’s my promised four or maybe six items. Pardon, please, now, as I do need to get back to reading the comics.
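(A postscript, for anyone who’d like to verify those collision counts without the Newtonian tedium: only the order of the collisions matters, never their timing or positions, so the whole system reduces to the standard one-dimensional elastic-collision formulas. A minimal sketch in Python — my own, not anything from the videos:)

```python
def count_collisions(mass_ratio):
    """Count collisions for a big block of `mass_ratio` times the small
    block's mass, sliding in toward a small block at rest by a wall."""
    m1, m2 = 1.0, float(mass_ratio)   # small block, big block
    v1, v2 = 0.0, -1.0                # big block slides in leftward
    count = 0
    while True:
        if v2 < v1:
            # Big block catches the small one: standard 1-D elastic
            # collision, conserving momentum and kinetic energy.
            v1, v2 = (((m1 - m2)*v1 + 2*m2*v2) / (m1 + m2),
                      ((m2 - m1)*v2 + 2*m1*v1) / (m1 + m2))
            count += 1
        elif v1 < 0:
            v1 = -v1                  # small block bounces off the wall
            count += 1
        else:
            break                     # everything drifting right: done
    return count

print([count_collisions(100**d) for d in range(4)])  # [3, 31, 314, 3141]
```

For still-larger mass ratios, floating-point roundoff eventually fouls the count, which is a little lesson of its own.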
## Proving That Disturbing Triangle Theorem That Isn’t Morley’s Somehow

Somehow I couldn’t leave people just hanging on that triangle theorem from the other day. This was a compass-and-straightedge method to split a triangle into two shapes of equal area. The trick was that the split could start from any point on one of the three legs of the triangle. The theorem unsettled me, yes. But proving that it does work is not so bad and I thought to do that today.

The process: start with a triangle ABC. Pick a point P on one of the legs. We’ll say it’s on leg AB. Draw the line segment from the other vertex, C, to point P. Now from the midpoint S of leg AB, draw the line parallel to PC; it intersects either leg AC or leg BC. Label that point of intersection R. The line segment RP cuts the triangle ABC into one triangle and another shape, normally a quadrilateral. Both shapes have the same area, half that of the original triangle.

To prove it nicely will involve one extra line, and the identification of one point. Construct the line SC. Lines SC and RP intersect at some point; call that Q. I’ve actually made a diagram of this, just below. I’ve put the intersection point R on the leg AC. All that would change if the point R were on BC instead would be some of the labels.

Here’s how the proof will go. I want to show triangle APR has half the area of triangle ABC. The area of triangle APR has to be equal to the area of triangle ASC, plus the area of triangle SPQ, minus the area of triangle QCR. So the first step is proving that triangle ASC has half the area of triangle ABC. The second step is showing triangle SPQ has the same area as does triangle QCR. When that’s done, we know triangle APR has the same area as triangle ASC, which is half that of triangle ABC.

First. That ASC has half the area of triangle ABC. The area of a triangle is one-half times the length of a base times its height. The base is any of the three legs which connect two points. The height is the perpendicular distance from the third point to the line that first leg is on. Here, take the base of triangle ABC to be the line segment AC. Also take the base of triangle ASC to be the line segment AC. They have the same base. Point S is the midpoint of the line segment AB. So point S is half as far from the base AC as the point B is. Triangle ASC has half the height of triangle ABC. Same base, half the height. So triangle ASC has half the area of triangle ABC.

Second. That triangle SPQ has the same area as triangle QCR. This is going to be most easily done by looking at two other triangles, SPC and PCR. They’re relevant to triangles SPQ and QCR.

Triangle SPC has the same area as triangle PCR. Take as the base for both of them the segment PC. Point S and point R are both on the line SR. SR was created parallel to the line PC. So the perpendicular distance from point S to line PC has to be the same as the perpendicular distance from point R to the line PC. Triangle SPC has the same base and same height as does triangle PCR. So they have the same area.

Now. Triangle SPC is made up of two smaller triangles: triangle SPQ and triangle PCQ. Its area is split, somehow, between those two. Triangle PCR is also made of two smaller triangles: triangle PCQ and triangle QCR. Its area is split between those two. The area of triangle SPQ plus the area of triangle PCQ is the same as the area of triangle SPC. This is equal to the area of triangle PCR. The area of triangle PCR is the area of triangle PCQ plus the area of triangle QCR.
And that all adds up only if the area of triangle SPQ is the same as the area of triangle QCR.

So. We had that the area of triangle APR is equal to the area of triangle ASC plus the area of triangle SPQ minus the area of triangle QCR. That’s the area of triangle ASC plus zero. And that’s half the area of triangle ABC. Whatever shape is left has to have the remaining area, half the area of triangle ABC. It’s still such a neat result.

Morley’s theorem, by the way, says this: take any triangle. Trisect each of its three interior angles. That is, for each vertex, draw the two lines that cut the interior angle into three equal spans. This creates six lines. Take the three points where these lines for adjacent angles intersect. (That is, draw the obvious intersection points.) This creates a new triangle. It’s equilateral. What business could an equilateral triangle possibly have in all this? Exactly.

## In Which I Am Disturbed By A Triangle Theorem That Isn’t Morley’s Somehow

I’ve been reading Alfred S Posamentier and Ingmar Lehmann’s The Secrets of Triangles: A Mathematical Journey. It is exactly what you’d think: 365 pages, plus endnotes and an index, describing what we as a people have learned about triangles. It’s almost enough to make one wonder if we maybe know too many things about triangles. I admit letting myself skim over the demonstration of how, using straightedge and compass, to construct a triangle when you’re given one interior angle, the distance from another vertex to its corresponding median point, and the radius of the triangle’s circumscribed circle. But there are a bunch of interesting theorems to find. I wanted to share one. When I saw it I felt creeped out. The process seemed like a bit of dark magic, a result startling enough that it seemed to come from nowhere. Here it is.

Start with any old triangle ABC. Without loss of generality, select a point along the leg AB (other than the vertices). Call that point P. (This same technique would work if you put your point on another leg, but I would have to change the names of the vertices and line segments from here on. But it doesn’t matter what the names of the vertices are. So I can suppose that I was lucky enough that whatever leg you put your point P on I happened to name AB.)

Now. Pick the midpoint of the leg AB. This midpoint is a point we’ll label S. Draw the line PC. Draw the line parallel to the line PC and which passes through S. This will intersect either the line segment BC or the line segment AC. Whichever it is, label this point of intersection R. Draw the line from R to P. The line RP divides the triangle ABC into two shapes, a triangle and (unless your P was the midpoint S) a quadrilateral. The punch line: both shapes have half the area of the original triangle.

I usually read while eating. This was one of those lines that made me put the fork down and stare, irrationally angry, until I could work through the proof. It didn’t help that you can use a technique like this to cut the triangle into any whole number you like of equal-area wedges. I’m sure this is old news to a fair number of readers. I don’t care. I haven’t noticed this before. And yes, it’s not as scary weird magic as Morley’s Theorem. But I’ve seen that one before, long enough ago I kind of accept it.

## Reading the Comics, January 30, 2019: Interlude Edition

I think there are just barely enough comic strips from the past week to make three essays this time around. But one of them has to be a short group, only three comics.
That’ll be for the next essay when I can group together all the strips that ran in February. One strip that I considered but decided not to write at length about was Ed Allison’s dadaist Unstrange Phenomena for the 28th. It mentions Roman Numerals and the idea of sneaking messages in through them. But that’s not really mathematics. I usually enjoy the particular flavor of nonsense which Unstrange Phenomena uses; you might, too.

John McPherson’s Close to Home for the 29th uses an arithmetic problem as shorthand for an accomplished education. The problem is solvable. Of course, you say. It’s an equation with a quadratic polynomial; it can hardly not be solved. Yes, fine. But McPherson could easily have thrown together numbers that implied x was complex-valued, or had radicals or some other strange condition. This is one that someone could do in their heads, at least once they practiced mental arithmetic.

I feel reasonably confident McPherson was just having a giggle at the idea of putting knowledge tests into inappropriate venues. So I’ll save the full rant. But there is a long history of racist and eugenicist ideology that tried to prove certain peoples to be mentally incompetent. Making an arithmetic quiz prerequisite to something unrelated echoes that. I’d have asked McPherson to rework the joke to avoid that. (I’d also want to rework the composition, since the booth, the swinging arm, and the skirted attendant with the clipboard don’t look like any tollbooth I know. But I don’t have an idea how to redo the layout so it’s more realistic. And it’s not as if that sort of realism would heighten the joke.)

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 29th riffs on the problem of squaring the circle. This is one of three classical problems of geometry. The lecturer describes it just fine: is it possible to make a square that’s got the same area as a given circle, using only straightedge and compass? There are shapes it’s easy to do this for, such as rectangles, parallelograms, triangles, and (why not?) this odd crescent-moon shaped figure called the lune. Circles defied all attempts. In the 19th century mathematicians found ways to represent the operations of classical geometry with algebra, and could use the tools of algebra to show squaring the circle was impossible. The squaring would be equivalent to finding a polynomial, with integer coefficients, that has $\sqrt{\pi}$ as a root. And we know this can’t be done: Ferdinand von Lindemann proved in 1882 that π is transcendental, the root of no polynomial with integer coefficients, and if $\sqrt{\pi}$ had such a polynomial then π would too. So squaring the circle can’t be done.

The lecturer’s hack, modifying the compass and straightedge, lets you in principle do whatever you want. The hack isn’t new either. Modifying the geometric tools changes what you can and can’t do. The Ancient Greeks recognized that adding some specialized tools would make the problem possible. But that falls outside the scope of the problem.

Which feeds into the secondary joke, of making the philosophers sad. Often philosophy problems test one’s intuition about an idea by setting out a problem with unpleasant choices. A common problem with students that I’m going ahead and guessing are engineers is then attacking the setup of the question, trying to show that the problem couldn’t actually happen. You know, as though there were ever a time significant numbers of people were being tied to trolley tracks. (By the way, that thing about silent movie villains tying women to railroad tracks? Only happened in comedies spoofing Victorian melodramas. It’s always been a parody.)
Attacking the logic of a problem may make for good movie drama. But it makes for a lousy student and a worse class discussion.

Ted Shearer’s Quincy rerun for the 30th uses a bit of mathematics and logic talk. It circles the difference between the feeling one can have about the rational meaning of a situation and how the situation feels to someone. It seems like a jump that Quincy goes from being asked about logic to talking about arithmetic. Possibly Quincy’s understanding of logic doesn’t start from the sort of very abstract concept that makes arithmetic hard to get to, though.

There should be another Reading the Comics post this week. It should be here, when it appears. There should also be one on Sunday, as usual.

## My 2018 Mathematics A To Z: Witch of Agnesi

Nobody had a suggested topic starting with ‘W’ for me! So I’ll take that as a free choice, and get lightly autobiographical.

# Witch of Agnesi.

I know I encountered the Witch of Agnesi while in middle school. Eighth grade, if I’m not mistaken. It was a footnote in a textbook. I don’t remember much of the textbook. What I mostly remember of the course was how much I did not fit with the teacher. The only relief from boredom that year was the month we had a substitute and the occasional interesting footnote.

It was in a chapter about graphing equations. That is, finding curves whose points have coordinates that satisfy some equation. In a bit of relief from lines and parabolas the footnote offered this:

$y = \frac{8a^3}{x^2 + 4a^2}$

In a weird tantalizing moment the footnote didn’t offer a picture. Or say what an ‘a’ was doing in there. In retrospect I recognize ‘a’ as a parameter, and that different values of it give different but related shapes. No hint what the ‘8’ or the ‘4’ were doing there. Nor why ‘a’ gets raised to the third power in the numerator or the second in the denominator. I did my best with the tools I had at the time. Picked a nice easy boring ‘a’. Picked out values of ‘x’ and found the corresponding ‘y’ which made the equation true, and tried connecting the dots. The result didn’t look anything like a witch. Nor a witch’s hat.

It was one of a handful of biographical notes in the book. These were a little attempt to add some historical context to mathematics. It wasn’t much. But it was an attempt to show that mathematics came from people. Including, here, from Maria Gaëtana Agnesi. She was, I’m certain, the only woman mentioned in the textbook I’ve otherwise completely forgotten.

We have few names of ancient mathematicians. Those we have are often compilers like Euclid whose fame obliterated the people whose work they explained. Or they’re like Pythagoras, credited with discoveries by people who obliterated their own identities. In later times we have the mathematics done by, mostly, people whose social positions gave them time to write mathematics results. So we see centuries where every mathematician is doing it as their side hustle to being a priest or lawyer or physician or combination of these. Women don’t get the chance to stand out here.

Today of course we can name many women who did, and do, mathematics. We can name Emmy Noether, Ada Lovelace, and Marie-Sophie Germain. Challenged to do a bit more, we can offer Florence Nightingale and Sofia Kovalevskaya. Well, and also Grace Hopper and Margaret Hamilton if we decide computer scientists count. Katherine Johnson looks likely to make that cut. But in any case none of these people are known for work understandable in a pre-algebra textbook.
This must be why Agnesi earned a place in this book. She’s among the earliest women we can specifically credit with doing noteworthy mathematics. (Also physics, but that’s off point for me.) Her curve might be a little advanced for that textbook’s intended audience. But it’s not far off, and pondering questions like “why $8a^3$? Why not $a^3$?” is more pleasant, to a certain personality, than pondering what a directrix might be and why we might use one.

The equation might be a lousy way to visualize the curve described. The curve is one of that group of interesting shapes you get by constructions. That is, following some novel process. Constructions are fun. They’re almost a craft project.

For this we start with a circle. And two parallel tangent lines. Without loss of generality, suppose they’re horizontal, so, there’s lines at the top and the bottom of the circle. Take one of the two tangent points. Again without loss of generality, let’s say the bottom one. Draw a line from that point over to the other line. Anywhere on the other line. There’s a point where the line you drew intersects the circle. There’s another point where it intersects the other parallel line. We’ll find a new point by combining pieces of these two points. The point is on the same horizontal as wherever your line intersects the circle. It’s on the same vertical as wherever your line intersects the other parallel line. This point is on the Witch of Agnesi curve.

Now draw another line. Again, starting from the lower tangent point and going up to the other parallel line. Again it intersects the circle somewhere. This gives another point on the Witch of Agnesi curve. Draw another line. Another intersection with the circle, another intersection with the opposite parallel line. Another point on the Witch of Agnesi curve. And so on. Keep doing this. When you’ve drawn all the lines that reach from the tangent point to the other line, you’ll have generated the full Witch of Agnesi curve. This takes more work than writing out $y = \frac{8a^3}{x^2 + 4a^2}$, yes. But it’s more fun. It makes for neat animations. And I think it prepares us to expect the shape of the curve.

It’s a neat curve. Between it and the lower parallel line is an area four times that of the circle that generated it. The shape is one we would get from looking at the derivative of the arctangent. So there’s some reasons someone working in calculus might find it interesting. And people did. Pierre de Fermat studied it, and found this area. Isaac Newton and Luigi Guido Grandi studied the shape, using this circle-and-parallel-lines construction. Maria Agnesi’s name attached to it after she published a calculus textbook which examined this curve. She showed, according to people who present themselves as having read her book, the curve and how to find it. And she showed its equation and found the vertex and asymptote line and the inflection points. The inflection points, here, are where the curve changes from being cupped upward to cupping downward, or vice-versa.

It’s a neat function. It’s got some uses. It’s a natural smooth-hill shape, for example. So this makes a good generic landscape feature if you’re modeling the flow over a surface. I read that solitary waves can have this curve’s shape, too.

And the curve turns up as a probability distribution. Take a fixed point. Pick lines at random that pass through this point. See where those lines reach a separate, straight line. Some regions are more likely to be intersected than are others.
This might not surprise you. It seems inevitable from the circle-and-intersecting-line construction process. And that’s nice enough. As a distribution it looks like the usual Gaussian bell curve. It’s different, though. And it’s different in strange ways. Like, for a probability distribution we can find an expected value. That’s … well, what it sounds like. But this is the strange probability distribution for which the law of large numbers does not work.

Imagine an experiment that produces real numbers, with the frequency of each number given by this distribution. Run the experiment zillions of times. What’s the mean value of all the zillions of generated numbers? And it … doesn’t … have one. I mean, we know it ought to, it should be the center of that hill. But the calculations for that don’t work right. Taking a bigger sample makes the sample mean jump around more, not less, the way every other distribution should work. It’s a weird idea.

Imagine carving a block of wood in the shape of this curve, with a horizontal lower bound and the Witch of Agnesi curve as the upper bound. Where would it balance? … The normal mathematical tools don’t say, even though the shape has an obvious line of symmetry. And a finite area. You don’t get this kind of weirdness with parabolas. (Yes, you’ll get a balancing point if you actually carve a real one. This is because you work with finitely-long blocks of wood. Imagine you had a block of wood infinite in length. Then you would see some strange behavior.)

It teaches us more strange things, though. Consider interpolations, that is, taking a couple data points and fitting a curve to them. We usually start out looking for polynomials when we interpolate data points. This is because everything is polynomials. Toss in more data points. We need a higher-order polynomial, but we can usually fit all the given points. But sometimes polynomials won’t work. A problem called Runge’s Phenomenon can happen, where the more data points you have the worse your polynomial interpolation is. The Witch of Agnesi curve is one of those. Carl Runge used points on this curve, and trying to fit polynomials to those points, to discover the problem. More data and higher-order polynomials make for worse interpolations. You get curves that look less and less like the original Witch. Runge is himself famous to mathematicians, known for “Runge-Kutta”. That’s a family of techniques to solve differential equations numerically. I don’t know whether Runge came to the weirdness of the Witch of Agnesi curve from considering how errors build in numerical integration. I can imagine it, though. The topics feel related to me.

I understand how none of this could fit that textbook’s slender footnote. I’m not sure any of the really good parts of the Witch of Agnesi could even fit thematically in that textbook. At least beyond the fact of its interesting name, which any good blog about the curve will explain. That there was no picture, and that the equation was beyond what the textbook had been describing, made it a challenge. Maybe not seeing what the shape was teased the mathematician out of this bored student.

And next is ‘X’. Will I take Mr Wu’s suggestion and use that to describe something “extreme”? Or will I take another topic or suggestion? We’ll see on Friday, barring unpleasant surprises. Thanks for reading.
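One more note before the next letter: the misbehaving average described above is easy to watch for yourself. A small sketch, assuming the standard Cauchy distribution — which is the Witch’s shape with $a = \frac{1}{2}$, rescaled to unit area — with an arbitrary seed and arbitrary sample sizes:

```python
import math
import random

random.seed(121)

def cauchy_sample(n):
    # Inverse-CDF sampling: tan(pi * (U - 1/2)) is standard Cauchy.
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

for n in (10**2, 10**4, 10**6):
    xs = cauchy_sample(n)
    print(n, sum(xs) / n)
# Unlike a Gaussian, the sample mean does not settle toward the center
# as n grows; occasional huge draws keep yanking it around.
```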
## My 2018 Mathematics A To Z: Volume

Ray Kassinger, of the popular web comic Housepets!, had a silly suggestion when I went looking for topics. In one episode of Mystery Science Theater 3000, Crow T Robot gets the idea that you could describe the size of a space by the number of turkeys which fill it. (It’s based on like two minor mentions of “turkeys” in the show they were watching.) I liked that episode. I’ve got happy memories of the time when I first saw it. I thought the sketch in which Crow T Robot got so volume-obsessed was goofy and dumb in the fun-nerd way. I accept Mr Kassinger’s challenge only I’m going to take it seriously.

# Volume.

How big is a thing?

There is a legend about Thomas Edison. He was unimpressed with a new hire. So he hazed the college-trained engineer who deeply knew calculus. He demanded the engineer tell him the volume within a light bulb. The engineer went to work, making measurements of the shape of the bulb’s outside. And then started the calculations. This involves a calculus technique called “volumes of rotation”. This can tell the volume within a rotationally symmetric shape. It’s tedious, especially if the outer edge isn’t some special nice shape. Edison, fed up, took the bulb, filled it with water, poured that out into a graduated cylinder and said that was the answer.

I’m skeptical of legends. I’m skeptical of stories about the foolish intellectual upstaged by the practical man-of-action. And I’m skeptical of Edison because, jeez, I’ve read biographies of the man. Even the fawning ones make him out to be yeesh.

But the legend’s Edison had a point. If the volume of a shape is not how much stuff fits inside the shape, what is it? And maybe some object has too complicated a shape to find its volume. Can we think of a way to produce something with the same volume, but that is easier? Sometimes we can. When we do this with straightedge and compass, the way the Ancient Greeks found so classy, we call this “quadrature”. It’s called quadrature from its application in two dimensions. It finds, for a shape, a square with the same area. For a three-dimensional object, we find a cube with the same volume. Cubes are easy to understand. Straightedge and compass can’t do everything. Indeed, there’s so much they can’t do. Some of it is stuff you’d think it should be able to, like, find a cube with the same volume as a sphere.

Integration gives us a mathematical tool for describing how much stuff is inside a shape. It’s even got a beautiful shorthand expression. Suppose that D is the shape. Then its volume V is:

$V = \int\int\int_D dV$

Here “dV” is the “volume form”, a description of how the coordinates we describe a space in relate to the volume. The $\int\int\int$ is jargon, meaning, “integrate over the whole volume”. The subscript “D” modifies that phrase by adding “of D” to it. Writing “D” is shorthand for “these are all the points inside this shape, in whatever coordinate system you use”. If we didn’t do that we’d have to say, on each $\int$ sign, what points are inside the shape, coordinate by coordinate.

At this level the equation doesn’t offer much help. It says the volume is the sum of infinitely many, infinitely tiny pieces of volume. True, but that doesn’t give much guidance about whether it’s more or less than two cups of water. We need to get more specific formulas, usually. We need to pick coordinates, for example, and say what coordinates are inside the shape. A lot of the resulting formulas can’t be integrated exactly. Like, an ellipsoid?
Maybe you can integrate that. Don’t try without getting hazard pay.

We can approximate this integral. Pick a tiny shape whose volume is easy to know. Fill your shape with duplicates of it. Count the duplicates. Multiply that count by the volume of this tiny shape. Done. This is numerical integration, sometimes called “numerical quadrature”. If we’re being generous, we can say the legendary Edison did this, using water molecules as the tiny shape. And working so that he didn’t need to know the exact count or the volume of individual molecules. Good computational technique.
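Here is a minimal sketch of that count-the-duplicates idea, applied to an ellipsoid — the shape just flagged as unpleasant to integrate exactly. The semi-axes and the cube size h are arbitrary assumptions, and counting cubes by their centers is only one of several reasonable conventions:

```python
import math

a, b, c = 2.0, 1.0, 1.0   # semi-axes of a sample ellipsoid
h = 0.05                  # side length of the tiny reference cube

count = 0
x = -a + h / 2
while x < a:
    y = -b + h / 2
    while y < b:
        z = -c + h / 2
        while z < c:
            # Count the cube if its center lies inside the ellipsoid.
            if (x / a)**2 + (y / b)**2 + (z / c)**2 <= 1.0:
                count += 1
            z += h
        y += h
    x += h

estimate = count * h**3
exact = 4.0 / 3.0 * math.pi * a * b * c
print(estimate, exact)    # the two agree to within about a percent
```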
It’s hard not to feel we’re begging the question, though. We want the volume of something. So we need the volume of something else. Where does that volume come from?

Well, where does an inch come from? Or a centimeter? Whatever unit you use? You pick something to use as reference. Any old thing will do. Which is why you get fascinating stories about choosing what to use. And bitter arguments about which of several alternatives to use. And we express the length of something as some multiple of this reference length. Volume works the same way. Pick a reference volume, something that can be one unit-of-volume. Other volumes are some multiple of that unit-of-volume. Possibly a fraction of that unit-of-volume.

Usually we use a reference volume that’s based on the reference length. Typically, we imagine a cube that’s one unit of length on each side. The volume of this cube with sides of length 1 unit-of-length is then 1 unit-of-volume. This seems all nice and orderly and it’s surely not because mathematicians have been paid off by six-sided-dice manufacturers.

Does it have to be? That we need some reference volume seems inevitable. We can’t very well say the area of something is ten times nothing-in-particular. Does that reference volume have to be a cube? Or even a rectangle or something else? It seems obvious that we need some reference shape that tiles, that can fill up space by itself … right?

What if we don’t?

I’m going to drop out of three dimensions a moment. Not because it changes the fundamentals, but because it makes something easier. Specifically, it makes it easier if you decide you want to get some construction paper, cut out shapes, and try this on your own. What this will tell us about area is just as true for volume. Area, for a two-dimensional space, and volume, for a three-dimensional, describe the same thing. If you’ll let me continue, then, I will.

So draw a figure on a clean sheet of paper. What’s its area? Now imagine you have a whole bunch of shapes with reference areas. A bunch that have an area of 1. That’s by definition. That’s our reference area. A bunch of smaller shapes with an area of one-half. By definition, too. A bunch of smaller shapes still with an area of one-third. Or one-fourth. Whatever. Shapes with areas you know because they’re marked on them.

Here’s one way to find the area. Drop your reference shapes, the ones with area 1, on your figure. How many do you need to completely cover the figure? It’s all right to cover more than the figure. It’s all right to have some of the reference shapes overlap. All you need is to cover the figure completely. … Well, you know how many pieces you needed for that. You can count them up. You can add up the areas of all these pieces needed to cover the figure. So the figure’s area can’t be any bigger than that sum.

Can’t be exact, though, right? Because you might get a different number if you covered the figure differently. If you used smaller pieces. If you arranged them better. This is true. But imagine all the possible reference shapes you had, and all the possible ways to arrange them. There’s some smallest area of those reference shapes that would cover your figure. Is there a more sensible idea for what the area of this figure would be?

And put this into three dimensions. If we start from some reference shapes of volume 1 and maybe 1/2 and 1/3 and whatever other useful fractions there are? Doesn’t this covering make sense as a way to describe the volume? Cubes or rectangles are easy to imagine. Tetrahedrons too. But why not any old thing? Why not, as the Mystery Science Theater 3000 episode had it, turkeys?

This is a nice, flexible, convenient way to define area. So now let’s see where it goes all bizarre. We know this thanks to Giuseppe Peano. He’s among the late-19th/early-20th century mathematicians who shaped modern mathematics. They did this by showing how much of our mathematics broke intuition. Peano was (here) exploring what we now call fractals. And noted a family of shapes that curl back on themselves, over and over. They’re beautiful. And they fill area. Fill volume, if done in three dimensions.

It seems impossible. If we use this covering scheme, and try to find the volume of a straight line, we get zero. Well, we find that any positive number is too big, and from that conclude that it has to be zero. Since a straight line has length, but not volume, this seems fine. But a Peano curve won’t go along with this. A Peano curve winds back on itself so much that there is some minimum volume to cover it.

This unsettles. But this idea of volume (or area) by covering works so well. To throw it away seems to hobble us. So it seems worth the trade. We allow ourselves to imagine a line so long and so curled up that it has a volume. Amazing.

And now I get to relax and unwind and enjoy a long weekend before coming to the letter ‘W’. That’ll be about some topic I figure I can whip out a nice tight 500 words about, and instead, produce some 1541-word monstrosity while I wonder why I’ve had no free time at all since August. Tuesday, give or take, it’ll be available at this link, as are the rest of these glossary posts. Thanks for reading.

## My 2018 Mathematics A To Z: Tiling

For today’s A to Z topic I again picked one nominated by aajohannas. This after I realized I was falling into a never-ending research spiral on Mr Wu, of Mathtuition’s suggested “torus”. I do have an older essay describing the torus, as a set. But that does leave out a lot of why a torus is interesting. Well, we’ll carry on.

# Tiling.

Here is a surprising thought for the next time you consider remodeling the kitchen. It’s common to tile the floor. Perhaps some of the walls behind the counter. What patterns could you use? And there are infinitely many possibilities. You might leap ahead of me and say, yes, but they’re all boring. A tile that’s eight inches square is different from one that’s twelve inches square and different from one that’s 12.01 inches square. Fine. Let’s allow that all square tiles are “really” the same pattern. The only difference between a square two feet on a side and a square half an inch on a side is how much grout you have to deal with. There are still infinitely many possibilities. You might still suspect me of being boring. Sure, there’s a rectangular tile that’s, say, six inches by eight inches. And one that’s six inches by nine inches. Six inches by ten inches. Six inches by one millimeter. Yes, I’m technically right.
But I’m not interested in that. Let’s allow that all rectangular tiles are “really” the same pattern. So we have “squares” and “rectangles”. There are still infinitely many tile possibilities. Let me shorten the discussion here. Draw a quadrilateral. One that doesn’t intersect itself. That is, there’s four corners, four lines, and there’s no X crossings. If you have that, then you have a tiling. Get enough of these tiles and arrange them correctly and you can cover the plane. Or the kitchen floor, if you have a level floor. It might not be obvious how to do it. You might have to rotate alternating tiles, or set them in what seem like weird offsets. But you can do it. You’ll need someone to make the tiles for you, if you pick some weird pattern. I hope I live long enough to see it become part of the dubious kitchen package on junk home-renovation shows. Let me broaden the discussion here. What do I mean by a tiling if I’m allowing any four-sided figure to be a tile? We start with a surface. Usually the plane, a flat surface stretching out infinitely far in two dimensions. The kitchen floor, or any other mere mortal surface, approximates this. But the floor stops at some point. That’s all right. The ideas we develop for the plane work all right for the kitchen. There’s some weird effects for the tiles that get too near the edges of the room. We don’t need to worry about them here. The tiles are some collection of open sets. No two tiles overlap. The tiles, plus their boundaries, cover the whole plane. That is, every point on the plane is either inside exactly one of the open sets, or it’s on the boundary between one (or more) sets. There isn’t a requirement that all these sets have the same shape. We usually do, and will limit our tiles to one or two shapes endlessly repeated. It seems to appeal to our aesthetics and our installation budget. Using a single pattern allows us to cover the plane with triangles. Any triangle will do. Similarly any quadrilateral will do. For convex pentagonal tiles — here things get weird. There are fourteen known families of pentagons that tile the plane. Each member of the family looks about the same, but there’s some room for variation in the sides. Plus there’s one more special case that can tile the plane, but only that one shape, with no variation allowed. We don’t know if there’s a sixteenth pattern. But then until 2015 we didn’t know there was a 15th, and that was the first pattern found in thirty years. Might be an opening for someone with a good eye for doodling. There are also exciting opportunities in convex hexagons. Anyone who plays strategy games knows a regular hexagon will tile the plane. (Regular hexagonal tilings fit a certain kind of strategy game well. Particularly they imply an equal distance between the centers of any adjacent tiles. Square and triangular tiles don’t guarantee that. This can imply better balance for territory-based games.) Irregular hexagons will, too. There are three known families of irregular hexagons that tile the plane. You can treat the regular hexagon as a special case of any of these three families. No one knows if there’s a fourth family. Ready your notepad at the next overlong, agenda-less meeting. There aren’t tilings for identical convex heptagons, figures with seven sides. Nor eight, nor nine, nor any higher figure. You can cover them if you have non-convex figures. See any Tetris game where you keep getting the ‘s’ or ‘t’ shapes. And you can cover them if you use several shapes. 
There’s some guidance if you want to create your own periodic tilings. I see it called the Conway Criterion. I don’t know the field well enough to say whether that is a common term. It could be something one mathematics popularizer thought of and that other popularizers imitated. (I don’t find “Conway Criterion” on the Mathworld glossary, but that isn’t definitive.) Suppose your polygon satisfies a couple of rules about the shapes of the edges. The rules are given in that link earlier this paragraph. If your shape does, then it’ll be able to tile the plane. If you don’t satisfy the rules, don’t despair! It might yet. The Conway Criterion tells you when some shape will tile the plane. It won’t tell you that something won’t. (The name “Conway” may nag at you as familiar from somewhere. This criterion is named for John H Conway, who’s famous for a bunch of work in knot theory, group theory, and coding theory. And in popular mathematics for the “Game of Life”. This is a set of rules on a grid of numbers. The rules say how to calculate a new grid, based on this first one. Iterating them, creating grid after grid, can make patterns that seem far too complicated to be implicit in the simple rules. Conway also developed an algorithm to calculate the day of the week, in the Gregorian calendar. It is difficult to explain to the non-calendar fan how great this sort of thing is.) This has all gotten to periodic tilings. That is, these patterns might be complicated. But if need be, we could get them printed on a nice square tile and cover the floor with that. Almost as beautiful and much easier to install. Are there tilings that aren’t periodic? Aperiodic tilings? Well, sure. Easily. Take a bunch of tiles with a right angle, and two 45-degree angles. Put any two together and you have a square. So you’re “really” tiling squares that happen to be made up of a pair of triangles. Each pair, toss a coin to decide whether you put the diagonal as a forward or backward slash. Done. That’s not a periodic tiling. Not unless you had a weird run of luck on your coin tosses. All right, but is that just a technicality? We could have easily installed this periodically and we just added some chaos to make it “not work”. Can we use a finite number of different kinds of tiles, and have it be aperiodic however much we try to make it periodic? And through about 1966 mathematicians would have mostly guessed that no, you couldn’t. If you had a set of tiles that would cover the plane aperiodically, there was also some way to do it periodically. And then in 1966 came a surprising result. No, not Penrose tiles. I know you want me there. I’ll get there. Not there yet though. In 1966 Robert Berger — who also attended Rensselaer Polytechnic Institute, thank you — discovered such a tiling. It’s aperiodic, and it can’t be made periodic. Why do we know Penrose Tiles rather than Berger Tiles? Couple reasons, including that Berger has to use 20,426 distinct tile shapes. In 1971 Raphael M Robinson simplified matters a bit and got that down to six shapes. Roger Penrose in 1974 squeezed the set down to two, although by adding some rules about what edges may and may not touch one another. (You can turn this into a pure edges thing by putting notches into the shapes.) That really caught the public imagination. It’s got simplicity and accessibility to combine with beauty. Aperiodic tiles seem to relate to “quasicrystals”, which are what the name suggests and do happen in some materials. And they’ve got beauty. 
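That coin-toss triangle tiling from a few paragraphs back fits in a few lines, if you’ll accept typed characters as tiles. A small sketch; the grid size and the seed are arbitrary:

```python
import random

random.seed(2018)

# Each grid square is split into two right triangles by a diagonal;
# a coin toss picks '/' or '\' for each square.  Almost surely the
# resulting triangle tiling has no translational period.
rows, cols = 8, 16
for _ in range(rows):
    print(''.join(random.choice('/\\') for _ in range(cols)))
```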
Aperiodic tiling embraces our need to have not too much order in our order.

I’ve discussed, in all this, tiling the plane. It’s an easy surface to think about and a popular one. But we can form tiling questions about other shapes. Cylinders, spheres, and toruses seem like they should have good tiling questions available. And we can imagine “tiling” stuff in more dimensions too. If we can fill a volume with cubes, or rectangles, it’s natural to wonder what other shapes we can fill it with. My impression is that fewer definite answers are known about the tiling of three- and four- and higher-dimensional space. Possibly because it’s harder to sketch out ideas and test them. Possibly because the spaces are that much stranger. I would be glad to hear more.

I’m hoping now to have a nice relaxing weekend. I won’t. I need to think of what to say for the letter ‘U’. On Tuesday I hope that it will join the rest of my A to Z essays at this link.

## My 2018 Mathematics A To Z: Manifold

Two commenters suggested the topic for today’s A to Z post. I suspect I’d have been interested in it if only one had. (Although Dina Yagoditch’s suggestion of the Menger Sponge is hard to resist.) But a double nomination? The topic got suggested by Mr Wu, author of MathTuition88, and by John Golden, author of Math Hombre. My thanks to all for interesting things to think about.

# Manifold.

So you know how in the first car you ever owned the alternator was always going bad? If you’re lucky, you reach a point where you start owning cars good enough that the alternator is not the thing always going bad. Once you’re there, congratulations. Now the thing that’s always going bad in your car will be the manifold. That one’s for my dad.

Manifolds are a way to do normal geometry on weird shapes. What’s normal geometry? It’s … you know, the way shapes work on your table, or in a room. The Euclidean geometry that we’re so used to that it’s hard to imagine it not working. Why worry about weird shapes? They’re interesting, for one. And they don’t have to be that weird to count as weird. A sphere, like the surface of the Earth, can be weird. And these weird shapes can be useful. Mathematical physics, for example, can represent the evolution of some complicated thing as a path drawn on a weird shape. Bringing what we know about geometry from years of study, and moving around rooms, to a problem that abstract makes our lives easier.

We use language that sounds like that of map-makers when discussing manifolds. We have maps. We gather together charts. The collection of charts describing a surface can be an atlas. All these words have common meanings. Mercifully, these common meanings don’t lead us too far from the mathematical meanings. We can even use the problem of mapping the surface of the Earth to understand manifolds.

If you love maps, the geography kind, you learn quickly that there’s no making a perfect two-dimensional map of the Earth’s surface. Some of these imperfections are obvious. You can distort shapes trying to make a flat map of the globe. You can distort sizes. But you can’t represent every point on the globe with a point on the paper. Not without doing something that really breaks continuity. Like, say, turning the North Pole into the whole line at the top of the map. Like in the Equirectangular projection. Or skipping some of the points, like in the Mercator projection. Or adding some cuts into a surface that doesn’t have them, like in the Goode homolosine projection.
You may recognize this as the one used in classrooms back when the world had first begun.

But what if we don’t need the whole globe done in a single map? Turns out we can do that easy. We can make charts that cover a part of the surface. No one chart has to cover the whole of the Earth’s surface. It only has to cover some part of it. It covers the globe with a piece that looks like a common ordinary Euclidean space, where ordinary geometry holds. It’s the collection of charts that covers the whole surface. This collection of charts is an atlas. You have a manifold if it’s possible to make a coherent atlas. For this every point on the manifold has to be on at least one chart. It’s okay if a point is on several charts. It’s okay if some point is on all the charts.

Like, suppose your original surface is a circle. You can represent this with an atlas of two charts. Each chart maps the circle, except for one point, onto a line segment. The two charts don’t both skip the same point. All but two points on this circle are on both charts. That’s cool.
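That two-chart atlas is concrete enough to compute with. Here’s a sketch using one standard choice of charts, stereographic projection from each omitted point — these map the punctured circle onto the whole real line rather than onto a segment, but the bookkeeping is the same:

```python
import math

def chart_top(x, y):
    # Defined on the circle minus the point (0, 1): project from the top.
    return x / (1.0 - y)

def chart_bottom(x, y):
    # Defined on the circle minus the point (0, -1): project from the bottom.
    return x / (1.0 + y)

# On the overlap (the circle minus both poles) the transition between
# chart coordinates is the smooth map u -> 1/u, since u * v = 1:
for t in (0.3, 1.0, 2.5, 4.0):
    p = (math.cos(t), math.sin(t))
    u, v = chart_top(*p), chart_bottom(*p)
    assert abs(u * v - 1.0) < 1e-12
```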
What’s not okay is if some point can’t be coherently put onto some chart. This sad fate can happen. Suppose instead of a circle you want to chart a figure-eight loop. That won’t work. The point where the figure crosses itself doesn’t look, locally, like a Euclidean space. It looks like an ‘x’. There’s no getting around that. There’s no atlas that can cover the whole of that surface. So that surface isn’t a manifold.

But many things are manifolds nevertheless. Toruses, the doughnut shapes, are. Möbius strips and Klein bottles are. Ellipsoids and hyperbolic surfaces are, or at least can be. Mathematical physics finds surfaces that describe all the ways the planets could move and still conserve the energy and momentum and angular momentum of the solar system. That cheesecloth surface, stretched through 54 dimensions, is a manifold. There are many possible atlases, with many more charts. But each of those means we can, at least locally, for particular problems, understand them the same way we understand cutouts of triangles and pentagons and circles on construction paper.

So to get back to cars: no one has ever said “my car runs okay, but I regret how I replaced the brake covers the moment I suspected they were wearing out”. Every car problem is easier when it’s done as soon as your budget and schedule allow.

This and other Fall 2018 Mathematics A-To-Z posts can be read at this link. What will I choose for ‘N’, later this week? I really should have decided that by now.

## My 2018 Mathematics A To Z: Hyperbolic Half-Plane

Today’s term was one of several nominations I got for ‘H’. This one comes from John Golden, @mathhombre on Twitter and author of the Math Hombre blog on Blogspot. He brings in a lot of thought about mathematics education and teaching tools that you might find interesting or useful or, better, both.

# Hyperbolic Half-Plane.

The half-plane part is easy to explain. By the “plane” mathematicians mean, well, the plane. What you’d get if a sheet of paper extended forever. Also if it had zero thickness. To cut it in half … well, first we have to think hard what we mean by cutting an infinitely large thing in half. Then we realize we’re overthinking this. Cut it by picking a line on the plane, and then throwing away everything on one side or the other of that line. Maybe throw away everything on the line too. It’s logically as good to pick any line. But there are a couple lines mathematicians use all the time. This is because they’re easy to describe, or easy to work with. At least once you fix an origin and, with it, x- and y-axes. The “right half-plane”, for example, is everything in the positive-x-axis direction. Every point with coordinates you’d describe with positive x-coordinate values. Maybe the non-negative ones, if you want the edge included. The “upper half plane” is everything in the positive-y-axis direction. All the points whose coordinates have a positive y-coordinate value. Non-negative, if you want the edge included. You can make guesses about what the “left half-plane” or the “lower half-plane” are. You are correct.

The “hyperbolic” part takes some thought. What is there to even exaggerate? Wrong sense of the word “hyperbolic”. The word here is the same one used in “hyperbolic geometry”. That takes explanation.

The Western mathematics tradition, as we trace it back to Ancient Greece and Ancient Egypt and Ancient Babylon and all, gave us “Euclidean” geometry. It’s a pretty good geometry. It describes how stuff on flat surfaces works. In the Euclidean formulation we set out a couple of axioms that aren’t too controversial. Like, lines can be extended indefinitely and that all right angles are congruent. And one axiom that is controversial. But which turns out to be equivalent to the idea that there’s only one line that goes through a point and is parallel to some other line.

And it turns out that you don’t have to assume that. You can make a coherent “spherical” geometry, one that describes shapes on the surface of a … you know. You have to change your idea of what a line is; it becomes a “geodesic” or, on the globe, a “great circle”. And it turns out that there are no geodesics that go through a point and that are parallel to some other geodesic. (I know you want to think about globes. I do too. You maybe want to say the lines of latitude are parallel one another. They’re even called parallels, sometimes. So they are. But they’re not geodesics. They’re “little circles”. I am not throwing in ad hoc reasons I’m right and you’re not.)

There is another, though. This is “hyperbolic” geometry. This is the way shapes work on surfaces that mathematicians call saddle-shaped. I don’t know what the horse enthusiasts out there call these shapes. My guess is they chuckle and point out how that would be the most painful saddle ever. Doesn’t matter. We have surfaces. They act weird. You can draw, through a point, infinitely many lines parallel to a given other line. That’s some neat stuff. That’s weird and interesting. They’re even called “hyperparallel lines” if that didn’t sound great enough. You can see why some people would find this worth studying.

The catch is that it’s hard to order a pad of saddle-shaped paper to try stuff out on. It’s even harder to get a hyperbolic blackboard. So what we’d like is some way to represent these strange geometries using something easier to work with. The hyperbolic half-plane is one of those approaches. This uses the upper half-plane. It works by a move as brilliant and as preposterous as that time Q told Data and LaForge how to stop that falling moon. “Simple. Change the gravitational constant of the universe.”

What we change here is the “metric”. The metric is a function. It tells us something about how points in a space relate to each other. It gives us distance. In Euclidean geometry, plane geometry, we use the Euclidean metric.
You can find the distance between point A and point B by looking at their coordinates, $(x_A, y_A)$ and $(x_B, y_B)$. This distance is $\sqrt{\left(x_B - x_A\right)^2 + \left(y_B - y_A\right)^2}$. Don’t worry about the formulas. The lines on a sheet of graph paper are a reflection of this metric. Each line is (normally) a fixed distance from its parallel neighbors. (Yes, there are polar-coordinate graph papers. And there are graph papers with logarithmic or semilogarithmic spacing. I mean graph paper like you can find at the office supply store without asking for help.)

But the metric is something we choose. There are some rules it has to follow to be logically coherent, yes. But those rules give us plenty of room to play. By picking the correct metric, we can make this flat plane obey the same geometric rules as the hyperbolic surface. This metric looks more complicated than the Euclidean metric does, but only because it has more terms and takes longer to write out. What’s important about it is that the distance your thumb put on top of the paper covers up is bigger if your thumb is near the bottom of the upper-half plane than if your thumb is near the top of the paper.
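You can put a number on that thumb experiment. A sketch using the standard closed form for distance in the Poincaré upper half-plane; the sample points are arbitrary:

```python
import math

def hyperbolic_distance(p, q):
    """Distance between points of the upper half-plane (y > 0), from
    the standard closed form for the Poincare half-plane metric:
    cosh(d) = 1 + ((x2-x1)^2 + (y2-y1)^2) / (2*y1*y2)."""
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1.0 + ((x2 - x1)**2 + (y2 - y1)**2) / (2.0 * y1 * y2))

# The same Euclidean gap of 1 is hyperbolically huge near the bottom
# edge and tiny far above it:
print(hyperbolic_distance((0.0, 0.1), (1.0, 0.1)))    # about 4.6
print(hyperbolic_distance((0.0, 10.0), (1.0, 10.0)))  # about 0.1
```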
So. There are now two things that are “lines” in this. One of them is vertical lines. The graph paper we would make for this has a nice file of parallel lines like ordinary paper does. The other thing, though … well, that’s half-circles. They’re half-circles with a center on the edge of the half-plane. So our graph paper would also have a bunch of circles, of different sizes, coming from regularly-spaced sources on the bottom of the paper. A line segment is a piece of either these vertical lines or these half-circles. You can make any polygon you like with these, if you pick out enough line segments. They’re there.

There are many ways to represent hyperbolic surfaces. This is one of them. It’s got some nice properties. One of them is that it’s “conformal”. Angles that you draw using this metric are the same size as those on the corresponding hyperbolic surface. You don’t appreciate how sweet that is until you’re working in non-Euclidean geometries. Circles that are entirely within the hyperbolic half-plane match to circles on a hyperbolic surface. Once you’ve got your intuition for this hyperbolic half-plane, you can step into hyperbolic half-volumes. And that lets you talk about the geometry of hyperbolic spaces that reach into four or more dimensions of human-imaginable spaces. Isometries — picking up a shape and moving it in ways that don’t change distance — match up with the Möbius Transformations. These are a well-understood set of ways of altering the plane that comes from a different corner of geometry. Also from that fellow with the strip, August Ferdinand Möbius. It’s always exciting to find relationships like that in mathematical structures.

Pictures often help. I don’t know why I don’t include them. But here is a web site with pages, and pictures, that describe much of the hyperbolic half-plane. It includes code to use with the Geometer Sketchpad software, which I have never used and know nothing about. That’s all right. There’s at least one page there showing a wondrous picture. I hope you enjoy.

This and other essays in the Fall 2018 A-To-Z should be at this link. And I’ll start paneling for more letters soon.

## Playful Mathematics Education Blog Carnival #121

Greetings one and all! Come, gather round! Wonder and spectate and — above all else — tell your friends of the Playful Mathematics Blog Carnival! Within is a buffet of delights and treats, fortifications for the mind and fire for the imagination.

121 is a special number. When I was a mere tot, growing in the wilds of suburban central New Jersey, it stood there. It held a spot of privilege in the multiplication tables on the inside front cover of composition books. On the forward diagonal, yet insulated from the borders. It anchors the safe interior. A square number, eleventh of that set in the positive numbers.

## The First Tent

The first wonder to consider is Iva Sallay’s Find the Factors blog. She brings each week a sequence of puzzles, all factoring challenges. The result of each, done right, is a scrambling of the multiplication tables; it’s up to you the patron to find the scramble. She further examines each number in turn, finding its factors and its interesting traits. And furthermore, usually, when beginning a new century of digits opens a horserace, to see which of the numbers have the greatest number of factorizations. She furthermore was the host of this Playful Mathematics Education Carnival for August of 2018.

121 is more than just a square. It is the lone square known to be the sum of the first several powers of a prime number: it is $1 + 3 + 3^2 + 3^3 + 3^4$, a fantastic combination. If there is another square that is such a sum of primes, it is unknown to any human — and must be at least 35 digits long.

We look now for a moment at some astounding animals. From the renowned Dr Nic: Introducing Cat Maths cards, activities, games and lessons — a fine collection of feline companions, such toys as will entertain them. A dozen attributes each; twenty-seven value cards. These cats, and these cards, and these activity puzzles, promise games and delights, to teach counting, subtraction, statistics, and inference!

Next and no less incredible is the wooly Mathstodon. Christian Lawson-Perfect hosts this site, an instance of the open-source Twitter-like service Mastodon. Its focus: a place for people interested in mathematics to write of what they know. To date over 1,300 users have joined, and have shared nearly 25,000 messages. You need not join to read many of these posts — your host here has yet to — but may sample its wares as you like.

## The Second Tent

121 is one of only two perfect squares known to be four less than the cube of a whole number. The great Fermat conjectured that 4 and 121 are the only such numbers; no one has found a counter-example. Nor a proof.

Friends, do you know the secret to popularity? There is an astonishing truth behind it. Elias Worth of the MathSection blog explains the Friendship Paradox. This mind-warping phenomenon tells us your friends have more friends than you do. It will change forever how you look at your followers and following accounts.

And now to thoughts of learning. Stepping forward now is Monica Utsey, @Liveonpurpose47 of Chocolate Covered Boy Joy. Her declaration: “I incorporated Montessori Math materials with my right brain learner because he needed literal representations of the work we were doing. It worked and we still use it.” See now for yourself the representations, counting and comparing and all the joys of several aspects of arithmetic.

Take now a moment for your own fun. Blog Carnival patron and organizer Denise Gaskins wishes us to know: “The fun of mathematical coloring isn’t limited to one day.
Enjoy these coloring resources all year ’round!” Happy National Coloring Book Day offers the title, and we may keep the spirit of National Coloring Book Day all the year round.

Confident in that? Then take on a challenge. Can you scroll down faster than Christian Lawson-Perfect’s web site can find factors? Prove your speed, prove your endurance, and see if you can overcome this infinite scroll.

## The Third Tent

121 is a star number, the fifth of that select set. 121 identical items can be tiled to form a centered hexagon. You may have seen it in the German game of Chinese Checkers, as the board of that has 121 holes.

We come back again to teaching. “Many homeschoolers struggle with teaching their children math. Here are some tips to make it easier”, offers Denise Gaskins. Step forth and benefit from this FAQ: Struggling with Arithmetic, a collection of tips and thoughts and resources to help make arithmetic the more manageable.

Step now over to the arcade, and to the challenge of Pac-Man. This humble circle-inspired polygon must visit the entirety of a maze, and avoid ghosts as he does. Matthew Scroggs of Chalk Dust Magazine here seeks and shows us Optimal Pac-Man. Graph theory tells us there are thirteen billion different paths to take. Which of them is shortest? Which is fastest? Can it be known, and can it help you through the game?

And now a recreation, one to become useful if winter arrives. Think of the mysteries of the snowball rolling down a hill. How does it grow in size? How does it speed up? When does it stop? Rodolfo A Diaz, Diego L Gonzalez, Francisco Marin, and R Martinez satisfy your curiosity with Comparative kinetics of the snowball respect to other dynamical objects. Be warned! This material is best suited for the college-age student of the mathematical snow sciences.

## The Fourth Tent

121 is furthermore the sixth of the centered octagonal numbers. 121 of a thing may be set into six concentric octagons of one, then two, then three, then four, then five, and then six of them on a side.

To teach is to learn! And we have here an example of such learning. James Sheldon writing for the American Mathematical Society Graduate Student blog offers Teaching Lessons from a Summer of Taking Mathematics Courses. What secrets has Sheldon to reveal? Come inside and learn what you may.

And now step over to the games area. The game Entanglement wraps you up in knots, challenging you to find the longest knot possible. David Richeson of Division By Zero sees in this A game for budding knot theorists. What is the greatest score that could be had in this game? Can it ever be found? Only Richeson has your answer.

Step now back to the amazing Mathstodon. Gaze in wonder at the account @dudeney_puzzles. Since the September of 2017 it has brought out challenges from Henry Ernest Dudeney’s Amusements in Mathematics. Puzzles given, yes, with answers that follow along. The impatient may find Dudeney’s 1917 book on Project Gutenberg among other places.

## The Fifth Tent

Sum the digits of 121; you will find that you have four. Take its prime factors, 11 and 11, and sum their digits; you will find that this is four again. This makes 121 a Smith number. These marvels of the ages were named by Albert Wilansky, in honor of his brother-in-law, a man known to history as Harold Smith, and whose telephone number of 4,937,775 was one such.
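Doubt the brother-in-law? A short sketch checks the Smith property directly — trial-division factoring, so only sensible for small numbers:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def prime_factors(n):
    # Trial division, repeating factors with multiplicity.
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def is_smith(n):
    factors = prime_factors(n)
    if len(factors) < 2:      # primes are excluded by convention
        return False
    return digit_sum(n) == sum(digit_sum(f) for f in factors)

print(is_smith(121))       # True: 1+2+1 = 4 = (1+1) + (1+1)
print(is_smith(4937775))   # True: Harold Smith's phone number
```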
Now let us consider terror. What is it to enter a PhD program? Many have attempted it; some have made it through. Mathieu Besançon gives to you a peek behind academia’s curtain. A year in PhD describes some of this life.

And now to an astounding challenge. Imagine an assassin readies your death. Can you protect yourself? At all? Tai-Danae Bradley invites you to consider: Is the Square a Secure Polygon? This question takes you on a tour of geometries familiar and exotic. Learn how mathematicians consider how to walk between places on a torus — and the lessons this has for a square room. The fate of the universe itself may depend on the methods described herein — the techniques used to study it relate to those that study whether a physical system can return to its original state. And then J2kun turned this into code, Visualizing an Assassin Puzzle, for those who dare to program it.

Have you overcome this challenge? Then step into the world of linear algebra, and this delight from the Mathstodon account of Christian Lawson-Perfect. The puzzle is built on the wonders of eigenvectors, those marvels of matrix multiplication. They emerge from multiplication longer or shorter but unchanged in direction. Lawson-Perfect uses whole numbers, represented by Scrabble tiles, and finds a great matrix with a neat eigenvalue. Can you prove that this is true?

## The Sixth Tent

Another wonder of the digits of 121. Take them apart, then put them together again. Contorted into the form $11^2$ they represent the same number. 121 is, in the base ten commonly used in the land, a Friedman Number, second of that line. These marvels, in the Arabic, the Roman, or even the Mayan numeral schemes, are named for Erich Friedman, a figure of mystery from Stetson University.

We draw closer to the end of this carnival’s attractions! To the left I show a tool for those hoping to write mathematics: Donald E Knuth, Tracy Larrabee, and Paul M Roberts’s Mathematical Writing. It’s a compilation of thoughts about how one may write to be understood, or to avoid being misunderstood. Either would be a marvel for the ages.

To the right please see Gregory Taylor’s web comic Any ~Qs. Taylor — @mathtans on Twitter — brings a world of math-tans, personifications of mathematical concepts, together for adventures and wordplay. And if the strip is not to your tastes, Taylor is working on ε Project, a serialized written story with new installments twice a month.

If you will look above you will see the marvels of curved space. On YouTube, Eigenchris hopes to learn differential geometry, and shares what he has learned. While he has a series under way he suggested Episode 15, ‘Geodesics and Christoffel Symbols’, as one that new viewers could usefully try. Episode 16, ‘Geodesic Examples on Plane and Sphere’, puts this work to good use.

And as we reach the end of the fairgrounds, please take a moment to try Find the Factors Puzzle number 121, a challenge from 2014 that still speaks to us today! And do always stop and gaze in awe at the fantastic and amazing geometrical constructs of Robert Loves Pi. You shall never see stellations of its like elsewhere!

## The Concessions Tent

With no thought of the risk to my life or limb I read the newspaper comics for mathematical topics they may illuminate! You may gape in awe at the results here. And furthermore this week and for the remainder of this calendar year of 2018 I dare to explain one and only one mathematical concept for each letter of our alphabet!
I remind the sensitive patron that I have already done not one, not two, not three, but four previous entries all finding mathematical words for the letter “X” — will there be one come December? There is but one way you might ever know. Denise Gaskins coordinates the Playful Mathematics Education Blog Carnival. Upcoming scheduled carnivals, including the chance to volunteer to host it yourself, or to recommend your site for mention, are listed here. And October’s 122nd Playful Mathematics Education Blog Carnival is scheduled to be hosted by Arithmophobia No More, and may this new host have the best of days!
2019-08-18 14:13:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5392916202545166, "perplexity": 963.4784482852791}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00153.warc.gz"}
https://www.garibaldibros.com/wp/category/uncategorized/
# Generic stabilizer in Spin14

Over the past year, several people have written to me for clarification about the stabilizer in $$\mathrm{Spin}_{14}$$ of a generic vector in one of its half-spin representations. My 2017 paper Spinors and essential dimension with Robert Guralnick shows that, over an algebraically closed field of characteristic different from 2, the stabilizer is $$G_2 \times G_2$$…
2019-10-17 08:11:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8296905755996704, "perplexity": 402.1294075373886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986673250.23/warc/CC-MAIN-20191017073050-20191017100550-00166.warc.gz"}
http://kodu.ut.ee/~unruh/publications/muellerquade05oblivious.html
# Oblivious Transfer is Incomplete for Deniable Protocols

Oblivious Transfer is Incomplete for Deniable Protocols. J. Müller-Quade, S. Röhrich, and D. Unruh (Workshop on The Past, Present and Future of Oblivious Transfer, 2005). [eprint]

Abstract: We prove that for deniable protocol tasks oblivious transfer is not complete. We introduce the protocol task of bit commitments which can be undone, and prove that this task cannot be realised with OT whereas there exists a secure realisation using string-OT.
2020-12-02 16:29:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080387711524963, "perplexity": 5177.552662299779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141711306.69/warc/CC-MAIN-20201202144450-20201202174450-00039.warc.gz"}
https://langara.ca/campus-facilities/our-facilities/projects.html
# Central Heating Plant

INTRODUCTION

The central heating plant (CHP) project will take place from October 2022–March 2023. The project will bring a new CHP, replacing our existing one from 1969, to provide heating to many buildings on campus. The new CHP will have seasonal energy efficiency of over 85%, significantly contributing to the government’s efforts to reduce greenhouse gases through Clean BC and to improve the Facility Condition Index (FCI).

PROJECT TIMELINE

Construction across campus will begin October 11 and continue into early Spring 2023. Expect to see construction activities in different areas in the underground parkade and throughout the exterior corridors located between A and C Buildings. This includes construction fencing placed around the central courtyard and around the stairs located at the west end of the gym for excavation activities that will begin late November 2022 and continue until March 2023.

HISTORY

The A Building central heating plant on campus is original (1969) and identified as end of life. It currently serves most of campus (excluding the Library building and T Building), including providing heating for the domestic hot water on campus.

In 2014, during the design phase of our new Science and Technology Building, it was identified that the building required a large heating plant to meet code requirements (lab buildings are particularly energy intensive); however, the heat recovery systems in the building made the actual load much lower than in typical buildings. The new Science and Technology Building (certified LEED Gold) is beside the Library building (certified LEED Gold), which also has a heating boiler that is underutilized, as it is a geothermal building.

The new Central Heating Plant project was proposed to the Ministry of Education in 2014 as a three-phase project. Funding was awarded for Phase I, including $500,000 in Fortis funding through the new construction program. In 2015, we received additional funding for Phase II through the Carbon Neutral Capital Program. We have completed Phases I and II of this project; in 2020, we issued an ITT for consultants for design and analysis, which included the detailed design scope from L to B. In November 2020, a Fortis Energy Study Approval Letter provided $75,000 in funding to do a more detailed energy analysis of options.

The Detailed Thermal Study recommends we integrate the central plant with the newer plant installed in L Building instead of upgrading the A Building heating plant. To integrate the plants, two new supply and return loops will be installed from Plant “L to B” and Plant “L to A” (to the utility tunnel serving A, C and G buildings). The detailed design and tendering are complete and we plan to start construction this fall 2022.

With the existing heating loads, the new hot water plant is expected to operate with seasonal efficiency above 85%, resulting in fuel savings of approximately 7,840 GJ, or 25% of our current fuel usage.

#### BENEFITS

Strategic Alignment: This initiative is in line with the government’s efforts to reduce greenhouse gases through Clean BC, improve the Facility Condition Index (FCI), and reduce deferred maintenance costs. It also aligns with Langara’s strategic plan to remove A Building, which involves removing our dependence on its heating plant. Our GHG reductions will also help towards our future Association for the Advancement of Sustainability in Higher Education (AASHE) Sustainability Tracking, Assessment & Rating System (STARS) rating.
Energy and Emissions Reduction: The new heating plant uses more efficient technology; in addition, the system, piping, and controls have been designed to make use of the condensing, more efficient range of the boilers, as the newer buildings are generally designed for low-temperature heating (the existing Library, LSU, C, and Science & Technology Buildings).

Innovation: Continuing to centralize the heating, and including features in the design for future integration, will better prepare us to take advantage of renewable energy opportunities in the future, with an ability to replace the heat source at a single point or introduce it at various points across campus (for example, adding to the highly effective 160 geothermal wells already on campus).

Infrastructure Improvements: The relocation and renewal of the heating plant would improve the FCI of all buildings on campus and decrease the risk to infrastructure as a result of loss of heat.

Cost Effectiveness: Serving the campus from a new central plant would minimize disruption, risk, and cost when A Building renewal is scheduled and required, as well as minimizing the investment required to provide heating for our new building. Continuing to centralize the heating plant will minimize operating costs associated with annual inspections of multiple plants. With the new central plant in place, Langara is projected to save $658,667 on energy costs by 2030.
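For the curious reader, the quoted figures can be cross-checked with a little arithmetic. This sketch uses only the numbers above; the even 2023–2030 accrual window is an assumption for illustration, not something the document states:

```python
# Back-of-envelope check of the figures quoted above.
fuel_savings_gj = 7840              # projected annual fuel savings, GJ
savings_share = 0.25                # quoted as ~25% of current fuel usage
current_usage_gj = fuel_savings_gj / savings_share
print(current_usage_gj)             # ~31,360 GJ per year of implied current usage

cumulative_savings = 658667         # projected dollar savings by 2030
years = 2030 - 2023 + 1             # assumed even accrual over 2023-2030
print(cumulative_savings / years)   # roughly $82k per year on average
```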
2023-03-25 17:33:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3095698058605194, "perplexity": 3516.037087616986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00103.warc.gz"}
https://asia.vtmarkets.com/analysis/stocks-were-volatile-thursday-as-10-year-yields-spiked/
# Daily market analysis

### Stocks were volatile Thursday, as 10-year yields spiked

###### October 21, 2022

U.S. equities traded lower over the course of yesterday’s trading. The Dow Jones Industrial Average lost 0.3% to close at 30333.59. The S&P 500 lost 0.8% to close at 3665.78. The tech-heavy Nasdaq Composite lost 0.61% to close at 10614.84. Equities rose in the first half of the American trading session, as the initial jobless claims figure came in below analysts’ expectations at 214K. Equities retreated during the second half of the American trading session as short-term treasury yields spiked to multi-year highs. The benchmark U.S. 10-year treasury yield has topped 4.2% and is currently trading at 4.229% — the benchmark’s highest level since 2008.

Looking ahead, the Conference Board’s Leading Economic Indicator (LEI) index has signalled a worrying future. The Conference Board’s index fell 0.4% from the month before and is off 2.8% for the six-month period. Between the LEI reading and the Fed’s aggressive rate hikes, the camp arguing against further rate hikes is growing by the day.

AT&T earnings came in above analyst expectations. AT&T’s EPS came in at $0.65, beating estimates by 10.78%. More importantly, the telecommunications giant still managed to improve not only its top line but also its bottom line. IBM earnings also came in better than analyst estimates. The company reported Q3 EPS of $1.81, a 0.7% upside surprise. Revenue for Q3 also came in above market expectations at $14.11 billion.

Main Pairs Movement

The Dollar Index lost 0.05% over the course of yesterday’s trading. The Greenback lost steam during the Asian and European trading sessions, but quickly recouped its losses during the American trading session as the 10-year treasury yield broke above 4.2%.

EURUSD gained 0.09% over the course of yesterday’s trading. Germany’s September PPI surged to 45.8%, year over year, much higher than anticipated. Inflation will continue to act as a headwind for the shared currency.

GBPUSD gained 0.14% over the course of yesterday’s trading. British Prime Minister Liz Truss’s resignation from office roiled markets. The British Pound fell against the Dollar late in yesterday’s session as yields rose.

Gold gained 0.02% over the course of yesterday’s trading. The non-yielding metal was able to hold on to earlier gains despite strong demand for the Greenback.

Technical Analysis

EURUSD (4-Hour Chart)

EURUSD erased a major part of yesterday’s losses and was trading above the 0.9800 level as of writing. Although the overnight receding demand for the US Dollar underpinned the European currency, the market mood remains sour. Global stocks remain on the back foot, struggling to leave the red, even as US government bond yields maintain upward pressure. The sensitive 2-year Treasury bond yield surged to a multi-year high of 4.61% on Thursday, while the 10-year note yields 4.12%, unchanged on a daily basis. Apart from this, UK political noise also attracts some buying for EURUSD. The British Pound is up amid Prime Minister Liz Truss’s announcement of her decision to leave the government after the failed attempt to bring financial stability. Meanwhile, increasing speculation of a potential recession in the region — which looks propped up by dwindling sentiment gauges as well as an incipient slowdown in some fundamentals — adds to the sour sentiment around the euro.
From the technical perspective, the RSI indicator reads 49 as of writing, suggesting that the pair has no clear direction and should hover in a range from 0.9710 to 0.9870. As for the Bollinger Bands, the pair is now pricing in the lower area, and the gap between the upper and lower bands is narrowing. We think the pair will move sideways and trade around the 20-period moving average at 0.9800.

Resistance: 0.9870, 0.9920, 1.0000, 1.0190

Support: 0.9765, 0.9718, 0.9665, 0.9550

GBPUSD (4-Hour Chart)

The GBP/USD pair advanced on Thursday, regaining upside momentum and touching a daily high above the 1.1320 level following Prime Minister Liz Truss's resignation after only 45 days. At the time of writing, the cable stays in positive territory with a 0.15% gain for the day. The US dollar witnessed some selling despite the higher US Treasury bond yields and expectations of the Fed hiking rates by a larger size at November's meeting, as US Initial Jobless Claims for last week dropped unexpectedly to 214K, less than market expectations. For the British pound, the headline news that Liz Truss resigned as Prime Minister of the UK provided support to the GBP/USD pair, as it puts an end to the political crisis that led to the recent chaos in the financial markets. Meanwhile, a new PM will emerge from a Conservative Party leadership election next week. However, growing worries about a deeper UK economic downturn could limit the upside for the cable.

For the technical aspect, the RSI indicator reads 47 as of writing, suggesting the pair faces slightly bearish pressure as the RSI stays below the mid-line. As for the Bollinger Bands, the price remained under pressure and dropped toward the lower band, so the downside traction should persist. In conclusion, we think the market will be bearish as the pair heads to test the 1.1186 support. The downward trajectory could extend toward the 1.0968 mark if the pair breaks below that support.

Resistance: 1.1390, 1.1476, 1.1570

Support: 1.1186, 1.0968, 1.0392

XAUUSD (4-Hour Chart)

XAUUSD bounced off a new three-week low touched earlier on Thursday and sticks to modest gains as the US dollar edges lower and trims part of the previous day's strong gains. Gold was priced around the $1,630 mark as of writing. Furthermore, growing worries about a deeper global economic downturn and the prevalent cautious market mood act as tailwinds for the safe-haven yellow metal. However, the prospect of more aggressive policy tightening by major central banks keeps a lid on any meaningful upside for non-yielding gold. The markets have been pricing in jumbo rate hikes by the European Central Bank and the Bank of England, and the Federal Reserve is also expected to stick to its aggressive rate-hiking cycle. The CME's FedWatch tool indicates a nearly 100% chance of a fourth successive supersized 75 bps rate increase at the next FOMC policy meeting in November. This, in turn, pushed the yield on the rate-sensitive 2-year US government bond to a 15-year peak and the benchmark 10-year Treasury note to its highest level since the 2008 financial crisis. Elevated US bond yields should keep the US dollar from any meaningful drop, which weighs on gold.

From the technical perspective, the RSI indicator reads 37 as of writing, suggesting XAUUSD is under heavy selling pressure. As for the Bollinger Bands, gold is priced in the lower area and the gap between the upper and lower bands shows no clear tendency.
We think the path of least resistance for gold is to the downside.

Resistance: 1653, 1668, 1681

Support: 1622, 1615, 1600
2023-01-29 13:07:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22296909987926483, "perplexity": 7851.2649013026985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00861.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/3/lesson/3.2.3/problem/3-125
### Home > A2C > Chapter 3 > Lesson 3.2.3 > Problem 3-125

3-125. Decide whether each sequence below is arithmetic, geometric, or neither. Then find equations to represent each sequence, if possible.

1. $10.3, 11.5, 12.7, …$

   Notice that the increase is a constant number ($1.2$ each time). The sequence is arithmetic.

   $t\left(n\right) = 9.1 + 1.2n$

2. $\frac { 1 } { 2 }$, $\frac { 1 } { 4 }$, $\frac { 1 } { 8 }$, …

   Notice the common ratio.

3. $1, 4, 9, …$

   Notice that each number is a perfect square.

4. $1.1, 1.21, 1.331, …$

   Notice that the increase is not constant, but the ratio between consecutive terms is. Remember what $11$ squared is. The sequence is geometric.

   $t\left(n\right) = 1.1^{n}$
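A quick way to sanity-check such classifications is to test for a constant difference, then a constant ratio. A minimal Python sketch (the function name and tolerance are my own choices, not part of the lesson):

def classify(seq, tol=1e-9):
    """Classify a numeric sequence as arithmetic, geometric, or neither."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if all(abs(d - diffs[0]) < tol for d in diffs):
        return "arithmetic"
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(ratios) == len(seq) - 1 and all(abs(r - ratios[0]) < tol for r in ratios):
        return "geometric"
    return "neither"

for s in ([10.3, 11.5, 12.7], [1/2, 1/4, 1/8], [1, 4, 9], [1.1, 1.21, 1.331]):
    print(s, "->", classify(s))
# prints: arithmetic, geometric, neither, geometric, matching the four parts above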
2021-09-17 07:35:26
{"extraction_info": {"found_math": true, "script_math_tex": 9, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9012022018432617, "perplexity": 4354.460207432251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055601.25/warc/CC-MAIN-20210917055515-20210917085515-00016.warc.gz"}
https://math.stackexchange.com/questions/936006/external-angle-bisectors-of-a-triangle
# External angle bisectors of a triangle

The exterior angle bisectors of $\triangle ABC$ at vertices $B$ and $C$ intersect at $D$. Find $\angle BDC$ if $\angle BAC=40^{\circ}$.

I cannot visualize this problem... If I draw a triangle and bisect the exterior angles, they never meet at a common point. Is this some sort of typo?

• Well, that or you're working in spherical geometry. – Dan Uznanski Sep 18 '14 at 4:07
• The angle bisectors are probably to be thought of as lines through $B$ and $C$, not rays. – Blue Sep 18 '14 at 4:08
• @Blue though in that case there would be no meaningful distinction between external and internal: both describe the same line. – Dan Uznanski Sep 18 '14 at 4:10
• @DanUznanski: Internal angle bisector lines pass through the interior of the triangle; exterior angle bisector lines ---that is, lines bisecting the exterior angles--- do not. The interior bisector at a vertex is in fact perpendicular to the external bisector at that vertex. Definitely not the same line. – Blue Sep 18 '14 at 4:12
• Oh, I see, it bisects the angle between the usual part of one edge and the extension of the other; for some reason I thought we were talking about both extensions. Silly me! – Dan Uznanski Sep 18 '14 at 4:20

Hope the following sketch can help.

• I see, so my picture was drawn with the rays pointing in the opposite direction. Wouldn't angle ACD be $90^{\circ}$ if you're bisecting a line? – adam Sep 18 '14 at 4:22
• @adam It would be better if you consider external bisectors as lines instead of rays. – Mick Sep 18 '14 at 4:28
• So the key to solving this problem is to know that interior angle bisectors are perpendicular to exterior angle bisectors. So I just draw an interior angle bisector and that will be perpendicular to line AC. Will this work? – adam Sep 18 '14 at 4:30
• That will work. Another approach (it is simpler, I think) is to apply the combination of "angle sum of a triangle" + "adjacent angles on a straight line". – Mick Sep 18 '14 at 4:35
• Let's call $C=\angle ACB$, $B=\angle ABC$. Clearly $\angle CBD=90^{\circ}-B/2$ and $\angle BCD=90^{\circ}-C/2$, therefore $\angle CDB=180^{\circ}-\angle CBD-\angle BCD=(B+C)/2=(180^{\circ}-A)/2=140^{\circ}/2=70^{\circ}$, so I was not kidding about the one-line answer. – user175968 Sep 18 '14 at 4:42

The exterior angle bisectors are just orthogonal to the interior angle bisectors, hence $$\widehat{BDC}=\pi-\widehat{BIC}=\frac{\widehat{ABC}+\widehat{ACB}}{2}=\frac{\pi-\widehat{BAC}}{2}.$$ This gives that if $\widehat{BAC}=40^\circ$, then $\widehat{BDC}=70^\circ$.

That happens because you drew the bisector on the smaller side; for example, if $b>c$, you must extend $AC$ to get the exterior angle. This comes from the relation $AB/AC = BE/EC$.
2019-12-13 17:06:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6979181170463562, "perplexity": 542.8262881522793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540564599.32/warc/CC-MAIN-20191213150805-20191213174805-00531.warc.gz"}
http://clay6.com/qa/35128/a-hydrocarbon-contains-85-7-c-if-42-mg-of-the-compound-contains-3-01-times-
# A hydrocarbon contains $85.7\%$ C. If $42\;mg$ of the compound contains $3.01 \times 10^{20}$ molecules, the molecular formula of the compound is

$(a)\;C_6 H_{14} \\ (b)\;C_6H_{10} \\(c)\;C_6H_6 \\(d)\;C_6H_{12}$
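A worked check (my own working, not part of the original item; it uses Avogadro's number $N_A = 6.022\times 10^{23}\ \text{mol}^{-1}$):

$$n = \frac{3.01\times 10^{20}}{6.022\times 10^{23}\ \text{mol}^{-1}} = 5\times 10^{-4}\ \text{mol}, \qquad M = \frac{42\times 10^{-3}\ \text{g}}{5\times 10^{-4}\ \text{mol}} = 84\ \text{g/mol}.$$

With $85.7\%$ carbon the empirical formula is $CH_2$ (since $12/14 \approx 85.7\%$), and $(CH_2)_n$ with molar mass $14n = 84$ gives $n = 6$, i.e. $C_6H_{12}$, option (d).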
2020-06-04 03:28:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6029971837997437, "perplexity": 1425.7036911263392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00536.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?jrnid=jmag&wshow=issue&year=2017&series=0&volume=13&issue=3&option_lang=eng
Zh. Mat. Fiz. Anal. Geom., 2017, Volume 13, Issue 3:

- On $m$-sectorial extensions of sectorial operators, by Yury Arlinskiĭ and Andrey Popov, p. 205
- Notes on Ricci solitons in $f$-cosymplectic manifolds, by Xiaomin Chen, p. 242
- Approximate solving of the third boundary value problems for Helmholtz equations in the plane with parallel cuts, by V. D. Dushkin, p. 254
- On eigenvalue distribution of random matrices of Ihara zeta function of large random graphs, by O. Khorunzhiy, p. 268
- Integral conditions for convergence of solutions of non-linear Robin's problem in strongly perforated domain, by E. Ya. Khruslov, L. O. Khilkova, and M. V. Goncharenko, p. 283
- Chronicle: The ninety-fifth birthday of Vladimir Aleksandrovich Marchenko, p. 314
- Chronicle: The eightieth birthday of Leonid Andreevich Pastur, p. 315
2018-11-18 19:56:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18260842561721802, "perplexity": 13446.547936970961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744561.78/warc/CC-MAIN-20181118180446-20181118202446-00412.warc.gz"}
https://share.cocalc.com/share/a1fe38eebe3ab03e373670ff2545b6e200e4d214/Fall2020_Lecture_Books/cs410_2020_lec01n01.ipynb?viewer=share
CoCalc Public Files: Fall2020_Lecture_Books / cs410_2020_lec01n01.ipynb

Authors: Ross Beveridge, V K, Yongxin Liu

Description: First CS410 SageMath Notebook for Fall 2020

Compute Environment: Ubuntu 20.04 (Default)

### First 410 Notebook Illustrating SageMath on CoCalc

This is a tiny step down the path we will take this Fall using SageMath through Jupyter - and hosted by CoCalc - to illustrate key concepts during the course of CS410. Today, the very basics of formulas and linear algebra. Ross Beveridge, August 25, 2020

In [33]:
%display latex
latex.matrix_delimiters(left='|', right='|')
latex.vector_delimiters(left='[', right=']')

To begin, you can designate variables that will be treated as symbolic as opposed to numeric values.

In [34]:
var('a','b','c','d','x','y')

$\left(a, b, c, d, x, y\right)$

Next you can create vectors and matrices which are symbolic (not numeric - yet).

In [39]:
mm = matrix([[a,b],[c,d]])
uu = vector([x,y])
mm,uu

$\left(\left|\begin{array}{rr} a & b \\ c & d \end{array}\right|, \left[x,\,y\right]\right)$

But notice all we have done so far is to implicitly create a sequence of the two objects, the matrix and the vector, and print them back in the basic Python/Jupyter/SageMath Read-Eval-Print Loop. What if we want to create better-formed equations?

In [40]:
pretty_print("m = ", mm, " and", " u = ", uu)

$\verb|m|\phantom{\verb!x!}\verb|=| \left|\begin{array}{rr} a & b \\ c & d \end{array}\right| \phantom{\verb!xx!}\verb|and| \phantom{\verb!xx!}\verb|u|\phantom{\verb!x!}\verb|=| \left[x,\,y\right]$

The next complication is that a matrix times a vector becomes somewhat of an issue deep in the particulars of any given language. Conceptually, it may at times be simplest to realize that a vector is - when doing matrix multiplication - just a one-dimensional matrix. So below we arrive at the simple case of a 2x2 matrix times a 2x1 vector/matrix.

In [37]:
um = matrix(uu).transpose()
pretty_print(LatexExpr("M = "), mm, ", ", LatexExpr("U = "), um)

$M = \left|\begin{array}{rr} a & b \\ c & d \end{array}\right| \verb|,| U = \left|\begin{array}{r} x \\ y \end{array}\right|$

In [38]:
vm = mm * um
pretty_print(vm, " = ", mm, "*", um)

$\left|\begin{array}{r} a x + b y \\ c x + d y \end{array}\right| \phantom{\verb!x!}\verb|=| \left|\begin{array}{rr} a & b \\ c & d \end{array}\right| \verb|*| \left|\begin{array}{r} x \\ y \end{array}\right|$

And while it may not seem like much, notice that SageMath carried out the symbolic computation of multiplying a matrix and a column vector. Put simply, anyone spending much of their life working with linear algebra in contexts such as computer graphics should master a tool such as SageMath (Maple, Mathematica, ...). The reason is that true understanding comes from personal experience working back and forth between multiple ways of conceptualizing a problem, and hence any tool that does the routine part for you is helpful.
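A natural follow-on (my addition, not a cell from the lecture notebook): since the entries are symbolic, Sage can also invert $M$ and recover $U$ from $V = MU$, provided $\det M = ad - bc \neq 0$.

# Sage: undo the multiplication symbolically (assumes mm, um, vm from the cells above)
mm_inv = mm.inverse()                     # valid whenever a*d - b*c != 0
recovered = (mm_inv * vm).simplify_full() # simplifies back to the original column
pretty_print(recovered)                   # prints the column vector with entries x and y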
2020-09-24 11:27:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32192280888557434, "perplexity": 2331.3349005944788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400217623.41/warc/CC-MAIN-20200924100829-20200924130829-00517.warc.gz"}
https://mathematica.stackexchange.com/questions/132095/using-output-of-dsolve-ndsolve-as-a-function-with-variable-constants
# Using output of DSolve/NDSolve as a function with “variable” constants I've already looked through the questions similar to this one, but I couldn't figure out how to modify them so that they work, and I suspect that there are deeper issues than just the syntax. Here is my code attempt: fFunction[a_, b_, c_, z_] = alpha[z] /. First@DSolve[{c* alpha[z]*(alpha[z]^2 + alpha[z]) / ((alpha[z] + 1)^2 + b^2) - 1 == alpha'[z], alpha[0] == a}, alpha[z], z] Basically there are four variables, where a,b,c are constants related to the physical apparatus which I'll put in later, and then alpha, which is the main variable I want to consider. Basically I want to solve the differential equation for alpha, which I suppose is actually a function of four variables. I understand how to do it for the one variable case, but this multivariable case, which I believe shouldn't be substantially harder, is eluding me. Is the problem that Mathematica is trying to evaluate the inside first, but can't? • The difficulties you may be encountering are due primarily to the fact that DSolve is not returning an explicit solution for alpha[z]. This has nothing to do with the three constants. I would add that you should be careful when making fFunction an explicit function of z. – bbgodfrey Nov 26 '16 at 2:30 • If I change it to NDSolve like this: fFunction[a_, b_] = alpha[z] /. First@NDSolve[{alpha[ z]*(alpha[z]^2 + alpha[z]) / ((alpha[z] + 1)^2 + b^2) - 1 == alpha'[z], alpha[0] == a}, alpha[z], {z, 0.1, 10}], it has a different error where it says that a is not a number – Jensen Lo Nov 26 '16 at 2:43 This seems like a good time to use ParametricNDSolveValue: zmax = 2;
2020-08-05 05:27:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48122096061706543, "perplexity": 765.9899829320129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735909.19/warc/CC-MAIN-20200805035535-20200805065535-00186.warc.gz"}
https://www.physicsforums.com/threads/what-is-the-natural-state-of-plutonium.44784/
# What is the natural state of plutonium?

1. Sep 26, 2004

### billnyethescienceguy

I'm bad at science. Can anyone help? What is the natural state of plutonium?

2. Sep 26, 2004

### The Bob

This didn't make me want to help, for a start. You are asking a question, not making us feel sorry for you. I was told off for doing that, so are you.

Plutonium is a transuranic element. It exists in small traces in nature with uranium ores. Most of it is manufactured in nuclear reactors by beta-particle decay starting from uranium-239. I assume the natural state you mention is that it is found naturally as an unstable radioactive element in uranium ores. Larger quantities are found in nuclear reactors after beta-particle decay, which leaves plutonium in its elemental form, I believe. Most likely I am wrong, but I hope what you want is in here.

The Bob (2004 ©)
2017-04-28 12:22:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8016605973243713, "perplexity": 2833.339902282247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122955.76/warc/CC-MAIN-20170423031202-00427-ip-10-145-167-34.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/79149/commutative-algebra
## commutative algebra [closed]

Let $(R,m)$ be a Noetherian local ring of dimension $d$, and let $c\in R$ be an element that lies in none of the $p_i$ except possibly $m$, where the $p_i\in \operatorname{Ass}(0)$. Can we claim that $c$ belongs to $m$?

- Please choose a descriptive title, and use the preview or editing options. – Douglas Zare Oct 26 2011 at 9:53
2013-05-20 20:52:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7045542001724243, "perplexity": 468.8531491612696}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699238089/warc/CC-MAIN-20130516101358-00067-ip-10-60-113-184.ec2.internal.warc.gz"}
https://scribesoftimbuktu.com/find-the-mean-arithmetic-98-168-182/
# Find the Mean (Arithmetic) 98, 168, 182

The mean of a set of numbers is the sum divided by the number of terms.

$$\frac{98+168+182}{3}$$

Simplify the numerator.

$$\frac{266+182}{3}=\frac{448}{3}\approx 149.33$$
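A one-line check using Python's standard library (my addition; the page itself stops at the simplification step):

from statistics import mean
print(mean([98, 168, 182]))   # 149.333..., i.e. 448/3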
2022-09-29 22:00:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9397965669631958, "perplexity": 517.8249190914938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00108.warc.gz"}
https://www.physicsforums.com/threads/walking-equation-help.70484/
# Walking Equation Help

1. Apr 7, 2005

### Kura_kai

I am new here. I am a Java programmer and I need a physics equation. I need an equation that allows a robot to calculate how to get from point A to point B the best way. I have been thinking about it but keep drawing blanks. Any help?

2. Apr 7, 2005

### Davorak

You are not giving us much to go on. Is it a straight line? Does the robot have a map, or does it need to observe its surroundings? Graph theory has many common algorithms to determine the lowest-cost path. Is this what you are looking for?

3. Apr 8, 2005

### Kura_kai

It is going in a straight path; the distance is going to be about 50 feet or so.

4. Apr 8, 2005

### ZapperZ

Staff Emeritus

... and the simple equation of a straight line like y = mx + b just doesn't cut it?

Zz.

5. Apr 8, 2005

### Gokul43201

Staff Emeritus

If the co-ordinates of the 2 points are A(x1,y1) and B(x2,y2), the equation of the line passing through them is:

$$y - y_1 = \frac {y_1 - y_2}{x_1 - x_2} \cdot (x - x_1)$$

6. Apr 8, 2005

### Davorak

Please tell me there is more to it than this. Are there bumps on the ground, a hill in the way, or obstacles of some sort? It is not flat ground for 50 feet, is it?

7. Apr 8, 2005

### dextercioby

Another question: is this a geodesic or a brachistochrone problem...?

Daniel.

8. Apr 8, 2005

### ZapperZ

Staff Emeritus

I vote for the speed bump....

Zz.

9. Apr 14, 2005

### Kura_kai

It is a straight path for 50 feet. The equation is meant to figure out how big the steps have to be so the robot doesn't fall forward. Other than that, there is not going to be a sudden unexpected force acting on the robot.

10. Apr 14, 2005

### ramollari

That's too ambitious, Kura_kai, to be handled by a simple Java program. There are two things you can consider to drastically improve your robot design: 1. use the reactive agent architecture instead of symbolic reasoning (or at least a hybrid), 2. incorporate some form of learning into your robot, so that the latter itself decides which are the most optimal movements and step sizes. Something like reinforcement learning. Good luck!

11. Apr 14, 2005

### ramollari

Last edited by a moderator: Apr 21, 2017
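Picking up Gokul43201's line equation and the step-size question in post 9, a minimal Python sketch (the function name, step length, and coordinates are illustrative assumptions, not from the thread):

import math

def waypoints(ax, ay, bx, by, step):
    """Evenly spaced points along the straight segment from A to B,
    spaced at most `step` apart, ending exactly at B."""
    dist = math.hypot(bx - ax, by - ay)
    n = max(1, math.ceil(dist / step))
    return [(ax + (bx - ax) * i / n, ay + (by - ay) * i / n)
            for i in range(n + 1)]

# 50 ft straight path, steps of at most 1.5 ft
path = waypoints(0.0, 0.0, 50.0, 0.0, 1.5)
print(len(path) - 1, "steps of", 50.0 / (len(path) - 1), "ft each")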
2018-10-21 22:48:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2905066907405853, "perplexity": 1556.8452383488295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514437.69/warc/CC-MAIN-20181021224001-20181022005501-00504.warc.gz"}
http://tex.stackexchange.com/questions/149341/a-new-environment-in-the-memoir-class
# A new environment in the memoir class

I am writing my thesis with the memoir class and want to include an Acknowledgments section in the same style as the standard abstract from the memoir class. How can this be done?

You can create a new environment acknowledgments that behaves like abstract but prints Acknowledgments:

\newenvironment{acknowledgments}
{\renewcommand{\abstractname}{Acknowledgments}\abstract}
{\endabstract}

Example:

\documentclass{memoir}
\newenvironment{acknowledgments}
{\renewcommand{\abstractname}{Acknowledgments}\abstract}
{\endabstract}
\begin{document}
\begin{abstract}
Some Text
\end{abstract}
\bigskip
\begin{acknowledgments}
Some Text
\end{acknowledgments}
\end{document}

Output: (screenshot of the typeset result omitted)
2015-04-26 13:32:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8523280620574951, "perplexity": 1961.0064821567364}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654467.42/warc/CC-MAIN-20150417045734-00285-ip-10-235-10-82.ec2.internal.warc.gz"}
https://bt.gateoverflow.in/1/gate2018-1
Consider an unfair coin. The probability of getting heads is $0.6$. If you toss this coin twice, what is the probability that the first or the second toss is heads?

1. $0.56$
2. $0.64$
3. $0.84$
4. $0.96$
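One way to see the answer (my working, not part of the original exam item): take the complement of "both tosses are tails":

$$P(\text{first or second is heads}) = 1 - P(\text{both tails}) = 1 - (1-0.6)^2 = 1 - 0.16 = 0.84,$$

so option 3 is correct.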
2022-06-25 13:58:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9722959399223328, "perplexity": 80.5124689763467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103035636.10/warc/CC-MAIN-20220625125944-20220625155944-00455.warc.gz"}
https://physics.stackexchange.com/questions/146846/scalable-tool-engine-for-simulating-the-universe?noredirect=1
# Scalable tool (engine) for simulating the universe Does there exist a large scale scalable tool (engine) for simulating the universe that incorporates both quantum mechanics and cosmology, i.e. micro & macro scales? (It would be best if this tool simulated the entire universe using quantum mechanics only, without any corrections to the model that come from the macro scale) • No, and it would be darn near impossible to do so for many reasons (computational requirements being key). You might be interested in this arxiv paper Nov 14 '14 at 21:43 • Also, why do you expect the entire universe can be modeled by quantum mechanics alone? Nov 14 '14 at 21:47 • According to Douglas Adams, the Earth was created as the computer to answer just one such question (the Ultimate Question of Life, the Universe, and Everything). I suppose that by extrapolation, the universe is actually a giant simulator of the universe - and you would need something that big to do a half decent job... Nov 14 '14 at 21:47 • Somebody already started the analog simulator. Alas, we seem to be inside it! Nov 14 '14 at 21:48 • @tesgoe: No, we do not have the capability to do what you think can be done. The trillion body problem was numerically modeled about two years ago (a hydrodynamic evolution of the universe). The number of particles in the universe exceeds $10^9$ by about 71 orders of magnitude. What you propose is beyond absurd at this point in time. Nov 15 '14 at 1:24
2021-10-18 03:44:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3324289619922638, "perplexity": 673.4293749789326}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00167.warc.gz"}
https://email.esm.psu.edu/pipermail/macosx-tex/2006-March/020947.html
# [OS X TeX] increased memory allocation

Miriam Belmaker belmaker at fas.harvard.edu
Sat Mar 4 12:55:25 EST 2006

Hi,

I can run pdflatex with no problem, and then when I try to run bibtex on the aux file, I see nothing (this happens whether I run it individually or via the macro in TeXShop)... I can then open the bbl file, which has the bibliography up to the C's, about 80 refs of 416, without the \end{thebibliography}. If I rerun pdflatex I get the file with all ?'s instead of the references (after the C's, of course). As I mentioned before, it ran fine under Fink, so there is no problem in my bib file (I also tried this on my computer on a copy with no accents etc., and it still wouldn't work). If I break up my file into smaller files, it will compile the smaller files up to 80 KB. It is not a single C reference, because I can rearrange the smaller parts and it will always compile the first two. The log does not mention any of the uncompiled refs. Is there anything specific to look for in the log that may help? When I tried to edit /usr/local/teTeX/share/texmf/web2c/texmf.cnf and recreate the formats, as I was advised, it requested permission (???).

Miriam

On Mar 4, 2006, at 12:14 PM, Herbert Schulz wrote:

> On Mar 4, 2006, at 10:46 AM, Miriam Belmaker wrote:
>
>> Hi All,
>>
>> I am trying to complete my thesis (due next week) and am really in
>> distress.. my bib file will not compile. I am working with TeXShop
>> and i-Installer TeX on Tiger. My file is rather large, and
>> the file was properly typeset on another computer with Fink with a
>> memory allocation of 200000. I presume this is the problem. How
>> do I increase the memory allocation of my TeX? I got this info,
>> "Edit /usr/local/teTeX/share/texmf/web2c/texmf.cnf and recreate
>> the formats", but I am not Unix savvy and I need step-by-step
>> instructions using the terminal... I could also move to OzTeX
>> and Alphatk (I heard their memory allocations are better), but
>> OzTeX can't find my style files (using TeXShop, I just put them
>> in a single folder with my bib and tex files).
>>
>> Any help in this matter would be very much appreciated,
>>
>> Miriam
>
> Howdy,
>
> Sorry to hear about your problem and I certainly can understand.
>
> Can you give us some information about the error message(s) you are
> getting and some information from the .log file? Also, is the
> problem occurring when you typeset the document and/or when you run
> bibtex, etc?
>
> Good Luck,
>
> Herb Schulz
> (herbs at wideopenwest.com)
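For the step-by-step terminal request, a hedged sketch (the file path is from the thread, and the sudo prompt is what triggered the permission request Miriam saw; which capacity variable actually needs raising depends on the error, so main_memory below is only an example, and back the file up first):

sudo cp /usr/local/teTeX/share/texmf/web2c/texmf.cnf /usr/local/teTeX/share/texmf/web2c/texmf.cnf.bak
sudo nano /usr/local/teTeX/share/texmf/web2c/texmf.cnf
# raise the relevant limit, e.g. change a line such as:  main_memory = 1500000
sudo fmtutil-sys --all    # recreate the formats (teTeX 3; "sudo texconfig init" is an alternative)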
2020-07-06 10:32:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9397741556167603, "perplexity": 10166.668498396592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890157.10/warc/CC-MAIN-20200706073443-20200706103443-00228.warc.gz"}
https://blender.stackexchange.com/questions/35446/cannot-scale-or-rotate-objects
# Cannot scale or rotate objects

I choose the scale or rotate option that's on top of the render timeline, but every time I left-click, the red circle with the 4 black lines sticking out (the 3D cursor) is selected, and not the scaling or rotating option. Is there any way I can fix this?

EDIT: I have reported this to Blender as a bug. Can someone who has enough reputation close this question as not constructive (i.e., this question is something that needs to be discussed, not answered)? Thank you for all your help.

EDIT 2: This is specific to my computer; please see my solution below.

• Could you upload a screenshot? – gladys Aug 10 '15 at 17:04
• Video uploaded. – Yubin Lee Aug 10 '15 at 17:36
• You should not just click the LMB, but click and drag if using the manipulator (either scale or rotate) (it's not clear enough from the video how exactly you do that). – Mr Zak Aug 10 '15 at 17:42
• @MrZak What is the manipulator? And I did click and drag. – Yubin Lee Aug 10 '15 at 17:50
• You have to click the LMB and drag at the same time (the LMB must stay pressed while dragging). – gladys Aug 10 '15 at 18:13
2019-11-19 06:38:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39836931228637695, "perplexity": 2017.7232368257153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00333.warc.gz"}
http://mathoverflow.net/questions/77589/structural-definition-of-product-in-set-theory?sort=newest
# Structural definition of “product” in set theory At first sight there is no abstract (= structural) definition of "product" in set theory. E.g. the Cartesian product of sets $A$ and $B$ is defined as the set of all ordered pairs $(x,y)$, $x \in A$, $y \in B$, and thus depends on the definition of "ordered pair" which is notoriously arbitrary. I wonder if the following can count as an abstract (= structural) definition of "product" in the context of set theory. Consider a set $S$ with two equivalence relations $\sim_1$ and $\sim_2$. Definition: $(S,\sim_1,\sim_2)$ is a product iff $$(\forall x \in S)(\forall y \in S)(\exists ! z\in S) x \sim_1 z \wedge y \sim_2 z$$ $$(\forall z\in S)(\exists ! x\in S)(\exists ! y\in S) x \sim_1 z \wedge y \sim_2 z$$ If $(S,\sim_1,\sim_2)$ is a product • $S$ can be understood as $S/_{\sim_1} \times S/_{\sim_2}$ • the relations $\sim_i$ can be read as has the same $i$-th component • the canonical projection map $\pi_i (x) = [x]_{\sim_i}$ can be understood as the $i$-th component Question: Isn't this definition somehow on par - concerning structuralness - with the definition of category theory? If so, why is it so rarely found, or rather: where can I find it (in which textbook, e.g.)? Considering the product of a set with itself, i.e. $S = X \times X$, one relation $\sim$ does suffice, which does not have to be an equivalence relation, not even symmetric, but from which two equivalence relations can be defined: $$x \sim_1 y :\equiv (\exists z) x \sim z \wedge y \sim z$$ $$x \sim_2 y :\equiv (\exists z) z \sim x \wedge z \sim y$$ If $(S,\sim_1,\sim_2)$ is a product the relation $x \sim y$ can be read as the first component of $x$ equals the second component of $y$. Question: Are there conditions on a relation $\sim$ such that $\sim_1$, $\sim_2$ as defined above make $(S,\sim_1,\sim_2)$ automatically a product? - In your definition of a product, did you mean $\forall x \in S \forall y \in S \exists! z \in S \ldots$ instead of the unbounded quantification $\forall x \forall y \exists! z \ldots$? – Andrej Bauer Oct 9 '11 at 7:58 The definition of a product should involve three sets, not just one. If I tell you that $P$ is a product of two sets, you cannot always recover from $P$ the two sets (think of the case when one of the sets is empty). Your definition is unlikely going to work when one of the components is an empty set. How would we get $\emptyset \times \mathbb{N}$, for example? If you take $S = \emptyset$, as you presumably should, then you cannot recover $\mathbb{N}$. – Andrej Bauer Oct 9 '11 at 8:01 I would regard the `definition' of ordered pair in ZFC (or similarly) merely as a proof of existence of ordered pairs/products. Just as there are many constructions of the natural numbers, integers, rationals, real and complex numbers. Really, what matters is the relations between the objects. – George Lowther Oct 9 '11 at 9:52 The product of sets, together with its projection maps to the factors, could be characterized by a suitable universal mapping property. Then that construction is unique up to suitable isomorphism as all objects satisfying universal properties are. This can let you relax a little about the fact that there may be more than one way to make the construction of a product of sets (and its projections to the factors). – KConrad Oct 9 '11 at 13:08 @Hans Stricker: Right, so I partly agree with you there. Except, rather than adding a pair of relations, I think adding a 'Pair' keyword is neater. 
This is a 2-ary function symbol satisfying the axiom $$\forall x_1,x_2,y_1,y_2\;\left(({\rm Pair}(x_1,x_2)={\rm Pair}(y_1,y_2))\rightarrow(x_1=y_1 \wedge x_2=y_2)\right).$$ Or, add another pair $\pi_1,\pi_2$ of 1-ary functions satisfying $$\forall x,y\;\left(\pi_1{\rm Pair}(x,y)=x\wedge\pi_2{\rm Pair}(x,y)=y\right).$$ In any case, I don't think that ZFC-style set theory on its own is really designed to be a handy framework for actually doing maths. – George Lowther Oct 9 '11 at 19:30

How do you define an equivalence relation in ZF without referring to products or ordered pairs? In case you want to define products via the usual universal property: there you have to use maps, which probably also cannot be defined without ordered pairs in ZF. Anyway, if you are interested in a category-theoretic foundation of set theory, and therefore of all mathematics, you might be interested in Lawvere's Elementary Theory of the Category of Sets.

- @Martin: Is this true: do I really need ordered pairs to define an equivalence relation? Don't unordered pairs suffice? – Hans Stricker Oct 10 '11 at 15:51
- Please write down your definition of an equivalence relation which does not use ordered pairs. – Martin Brandenburg Oct 10 '11 at 18:02
- @Martin - Because of symmetry, you should actually be able to: Define an equivalence relation on $S$ as a subset $R$ of the collection of subsets of $S$ whose cardinality is at most 2, satisfying a) For all $x \in S$, $\lbrace x \rbrace \in R$. b) For all $x, y, z \in S$, if $\lbrace x, y \rbrace \in R$ and $\lbrace y, z \rbrace \in R$, then $\lbrace x, z \rbrace \in R$. This should do the trick. – Simon Rose Oct 10 '11 at 20:12
- That's what I had in mind. Is there a flaw? – Hans Stricker Oct 10 '11 at 22:42
- Another natural way is to represent an equivalence relation by the partition it induces (i.e., an equivalence relation on $S$ is a set of nonempty disjoint sets whose union is $S$). – Emil Jeřábek Oct 27 '11 at 17:30

By the first part of your definition, to each couple $x,\ y$ of elements of $S$ there is (uniquely) associated an element $z$, again of $S$; so essentially $S \times S \subset S$ (there exists a subset of $S$ in bijection with the usual $S\times S$), and by the second part this inclusion is an equality. So your definition doesn't work for finite sets, for example.
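Since both axioms in the question quantify only over $S$, they can be tested mechanically on a finite set. A minimal Python sketch (the function name and the encoding of the relations as predicates are my own choices); note that it returns False even for the natural 2-by-2 example, which is one way to see the objection raised in the answers:

def is_product(S, sim1, sim2):
    """Check the question's two axioms verbatim on a finite set S.
    sim1, sim2: two-argument predicates for the relations ~1 and ~2."""
    ax1 = all(sum(1 for z in S if sim1(x, z) and sim2(y, z)) == 1
              for x in S for y in S)
    ax2 = all(sum(1 for x in S if sim1(x, z)) == 1 and
              sum(1 for y in S if sim2(y, z)) == 1
              for z in S)
    return ax1 and ax2

# S = {0,1,2,3} read as pairs (i//2, i%2), with ~1 = "same first component"
# and ~2 = "same second component".
S = range(4)
print(is_product(S, lambda u, v: u // 2 == v // 2,
                    lambda u, v: u % 2 == v % 2))
# False: axiom 2, read literally, forces every ~1-class and ~2-class to be
# a singleton; this is the degeneracy the final answer points at.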
2016-02-08 23:31:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423085451126099, "perplexity": 197.18232667629582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701154682.35/warc/CC-MAIN-20160205193914-00282-ip-10-236-182-209.ec2.internal.warc.gz"}
http://dmtcs.episciences.org/2104
## Henning, Michael A. and Naicker, Viroshan - Graphs with large disjunctive total domination number

dmtcs:2104 - Discrete Mathematics & Theoretical Computer Science, April 22, 2015, Vol. 17 no. 1 (in progress)

Graphs with large disjunctive total domination number

Authors: Henning, Michael A. and Naicker, Viroshan

Let G be a graph with no isolated vertex. In this paper, we study a parameter that is a relaxation of arguably the most important domination parameter, namely the total domination number, γt(G). A set S of vertices in G is a disjunctive total dominating set of G if every vertex is adjacent to a vertex of S or has at least two vertices in S at distance 2 from it. The disjunctive total domination number, γdt(G), is the minimum cardinality of such a set. We observe that γdt(G) ≤ γt(G). Let G be a connected graph on n vertices with minimum degree δ. It is known [J. Graph Theory 35 (2000), 21-45] that if δ ≥ 2 and n ≥ 11, then γt(G) ≤ 4n/7. Further [J. Graph Theory 46 (2004), 207-210], if δ ≥ 3, then γt(G) ≤ n/2. We prove that if δ ≥ 2 and n ≥ 8, then γdt(G) ≤ n/2, and we characterize the extremal graphs.

Source: oai:HAL:hal-01196847v1
Volume: Vol. 17 no. 1 (in progress)
Section: Graph Theory
Published on: April 22, 2015
Submitted on: November 4, 2014
Keywords: Total dominating set, Disjunctive total dominating set, [INFO.INFO-DM] Computer Science [cs]/Discrete Mathematics [cs.DM], [INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC]
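To make the definition concrete, a brute-force Python check of the disjunctive total domination property (networkx, the function name, and the cycle example are my own, not from the paper); the example set {0, 2, 4} in the 6-cycle is disjunctive total dominating while failing to be total dominating, which illustrates the relaxation:

import networkx as nx

def is_disjunctive_total_dominating(G, S):
    """Every vertex must be adjacent to a vertex of S, or have at least
    two vertices of S at distance exactly 2 (the paper's definition)."""
    S = set(S)
    for v in G.nodes:
        if any(u in S for u in G.neighbors(v)):
            continue
        dists = nx.single_source_shortest_path_length(G, v, cutoff=2)
        if sum(1 for u in S if dists.get(u) == 2) < 2:
            return False
    return True

C6 = nx.cycle_graph(6)
print(is_disjunctive_total_dominating(C6, {0, 2, 4}))   # True
# yet {0, 2, 4} is not a *total* dominating set: vertex 0 has no neighbor in it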
2017-09-22 11:46:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015130758285522, "perplexity": 2091.3285013005616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688940.72/warc/CC-MAIN-20170922112142-20170922132142-00099.warc.gz"}
https://jp.maplesoft.com/support/help/errors/view.aspx?path=MaplePortal%2FDocumentation
MaplePortal/Documentation - Maple Help

Introduction

You can create compelling applications that capture both the calculations and the inherent assumptions and underlying information behind your analysis. You can have live math, text, images, animations and more in a single technical document.

Feature, and where to find it:
- Built-in heading styles: drop-down list on toolbar
- Sections and subsections: Insert > Section
- Tables: Insert > Table
- Font control and ability to define new styles: toolbar buttons, Format > Styles...
- Insert images: Insert > Image
- Spell-checker aware of mathematical terms: Tools > Spellcheck
- Hyperlinks and bookmarks: Insert > Hyperlink, Format > Bookmarks...
- Export to PDF: File > Export As...

Text

You can enter text and use Maple like a word-processing tool. You can apply styles, modify or create new styles, or individually format text. In worksheet mode, turn an execution prompt into a text prompt with Ctrl+T (or Insert > Text). Try this by clicking in the execution prompt below and

1. pressing Ctrl+T
2. typing some text
3. then inserting a new math prompt with Ctrl+J (or Insert > Execution Group > After Cursor)

Images

You can copy and paste images into Maple, or import pictures already on your computer.

Sections

You can insert collapsible sections and subsections to organize and structure your work, or hide distracting code or detailed information.

Section
> $x≔1:$
Subsection
> ${x}^{2}$
${1}$   (1.1.1)

To insert a section, use Insert > Section.

Tables

You can fine-tune the placement of plots, text and other elements with tables.

> $\mathrm{sysTF}≔\mathrm{DynamicSystems}:-\mathrm{TransferFunction}\left(\mathrm{tf}\right)$

The output is a Transfer Function record: continuous; 1 output(s); 1 input(s); inputvariable = [u1(s)]; outputvariable = [y1(s)].   (1)

> $\mathrm{DynamicSystems}:-\mathrm{PhasePlot}\left(\mathrm{sysTF},\mathrm{size}=\left[600,300\right]\right)$

(plot output omitted)

Tables can contain text, math, plots, graphics, and embedded components. Use tables to:
- align text and graphics
- control the layout of embedded components

Try inserting a table below with Insert > Table...
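Note that this excerpt never shows the definition of tf. A plausible stand-in so the two commands above run (my assumption; any rational function of s would do):

> tf := 1/(s^2 + 2*s + 5):   # hypothetical transfer function; not defined in the help page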
2022-01-24 11:16:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7369216084480286, "perplexity": 10963.862424190354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00145.warc.gz"}
https://www.mersenneforum.org/printthread.php?s=2558ef9ae8eaa44c600d6ad0f18fbaa9&t=25374
mersenneforum.org (https://www.mersenneforum.org/index.php) -   Homework Help (https://www.mersenneforum.org/forumdisplay.php?f=78) -   -   something about a sum (https://www.mersenneforum.org/showthread.php?t=25374)

wildrabbitt 2020-03-17 12:45

something about a sum

Hi, can anyone explain this?

$S=\sum_{v=-\infty}^{\infty}\int_0^Ne^{2\pi ivx+2\pi i\frac{x^2}{N}} \mathrm{d}x$

$=N\sum_{v=-\infty}^{\infty}\int_0^1e^{2\pi iN(x^2+vx)} \mathrm{d}x$

What I'm hoping for is some intermediate steps which get from the first, step by step to the second that make sense.

Chris Card 2020-03-17 13:19

[QUOTE=wildrabbitt;539926]Hi, can anyone explain this? $S=\sum_{v=-\infty}^{\infty}\int_0^Ne^{2\pi ivx+2\pi i\frac{x^2}{N}} \mathrm{d}x$ $=N\sum_{v=-\infty}^{\infty}\int_0^1e^{2\pi iN(x^2+vx)} \mathrm{d}x$ What I'm hoping for is some intermediate steps which get from the first, step by step to the second that make sense.

Do you know how to do integration by substitution? If so, try setting x = Ny and rewrite the integral in terms of y instead of x.

Chris

Dr Sardonicus 2020-03-17 13:19

[QUOTE=wildrabbitt;539926]Hi, can anyone explain this? $S=\sum_{v=-\infty}^{\infty}\int_0^Ne^{2\pi ivx+2\pi i\frac{x^2}{N}} \mathrm{d}x$ $=N\sum_{v=-\infty}^{\infty}\int_0^1e^{2\pi iN(x^2+vx)} \mathrm{d}x$ What I'm hoping for is some intermediate steps which get from the first, step by step to the second that make sense.

Please help.[/QUOTE]Obvious substitution. You said you knew how to make substitutions in integrals. It is perhaps unfortunate that the variables in the integrals on both sides have the same name.

wildrabbitt 2020-03-17 14:36

Thanks to both of you. I do understand integration by substitution but I didn't know what the substitution required was. I should be able to do it now.
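For completeness, the substitution the replies point at, written out (my own working of the hint, with the integration variable renamed to $y$ to avoid the clash Dr Sardonicus mentions):

$$x = Ny,\qquad \mathrm{d}x = N\,\mathrm{d}y,\qquad x\in[0,N]\ \Longleftrightarrow\ y\in[0,1],$$

so

$$\int_0^N e^{2\pi i v x + 2\pi i x^2/N}\,\mathrm{d}x = N\int_0^1 e^{2\pi i v N y + 2\pi i N y^2}\,\mathrm{d}y = N\int_0^1 e^{2\pi i N(y^2 + v y)}\,\mathrm{d}y,$$

and summing over $v$ gives the stated identity (with $y$ relabelled back to $x$).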
2021-04-11 16:00:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6201692223548889, "perplexity": 1146.9169359739362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064520.8/warc/CC-MAIN-20210411144457-20210411174457-00563.warc.gz"}
https://delong.typepad.com/sdj/2007/11/page/2/
## Pigs Fly!

Mark Halperin of ABC says that his entire career up to this moment has been spent hurting America:

How 'What It Takes' Took Me Off Course: MORE than any other book, Richard Ben Cramer's "What It Takes," about the 1988 battle for the White House, influenced the way I cover campaigns. I'm not alone. The book's thesis — that prospective presidents are best evaluated by their ability to survive the grueling quadrennial coast-to-coast test of endurance required to win the office — has shaped the universe of political coverage.

Voters are bombarded with information about which contender has "what it takes" to be the best candidate. Who can deliver the most stirring rhetoric? Who can build the most attractive facade? Who can mount the wiliest counterattack? Whose life makes for the neatest story? Our political and media culture reflects and drives an obsession with who is going to win, rather than who should win.

For most of my time covering presidential elections, I shared the view that there was a direct correlation between the skills needed to be a great candidate and a great president. The chaotic and demanding requirements of running for president, I felt, were a perfect test for the toughest job in the world. But now I think I was wrong. The "campaigner equals leader" formula that inspired me and so many others in the news media is flawed.

Case in point: Our two most recent presidents, both of whom I covered while they were governors seeking the White House. Bill Clinton and George W. Bush are wildly talented politicians. Both claimed two presidential victories, in all four cases arguably as underdogs. Both could skillfully serve as the chief strategist for a presidential campaign. But their success came not because they convinced the news media (and much of the public) that they would be the best president, but because they dominated the campaign narrative that portrayed them as the best candidate in a world-class political competition. In the end, both men were better presidential candidates than they were presidents.

For instance, being all things to all people worked wonderfully well for Bill Clinton the candidate, but when his presidency ran into trouble, this trait was disastrous, particularly in the bumpy early years of his presidency and in the events leading up to his impeachment. The fun-loving campaigner with big appetites and an undisciplined manner squandered a good deal of the majesty and power of the presidency, and undermined his effectiveness as a leader. What much of the country found endearing in a candidate was troubling in a president.

When George W. Bush ran in 2000, many voters liked his straightforward, uncomplicated mean-what-I-say-and-say-what-I-mean certainty. He came across as a man of principle who did not lust for the White House; he was surrounded by disciplined loyalists who created a cheerful cult of personality about their candidate. As with Mr. Clinton, though, the very campaign strengths that got Mr. Bush elected led to his worst moments in office. Assuredness became stubbornness. His lack of lifelong ambition for the presidency translated into a failure to apply himself to the parts of the job that held less interest for him, often to disastrous effects. The once-appealing life outside of government and public affairs became a far less appealing lack of experience. And Mr. Bush's close-knit team has served as a barrier to fresh advice.
So if we for too long allowed ourselves to be beguiled by "What It Takes" — certainly not the author's fault — what do those of us who cover politics do now? After all, Mr. Cramer's style of campaign coverage is alluring in an election season that features so many candidates with heroic biographies and successful careers in and out of politics. (Not to mention two wide-open races.)

Well, we pause, take a deep breath and resist. At least sometimes. In the face of polls and horse-race maneuvering, we can try to keep from getting sucked in by it all. We should examine a candidate's public record and full life as opposed to his or her campaign performance. But what might appear simple to a voter can, I know, seem hard for a journalist.

If past is prologue, the winners of the major-party nominations will be those who demonstrate they have what it takes to win. But in the short time remaining voters and journalists alike should be focused on a deeper question: Do the candidates have what it takes to fill the most difficult job in the world?

Two big things mar the op-ed.

First: the assertion of equivalence between Clinton's mistakes and Bush's. Clinton was a pretty good president, after all. Bush is not.

Second: missing from the op-ed are two words: "I'm sorry."

And there is a third. Halperin writes:

When George W. Bush ran in 2000... [I] liked his straightforward, uncomplicated mean-what-I-say-and-say-what-I-mean certainty. He came across as a man of principle who did not lust for the White House; he was surrounded by disciplined loyalists who created a cheerful cult of personality about their candidate...

Carlyle Group CEO David Rubenstein had a different reaction to George W. Bush:

David Rubenstein: you know if you said to me, name 25 million people who would maybe be President of the United States, he wouldn't have been in that category...

That was the reaction of everybody I have talked to who has met Bush and was not on his payroll--everybody except our elite Beltway press, that is, people like Mark Halperin.

## Note to Self: Six Interesting Questions About Corporate Nationality

1. Does it matter that a huge hunk of Citigroup is owned by Alwaleed bin Talal rather than some guy who lives in Kentucky or Alberta?
2. Does it matter that Applied Materials--the company that makes the equipment other companies use to make the chips other companies use to make the gadgets other companies use to market the lifestyle--has its headquarters in San Jose, California rather than in Stuttgart or Shanghai?
3. Does it matter that Applied Materials has its engineers in San Jose, California and not in Kuala Lumpur or Rio de Janeiro?
4. Does it matter that venture capital firm Kleiner-Perkins is in Palo Alto, California and not in Tokyo or Milan?
5. Does it matter that Citigroup's headquarters is in New York rather than in London or Bombay?
6. Does it matter that Apple's iPods are made in Shenzhen, China, rather than in Austin, Texas, or Window Rock, Arizona?

Related Issues:

• National regulation: class A stock
• National regulation: official supervision and merchant management
• Sovereign wealth funds
• IBM and Lenovo
• James Fallows on the :-) and China: resources, assembly, and design and marketing
• Political pressure, financial pressure, and post-WWI technology transfer from Germany to the U.S.; was political or national financial pressure used?
• Peter Drucker and his predictions about pension-fund socialism
• Gazprom and P&O?

## Beowulf, Starring Angelina Jolie as "Mom"

A very well done comic-book movie.
An excellent story. But it is not the story of Beowulf. It is a different story. "The Thirteenth Warrior" is a better Beowulf. It may or may not be a better movie. I am not sure.

## WTF? What Good Are Lobbyists Then?

All the assembled lobbyists of America cannot support a Legal Seafoods on K Street? What good are they? We took refuge in the Bombay Palace...

## Hoisted from Comments: Low-Tech Cyclist Watches the Utterly Disgusting Fecklessness of the Washington Post

Hoisted from Comments: Low-Tech Cyclist Writes: Grasping Reality with Both Hands: Brad DeLong's Semi-Daily Journal:

I know bringing up Fred Hiatt is like shooting fish in a barrel on this score, but the WaPo has a subset of its unsigned editorials where it comments on what it calls "the ideas primary." Five of the last seven Ideas Primary editorials have been on the Social Security 'crisis.' There have been 15 editorials in this series. One has been on global warming - the greatest crisis of our era - and two have been on our greatest domestic crisis, the lack of universal health care and the upcoming crisis in the Medicare trust fund. None have been on Iraq and the power vacuum we've created in the center of the Middle East. Interesting set of priorities, huh?

As I have said before, there is something very wrong with everybody who is currently helping to put the Washington Post in newsprint on the streets of Washington these days. In the future everybody involved is going to be claiming that they spent the Graham-Downie-Hiatt years representing tobacco companies or lobbying for the government of Sudan.

## Richard Baldwin on Martin Feldstein's View of the Dollar

Baldwin writes:

Feldstein's view on the dollar | vox - Research-based policy analysis and commentary from Europe's leading economists: In a May 2007 essay, Martin Feldstein argued that a drop in US mortgage refinancing would raise US personal saving and this would necessitate a fall in the dollar. That's looking pretty good at the moment....

Something that stumps every undergraduate, and not a few PhD economists, is how a nation's trade deficit, or more precisely, its current account deficit can be two things at once: #1) The gap between national investment and national savings, and #2) the difference between exports and imports. This is not a 'can be' relationship; it is a 'must be'. Number two requires no explanation; it's just a definition. Number one follows from a line or two of national-accounts algebra.

A nation's aggregate purchase of goods is the sum of what its public and private sectors spend on consumption and investment. Its aggregate sales of goods equal the value of what its public and private sectors produce and this, in turn, is its aggregate income. Plainly, the difference between a nation's spending and earning must be its trade balance with the rest of the world; if its aggregate purchases exceed its production/income, then some foreign goods must, on net, be coming in to satisfy the excess demand. Finally, since income must be either consumed or saved, the spending-earning gap is also the investment-saving gap; consumption cancels from both sides of the equation.

Feldstein makes a bold simplification that helps him to think clearly about the messy world. He takes US savings and investment as primitives and views the value of the dollar as the variable that adjusts to make things fit. As he writes it: "This line of reasoning leads us to the low level of the U.S. saving rate as the primary cause of the high level of the dollar."...
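[Aside: Baldwin's "line or two of national-accounts algebra" can be made explicit. What follows is a sketch in standard notation--mine, not anything in his essay. Write $Y$ for income/output, $C$ for total (private plus public) consumption, $I$ for investment, $S \equiv Y - C$ for national saving, and $NX$ for net exports. Then

$Y = C + I + NX \quad\Rightarrow\quad S - I = NX,$

so a country that saves less than it invests must, as a matter of accounting, run a trade deficit of the same size.]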
The US's net purchase of foreign goods is predetermined by its savings/investment gap and the dollar must jump to make people happy buying and supplying the necessary net flow of foreign goods.... The real explanation comes in understanding why US savings was so low relative to its investment. Feldstein focuses on personal savings. "Two primary forces have been driving down the household saving rate," he wrote, "increasing wealth and, more recently, mortgage refinancing."...

Feldstein not only calls the dollar's drop, he links it to developments in the US housing market. True, his logic did not lead him to predict the subprime crisis, but that is more a matter of how, not what.... The rest of the essay discusses why the foreign exchange market didn't anticipate the adjustment that Feldstein said must occur. His reasons are less remarkable - Asian official intervention and myopic investors....

Feldstein... also considers... that the whole thing could unwind.... "The primary risk... is that the decline of the dollar and the rise of the saving rate will happen at different speeds, leading to domestic imbalances."... If the US saving rate rises without a dollar drop, there is no narrowing of the trade gap to offset the closing saving/investment gap. Aggregate demand falls and we get a US recession... "the domestic weakness will occur unless the dollar decline precedes the rise in saving."

Put that way, it sounds paradoxical. It seems better to phrase it thus: The domestic U.S. recession will occur unless the fall in the dollar and the boom in exports precede the cutback in consumption spending... For a rise in savings is a fall in consumption spending.

## Comment Policy: A Reminder

A reminder: comments that I regard as factually false or as rhetorically destructive to the ongoing conversation will get deleted, if I notice them--I don't have time to moderate this properly, but I am trying. I am anxious to run an informative seminar. I am not enthusiastic about hosting a foodfight.

The place to comment on the comment policy is here: http://delong.typepad.com/sdj/comment-policy-a-seminar-.html

An earlier comment policy page: http://delong.typepad.com/sdj/2005/03/dealing_with_tr.html

The best thing on comment policy I have ever read: http://nielsenhayden.com/makinglight/archives/006036.html

## Hoisted from Comments: Andres Directs Us to the Sons of Julius and Ethel Rosenberg

I had always thought that Julius Rosenberg was guilty, and that Ethel Rosenberg was judicially murdered. Now their sons write to make a strong case that Julius's execution was not primarily an act of retribution and deterrence of future espionage but rather part of a cover-up to boost the reputation of the FBI and other agencies. Andres directs us:

Grasping Reality with Both Hands: Brad DeLong's Semi-Daily Journal: Now that Brad has brought up McCarthyism and someone else has brought up Alger Hiss, let me pull an anne and bring up another case of Cold War victimization/witch hunting: http://www.nytimes.com/2007/11/17/opinion/l17rosenberg.html?_r=1&n=Top/Opinion/Editorials%20and%20Op-Ed/Letters&oref=slogin

The Case of the Rosenbergs: Their Sons' View. A Spy's Path: Iowa to A-Bomb to Kremlin Honor (November 12, 2007): "A Spy's Path: Iowa to A-Bomb to Kremlin Honor" (front page, Nov. 12), about a Soviet spy who helped steal atomic secrets during World War II, provides powerful evidence that our parents, Ethel and Julius Rosenberg, were wrongfully executed.
History students are taught that our father headed a conspiracy that stole "the secret" of the atom bomb (historians are uncertain about the role of our mother). Meanwhile, government officials sat on the story of the spy revealed in your article. Later in the article, a historian is quoted as saying, "It would have been highly embarrassing for the U.S. government to have had this divulged," and so they kept it a secret, preferring to make a scapegoat of our father.

For decades we have argued that the evidence presented at the trial, even if it were legitimate, revealed no significant secrets about the theory or construction of the first atom bombs. In fact, the material allegedly passed was full of errors. We have noted that Klaus Fuchs, the British scientist who confessed to spying, had provided much more detailed and accurate information. Since 1999 the American public has known about the successful spying of another atomic scientist, Theodore Hall. This latest revelation shows there was an even more significant breach of the Manhattan Project. Furthermore, as early as 1948, two years before our parents' arrests, the United States government knew about the effective spying of Dr. George Koval.

This vindicates our major argument: the charge that Julius and Ethel Rosenberg stole the secret of the atom bomb was a fraud from the moment that the prosecutors, with the connivance of the Atomic Energy Commission, made that case. Our parents were sacrificed so that United States intelligence agencies could save face and cover up their negligence.

Robert Meeropol
Michael Meeropol
Easthampton, Mass., Nov. 15, 2007

## Jason Kottke on the Amazon Kindle eBook Reader

Jason writes:

15 Things I Just Learned About the Amazon Kindle - Boing Boing Gadgets: Its eBooks have DRM (filetype: .AZW), but it supports unprotected Mobipocket books (.MOBI, .PRC), .TXT files, HTML, and Word. Some files can be transferred over USB, while others have to be emailed to the special per-device Kindle email. (More on that later.) It has a web browser.... You can download text and other files to the device from the web for later storage.... It can play Audible audiobooks... MP3s copied to its internal storage... on random shuffle... a human-powered search query system powered by Amazon's Mechanical Turk... you'll pay for RSS, but not the web. Mobipocket DRM'd files will not work on the Kindle.... PDF is not supported.... GIF and JPEG are supported... only two fonts: Caecilia and Neue Helvetica... the only two file formats this thing can read natively are .AZW and .TXT. That's a huge bummer.

As for the plaintiffs' lawyers, they are likely to pocket around $1.5 billion of the settlement money, which means that Merck will wind up feeding the beast, just like every other company that finds itself embroiled in a mass tort. That money will go to funding the next mass tort...

A good newspaper story on this would answer three questions about these cases:

1. Is the settlement too large or too small as a sanction on Merck--as a two-by-four to the head of the CEO to make sure that he understands that his job is to curb the enthusiasm of his marketing department when he has a new product with dangerous side effects?
2. Are the lawyers' fees too large or too small--does it give lawyers too much of an incentive to crank up this mass-tort machine as a way of providing drug companies with an incentive to do the right thing?
3. Does the settlement money get to the people who were harmed--to the victims?
My answers in this case to these three questions right now are: (1) probably about right, (2) I don't know but I fear too large, and (3) somewhat but not largely.

My first beef with Joseph Nocera is that his story does nothing to help me get better answers to any of these questions. My second beef is that his story pushes a less-informed reader towards answers--too large, too large, and no--that are largely wrong. My third beef is that Joseph Nocera doesn't set out any ideas about how one might create a better system. My fourth beef is that Joseph Nocera pushes readers toward wrong answers by playing intellectual three-card monte--if he's going to make a big deal about how large the 27,000 case number is, he has a moral obligation to set it alongside the 30,000 net excess heart attack death number. And my fifth beef is that Nocera knows damned well that he has a moral obligation to raise the level of the debate, and that he is ducking that obligation.

Why oh why can't we have a better press corps?

## Not Nearly as Bad as It Might Have Been

Typhoon Sidr hits Bangladesh:

1,723 Dead in Bangladesh Cyclone: PARVEEN AHMED: The official death toll from a savage cyclone that wreaked havoc on southwest Bangladesh reached 1,723 Saturday — the deadliest storm to hit the country in a decade. Military helicopters and ships joined rescue and relief operations and aid workers on the ground struggled to reach victims. Tropical Cyclone Sidr tore apart villages, severely disrupted power lines and forced more than a million coastal villagers to evacuate to government shelters.

The latest death figure tallied to 1,723, with 474 deaths reported from worst-hit Barguna district and 385 from neighboring Patuakhali, a military spokesman, Lt. Col. Moyeenullah Chowdhury, told reporters in the capital, Dhaka. Rescuers battled along roads that were washed out or blocked by wind-blown debris to try to get water and food to people stranded by flooding. Some employed the brute force of elephants to help in their efforts.

"We sent a relief team in a jeep, but they had to return halfway as the roads and channels were unpassable," M. Shakil Anwar of CARE Bangladesh said by telephone from nearby Khulna city. The roads were strewn with fallen trees and covered in muddy sludge, Anwar said. Small ferries — which are the only means of transport across the numerous river channels that crisscross the area — were flung ashore by the force of cyclone winds. "We will try again tomorrow on bicycles, and hire local country boats," Anwar said. He added that they planned to distribute dry foods and other emergency rations among 500 families of the area.

On Saturday, the army deployed helicopters to deliver supplies to the remotest areas, while navy ships delivered supplies and dispensed medical assistance to migrant fishing communities living on and around hundreds of tiny islands, or shoals, along the coast...

## Yet Another Thought on Ross Douthat on Race and Modern Republicanism...

Ross Douthat wrote:

Ross Douthat: Gerard Alexander's essay on "The Myth of the Racist Republicans" goes further than I would in downplaying Republican racism, but I think his point on this score is basically right:

...Segregationists simply had very limited national bargaining power.... Segregationists wanted policies that privileged whites. In the GOP, they had to settle for relatively race-neutral policies: opposition to forced busing and reluctant coexistence with affirmative action.
The reason these policies aren't plausible codes for real racism is that they aren't the equivalents of discrimination.... Kevin Phillips was hardly coy about this in his Emerging Republican Majority. He wrote in 1969 that... "the Republican Party cannot go to the Deep South"--meaning the GOP simply would not offer the policies that whites there seemed to desire most--"the Deep South must soon go to the national GOP"...

So the GOP ended up bidding race-neutrality - which a conservative party would have naturally favored anyway, and which is not racism - and symbolic gestures like Reagan's opposition to MLK Day, his support for Bob Jones University's tax exemption, and so forth. These code words and gestures were real and shameful, and contemporary apologies like Ken Mehlman's mea culpa are entirely appropriate. But more often than not, I would submit, pundits who harp on this shame tend to do so because it's an easy way to leap to Krugman's conclusion that race explains everything he doesn't like about contemporary American politics, when in fact an awful lot of it is explained by the fecklessness of his liberal forebears.

Douthat's claim that it is inappropriate to "harp on this shame" of Republican "symbolic gestures" reminds me of Nixon Attorney General John Mitchell and of William Safire. William Safire wrote in his obituary for John Mitchell:

William Safire: "Watch what we do, not what we say." Coming from the law-and-order campaign manager with the visage of a bloodhound, that epigram was interpreted as the epitome of political deceptiveness. But his intent was to reassure blacks that, foot-dragging poses aside, the Nixon Justice Department would accomplish desegregation. Mitchell knew that the appearance of a tilt toward white Southerners would ease the way for acceptance of steady civil rights progress for blacks...

Safire and Mitchell thus go further than Douthat. Douthat says that the symbols were unimportant and did no harm because they were not policy substance. Mitchell and Safire say that the symbols were more than smoke-and-mirrors--that the anti-Black posturing was actually a source of faster progress on civil rights.

I have heard Douthat's or Safire and Mitchell's argument a lot of times, applied to:

• Abortion
• Tolerance
• Fiscal policy
• Race relations

The "these Republicans are really good people--they just talk like thugs" or "these Republicans are really especially good people because they sound like thugs" argument is just not very convincing at any level.

## Apres Moi le Deluge Intellectual History Blogging

Re: Michael Sonenscher, Before the Deluge: Public Debt, Inequality, and the Intellectual Origins of the French Revolution. The book begins:

The phrase après moi le deluge... by... Mme. de Pompadour... and the various attitudes toward impending disaster it might have been intended to express... have often been associated with the French Revolution.... [T]he phrase was current... before 1789... applied to public debt. This... was how it was used... by Victor Requeti, Marquis de Mirabeau.... Mirabeau applied the phrase to... government borrowing and, more particularly, to the practice of using life annuities to fund the costs of government debt. Life annuities, he wrote, were the quintessence of what he called "that misanthropic sentiment [ce sentiment ennemi] après moi le deluge."... [T]hey were a way of drawing bills on posterity. Like all forms of public credit... consumed wealth before it was produced... leaving a state...
having to face the future without the accumulated assets... to maintain its long-term domestic prosperity and external security... could... destroy... "civilization."

I am going to have to go read Mirabeau pere's Entretiens d'un Jeune Prince avec Son Gouverneur. It sounds to me as though the phrase is applied by Mirabeau pere to describe not a feckless government that borrows long (to fund wars, canals, or harbors) but rather a feckless father who invests the family wealth in annuities that end on his death. After all, from the viewpoint of long-run governmental fiscal prudence, life annuities are not the quintessence of badness--they are, in fact, vastly preferable to consols. It is only from the viewpoint of the dynastic family that life annuities are especially bad.

Is this a good thing to do at the very opening of a book that presents itself as deriving new insights from close readings of old texts?...

## A Historical Document: A Lawyer's Brief for the Mid-Twentieth Century Democratic Party

From Dean Acheson (1955), A Democrat Looks at His Party: p. 23 ff:

From the very beginning the Democratic Party has been broadly based... the party of the many... the urban worker; the backwoods merchant and banker; the small farmer... the large landowners of the South, who saw themselves as being milked by the commercial and financial magnates gathered under Hamilton's banner; the newly arrived immigrants... the party of the underdog.... The many have an important and most relevant characteristic. They have many interests, many points of view, many purposes to accomplish, and a party which represents them will have their many interests, many points of view, and many purposes also. It is this multiplicity of interests which, I submit, is the principal clue in understanding the vitality and endurance of the Democratic Party....

The base of all three opponents [Federalist, Whig, and Republican] has been the interest of the economically powerful, of those who manage affairs.... The economic base and the principal interest of the Republican Party is business.... This business base of the Republican Party is stressed not in any spirit of criticism. The importance of business is an outstanding fact of American life. Its achievements have been phenomenal. It is altogether appropriate that one of the major parties should represent its interests and its points of view. It is stressed because here lies the significant difference between the parties, the single-interest party against the many-interest party, rather than in a supposed division of attitudes... conservative... against... liberal....

[...] At the end of the [nineteenth] century there was a lesser, but serious, missed opportunity for Democratic leadership in President Cleveland's failure to grasp the significance of the Populist and labor unrest... and in his cautious and unimaginative approach to economic depression. The unrest... did not spring from a radical movement directed against the established order... or the constitutional system. It grew out of conditions increasingly distressing... on the farms and in the factories. Its purposes were the historic purposes of the Democratic party... to keep opportunity open, opportunity not merely to rise from barefoot boy to President but for people to find in their accustomed environments useful, respected, and satisfying lives....

The conditions and popular response had many points of similarity to those of the 1930s. Grover Cleveland... followed the right as he saw it...
through a conservative and conventional cast of mind. The agitation seemed to him... a threat to law and order.... Coxey's Army was met with a barrage of injunctions and... the Capitol police.... The Pullman strike was smashed by federal troops who kept the mails moving, the union leaders imprisoned, and the union crushed. And the financial panic was dealt with through the highly orthodox and [highly] compensated assistance of Mr. Morgan.

The underlying causes... were neither understood nor dealt with... an opportunity was missed.... If, to take one of them, the problems arising out of the concentration of industrial ownership had been tackled when they were still malleable and subject to effective treatment, we might have been spared some aches and pains that are still with us. But with all this, Grover Cleveland holds an honored place.... When the Congress showed signs... of declaring war on Spain, Cleveland put an end to the business for the duration of his administration by saying... that, if the Congress did declare war, he would refuse to direct it as commander in chief....

[...] [T]he Democratic Party is not an ideological party.... It represents too many interests to be neatly labeled or to be imprisoned.... It has to be pragmatic.... In the Democratic Party run two strong strands--conservatism and pragmatic experimentation.... [T]he difference between our parties has not been and is not between a party of property and one of proletarians, but between a party which centers on the dominant interests of the business community and a party of many interests, including property interests.... They believe in private property and want more and not less of it. This makes for conservatism. American labor is now known throughout the world for its conservatism.... the whole stress on seniority grows out of this. Pension rights are property interests of impressive value....

[W]hen a particular kind of property descends in the hierarchy of importance, its owners more and more turn for the protection of their interests to the party of many interests. The owners of land--the farmers--are the most crucial.... Small businessmen, also, are apt to find concern for their problems and welfare lost in the party of business on a larger scale.... But perhaps the strongest influence toward conservatism comes from the South, where for historical reasons all interests... are predominantly Democratic.... Southern conservatism is an invaluable asset. It gives the assurance that all interests and policies are weighed and considered within the party before interparty issues are framed....

The South also faces us with an equal and opposite truth. It is that some of the most radical leaders of modern times have come from the South. We tend to see men like Watson, Tillman, Vardaman, and Huey Long chiefly in terms of their bellowings about White Supremacy. But if we drain this off--and the if is admittedly of major importance--what we should see is that the mass support of these men was formed by the dispossessed. Huey Long's "share-the-wealth" program was aimed explicitly at the Southern Bourbons.... The tragedy of the South has been that racism has corrupted an otherwise respectable strain of protest and experimentation in the search for economic equality....
For all the apparent contradiction in the fact that the Southern racist belongs to the same political party as the New York supporter of the FEPC, the inner logic that holds them together is that each speaks for the dispossessed, whether in his rural or urban form. What enables the Democratic Party to contain both elements is the fact that the party since the Civil War has made the Legislature the special province of the Southern Democrat, and the Executive the special province of the Northern Democrat....

Entwined with the strand of conservatism in the Democratic Party is the strand of empiricism. A party which represents many interests and is composed of many diverse groups must invariably know that human institutions are made for man and not man for institutions.... Such a party conceives of government as an instrument to accomplish what needs to be done.... This is not so easy for those who are persuaded that human behavior is governed by immutable laws, whether they are the laws expounded in the Social Statics of Herbert Spencer or those in the Das Kapital of Karl Marx.... [T]o the Manchester Liberals the Factory Acts ran squarely counter to economic principles and could end only in disaster. The "forgotten man," in the phrase invented by William Graham Sumner... was the producer whose wealth was tapped by the government to bear the cost of the social programs for those whom Sumner regarded as weak....

In the last century the economically powerful have stood to gain by the doctrine of laissez-faire.... It was those whose interests were suffering under the impact of new forces who looked to government... to manage the thrust of forces in the interest of human values. Now... this... takes... brains.... so the Democratic Party is hospitable to and attracts intellectuals. It has work for them to do...

It's a lawyer's brief. Much of what it says about Southern Democrats is unconvincing. And today's Republican Party is not the Republican Party of 1955--if the party of 1955 was the party of wealth, enterprise, and opportunity, the Republican Party of today has dropped the enterprise and opportunity parts and added some others that I at least find much less attractive. But a very interesting take.

## Race and Modern Republicans Once Again

Ross Douthat, I think, gets this wrong:

The GOP and the Race Issue: Southern whites were, and are, natural conservatives who happened to find themselves in the more liberal of the two parties; once Democrats associated themselves with the civil-rights movement, there wasn't anywhere else for white Mississippians and Alabamans to go except the GOP. Gerard Alexander's essay on "The Myth of the Racist Republicans" goes further than I would in downplaying Republican racism, but I think his point on this score is basically right:

Liberal commentators ... assume that if many former Wallace voters ended up voting Republican in the 1970s and beyond, it had to be because Republicans went to the segregationist mountain, rather than the mountain coming to them. There are two reasons to question this assumption. The first is the logic of electoral competition. Extremist voters usually have little choice but to vote for a major party which they consider at best the lesser of two evils, one that offers them little of what they truly desire. Segregationists were in this position after 1968, when Wallace won less than 9% of the electoral college and Nixon became president anyway, without their votes. Segregationists simply had very limited national bargaining power.
In the end, not the Deep South but the GOP was the mountain.

Second, this was borne out in how little the GOP had to "offer," so to speak, segregationists for their support after 1968, even according to the myth's own terms. Segregationists wanted policies that privileged whites. In the GOP, they had to settle for relatively race-neutral policies: opposition to forced busing and reluctant coexistence with affirmative action. The reason these policies aren't plausible codes for real racism is that they aren't the equivalents of discrimination, much less of segregation. ...

Kevin Phillips was hardly coy about this in his Emerging Republican Majority. He wrote in 1969 that Nixon did not "have to bid much ideologically" to get Wallace's electorate, given its limited power, and that moderation was far more promising for the GOP than anything even approaching a racialist strategy. While "the Republican Party cannot go to the Deep South"--meaning the GOP simply would not offer the policies that whites there seemed to desire most--"the Deep South must soon go to the national GOP," regardless.

So the GOP ended up bidding race-neutrality - which a conservative party would have naturally favored anyway, and which is not racism - and symbolic gestures like Reagan's opposition to MLK Day, his support for Bob Jones University's tax exemption, and so forth. These code words and gestures were real and shameful, and contemporary apologies like Ken Mehlman's mea culpa are entirely appropriate. But more often than not, I would submit, pundits who harp on this shame tend to do so because it's an easy way to leap to Krugman's conclusion that race explains everything he doesn't like about contemporary American politics, when in fact an awful lot of it is explained by the fecklessness of his liberal forebears.

Paul Krugman has a very effective counter to this:

White male math - Paul Krugman - Op-Ed Columnist - New York Times Blog: In some correspondence with Larry Bartels, whose "What's the matter with 'What's the matter with Kansas?'" is must reading for anyone trying to understand modern American political economy, the issue of how the Democrats lost white males came up. Larry points out that you really need to separate out the South. Here's what he had to say:

Unless you have a peculiar nostalgia for the racially coercive Democratic monopoly of the Jim Crow era, it makes sense to focus on the rest of the country. There, the Democratic share of the two-party presidential vote among white men was 40% in 1952 and 39% in 2004. White men didn't turn against the Democrats; Southern white men turned against the Democrats. End of story.

It's not that feckless liberals alienated their previous natural supporters--it's not the case that, as Ronald Reagan liked to claim, "the Democratic Party left me." It is the case that southern white males left the Democratic Party. Northern white males still seem to like the liberal Democratic Party just fine.

Once "feckless liberals" are off the table, Ross seems to want to make two arguments:

1. The Republicans didn't really play the race card ("the GOP ended up bidding race-neutrality... which is not racism - and symbolic gestures like Reagan's opposition to MLK Day, his support for Bob Jones University's tax exemption...").
2. It did not really matter that Republicans played the race card ("Southern whites were, and are, natural conservatives...
once Democrats associated themselves with the civil-rights movement, there wasn't anywhere else for white Mississippians and Alabamans to go except the GOP...").

I think that there is a very good counter to Douthat's (1): if the Republican Party really were bidding race-neutrality--if their platform were one of market opportunity plus respect for the family plus respect for the church plus civil order plus race neutrality--they would get an enormous number of African-American votes. African-American voters are more often than not socially conservative. African-American voters are extremely eager to support politicians who genuinely fight and reduce crime. Jack Kemp's Republican Party--one that is truly race-neutral, committed to equality of opportunity, and socially conservative--is a natural home for most African-American voters. But we do not have that Republican Party, do we? African-American voters believe that the GOP bids racism, and few who have seen George Allen or Trent Lott on YouTube can disagree.

I think Douthat is wrong about (2) as well: it matters that the Republican Party played (and plays) the race card. A Republican Party that was socially conservative and economically classical-liberal and retained its long-ago commitment to equality of opportunity--that remembered that it was Abraham Lincoln who freed the slaves--would be a very different Republican Party than the one we see now: it would still have its soul.

## Pay-as-You-Go

From Obsidian Wings:

The Difference Between The Two Parties In A Nutshell: [I]f Congress does not do something, the AMT is going to hit 23 million families with higher taxes this year. The House has passed a bill preventing this from happening. Since, to their credit, they passed PAYGO rules that require that any tax cut or spending increase be paid for, they had to find some way to raise taxes. They found a loophole that allows various fund managers, who earn millions of dollars a year, to count those millions as capital gains, and thus to pay much lower taxes than the rest of us, and they closed it. For this, they are being excoriated by Republicans.

David Dreier thinks that PAYGO rules shouldn't apply to "mistakes": "But anti-tax Republicans said the AMT was a mistake and thus offsets were unneeded. 'What absolute lunacy,' said Rep. David Dreier, R-Calif., 'paying for a tax that was never intended.'"

What a fascinating principle: you don't have to pay for costs you incur by mistake. I wonder if our creditors will go for that? And why not extend it to other things as well? The Iraq war, for instance, was never expected to last this long: why should we bother to come up with the billions and billions of dollars we are still paying for it? If it comes to that, why not just throw fiscal responsibility out the window? As far as I can tell, David Dreier thinks that that's the only non-lunatic thing to do.

Similarly:

Rep. Jim McCrery (La.), the ranking Republican on the House Ways and Means Committee, argued that such "a fiscal straitjacket" should not even apply to the alternative minimum tax, reasoning that all Congress is trying to do is keep the taxes of 23 million families from going up. Since that is not really a tax cut, he said, its $52 billion cost to the Treasury should not be paid for...

This exchange makes the issues pretty clear:

"Congress can and must stop this middle class tax hike before Thanksgiving -- without raising taxes," Senate Republican leader Mitch McConnell of Kentucky said Friday.
But Pelosi said that "we have an understanding with the Senate that this legislation, in order to go forward, must be paid for."

"Raising revenues takes political courage," said House Majority Leader Steny Hoyer, D-Md. "There is no courage whatsoever in plunging our country into debt, spending and not paying"...

Nobody who calls himself a fiscal conservative has any business being a Republican. Nobody. Shutting down the Republican Party and starting over is the best we can do.

## Why Oh Why Can't We Have a Better Press Corps? (It's Another David Broder/Washington Post Edition)

I must say David Broder outdoes even himself here:

• May 25, 2006: "[T]he drama of the Clintons' personal life would be a hot topic if she runs for president..."
• September 6, 2007: "[Hillary Clinton's] marriage is the central fact in her life..."
• November 9, 2007: "I plan to leave both subjects [the Giuliani marriages and the single Clinton marriage] alone..."

Why the switch? Greg Sargent opines:

Horses Mouth November 12, 2007 10:15 AM: So Broder won't be writing about the Clinton and Giuliani marriages going forward? Wow, how impressively high-minded of him!... This is kind of funny, because he hasn't shown any such reticence in the past when it comes to looking at the Clintons' union -- far from it.... [B]ack when it really counted -- when the GOP tried to impeach Bill Clinton over his affair -- Broder thought the Clinton marriage was completely fair game. He wrote multiple columns at the time arguing that his affair threw his entire character and even fitness for the Presidency into question. Yet now, suddenly, when a questioner asks Broder whether he sees serial adulterer Rudy's marriage as fodder for judging his fitness for the Presidency, Broder effectively dodges the allegation of his and the media's double standard by suddenly going all high-minded and saying he won't be discussing the marriages of Rudy or Hillary.

The obvious hypocrisy here aside, I propose that we hold Broder to his promise.

## Not Yet Record Oil Prices

Tim Harford sends us to Evan Davis:

BBC NEWS | The Reporters | Evan Davis: Imagine. $100 dollars. It is a lot for a barrel of oil. In fact, it's way up there. Since the 1860s, when people stopped killing whales for oil and dug it up in Pennsylvania instead, the price has averaged a little over $25 a barrel in today's money. We're at four times the long term average price. And as recently as 1998, only nine years ago, oil - on some measures - dipped below $10 a barrel. Although in today's money, for the year as a whole the price was more like $17.

It's clear that $100 a barrel is very high. Although it's worth saying, it's still not a record. 1864 was in fact the most expensive year for oil. It was over $104 in today's money. Notwithstanding that record (and most of us in the media will ignore it when talking of record highs in the next few weeks - we'll be using the high of $104.7 reached in 1980 after the Iranian revolution) we can at least say an impending $100 barrel is getting historically significant...

## I Do Not Understand This

Felix Salmon writes:

Market Movers by Felix Salmon: Did Anyone Other Than Citigroup Have Liquidity Puts? Why hasn't this "liquidity put" thing gotten greater play? I never made it down to the 11th paragraph of Carol Loomis's interview with Bob Rubin, where she introduces the concept more than 900 words into her article. Floyd Norris, today, does a bit better, taking less than 400 words to get to them.
A gold star, then, should go to Peter Cohan of BloggingStocks, who read the Loomis article, realized what he was looking at, and promoted the liquidity puts to headline status back on Monday. Liquidity puts are a big thing, and indeed it seems that they were more or less singlehandedly responsible for the downfall of Chuck Prince at Citi.

Basically, Citi told the world – and kidded itself – that it had sold billions of dollars in CDOs to investors. In reality, however, those CDOs had "liquidity puts" attached, which essentially transformed the CDO "sales" into glorified (or debased) repos. Any time that the investor found the CDO difficult to sell – and CDOs are always difficult to sell – he had the option to put the CDO back to Citi at par. And that's exactly what happened; it was those return-to-sender CDOs which were written down the same weekend Prince resigned.

I do not understand this. Which CDOs? What obligations, exactly, did Citigroup assume? Is this something I already know about under another name? It's not as though Loomis or Norris are comprehensive in their explanations...

## Department of Uh-Oh!

Barry Ritholtz writes:

The Big Picture | Why Thain Over Fink?: Here's something you may not have heard: The surprise selection of NYSE CEO John Thain as Merrill's new CEO over the more widely expected BlackRock CEO Larry Fink was based on reasons you may not be aware of. What are those reasons? Well, according to CNBC.com, Fink would only agree to take the position if Merrill was willing to give a full and complete accounting of its subprime exposure:

Merrill's selection of Thain was a surprise because the firm had recently indicated to BlackRock CEO Larry Fink that the job was his if he wanted it. CNBC has learned that Fink said he would take the job but only if Merrill did a full accounting of its subprime exposure. At that point, Merrill, which owns 49% of BlackRock, moved in a different direction and decided to go with Thain instead.

I obviously have no way to verify that, but I give the benefit of the doubt to CNBC reporters like Charlie Gasparino and Herb Greenberg. (UPDATE: I have just confirmed with Charlie Gasparino that he was the one whose fine investigative work uncovered this; You can see some of the discussion via CNBC video here, right margin, labeled "The New Bull at Merrill"). Note: I don't know who uncovered this.

If the story is true, and Fink passed on the position (or was passed over) because of his insistence on a complete sub-prime accounting, apparently not accommodated by Merrill, it makes one wonder what the old lady is hiding.

## Department of Things I Wanted to Go to But Didn't Because I Didn't Find Out About Them Until After They Happened

Waterboarding Demonstration: Rally Wed., 11/14 @ Noon, Sproul Plaza, UC-Berkeley. World Can't Wait! http://myspace.com/sfbaycantwait

## Hoisted from the Archives: Elliott Abrams, William F. Buckley, and Joe McCarthy Celebrate Joe McCarthy's Birthday

As Joe McCarthy's birthday comes to an end, let us give the microphone to William F. Buckley, Elliott Abrams, and Joe McCarthy himself:

A Conspiracy so Immense: William F. Buckley says: "McCarthy's record is... not only much better than his critics allege, but, given his metier, extremely good.... [he] should not be remembered as the man who didn't produce 57 Communist Party cards but as the man who brought public pressure to bear on the State Department to revise its practices and to eliminate from responsible positions flagrant security risks."
Elliott Abrams says: "McCarthy did not need to show that specific employees were guilty of espionage; they needed only to show that there was some evidence that an employee was a security or loyalty risk, and that the State Department... had willfully overlooked it.... What were the charges? They ranged from accusations of actual espionage--handing secret documents over to Soviet agents--to involvement in dozens of Communist-front organizations.... Buckley and Bozell asked, 'Did McCarthy present enough evidence to raise reasonable doubt as to whether all loyalty and security risks had been removed from the State Department?' The verdict rendered here is that he did. In most of his cases McCarthy adduced persuasive evidence; the State Department's efforts stood condemned; and the screams of 'Red Scare' were efforts to occlude the truth."

Here's what Joe McCarthy says:

Tail Gunner Joe: Joe McCarthy's Senate speech of June 14, 1951: How can we account for our present situation unless we believe that men high in this Government are concerting to deliver us to disaster? This must be the product of a great conspiracy, a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man. A conspiracy of infamy so black that, when it is finally exposed, its principals shall be forever deserving of the maledictions of all honest men.

Who constitutes the highest circles of this conspiracy? About that we cannot be sure. We are convinced that Dean Acheson, who steadfastly serves the interests of nations other than his own, the friend of Alger Hiss, who supported him in his hour of retribution, who contributed to his defense fund, must be high on the roster. The President? He is their captive. I have wondered, as have you, why he did not dispense with so great a liability as Acheson to his own and his party's interests. It is now clear to me. In the relationship of master and man, did you ever hear of man firing master? Truman is a satisfactory front. He is only dimly aware of what is going on.

I do not believe that Mr. Truman is a conscious party to the great conspiracy, although it is being conducted in his name. I believe that if Mr. Truman had the ability to associate good Americans around him, he would have behaved as a good American in this most dire of all our crises.

It is when we return to an examination of General Marshall's record since the spring of 1942 that we approach an explanation of the carefully planned retreat from victory. Let us again review the Marshall record, as I have disclosed it from all the sources available and all of them friendly. This grim and solitary man it was who, early in World War II, determined to put his impress upon our global strategy, political and military.

It was Marshall, who, amid the din for a "second front now" from every voice of Soviet inspiration, sought to compel the British to invade across the Channel in the fall of 1942 upon penalty of our quitting the war in Europe.

It was Marshall who, after North Africa had been secured, took the strategic direction of the war out of Roosevelt's hands and who fought the British desire, shared by Mark Clark, to advance from Italy into the eastern plains of Europe ahead of the Russians.

It was a Marshall-sponsored memorandum, advising appeasement of Russia in Europe and the enticement of Russia into the far-eastern war, circulated at Quebec, which foreshadowed our whole course at Tehran, at Yalta, and until now in the Far East.
It was Marshall who, at Tehran, made common cause with Stalin on the strategy of the war in Europe and marched side by side with him thereafter.

It was Marshall who enjoined his chief of military mission in Moscow under no circumstances to "irritate" the Russians by asking them questions about their forces, their weapons, and their plans, while at the same time opening our schools, factories, and gradually our secrets to them in this country.

It was Marshall who, as Hanson Baldwin asserts, himself referring only to the "military authorities," prevented us having a corridor to Berlin. So it was with the capture and occupation of Berlin and Prague ahead of the Russians.

It was Marshall who sent Deane to Moscow to collaborate with Harriman in drafting the terms of the wholly unnecessary bribe paid to Stalin at Yalta.

It was Marshall, with Hiss at his elbow and doing the physical drafting of agreements at Yalta, who ignored the contrary advice of his senior, Admiral Leahy, and of MacArthur and Nimitz in regard to the folly of a major land invasion of Japan; who submitted intelligence reports which suppressed more truthful estimates in order to support his argument, and who finally induced Roosevelt to bring Russia into the Japanese war with a bribe that reinstated Russia in its pre-1904 imperialistic position in Manchuria--an act which, in effect, signed the death warrant of the Republic of China.

It was Marshall, with Acheson and Vincent eagerly assisting, who created the China policy which, destroying China, robbed us of a great and friendly ally, a buffer against the Soviet imperialism with which we are now at war.

It was Marshall who, after long conferences with Acheson and Vincent, went to China to execute the criminal folly of the disastrous Marshall mission.

It was Marshall who, upon returning from a diplomatic defeat for the United States at Moscow, besought the reinstatement of forty millions in lend-lease for Russia.

It was Marshall who, for 2 years, suppressed General Wedemeyer's report, which is a direct and comprehensive repudiation of the Marshall policy.

It was Marshall who, disregarding Wedemeyer's advices on the urgent need for military supplies, the likelihood of China's defeat without ammunition and equipment, and our "moral obligation" to furnish them, proposed instead a relief bill bare of military support.

It was the State Department under Marshall, with the wholehearted support of Michael Lee and Remington in the Commerce Department, that sabotaged the $125,000,000 military-aid bill to China in 1948.

It was Marshall who fixed the dividing line for Korea along the thirty-eighth parallel, a line historically chosen by Russia to mark its sphere of interest in Korea.

It is Marshall's strategy for Korea which has turned that war into a pointless slaughter, reversing the dictum of Von Clausewitz and every military theorist since him that the object of a war is not merely to kill but to impose your will on the enemy.

It is Marshall-Acheson strategy for Europe to build the defense of Europe solely around the Atlantic Pact nations, excluding the two great wells of anti-Communist manpower in Western Germany and Spain and spurning the organized armies of Greece and Turkey--another case of following the Lattimore advice of "let them fall but don't let it appear that we pushed them."
It is Marshall who, advocating timidity as a policy so as not to annoy the forces of Soviet imperialism in Asia, had admittedly put a brake on the preparations to fight, rationalizing his reluctance on the ground that the people are fickle and if war does not come, will hold him to account for excessive zeal.

What can be made of this unbroken series of decisions and acts contributing to the strategy of defeat? They cannot be attributed to incompetence. If Marshall were merely stupid, the laws of probability would dictate that part of his decisions would serve this country's interest. If Marshall is innocent of guilty intention, how could he be trusted to guide the defense of this country further? We have declined so precipitously in relation to the Soviet Union in the last 6 years. How much swifter may be our fall into disaster with Marshall at the helm? Where will all this stop? That is not a rhetorical question: Ours is not a rhetorical danger. Where next will Marshall carry us? It is useless to suppose that his nominal superior will ask him to resign. He cannot even dispense with Acheson.

What is the objective of the great conspiracy? I think it is clear from what has occurred and is now occurring: to diminish the United States in world affairs, to weaken us militarily, to confuse our spirit with talk of surrender in the Far East and to impair our will to resist evil. To what end? To the end that we shall be contained, frustrated and finally fall victim to Soviet intrigue from within and Russian military might from without. Is that farfetched? There have been many examples in history of rich and powerful states which have been corrupted from within, enfeebled and deceived until they were unable to resist aggression. . . .

It is the great crime of the Truman administration that it has refused to undertake the job of ferreting the enemy from its ranks. I once puzzled over that refusal. The President, I said, is a loyal American; why does he not lead in this enterprise? I think that I know why he does not. The President is not master in his own house. Those who are master there not only have a desire to protect the sappers and miners - they could not do otherwise. They themselves are not free. They belong to a larger conspiracy, the world-wide web of which has been spun from Moscow. It was Moscow, for example, which decreed that the United States should execute its loyal friend, the Republic of China. The executioners were that well-identified group headed by Acheson and George Catlett Marshall.

How, if they would, can they break these ties, how return to simple allegiance to their native land? Can men sullied by their long and dreadful record afford us leadership in the world struggle with the enemy? How can a man whose every important act for years had contributed to the prosperity of the enemy reverse himself? The reasons for his past actions are immaterial. Regardless of why he has done what he did, he has done it and the momentum of that course bears him onward. . . .

The time has come to halt this tepid, milk-and-water acquiescence which a discredited administration, ruled by disloyalty, sends down to us. The American may belong to an old culture, he may be beset by enemies here and abroad, he may be distracted by the many words of counsel that assail him by day and night, but he is nobody's fool. The time has come for us to realize that the people who sent us here expect more than time-serving from us.
The American who has never known defeat in war does not expect to be again sold down the river in Asia. He does not want that kind of betrayal. He has had betrayal enough. He has never failed to fight for his liberties since George Washington rode to Boston in 1775 to put himself at the head of a band of rebels unversed in war. He is fighting tonight, fighting gloriously in a war on a distant American frontier made inglorious by the men he can no longer trust at the head of our affairs. The America that I know, and that other Senators know, this vast and teeming and beautiful land, this hopeful society where the poor share the table of the rich as never before in history, where men of all colors, of all faiths, are brothers as never before in history, where great deeds have been done and great deeds are yet to do, that America deserves to be led not to humiliation or defeat, but to victory. The Congress of the United States is the people's last hope, a free and open forum of the people's representatives. We felt the pulse of the people's response to the return of MacArthur. We know what it meant. The people, no longer trusting their executive, turn to us, asking that we reassert the constitutional prerogative of the Congress to declare the policy for the United States. The time has come to reassert that prerogative, to oversee the conduct of this war, to declare that this body must have the final word on the disposition of Formosa and Korea. They fell from the grasp of the Japanese empire through our military endeavors, pursuant to a declaration of war made by the Congress of the United States on December 8, 1941. If the Senate speaks, as is its right, the disposal of Korea and Formosa can be made only by a treaty which must be ratified by this body. Should the administration dare to defy such a declaration, the Congress has abundant recourses which I need not spell out.

In my email, an unusual subject heading:

To: Evans Department Managers & others: [email protected] [email protected] [email protected]

Dear Evans Community, There have been ongoing problems today with Elevator 1 and Elevator 3. Estimated repair times are not yet fully determined. Please be prepared to wait at the elevator, or take stairs if you are able to do so. Since Elevator 5 is still under repair, and Elevator 4 is rumored to have just flunked its post-repair safety check, that leaves Elevator 2.

## Most Americans Are Shrill!

55% of voters are members of the shrill, unbalanced cult that believes that George W. Bush has committed impeachable offenses:

Think Progress » Majority believe Bush has committed impeachable offenses: A new American Research Group poll finds that 55 percent of voters believe President Bush has “abused his powers” in a manner that rises “to the level of impeachable offenses under the Constitution,” yet just 34 percent believe he should actually be impeached. Fifty-two percent say that Vice President Cheney has similarly abused his powers, with 43 percent supporting impeachment.
2022-05-28 22:48:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17224344611167908, "perplexity": 5349.525528948925}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663021405.92/warc/CC-MAIN-20220528220030-20220529010030-00158.warc.gz"}
http://mathoverflow.net/questions/61715/equivariant-index-of-dirac-operator-on-s2
# equivariant index of Dirac Operator on $S^{2}$

First, I have to admit that I don't have much knowledge of Spin Geometry and Index Theory, so the question could be too simple or naive, and secondly there may be too many questions.

Let $D$ be the Dirac Operator for the standard metric and $S$ be the Spin bundle on $S^{2}$. There is a unique Spin structure on $S^{2}$. What does $D$ look like? Can we write a general form for the harmonic spinors on $S^{2}$? What is the general expression for the equivariant index of $D$?

If $W \otimes S$ is the twisted spinor bundle and $D$ is the twisted Dirac operator, we can write the equivariant index as an integral over the fixed point manifold in terms of the equivariant Chern character of $W$ and the $\hat{A}$-genus of the fixed point manifold, using the Atiyah-Segal-Singer theorem. What I am interested in is the final expression for $S^{2}$. What can we say about the product of $n$ copies of $S^{2}$?

-

As Sebastian said, the Dirac operator can be identified with the $\bar{\partial}$-operator on the line bundle of degree $-1$ (the inverse of the Hopf bundle) and so its (equivariant) index is trivial. Now you ask for the equivariant index of the twisted Dirac. The reasonable group to study equivariance is $SU(2)$, because $SO(3)$ does not act on the spinor bundle. First note that the $K$-group of equivariant vector bundles on $S^2$ is $K_{SU(2)} (S^2) = KU_{SU(2)} (SU(2)/U(1)) \cong K_{U(1)} (*) = RU(1)$. Therefore, any $SU(2)$-equivariant vector bundle splits as a sum of line bundles, and the line bundles are precisely all powers of the Hopf bundle. So this reduces the problem to the study of the kernel of $\bar{\partial}$ on any power of the Hopf bundle, but as an $SU(2)$-representation. It is a pleasant exercise to prove that the space of holomorphic sections of the $k$th power of the Hopf bundle is canonically isomorphic to the space of degree $k$ homogeneous polynomials in two variables, and the $SU(2)$-action on that is given by the action on the variables. So you get all irreducible representations of $SU(2)$ as the kernel of a twisted equivariant Dirac operator. This is of course not an accident, and you can read more about this close connection between index theory and representation theory in the papers (I guess that you know some of them):

G. Segal: ''Equivariant $K$-theory'', ''The representation ring of a compact Lie group''

R. Bott: ''Homogeneous vector bundles''

M. Atiyah, R. Bott: ''Lefschetz fixed point formula of elliptic complexes''

- thanks for your reply, very enlightening. –  J Verma Apr 16 '11 at 20:23

There are many questions here. I can answer the first one quickly: there are no harmonic spinors on $S^2$ with the standard round metric. This follows from the Lichnerowicz theorem. A good place to read about this is Nigel Hitchin's thesis Harmonic spinors, published in Adv. Math. (1974) vol. 14, pp. 1-55. (MathSciNet link). More generally, on any compact spin manifold with positive scalar curvature the kernel of the Dirac operator is zero. Concerning the other questions, you can try looking at this paper of Christian Bär, where the spectrum of the Dirac operator on space forms (particularly on round spheres) is determined: The Dirac operator on space forms of positive curvature, Journal of the Mathematical Society of Japan, vol. 48, No. 1, 1996. (link)

- @ Jose - Thanks for your reply and for the links. What can we say about the harmonic spinors if the metric on $S^{2}$ is not the standard one, say induced as a submanifold of $\mathbb{R}^{4}$?
–  J Verma Apr 15 '11 at 2:13

My guess is that if you cannot control the curvature (say, to be non-negative and positive at some point) then you cannot say much. Also, computing the spectrum of operators such as Dirac, laplacian, ... in the absence of symmetry is usually hopeless. –  José Figueroa-O'Farrill Apr 15 '11 at 2:28

@ Jose - It seems that the kernel of the Dirac operator is zero for any metric on $S^{2}$. Are there any cases where this is non-zero? Also, in case the Spin bundle is twisted by a line bundle, is it true that the kernel of the twisted Dirac operator is also zero? Thanks for your time. –  J Verma Apr 15 '11 at 2:48

No, the kernel of the (non-twisted) Dirac operator is always trivial: Every metric on $S^2$ is conformally equivalent to the metric of constant curvature $1.$ Moreover, the Dirac operator has a nice transformation formula with respect to conformal changes of the metric (see for example Lawson-Michelsohn) which implies that the kernel is invariant under conformal changes of the metric (at least in dimension 2, but I think it is true generally). Another way to see this is the following: Every (Riemannian) spinor bundle on a Riemann surface (equipped with a compatible Riemannian metric) is of the form $S\oplus S^*,$ where $S$ is a Riemann surface spin bundle. Using the Riemannian metric, one can identify $\bar K=K^*$, and $S^* =\bar{K} S,\dots$ In that form the Dirac operator is just given by the $\bar\partial$ on $S$ and similarly the induced Riemannian $\partial$ on $S^*,$ which is of course the adjoint operator with respect to the Riemannian metric. But the degree of $S$ on $P^1$ is $-1$, so there cannot be holomorphic sections, which implies there is no kernel.

- @ Sebastian - But a twisted Dirac Operator on a twisted spinor bundle can have a non-trivial kernel. –  J Verma Apr 15 '11 at 18:43
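A hedged postscript making the index in the first answer explicit (my own summary of standard facts, in that answer's convention where the $k$th power of the Hopf bundle has the degree-$k$ homogeneous polynomials as its holomorphic sections): for $k \geq 0$ the higher cohomology vanishes, so the equivariant index of the corresponding twisted Dolbeault/Dirac operator is simply the $SU(2)$-character of the degree-$k$ homogeneous polynomials in two variables,

$$\operatorname{ind}_{SU(2)}\big(\bar{\partial}_k\big)\!\left(\begin{smallmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{smallmatrix}\right) \;=\; \sum_{j=0}^{k} e^{i(k-2j)\theta} \;=\; \frac{\sin\big((k+1)\theta\big)}{\sin\theta},$$

i.e. the character of the $(k+1)$-dimensional irreducible representation of $SU(2)$. For a product of $n$ copies of $S^{2}$ with an external tensor product twist, the index is multiplicative, so one gets a product of such characters.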
2014-12-29 06:07:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9366479516029358, "perplexity": 175.17353122606883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447561952.81/warc/CC-MAIN-20141224185921-00059-ip-10-231-17-201.ec2.internal.warc.gz"}
http://www.tug.org/pipermail/xy-pic/2006-August/000384.html
# [Xy-pic] Questionable behaviour of the arrow extension

Johann Bauer jbauer-news at web.de
Wed Aug 23 22:07:30 CEST 2006

Hello,

the arrow extension behaves strangely in two instances that came up when I drew some mathematical diagrams. I would be grateful if someone could help me with these problems.

1) xypic outputs three paths for each arrow (\ar): two of (nearly) zero length at both ends of the stem, and the proper one for the stem itself. I discovered this when I increased the linewidth for a certain diagram. The effect becomes obvious in the following example. Note that this bug (?) occurs independently of the "line" extension - I just used this to demonstrate the effect.

\documentclass{minimal}
\usepackage[arrow,line,dvips]{xy}
\begin{document}
\xy <3pc,0pc>:<0pc,2pc>::
\ar@[|<30pt>]@[projcap]@{-} (10,10)
\endxy
\end{document}

Without the line extension, the additional paths can be seen in the postscript output (generated by dvips):

%%Page: 1 1
TeXDict begin 1 0 bop 0 26567 a SDict begin {pu 2 lc 30.0 lw}xyg end
0 26567 a -2672 x @beginspecial @setspecial
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{0.0 0.14063 l}xy % Unwanted!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@endspecial 0 26567 a SDict begin { pp}xyf end 0 26567 a
0 26567 a SDict begin {pu 2 lc 30.0 lw}xyg end 0 26567 a 39851 0 a
@beginspecial @setspecial
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{360.0 216.0 l}xy % This is the proper arrow stem
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@endspecial 0 26567 a SDict begin { pp}xyf end 0 26567 a
0 26567 a SDict begin {pu 2 lc 30.0 lw}xyg end 0 26567 a 39851 0 a
@beginspecial @setspecial
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{0.0 0.14063 l}xy % Unwanted!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@endspecial 0 26567 a SDict begin { pp}xyf end 0 26567 a
eop end
%%Trailer

How can I get rid of these (nearly) zero paths? They create ugly artefacts in some of the diagrams I wrote.

2) Another strange output comes from this example:

\documentclass{minimal}
\usepackage[arrow]{xy}
\begin{document}
\xy (0,0), {\ar@{=} (10,0)},
(20,0), {\PATH ~={**\dir{.}} '(30,0) (30,10)},
(0,20), {\ar@{=} (00,30)},
(20,20), {\PATH ~={**\dir{.}} '(30,20) (30,30)},
\endxy
\end{document}

Here, the arrow command causes the succeeding path to be drawn twice: once with its own style (here: dotted) and once with the arrow style (here: double line). To avoid this, I can of course move all the arrow commands to the end of the diagram. But is this a bug in xypic or did I just not understand the syntax rules? What would then be the correct syntax?

Thanks in advance for any help!
Johann Bauer
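To illustrate the workaround mentioned under 2) above, moving the \ar commands after all the \PATH commands, here is the same example with its lines reordered. This is only a sketch of the suggested workaround, not a confirmed fix; it simply reuses the commands from the example above in a different order:

\documentclass{minimal}
\usepackage[arrow]{xy}
\begin{document}
\xy
(20,0), {\PATH ~={**\dir{.}} '(30,0) (30,10)},
(20,20), {\PATH ~={**\dir{.}} '(30,20) (30,30)},
(0,0), {\ar@{=} (10,0)},
(0,20), {\ar@{=} (00,30)},
\endxy
\end{document}

If the double drawing is indeed caused by a \PATH inheriting state from the preceding \ar, this ordering should leave each dotted path drawn once in its own style.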
2018-05-27 01:33:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8807822465896606, "perplexity": 13576.875838623962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867977.85/warc/CC-MAIN-20180527004958-20180527024958-00597.warc.gz"}
https://solvedlib.com/the-product-is-the-epoxide-shown-below-h3c-b,390860
# The product is the epoxide shown below. Write a mechanism for the reaction using curved arrows

###### Question:

The product is the epoxide shown below [structure image lost in extraction; the surviving fragments "H3C", "CH3", and ":O:" indicate a methyl-substituted epoxide]. Write a mechanism for the reaction using curved arrows to show electron reorganization.
2022-08-12 03:05:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6917704939842224, "perplexity": 4736.999978228963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00623.warc.gz"}
https://calloc.net/posts/2017-07-08-range-bitwise-and/
Given a range $[m, n]$ where $0 \leq m \leq n \leq 2^{31} - 1$, return the bitwise AND of all numbers in this range, inclusive.

## Discussion

The problem is trivial to solve with a loop, but there's an interesting idea lurking around the corner, so we will avoid doing this via a straightforward loop.

Take a number, $m = 14$. Let's examine what happens in binary as we move forward from 14

(14) .... 0000 1110
(15) .... 0000 1111
(16) .... 0001 0000 <-
(17) .... 0001 0001
(18) .... 0001 0010
(19) .... 0001 0011
(20) .... 0001 0100
(21) .... 0001 0101
(22) .... 0001 0110
(23) .... 0001 0111
(24) .... 0001 1000
(25) .... 0001 1001
(26) .... 0001 1010
(27) .... 0001 1011
(28) .... 0001 1100
(29) .... 0001 1101

Looking from the right hand side, here are some properties to keep in mind:

1. $n$ is likely to have more significant bits set than $m$, so the AND of all contiguous elements in this range can be at most $m$
2. If $m < 2^k \leq n$, then the result is always $0$: the power of $2$ shares no set bits with $m$, because its single set bit lies further left than $m$'s leftmost set bit, so AND-ing those two numbers alone already gives $0$
3. If $n > m$, the rightmost bit for this range is $0$, since the range contains both even and odd numbers

Property $3$ can be used with a little more insight. Notice what happens if we replace the rightmost bit of every number in the sequence with X, because we don't care what this value is.

(14) .... 0000 111X
(15) .... 0000 111X
(16) .... 0001 000X
(17) .... 0001 000X
(18) .... 0001 001X
(19) .... 0001 001X
(20) .... 0001 010X
(21) .... 0001 010X
(22) .... 0001 011X
(23) .... 0001 011X
(24) .... 0001 100X
(25) .... 0001 100X
(26) .... 0001 101X
(27) .... 0001 101X
(28) .... 0001 110X
(29) .... 0001 110X

Ignoring the $X$, consecutive pairs now collapse into an increasing sequence. How long can we keep doing this? We ignored the rightmost bit because it changed constantly, so we stop ignoring bits once the current rightmost bit stops changing across the range. When it does stop changing, no bit to its left changes either, because in a run of consecutive integers a higher bit only flips when all the bits below it roll over. So AND-ing the entire sequence is the same as finding the common leading bits of all numbers in that range, and that value depends only on the endpoints, not on all the numbers in between. This is what the method looks like:

(17) .... 0001 0001
(26) .... 0001 1010
---
(08) .... 0001 000X
(13) .... 0001 101X
---
(04) .... 0001 00XX
(06) .... 0001 10XX
---
(02) .... 0001 0XXX
(03) .... 0001 1XXX
---
(01) .... 0001 XXXX
(01) .... 0001 XXXX

The endpoints don't necessarily have to shrink all the way down to $1$; the process ends as soon as right-shifting them makes them equal.

## Code

Here is a C++ implementation of the idea above

class Solution {
public:
    int rangeBitwiseAnd(int m, int n) {
        int shift = 0;
        // Drop differing low bits until only the common prefix remains.
        while (m < n) { m >>= 1; n >>= 1; ++shift; }
        // Shift the common prefix back into its original position.
        return m << shift;
    }
};
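As a quick check against the worked example above, here is a hypothetical test harness (it assumes the Solution class above is in scope; the expected values come from the listings in the discussion):

#include <cassert>

int main() {
    Solution s;
    assert(s.rangeBitwiseAnd(17, 26) == 16);  // 0001 XXXX from the worked example
    assert(s.rangeBitwiseAnd(14, 29) == 0);   // the range crosses 2^4, so property 2 applies
    assert(s.rangeBitwiseAnd(20, 20) == 20);  // a single-element range is just the number itself
    return 0;
}

Since each iteration discards one bit from both endpoints, the loop runs at most 31 times for 31-bit inputs, so the whole computation is $O(\log n)$.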
2019-08-19 02:31:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35788580775260925, "perplexity": 3264.596879173936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00466.warc.gz"}
https://chat.stackexchange.com/transcript/71/2020/3/29
4:04 AM @ACuriousMind Honestly I bet someone has tried that :-p I could imagine either creating a compiler that turns (a subset of) Python code into standalone machine code, or making a low-level Python interpreter that can run underneath an operating system. Of course it would be horribly inefficient. 4:21 AM John Rennie has removed an event from this room's schedule. 2 @Feeds Good bye "Physics Chat Session". My meta answer worked. You fell for it. "There is currently no schedule for this room" May anarchy reign! 4:52 AM lol 5:06 AM I wonder if COVID-19 will bring new physics because most of the phenomenologists are now stuck at home analysing data 1 hour later… 6:38 AM Of all things I knew about Motl, I never guessed he is conservative @DavidZ Well, yes, "you can't" is probably a bit too strong given that people have written things like an assembly transpiler that transpiles any x86 machine code into code using only the MOV instruction :P 7:05 AM Yeah, that sounds about right 1 hour later… 8:20 AM @coronapatrol Thanks for the link @ACuriousMind This gave me much more clarity thank you 8:36 AM Oh, the Johns Hopkins map tool now also offers a logarithmic scale. Is it nice to have it or sad that we need it? Anyway, it’s nice to see the exponential growth or any deviation from exponential growth. 9:09 AM morning 2 hours later… 10:50 AM @ACuriousMind Can we say that the position eigenvalues are physical quantities which should be invariant regardless the frame of reference? 0 When we deal with symmetry transformations in quantum mechanics we assume true that, If before the symmetry transformation we have this $\hat A | \phi_n \rangle = a_n|\phi_n \rangle,$ and after the symmetry transformation we have this $\hat A' | \phi_n' \rangle = a_n'|\phi_n' \rangle,$ the... @Student404Mus An eigenvalue is just a number (regardless of whether it's position or something else). Any transformations in QM act on states and operators, not on numbers, so I don't know what (in)variant is supposed to mean here indeed. scalars themselves are invariant under transformations It's not even that it's a "scalar", it's just not even part of the space the transformations act on In general speaking rather to say "numbers", we know scalars are invariant under transformations @ACuriousMind The answer above yours states, if we assume $x$ is the origin, Does eigenvalue makes sense to be an origin? Sorry, I do not understand your question. 10:59 AM That question, someone answered it differently and says, "suppose $x$ is the origin,..." I do not understand what you're trying to ask me about it, though Does $x$ really could represent an origin rigorously $x$ still represent an eigenvalue, why it is an "origin"? 1. "Does...could" is not grammatical English, you don't use 'do' with other auxiliary verbs like 'can'. 2. I do not understand what you mean by "$x$ representing an origin". Manny is just saying that the position eigenstate with eigenvalue $x=0$ is the origin. And that in each frame, you will associate the origin of that frame with the eigenstate of the position operator in that frame with the eigenvalue $x=0$. No he didn't. He said, "suppose $x$ is the origin," is different from says, the position eigenstate with eigenvalue x=0 is the origin. since, according to the question, $x$ represents an eigenvalue I was paraphrasing what (I think) he's trying to say. 11:06 AM that's where we mismatched each other @Student404Mus He's making an example, by picking the (undetermined) value $x$ from the question to be $x=0$. 
I don't understand what the problem is with that. Yes, that what he tried to describe I think the problem was the definition of frame that's it. 2. Your exclamatory about "representing" an eigenvalue in an origin's frame does't this make any sense? I think all what we do when trying to understand a theorized thing is to represent it?! @ACuriousMind "The equation $\hat{x}\lvert x_n\rangle$ becomes" Doesn't seem to be an equation? $\hat{x}\lvert x_n\rangle = x \lvert x_n\rangle$ 11:47 AM @ACuriousMind Shouldn't this question be duplicate? https://physics.stackexchange.com/questions/540141/lorentz-invariance-of-action-or-lagrangian/540147#540147 to this https://physics.stackexchange.com/questions/47556/lorentz-invariance-of-the-integration-measure/47559 @Student404Mus If you think a question is a duplicate, please just flag it as a duplicate. 2 Alright this won't affect the user's reputation ? no, it won't, up- and downvotes are the only thing that affects reputation. Thanks. well, these and spam flags 2 hours later… 2:16 PM 227 I received this question from my mathematics professor as a leisure-time logic quiz, and although I thought I answered it right, he denied. Can someone explain the reasoning behind the correct solution? Which answer in this list is the correct answer to this question? All of the below... 2:34 PM am i crazy or does this not make sense? this is from my GR notes, specifically the special relativity section ah wait nevermind i zoomed in a lot and the first one is actually a tilde, the second is a bar not the nicest notation to use 14 We are happy to announce that the previously announced follow question and answer feature is now live across the Network, including Stack Overflow, all Stack Exchange sites, and all Meta sites. (International Stack Overflow sites will have it turned on in a day or two once we have translations al... Yay! I have great hopes for this Follow feature. It'll make it a lot easier to respond to edits on downvoted & close-voted posts. 3:23 PM If I, in theory, watched a particle travel between A and B and wanted to parameterise it's trajectory with its proper time, I would need a clock that ticks in my frame at $\gamma$ ticks per second, right? assuming it's just travelling in a straight line along an axis 0 Okay at school (closed, unfortunately), we have completed a practical the involves determining the force experienced by a rubber stopper in a horizontal plane. And just before the school closes (corona), my teacher asked the class to modify this experiment in order to address our own related hypo... How has this not been closed yet? Oh I completely missed that the same user posted almost all of the answers 3:38 PM @AaronStevens Yeah. But now they understand that we don't do that on SE. Sorry, shall not do that again! Essentially spamming. — Kishan Bhatt 13 mins ago @AaronStevens I suppose it should be closed as homework. OTOH, I think this part is ok because it's a conceptual question: "Is the slotted mass responsible for the centripetal force in vertical circular motion?" 4:02 PM @PM2Ring But then you have "Also is this a good experiment to do (constant tension vs varying tension)? What variables should I change instead?" And "How can I calculate the initial velocity of the mass? Given the small mass (25 g), the mass of the slotted mass (150 g), radius (1 m) and period (0.7) available." 
4:30 PM @AaronStevens Oh, I agree that that stuff makes it close-worthy, especially the stuff asking for a specific calculation, but the existing answers avoided doing that calculation. 4:54 PM @PM2Ring Yeah, I am just talking about the question :) I did cast a close vote on it, and left a comment. I'd happily vote to reopen if the question were cleaned up, but I think that's pretty unlikely, at this stage. 5:15 PM Hey homework questions are not accepted where can I ask them? @AaronStevens There's nothing wrong with this question! I mean, it suppose contains Newtonian mechanics, and I guess a lot of people here dislike that, but that's not a reason to close. @AaronStevens BOOM! :-) @JohnRennie I need to be faster on the draw :P @knzhou I don't think I have ever advocated for a question to be closed because it contains Newtonian mechanics, nor have I ever seen anyone do that @SMSheikh You might be able to get some help in the problem solving strategies chat chat.stackexchange.com/rooms/54160/problem-solving-strategies @AaronStevens I'd have like no rep here if that was the case lol @JMac Haha right. I answer many Newtonian Mechanics questions. It is the only tag I have gold in right now @knzhou "How can I calculate the initial velocity of the mass? Given the small mass (25 g), the mass of the slotted mass (150 g), radius (1 m) and period (0.7) available." is basically an off-topic homework question. "Also is this a good experiment to do (constant tension vs varying tension)? What variables should I change instead?" Is an opinion-based question. This is why I think it should be closed Not because it involves Newtonian mechanics Additionally, due to the many questions, it is not a well-focused question Do they want a calculation? Do they want feed back on their experimental design and ideas? Do they want to understand more about centripetal forces and circular motion? 5:27 PM I'm surprised there aren't votes for needs more focus. That's what I'm VTCing for. Like @PM2Ring said, the final part is fine. The other ones are not, and the fact that all exist in the same post makes it a not focused question @knzhou I do recognize that I tend to be stricter with closing questions, and I think you tend to be more lenient, so I would love to hear why you think the question is fine as it is. I like to learn about what other people think to help inform myself in how to think of other questions in the future Oh well :( 5:53 PM 42 My question was closed1 on Phys.SE. Can you recommend me another internet site where my question might be on-topic? Here we keep a list of other internet sites that might help students2 of physics. One site per answer. To keep the list at a reasonable size, please only include sites which fulfil... When can I expect to see the implementation of the suggestions made here(👇)? 11 As you may or may not have noticed, the new ask page is now live on the network. Go and have a look! Some of the new features are only visible to new users, but it still looks noticeably different even from old accounts. Now, the new design of the Ask page allows for a fair amount of per-site cu... 6:26 PM @AaronStevens Well, disagreement is perfectly fine. We're high rep users, we make the rules here! So in some sense, every individual high rep user's close/reopen votes are correct, by definition. But personally I liked the question because it's open ended enough to not just be a boring calculation, yet not open ended enough to be unanswerable. 
Also, I like the OP as a person given what they wrote, they sound curious and open-minded. In fact, I would prefer to have one more of these nice Newtonian mechanics questions than, e.g. yet another question about checking a tedious calculation in a QFT homework problem. But the votes have spoken, and that's fine! @knzhou Right, those are positives. If the question can be edited to be more focused and not opinion based or requesting a specific calculation without prior effort, then I think it could be a fine question about how the hanging mass relates to the centripetal force applied to the spinning mass. Today was a good day on PSE Because I got to mention the November tensor Didn't expect to The emphasis would also need to be changed. The title and bold question at the end make the main point about if the experiment should be done in the way they want to, or if it should be done in a different way @Charlie Random tip: There is no need to edit really poor questions that have already been closed, e.g. this one. It bumps the question into the reopen queue and to the top of the home page. This isn't horrible, but I just don't think it is necessary. The new "follow" option is interesting 7:02 PM Ah my bad I forgot it bumps the question @Charlie In addition to what @Aaron just said, it's generally a bad idea for someone other than the OP to edit a freshly closed question. If the question is fixable by the OP they should be given that opportunity. The very first edit of a closed question sends it to the reopen review queue, but any subsequent edits do not (although they do bump it to the home page). So if that first edit doesn't fix the question the odds are it won't ever get reopened, even if the OP later turns it into a perfect question. 2 I did not realise that either, will avoid it in future thanks Yeah, it's one of those "features" of the system that aren't widely publicized... However, there are some significant changes currently being worked on to the whole closing process. See meta.stackoverflow.com/q/394871/4014959 for details. As you can see by the downvotes, comments & answers there, the SO community aren't completely happy with some of the proposed changes. In particular, the ability for an OP to automatically reopen their closed question merely by making a substantial edit to it... 7:41 PM The OP reposted physics.stackexchange.com/q/540205 to astronomy.stackexchange.com/q/35627/16685 as was (kind of) recommended in comments. Should it be close-voted (or flagged)? It already has 1 answer on Physics.SE but that answer's pretty useless, so hopefully it won't get upvoted (which would stop the OP from deleting the question). 8:00 PM When a phase factor is introduced for identical physical states? 2 hours later… 9:50 PM @PM2Ring I think a flag would be fine 10:38 PM Yeah, when in doubt, cast a flag. TBH I really don't know what is a good way to handle those cases. As much as SE discourages exact crossposting, I don't think it should really be our responsibility ("our" meaning the Physics SE community) to handle what people do on other sites.
2020-05-27 05:54:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.548133373260498, "perplexity": 1216.6103710779296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392141.7/warc/CC-MAIN-20200527044512-20200527074512-00083.warc.gz"}
http://www.goodmath.org/blog/tag/bad-math-2/
# Big Bang Bogosity

One of my long-time mantras on this blog has been “The worst math is no math”. Today, I’m going to show you yet another example of that: a recent post on Boing-Boing called “The Big Bang is Going Down”, by a self-proclaimed genius named Rick Rosner.

> First postulated in 1931, the Big Bang has been the standard theory of the origin and structure of the universe for 50 years. In my opinion, (the opinion of a TV comedy writer, stripper and bar bouncer who does physics on the side) the Big Bang is about to collapse catastrophically, and that’s a good thing.
>
> According to Big Bang theory, the universe exploded into existence from basically nothing 13.7-something billion years ago. But we’re at the beginning of a wave of discoveries of stuff that’s older than 13.7 billion years.

We’re constantly learning more about our universe, how it works, and how it started. New information isn’t necessarily a catastrophe for our existing theories; it’s just more data. There’s constantly new data coming in – and as yet, none of it comes close to causing the big bang theory to catastrophically collapse.

The two specific examples cited in the article are:

1. one quasar that appears to have formed earlier than we might expect – it already existed just 900 million years after the current estimate of when the big bang occurred. That’s very surprising, and very exciting. But even in existing models of the big bang, it’s surprising, but not impossible. (No link, because the link in the original article doesn’t work.)
2. an ancient galaxy – a galaxy that existed only 700 million years after the big bang occurred – contains dust. Cosmic dust is made of atoms much larger than hydrogen – like carbon, silicon, and iron, which are (per current theories) the product of supernovas. Supernovas generally don’t happen to stars younger than a couple of billion years – so finding dust in a galaxy less than a billion years after the universe began is quite surprising. But again: impossible under the big bang? No.

The problem with both of these arguments against the big bang is: they’re vague. They’re both handwavy arguments built on crude statements about what “should” be possible or impossible according to the big bang theory. But neither comes close to the kind of precision that an actual scientific argument requires.

Scientists don’t use math because they like to be obscure, or because they think all of the pretty symbols look cool. Math is a tool used by scientists, because it’s useful. Real theories in physics need to be precise. They need to make predictions, and those predictions need to match reality to the limits of our ability to measure them. Without that kind of precision, we can’t test theories – we can’t check how well they model reality. And precise modelling of reality is the whole point.

The big bang is an extremely successful theory. It makes a lot of predictions, which do a good job of matching observations. It’s evolved in significant ways over time – but it remains by far the best theory we have – and by “best”, I mean “most accurate and successfully predictive”.

The catch to all of this is that when we talk about the big bang theory, we don’t mean “the universe started out as a dot, and blew up like a huge bomb, and everything we see is the remnants of that giant explosion”. That’s an informal description, but it’s not the theory. That informal description is so vague that a motivated person can interpret it in ways that are consistent, or inconsistent, with almost any given piece of evidence.
The real big bang theory isn’t a single English statement – it’s many different mathematical statements which, taken together, produce a description of an expansionary universe that looks like the one we live in. For a really, really small sample, you can take a look at a nice old post by Ethan Siegel over here.

If you really want to make an argument that it’s impossible according to the big bang theory, you need to show how it’s impossible. The argument by Mr. Rosner is that the atoms in the dust in that galaxy couldn’t exist according to the big bang, because there wasn’t time for supernovas to create it. To make that argument, he needs to show that that’s true: he needs to look at the math that describes how stars form and how they behave, and then, using that math, show that the supernovas couldn’t have happened in that timeframe. He doesn’t do anything like that: he just asserts that it’s true.

In contrast, if you read the papers by the guys who discovered the dust-filled galaxy, you’ll notice that they don’t come anywhere close to saying that this is impossible, or inconsistent with the big bang. All they say is that it’s surprising, and that we may need to revise our understanding of the behavior of matter in the early stages of the universe. The reason that they say that is because there’s nothing there that fundamentally conflicts with our current understanding of the big bang.

But Mr. Rosner can get away with the argument, because he’s being vague where the scientists are being precise. A scientist isn’t going to say “Yes, we know that it’s possible according to the big bang theory”, because the scientist doesn’t have the math to show it’s possible. At the moment, we don’t have sufficiently precise math either way to come to a conclusion; we don’t know. But what we do know is that millions of other observations in different contexts, different locations, observed by different methods by different people, are all consistent with the predictions of the big bang. Given that we don’t have any evidence to support the idea that this couldn’t happen under the big bang, we continue to say that the big bang is the theory most consistent with our observations, that it makes better predictions than anything else, and so we assume (until we have evidence to the contrary) that this isn’t inconsistent. We don’t have any reason to discard the big bang theory on the basis of this!

Mr. Rosner, though, goes even further, proposing what he believes will be the replacement for the big bang.

> The theory which replaces the Big Bang will treat the universe as an information processor. The universe is made of information and uses that information to define itself. Quantum mechanics and relativity pertain to the interactions of information, and the theory which finally unifies them will be information-based.
>
> The Big Bang doesn’t describe an information-processing universe. Information processors don’t blow up after one calculation. You don’t toss your smart phone after just one text. The real universe – a non-Big Bang universe – recycles itself in a series of little bangs, lighting up old, burned-out galaxies which function as memory as needed.
>
> In rolling cycles of universal computation, old, collapsed, neutron-rich galaxies are lit up again, being hosed down by neutrinos (which have probably been channeled along cosmic filaments), turning some of their neutrons to protons, which provides fuel for stellar fusion.
> Each calculation takes a few tens of billions of years as newly lit-up galaxies burn their proton fuel in stars, sharing information and forming new associations in the active center of the universe before burning out again. This is ultra-deep time, with what looks like a Big Bang universe being only a long moment in a vast string of such moments across trillions or quadrillions of giga-years.

This is not a novel idea. There are a ton of variations of the “universe as computation” idea that have been proposed over the years. Just off the top of my head, I can rattle off variations that I’ve read (in decreasing order of interest) by Minsky (can’t find the paper at the moment; I read it back when I was in grad school), by Fredkin, by Wolfram, and by Langan. All of these theories assert in one form or another that our universe is either a massive computer or a massive computation, and that everything we can observe is part of a computational process.

It’s a fascinating idea, and there are aspects of it that are really compelling. For example, the Minsky model has an interesting explanation for the speed of light as an absolute limit, and for time dilation. Minsky’s model says that the universe is a giant cellular automaton. Each minimal quantum of space is a cell in the automaton. When a particle is located in a particular cell, that cell is “running” the computation that describes that particle. For a particle to move, the data describing it needs to get moved from its current location to its new location at the next time quantum. That takes some amount of computation, and the cell can only perform a finite amount of computation per quantum. The faster the particle moves, the more of its time quantum is dedicated to motion, and the less it has for anything else. The speed of light, in this theory, is the speed where the full quantum for computing a particle’s behavior is dedicated to nothing but moving it to its next location.

It’s very pretty. Intuitively, it works. That makes it an interesting idea.

But the problem is, no one has come up with an actual working model. We’ve got real observations of the behavior of the physical universe that no one has been able to describe using the cellular automaton model. That’s the problem with all of the computational hypotheses so far. They look really good in the abstract, but none of them come close to actually working in practice.

A lot of people nowadays like to mock string theory, because it’s a theory that looks really good, but has no testable predictions. String theory can describe the behavior of the universe that we see. The problem with it isn’t that there are things we observe in the universe that it can’t predict, but that it can predict just about anything. There are a ton of parameters in the theory that can be shifted, and depending on their values, almost anything that we could observe can be fit by string theory. The problem with it is twofold: we don’t have any way (yet) of figuring out what values those parameters need to have to fit our universe, and we don’t have any way (yet) of performing an experiment that tests a prediction of string theory that’s different from the predictions of other theories.

As much as we enjoy mocking string theory for its lack of predictive value, the computational hypotheses are far worse! So far, no one has been able to come up with one that can come close to explaining all of the things that we’ve already observed, much less to making predictions that are better than our current theories.
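To make the budget-splitting intuition in the Minsky picture concrete, here is a tiny toy sketch. To be clear, this is my own cartoon of the idea described above, not Minsky’s actual construction, and it only captures the qualitative point: real time dilation goes like $\sqrt{1 - v^2}$, not $1 - v$.

#include <cstdio>

// Toy model: each tick, a cell gets 1 unit of compute for the particle
// it hosts. Moving at a fraction v of the maximum speed consumes v of
// that budget, leaving (1 - v) for updating the particle's internal
// state, the cartoon analogue of its "proper time" rate.
int main() {
    for (double v = 0.0; v <= 1.0; v += 0.25) {
        double internal = 1.0 - v;  // budget left for internal evolution
        std::printf("v = %.2fc -> internal updates per tick = %.2f\n",
                    v, internal);
    }
    // At v = 1, the entire budget goes to motion: nothing is left to
    // evolve the particle's state. That is the toy version of "time
    // stops at the speed of light".
    return 0;
}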
But just like he did with his “criticism” of the big bang, Mr. Rosner makes predictions, but doesn’t bother to make them precise. There’s no math to his prediction, because there’s no content to his prediction. It doesn’t mean anything. It’s empty prose, proclaiming victory for an ill-defined idea on the basis of hand-waving and hype.

Boing-Boing should be ashamed for giving this bozo a platform.

# Run! Hide your children! Protect them from math with letters!

Normally, I don’t write blog entries during work hours. I sometimes post stuff then, because it gets more traffic if it’s posted mid-day, but I don’t write. Except sometimes, when I come across something that’s so ridiculous, so offensive, so patently mind-bogglingly stupid that I can’t work until I say something. Today is one of those days.

In the US, many school systems have been adopting something called the Common Core. The Common Core is an attempt to come up with one basic set of educational standards that are applied consistently in all of the states. This probably sounds like a straightforward, obvious thing. In my experience, most Europeans are actually shocked that the US doesn’t have anything like this. (In fact, at best, it’s historically been standardized state-by-state, or even school district by school district.) In the US, a high school diploma doesn’t really mean anything: the standards are so widely varied that you can’t count on much of anything!

The total mishmash of standards is obviously pretty dumb. The Common Core is an attempt to rationalize it, so that no matter where you go to school, there should be some basic commonality: when you finish 5th grade, you should be able to read at a certain level, do math at a certain level, etc. Obviously, the common core isn’t perfect. It isn’t even necessarily particularly good. (The US being the US, it’s mostly focused on standardized tests.) But it’s better than nothing.

But again, the US being the US, there’s a lot of resistance to it. Some of it comes from the flaky left, which worries about how common standards will stifle the creativity of their perfect little flower children. Some of it comes from the loony right, which worries about how it’s a federal takeover of the education system which is going to brainwash their kiddies into perfect little socialists. But the worst, the absolute inexcusable worst, are the pig-ignorant jackasses who hate standards because they might turn children into adults who are less pig-ignorant than their parents.

The poster child for this bullshit attitude is State Senator Al Melvin of Arizona. Senator Melvin repeats the usual right-wing claptrap about the federal government, and goes on to explain what he dislikes about the math standards. The math standards, he says, teach “fuzzy math”. What makes it fuzzy math? Some of the problems use letters instead of numbers.

The state of Arizona should reject the Common Core math standards, because the math curriculum sometimes uses letters instead of numbers. After all, everyone knows that there’s nothing more to math than good old simple arithmetic! Letters in math problems are a liberal conspiracy to convince children to become gay!

The scary thing is that I’m not exaggerating here. An argument that I have, horrifyingly, heard several times from crazies is that letters are used in math classes to try to introduce moral relativism into math. They say that the whole reason for using letters is because with numbers, there’s one right answer.
But letters don’t have a fixed value: you can change what the letters mean. And obviously, we’re introducing that into math because we want to make children think that questions don’t have a single correct answer.

No matter where in the world you go, you’ll find stupid people. I don’t think that the US is anything special when it comes to that. But it does seem like we’re more likely to take people like this, and put them into positions of power. How does a man who doesn’t know what algebra is get put into a position where he’s part of the committee that decides on educational standards for a state? What on earth is wrong with people who would elect someone like this?

Senator Melvin isn’t just some random guy who happened to get into the state legislature. He’s currently the front-runner in the election for Arizona’s next governor. Hey Arizona, don’t you think that maybe, just maybe, you should make sure that your governor knows high school algebra? I mean, really, do you think that if he can’t understand a variable in an equation, he’s going to be able to understand the state budget?!

# Bad Arithmetic and Blatant Political Lies

I’ve been trying to stay away from the whole political thing lately. Any time that I open my mouth to say anything about politicians, I get a bunch of assholes trying to jump down my throat for being “biased”. But sometimes, things just get too damned ridiculous, and I can’t possibly let it go without comment.

In the interests of disclosure: I despise Mitt Romney. Despite that, I think he’s gotten a very unfairly hard time about a lot of things. Let’s face it, the guy’s a rich investor. But that’s been taken by the media, and turned into the story through which everything is viewed, whether it makes sense or not. For example, there’s the whole $10,000 bet nonsense. I don’t think that that made a damned bit of sense. It was portrayed as “here’s a guy so rich that he can afford to lose $10,000”. But… well, let’s look at it from a mathematical perspective. You can assess the cost of a bet probabilistically: take the cost of losing, and multiply it by the probability of losing. That’s the expected cost of the bet.

So, in the case of that debate moment, what was the expected cost of the bet? $0. If you know that you’re betting about a fact, and you know the fact, then you know the outcome of the bet. It’s a standard rhetorical trick. How many of us have said “Bet you a million dollars”? It doesn’t matter what dollar figure you attach to it – because you know the fact, and you know that the cost of the bet, to you, is $0.

But… Well, Mitt is a rich asshole.

As you must have heard, Mitt released his income tax return for last year, and an estimate for this year. Because his money is pretty much all investment income, he paid a bit under 15% in taxes. This is, quite naturally, really annoying to many people. Those of us who actually have jobs and get paid salaries don’t get away with a tax rate that low. (And people who are paid salary rather than investment profits have to pay the alternative minimum tax, which means that they’re not able to deduct charity the way that Mitt is.) So, in an interview, Mitt was asked about the fairness of a guy who made over twenty million dollars a year paying such a low rate. And Mitt, asshole that he is, tried to cover up the insanity of the current system, by saying:

> Well, actually, I released two years of taxes and I think the average is almost 15 percent.
> And then also, on top of that, I gave another more 15 percent to charity. When you add it together with all of the taxes and the charity, particularly in the last year, I think it reaches almost 40 percent that I gave back to the community.

I don’t care about whether the reasoning there is good or not. Personally, I think it’s ridiculous to say “yeah, I didn’t pay taxes, but I gave a lot of money to my church, so it’s OK.” But forget that part. Just look at the freaking arithmetic! He pays less than 15% in taxes. He pays 15% in charity (mostly donations to his church). What’s less than 15 + 15? It sure as hell isn’t “almost 40 percent”. It’s not quite 30 percent.

This isn’t something debatable. It’s simple, elementary school arithmetic. It’s just fucking insane that he thinks he can just get away with saying that. But he did – they let him say that, and didn’t challenge it at all. He says “less than 15 + 15 = almost 40”, and the interviewer never even batted an eye.

And then, he moved on to something which is a bit more debatable:

> One of the reasons why we have a lower tax rate on capital gains is because capital gains are also being taxed at the corporate level. So as businesses earn profits, that’s taxed at 35 percent, then as they distribute those profits as dividends, that’s taxed at 15 percent more. So, all total, the tax rate is really closer to 45 or 50 percent.

Now, like I said, you can argue about that. Personally, I don’t think it’s a particularly good argument. The way that I see it, corporations are a tradeoff. A business doesn’t need to be a corporation. You become a corporation because transforming the business into a quasi-independent legal entity gives you some big advantages. A corporation owns its own assets. You, as an individual who owns part of a corporation, aren’t responsible for the debts of the corporation. You, as an individual who owns part of a corporation, aren’t legally liable for the actions (such as libel) of the corporation. The corporation is an independent entity, which owns its own assets, which is responsible for its debts and actions.

In exchange for taking on the legal status of an independent entity, that legal entity becomes responsible for paying taxes on its income. You give it that independent legal status in order to protect yourself; and in exchange, that independent legal status entails an obligation for that independent entity to pay its own taxes.

But hey, let’s leave that argument aside for the moment. Who pays the cost of the corporate taxes? Is it the owners of the business? Is it the people who work for the business? Is it someone else?

When they talk about their own ridiculously low tax rates, people like Mitt argue that they’re paying those taxes, and they want to add those taxes to the total effective tax that they pay. But when they want to argue about why we should lower corporate tax rates, they pull out a totally different argument, which they call the “flypaper theory”.

The flypaper theory argues that the burden of corporate taxes falls on the employees of the company – because if the company didn’t have to pay those taxes, that money would be going to the employees as salary – that is, the taxes are part of the overall expenses paid by the company. A company’s effective profits are (revenue – expenses). Expenses, in turn, are taxes + labor + materials + …. The company makes a profit of $P to satisfy its shareholders. So if you took away corporate taxes, the company could continue to make $P while paying its employees more.
Therefore, the cost of the corporate taxes comes out of the salaries of the corporation’s employees.

You can make several different arguments – that the full burden of taxes falls onto the owners, or that the full burden of taxes falls on the employees, or that the full burden of taxes falls on the customers (because prices are raised to cover them). Each of those is something that you could reasonably argue. But what the conservative movement in America likes to do is to claim all of those: that the full burden of corporate taxes falls on the employees, and the full burden of corporate taxes falls on the customers, and the full burden of corporate taxes falls on the shareholders. That’s just dishonest. If the full burden falls on one, then none of the burden falls on anyone else.

The reality is, the burden of taxes is shared between all three. If there were no corporate taxes, companies probably would be able to pay their employees more – but there’s really no way that they’d take all of the money they pay in taxes, and push that into salary. And they’d probably be able to lower prices – but they probably wouldn’t lower prices enough to make up the entire difference. And they’d probably pay more in dividends/stock buybacks to pay the shareholders. But you don’t get to count the same tax money three times.

# Hydrinos: Impressive Free Energy Crackpottery

Back when I wrote about the whole negative energy rubbish, a reader wrote to me, and asked me to write something about hydrinos.

For those who are lucky enough not to know about them, hydrinos are part of another free energy scam. In this case, a medical doctor named Randell Mills claims to have discovered that hydrogen atoms can have multiple states beyond the typical, familiar ground state of hydrogen. Under the right conditions, so claims Dr. Mills, the electron shell around a hydrogen atom will compact into a tighter orbit, releasing a burst of energy in the process. And, in fact, it’s (supposedly) really, really easy to make hydrogen turn into hydrinos – if you let a bunch of hydrogen atoms bump into a bunch of Argon atoms, then presto! some of the hydrogen will shrink into hydrino form, and give you a bunch of energy.

Wonderful, right? Just let a bunch of gas bounce around in a balloon, and out comes energy!

Oh, but it’s better than that. There are multiple hydrino forms: you can just keep compressing and compressing the hydrogen atom, pushing out more and more energy each time. The more you compress it, the more energy you get – and you don’t really need to compress it. You just bump it up against another atom, and poof! energy.

To explain all of this, Dr. Mills further claims to have invented a new form of quantum mechanics, called the “grand unified theory of classical quantum mechanics” (CQM for short), which provides the unification between relativity and quantum mechanics that people have been looking for. And, even better, CQM is fully deterministic – all of that ugly probabilistic stuff from quantum mechanics goes away!

The problem is, it doesn’t work. None of it.

What makes hydrinos interesting as a piece of crankery is that there’s a lot more depth to it than to most crap. Dr. Mills hasn’t just handwaved that these hydrino things exist – he’s got a very elaborate, detailed theory – with a lot of non-trivial math – to back it up. Alas, the math is garbage, but its garbage-ness isn’t obvious. To see the problems, we’ll need to get deeper into math than we usually do.
Here is an example of how hydrino supporters explain them:

In 1986 Randell Mills MD developed a theory that hydrogen atoms could shrink, and release lots of energy in the process. He called the resultant entity a “Hydrino” (little Hydrogen), and started a company called Blacklight Power, Inc. to commercialize his process. He published his theory in a book he wrote, which is available in PDF format on his website. Unfortunately, the book contains so much mathematics that many people won’t bother with it. On this page I will try to present the energy related aspect of his theory in language that I hope will be accessible to many.

According to Dr. Mills, when a hydrogen atom collides with certain other atoms or ions, it can sometimes transfer a quantity of energy to the other atom, and shrink at the same time, becoming a Hydrino in the process. The atom that it collided with is called the “catalyst”, because it helps the Hydrino shrink. Once a Hydrino has formed, it can shrink even further through collisions with other catalyst atoms. Each collision potentially resulting in another shrinkage. Each successive level of shrinkage releases even more energy than the previous level. In other words, the smaller the Hydrino gets, the more energy it releases each time it shrinks another level.

To get an idea of the amounts of energy involved, I now need to introduce the concept of the “electron volt” (eV). An eV is the amount of energy that a single electron gains when it passes through a voltage drop of one volt. Since a volt isn’t much (a “dry cell” is about 1.5 volts), and the electric charge on an electron is utterly minuscule, an eV is a very tiny amount of energy. Nevertheless, it is a very representative measure of the energy involved in chemical reactions. e.g. when Hydrogen and Oxygen combine to form a water molecule, about 2.5 eV of energy is released per water molecule formed.

When Hydrogen shrinks to form a second level Hydrino (Hydrogen itself is considered to be the first level Hydrino), about 41 eV of energy is released. This is already about 16 times more than when Hydrogen and Oxygen combine to form water. And it gets better from there. If that newly formed Hydrino collides with another catalyst atom, and shrinks again, to the third level, then an additional 68 eV is released. This can go on for quite a way, and the amount gets bigger each time. Here is a table of some level numbers, and the energy released in dropping to that level from the previous level, IOW when you go from e.g. level 4 to level 5, 122 eV is released. (BTW larger level numbers represent smaller Hydrinos).

[The table itself, and the collection of credulous press clippings that followed it in the original post, aren’t reproduced here.]

Notice a pattern?

The short version of the problem with hydrinos is really, really simple. The most fundamental fact of nature that we’ve observed is that everything tends to move towards its lowest energy state. The whole theory of hydrinos basically says that that’s not true: everything except hydrogen tends to move towards its lowest energy state, but hydrogen doesn’t. It’s got a dozen or so lower energy states, but none of the abundant quantities of hydrogen on earth are ever observed in any of those states unless they’re manipulated by Mills’ magical machine.

The whole basis of hydrino theory is Mills’ CQM. CQM is rubbish – but it’s impressive-looking rubbish. I’m not going to go deep into detail; you can see a detailed explanation of the problems here; I’ll run through a short version. To start, how is Mills claiming that hydrinos work?
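Before answering that, one quick aside about those quoted energy figures. They aren’t random: they follow the pattern you’d get if the claimed binding energy of the “level p” hydrino were p² times ordinary hydrogen’s 13.6 eV ground-state binding energy. That scaling isn’t stated in the quote – it’s my reading of the numbers – but a few lines of Python reproduce them:

```python
# A sketch, assuming (as the quoted numbers suggest) that the "level p"
# hydrino is bound by p^2 times hydrogen's 13.6 eV ground-state binding.
RYDBERG_EV = 13.6

def shrink_energy(p):
    """Energy (eV) supposedly released dropping from level p to level p+1."""
    return RYDBERG_EV * ((p + 1) ** 2 - p ** 2)  # = 13.6 * (2p + 1)

for p in range(1, 5):
    print(f"level {p} -> {p+1}: {shrink_energy(p):.1f} eV")
# level 1 -> 2: 40.8 eV   (the "about 41 eV" above)
# level 2 -> 3: 68.0 eV   (the "additional 68 eV")
# level 4 -> 5: 122.4 eV  (the "122 eV" for level 4 to level 5)
```

The tidiness of that pattern is part of what makes the theory look impressive; the problem is the physics underneath it. Anyway – back to how the model is supposed to work.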
In CQM, he posits the existence of electron shell levels closer to the nucleus than the ground state of hydrogen. Based on his calculations, he comes up with an energy figure for the difference between the ground state and the hydrino state. Then he finds other substances that have the property that boosting one electron into a higher energy state would cost the same amount of energy. When a hydrogen atom collides with an atom that has a matching electron transition, the hydrogen can get bumped into the hydrino state, while kicking an electron into a higher orbital. That electron will supposedly, in due time, fall back to its original level, releasing the energy differential as a photon.

On this level, it sort-of looks correct. It doesn’t violate conservation of energy: the collision between the two atoms doesn’t produce anything magical. It’s just a simple transfer of energy. That much is fine. It’s when you get into the details that it gets seriously fudgy.

Right from the start, if you know what you’re doing, CQM goes off the rails. For example, CQM claims that you can describe the dynamics of an electron in terms of a classical wave charge-density function equation. Mills actually gives that function, and asserts that it respects Lorentz invariance. That’s crucial – Lorentz invariance is critical for relativity: it’s the fundamental mathematical symmetry that’s the basis of relativity. But his equation doesn’t actually respect Lorentz invariance. Or, rather, it does – but only if the electron is moving at the speed of light. Which it can’t do.

Mills goes on to describe the supposed physics of hydrinos. If you work through his model, looking for states that are consistent with both his equations and his claim that the electrons orbit in a spherical shell above the atom, you’ll find that according to his own equations, there is only one possible state for a hydrogen atom – the conventional ground state.

It goes on in that vein for quite a while. He’s got an elaborate system, with an elaborate mathematical framework… but none of the math actually says what he says it says. The Lorentz invariance example that I cited above – that’s typical. Print an equation, say that it says X, even though the equation doesn’t say anything like X.

But we can go a bit further. The fundamental state of atoms is something that we understand pretty well, because we’ve got so many observations, and so much math describing it. And the thing is, that math is pretty damned convincing. That doesn’t mean that it’s correct, but it does mean that any theory that wants to replace it must be able to describe everything that we’ve observed at least as well as the current theory.

Why do atoms have the shape that they do? Why are they the size that they are? It’s not a super easy thing to understand, because electrons aren’t really particles. They’re something strange – deeply bizarre things. We don’t often think about that, but it’s true. Under many conditions, they behave more like waves than like particles. And that’s true of the atom. The reason that atoms are the size that they are is because the electron “orbitals” have sizes and shapes that are determined by resonant frequencies of the wave-like aspects of electrons. What Mills is suggesting is that there is a range of never-before-observed resonant frequencies of electrons. But the math that he uses to support that claim just doesn’t work.

Now, I’ll be honest here.
I’m not nearly enough of a physics whiz to be competent to judge the accuracy of his purported quantum mechanical system. But I’m still pretty darn confident that he’s full of crap. Why?

I’m from New Jersey – pretty much right up the road from where his lab is. Going to college right up the road from him, I’ve been hearing about him for a long time. He’s been running this company for quite a while – going on two decades. And all that time, the company has been constantly issuing press releases promising that it’s just a year away from being commercialized! It’s always one step away. But never, never, has he released enough information to let someone truly independent verify or reproduce his results.

And he’s been very deceptive about that: he’s made various claims about independent verification on several occasions. For example, he once cited that his work had been verified by a researcher at Harvard. In fact, he’d had one of his associates rent a piece of equipment at Harvard, and use it for a test. So yes, it was tested by a researcher – if you count his associate as a legitimate researcher. And it was tested at Harvard. But the claim that it was tested by a researcher at Harvard is clearly meant to imply that it was tested by a Harvard professor, when it wasn’t.

For around 20 years, he’s been making promises, giving very tightly controlled demos, refusing to give any real details, refusing to actually explain how to reproduce his “results”, and promising that it’s just one year away from being commercialized!

And yet… hydrogen is the most common substance in the universe. If it really had a lower energy state than what we call its ground state, and that lower energy state was really as miraculous as he claims – why wouldn’t we see it? Why hasn’t it ever been observed? Substances like Argon are rare – but they’re not that rare. Argon has been exposed to hydrogen under laboratory conditions plenty of times – and yet, nothing anomalous has ever been observed. All of the supposed hydrino catalysts have been observed so often under so many conditions – and yet, no anomalous energy has ever been noticed before. But according to Mills, we should be seeing tons of it.

And that’s not all. Mills also claims that you can create all sorts of compounds with hydrinos – and naturally, every single one of those compounds is positively miraculous! Bonded with silicon, you get better semiconductors! Substitute hydrinos for regular hydrogen in a battery electrolyte, and you get a miracle battery! Use it in rocket fuel instead of common hydrogen, and you get a ten-fold improvement in the performance of a rocket! Make a laser from it, and you can create higher-density data storage and communication systems. Everything that hydrinos touch is amazing.

But… not one of these miraculous substances has ever been observed before. We work with silicon all the time – but we’ve never seen the magic silicon hydrino compound. And he’s never been willing to actually show anyone any of these miracle substances. He claims that he doesn’t show it because he’s protecting his intellectual property. But that’s silly. If hydrinos existed, then just telling us that these compounds exist and have interesting properties should be enough for other labs to go ahead and experiment with producing them. But no one has. Whether he shows the supposed miracle compounds or not doesn’t change anyone else’s ability to produce those.
Even if he’s keeping his magic hydrino factory secret, so that no one else has access to hydrinos, by telling us that these compounds exist, he’s given away the secret. He’s not protecting anything anymore: by publicly talking about these things, he’s given up his right to patent the substances. It’s true that he still hasn’t given up the rights to the process of producing them – but publicly demonstrating these alleged miracle substances wouldn’t take away any legal rights that he hasn’t already given up. So, why doesn’t he show them to you?

Because they don’t exist.

# Second Law Silliness from Sewell

So, via Panda’s Thumb, I hear that Granville Sewell is up to his old hijinks. Sewell is a classic creationist crackpot, who’s known for two things. First, he’s known for chronically recycling the old “second law of thermodynamics” garbage. And second, he’s known for building arguments based on “thought experiments” – where instead of doing experiments, he just makes up the experiments and the results.

The second-law crankery is really annoying. It’s one of the oldest creationist pseudo-scientific schticks around, and it’s such a terrible argument. It’s also a sort-of pet peeve of mine, because I hate the way that people generally respond to it. It’s not that the common response is wrong – but rather that the common responses focus on one error, while neglecting to point out that there are many deeper issues with it.

In case you’ve been hiding under a rock, the creationist argument is basically:

1. The second law of thermodynamics says that disorder always increases.
2. Evolution produces highly-ordered complexity via a natural process.
3. Therefore, evolution must be impossible, because you can’t create order.

The first problem with this argument is very simple. The second law of thermodynamics does not say that disorder always increases. It’s a classic example of my old maxim: the worst math is no math. The second law of thermodynamics doesn’t say anything as fuzzy as “you can’t create order”. It’s a precise, mathematical statement. The second law of thermodynamics says that in a closed system:

$\Delta S \geq \int \frac{\delta Q}{T}$

where:

1. $S$ is the entropy in a system,
2. $Q$ is the amount of heat transferred in an interaction, and
3. $T$ is the temperature of the system.

Translated into English, that basically says that in any interaction that involves the transfer of heat, the entropy of the system cannot possibly be reduced. Other ways of saying it include “There is no possible process whose sole result is the transfer of heat from a cooler body to a warmer one”; or “No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work.”

Note well – there is no mention of “chaos” or “disorder” in these statements: the second law is a statement about the way that energy can be used. It basically says that when you try to use energy, some of that energy is inevitably lost in the process of using it. Talking about “chaos”, “order”, “disorder” – those are all metaphors. Entropy is a difficult concept. It doesn’t really have a particularly good intuitive meaning. It means something like “energy lost into forms that can’t be used to do work” – but that’s still a poor attempt to capture it in metaphor.
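To make the statement concrete, here’s the classic textbook sanity check (nothing original here – it’s the standard worked example). Move a small quantity of heat $Q$ from a hot body at temperature $T_h$ to a cold one at $T_c$. The total entropy change is:

$\Delta S = \frac{Q}{T_c} - \frac{Q}{T_h} = Q \cdot \frac{T_h - T_c}{T_h T_c} > 0$ whenever $T_h > T_c$.

Run the same transfer the other way – from the cold body to the hot one – and $\Delta S$ comes out negative, which is exactly what the second law forbids. Notice that the whole check is bookkeeping about heat and temperature; “order” never enters into it.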
The reason that people use order and disorder comes from a way of thinking about energy: if I can extract energy from burning gasoline to spin the wheels of my car, the process of spinning my wheels is very organized – it’s something that I can see as a structured application of energy – or, stretching the metaphor a bit, the energy that spins the wheels is structured. On the other hand, the “waste” from burning the gas – the heating of the engine parts, the energy caught in the warmth of the exhaust – that’s just random and useless. It’s “chaotic”.

So when a creationist says that the second law of thermodynamics says you can’t create order, they’re full of shit. The second law doesn’t say that – not in any shape or form. You don’t need to get into the whole “open system/closed system” stuff to dispute it; it simply doesn’t say what they claim it says.

But let’s not stop there. Even if you accept that the mathematical statement of the second law really did say that chaos always increases, that still has nothing to do with evolution. Look back at the equation. What it says is that in a closed system, in any interaction, the total entropy must increase. Even if you accept that entropy means chaos, all that it says is that in any interaction, the total entropy must increase. It doesn’t say that you can’t create order. It says that the cumulative end result of any interaction must increase entropy.

Want to build a house? Of course you can do it without violating the second law. But to build that house, you need to cut down trees, dig holes, lay foundations, cut wood, pour concrete, put things together. All of those things use a lot of energy. And in each minute interaction, you’re expending energy in ways that increase entropy. If the creationist interpretation of the second law were true, you couldn’t build a house, because building a house involves creating something structured – creating order.

Similarly, if you look at a living cell, it does a whole lot of highly ordered, highly structured things. In order to do those things, it uses energy. And in the process of using that energy, it creates entropy. In terms of order and chaos, the cell uses energy to create order, but in the process of doing so it creates wastes – waste heat, and waste chemicals. It converts high-energy structured molecules into lower-energy molecules, converting things with energetic structure to things without. Look at all of the waste that’s produced by a living cell, and you’ll find that it does produce a net increase in entropy. Once again, if the creationists were right, then you wouldn’t need to worry about whether evolution was possible under thermodynamics – because life wouldn’t be possible. In fact, if the creationists were right, the existence of planets, stars, and galaxies wouldn’t be possible – because a galaxy full of stars with planets is far less chaotic than a loose cloud of hydrogen. Once again, we don’t even need to consider the whole closed system/open system distinction, because even if we treat earth as a closed system, their arguments are wrong. Life doesn’t really defy the laws of thermodynamics – it produces entropy exactly as it should.

But the creationist second-law argument is even worse than that. The second-law argument claims that because DNA “encodes information”, and because the amount of information “encoded” in DNA increases as a result of the evolutionary process, evolution violates the second law.
This absolutely doesn’t require bringing in any open/closed system discussions. Doing that is just a distraction which allows the creationist to sneak their real argument underneath. The real point is: DNA is a highly structured molecule. No disagreement there. But so what? In the life of an organism, there are virtually un-countable numbers of energetic interactions, all of which result in a net increase in the amount of entropy. Why on earth would adding a bunch of links to a DNA chain completely outweigh those? In fact, changing the DNA of an organism is just another entropy-increasing event. The chemical processes in the cell that create DNA strands consume energy, and use that energy to produce molecules like DNA, producing entropy along the way, just like pretty much every other chemical process in the universe.

The creationist argument relies on a bunch of sloppy handwaves: “entropy” is disorder; “you can’t create order”; “DNA is ordered”. In fact, evolution has no problem with respect to entropy: one way of viewing evolution is that it’s a process of creating ever more effective entropy-generators.

Now we can get to Sewell and his arguments, and you can see how perfectly they match what I’ve been talking about.

Imagine a high school science teacher renting a video showing a tornado sweeping through a town, turning houses and cars into rubble. When she attempts to show it to her students, she accidentally runs the video backward. As Ford predicts, the students laugh and say, the video is going backwards! The teacher doesn’t want to admit her mistake, so she says: “No, the video is not really going backward. It only looks like it is because it appears that the second law is being violated. And of course entropy is decreasing in this video, but tornados derive their power from the sun, and the increase in entropy on the sun is far greater than the decrease seen on this video, so there is no conflict with the second law.”

“In fact,” the teacher continues, “meteorologists can explain everything that is happening in this video,” and she proceeds to give some long, detailed, hastily improvised scientific theories on how tornados, under the right conditions, really can construct houses and cars. At the end of the explanation, one student says, “I don’t want to argue with scientists, but wouldn’t it be a lot easier to explain if you ran the video the other way?”

Now imagine a professor describing the final project for students in his evolutionary biology class. “Here are two pictures,” he says. “One is a drawing of what the Earth must have looked like soon after it formed. The other is a picture of New York City today, with tall buildings full of intelligent humans, computers, TV sets and telephones, with libraries full of science texts and novels, and jet airplanes flying overhead. Your assignment is to explain how we got from picture one to picture two, and why this did not violate the second law of thermodynamics. You should explain that 3 or 4 billion years ago a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendants generation after generation, even correcting errors.
Explain how, over a very long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complicated collections of atoms, and how eventually something called “intelligence” allowed some of these collections of atoms to design buildings and computers and TV sets, and write encyclopedias and science texts. But be sure to point out that while none of this would have been possible in an isolated system, the Earth is an open system, and entropy can decrease in an open system as long as the decreases are compensated by increases outside the system. Energy from the sun is what made all of this possible, and while the origin and evolution of life may have resulted in some small decrease in entropy here, the increase in entropy on the sun easily compensates this tiny decrease. The sun should play a central role in your essay.”

When one student turns in his essay some days later, he has written, “A few years after picture one was taken, the sun exploded into a supernova, all humans and other animals died, their bodies decayed, and their cells decomposed into simple organic and inorganic compounds. Most of the buildings collapsed immediately into rubble, those that didn’t, crumbled eventually. Most of the computers and TV sets inside were smashed into scrap metal, even those that weren’t, gradually turned into piles of rust, most of the books in the libraries burned up, the rest rotted over time, and you can see the result in picture two.”

The professor says, “You have switched the pictures!” “I know,” says the student. “But it was so much easier to explain that way.” Evolution is a movie running backward, that is what makes it so different from other phenomena in our universe, and why it demands a very different sort of explanation.

This is a perfect example of both of Sewell’s usual techniques. First, the essential argument here is rubbish. It’s the usual “second law means that you can’t create order”, even though that’s not what it says, followed by a rather shallow and pointless response to the open/closed system stuff.

And the second part is what makes Sewell Sewell. He can’t actually make his own arguments. No, that’s much too hard. So he creates fake people, and plays out a story using his fake people and having them make fake arguments, and then uses the people in his story to illustrate his argument. It’s a technique that I haven’t seen used so consistently since I read Ayn Rand in high school.

# What happens if you don't understand math? Just replace it with solipsism, and you can get published!

About four years ago, I wrote a post about a crackpot theory by a biologist named Robert Lanza. Lanza is a biologist – a genuine, serious scientist. And his theory got published in a major journal, “The American Scholar”. Nevertheless, it’s total rubbish. Anyway, the folks over at the Encyclopedia of American Loons just posted an entry about him, so I thought it was worth bringing back this oldie-but-goodie.

The original post was inspired by a comment from one of my most astute commenters, Mr. Blake Stacey, where he gave me a link to Lanza’s article. The article is called “A New Theory of the Universe”, by Robert Lanza, and as I said, it was published in the American Scholar. Lanza’s article is a rotten piece of new-age gibberish, with all of the usual hallmarks: lots of woo, all sorts of babble about how important consciousness is, random nonsensical babblings about quantum physics, and of course, bad math.
# Hold on tight: the world ends next saturday!

(For some idiot reason, I was absolutely certain that today was the 12th. It’s not. It’s the tenth. D’oh. There’s a freakin’ time&date widget on my screen! Thanks to the commenter who pointed this out.)

A bit over a year ago, before the big move to Scientopia, I wrote about a loonie named Harold Camping. Camping is the guy behind the uber-christian “Family Radio”. He predicted that the world is going to end on May 21st, 2011. I first heard about this when it got written up in January of 2010 in the San Francisco Chronicle. And now, we’re less than two weeks away from the end of the world according to Mr. Camping! So I thought hey, it’s my last chance to make sure that I’m one of the damned!

# Another Crank comes to visit: The Cognitive Theoretic Model of the Universe

When an author of one of the pieces that I mock shows up, I try to bump them up to the top of the queue. No matter how crackpotty they are, I think that if they’ve gone to the trouble to come and defend their theories, they deserve a modicum of respect, and giving them a fair chance to get people to see their defense is the least I can do. A couple of years ago, I wrote about the Cognitive Theoretic Model of the Universe. Yesterday, the author of that piece showed up in the comments. It’s a two-year-old post, which was originally written back at ScienceBlogs – so a discussion in the comments there isn’t going to get noticed by anyone. So I’m reposting it here, with some revisions.

Stripped down to its basics, the CTMU is just yet another postmodern “perception defines the universe” idea. Nothing unusual about it on that level. What makes it interesting is that it tries to take a set-theoretic approach to doing it. (Although, to be a tiny bit fair, he claims that he’s not taking a set theoretic approach, but rather demonstrating why a set theoretic approach won’t work. Either way, I’d argue that it’s more of a word-game than a real theory, but whatever…)

The real universe has always been theoretically treated as an object, and specifically as the composite type of object known as a set. But an object or set exists in space and time, and reality does not. Because the real universe by definition contains all that is real, there is no “external reality” (or space, or time) in which it can exist or have been “created”. We can talk about lesser regions of the real universe in such a light, but not about the real universe as a whole. Nor, for identical reasons, can we think of the universe as the sum of its parts, for these parts exist solely within a spacetime manifold identified with the whole and cannot explain the manifold itself. This rules out pluralistic explanations of reality, forcing us to seek an explanation at once monic (because nonpluralistic) and holistic (because the basic conditions for existence are embodied in the manifold, which equals the whole). Obviously, the first step towards such an explanation is to bring monism and holism into coincidence.

# E. E. Escultura and the Field Axioms

As you may have noticed, E. E. Escultura has shown up in the comments to this blog. In one comment, he made an interesting (but unsupported) claim, and I thought it was worth promoting up to a proper discussion of its own, rather than letting it rage in the comments of an unrelated post. What he said was:

You really have no choice friends. The real number system is ill-defined, does not exist, because its field axioms are inconsistent!!!

This is a really bizarre claim.
The field axioms are inconsistent?

I’ll run through a quick review, because I know that many/most people don’t have the field axioms memorized. But the field axioms are, basically, an extremely simple set of rules describing the behavior of an algebraic structure. The real numbers are the canonical example of a field, but you can define other fields; for example, the rational numbers form a field; if you allow the values to be a class rather than a set, the surreal numbers form a field.

So: a field is a collection of values F with two operations, “+” and “*”, such that:

1. Closure: ∀ a, b ∈ F: a + b ∈ F ∧ a * b ∈ F
2. Associativity: ∀ a, b, c ∈ F: a + (b + c) = (a + b) + c ∧ a * (b * c) = (a * b) * c
3. Commutativity: ∀ a, b ∈ F: a + b = b + a ∧ a * b = b * a
4. Identity: there exist distinct elements 0 and 1 in F such that ∀ a ∈ F: a + 0 = a, and ∀ b ∈ F: b * 1 = b
5. Additive inverses: ∀ a ∈ F, there exists an additive inverse -a ∈ F such that a + -a = 0.
6. Multiplicative inverses: ∀ a ∈ F where a ≠ 0, there exists a multiplicative inverse a^{-1} such that a * a^{-1} = 1.
7. Distributivity: ∀ a, b, c ∈ F: a * (b + c) = (a * b) + (a * c)

So, our friend Professor Escultura claims that this set of axioms is inconsistent, and that therefore the real numbers are ill-defined. One of the things that makes the field axioms so beautiful is how simple they are. They’re a nice, minimal illustration of how we expect numbers to behave.

So, Professor Escultura: to claim that the field axioms are inconsistent, what you’re saying is that this set of axioms leads to an inevitable contradiction. So, what exactly about the field axioms is inconsistent? Where’s the contradiction?

# Representational Crankery: the New Reals and the Dark Number

There’s one kind of crank that I haven’t really paid much attention to on this blog, and that’s the real number cranks. I’ve touched on real number crankery in my little encounter with John Gabriel, and back in the old 0.999…=1 post, but I’ve never really given them the attention that they deserve.

There are a huge number of people who hate the logical implications of our definition of the real numbers, and who insist that those unpleasant complications mean that our concept of real numbers is based on a faulty definition, or even that the whole concept of real numbers is ill-defined. This is an underlying theme of a lot of Cantor crankery, but it goes well beyond that. And the basic problem underlies a lot of bad mathematical arguments. The root of this particular problem comes from a confusion between the representation of a number, and that number itself. “$\frac{1}{2}$” isn’t a number: it’s a notation that we understand refers to the number that you get by dividing one by two.

There’s a similar form of looniness that you get from people who dislike the set-theoretic construction of numbers. In classic set theory, you can construct the set of integers by starting with the empty set, which is used as the representation of 0. Then the set containing the empty set is the value 1 – so 1 is represented as { 0 }. Then 2 is represented as { 1, 0 }; 3 as { 2, 1, 0 }; and so on. (There are several variations of this, but this is the basic idea.)
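Here’s a minimal sketch of that construction, using Python frozensets as stand-ins for pure sets (the choice of Python is mine, not part of the construction itself):

```python
def von_neumann(n):
    """Build the set representing n: 0 = {}, and n+1 = n ∪ {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}  # each step adds the previous numeral as an element
    return s

zero, one, two = von_neumann(0), von_neumann(1), von_neumann(2)
print(two == frozenset({zero, one}))                       # True: 2 = { 1, 0 }
print(von_neumann(8) & von_neumann(3) == von_neumann(3))   # True: 8 ∩ 3 = 3
```

That last line is worth noticing: set operations do apply to these representations, and they even do something sensible – intersecting two numerals gives you the smaller one. Keep it in mind for the objection that comes next.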
You’ll see arguments from people who dislike this saying things like “This isn’t a construction of the natural numbers, because you can take the intersection of 8 and 3, and set intersection is meaningless on numbers.” The problem with that is the same as the problem with the notational crankery: the set theoretic construction doesn’t say “the empty set is the value 0”, it says “in a set theoretic construction, the empty set can be used as a representation of the number 0”.

The particular version of this crankery that I’m going to focus on today is somewhat related to the inverse-19 loonies. If you recall their monument, the plaque talks about how their work was praised by a math professor by the name of Edgar Escultura. Well, it turns out that Escultura himself is a bit of a crank. The specific manifestation of his crankery is this representational issue. But the root of it is really related to the discomfort that many people feel at some of the conclusions of modern math.

A lot of what we learned about math has turned out to be non-intuitive. There’s Cantor, and Gödel, of course: there are lots of different sizes of infinities; and there are mathematical statements that can be neither proved nor disproved. And there are all sorts of related things – for example, the whole idea of undescribable numbers.

Undescribable numbers drive people nuts. An undescribable number is a number which has the property that there’s absolutely no way that you can write it down, ever. Not that you can’t write it in, say, base-10 decimals, but that you can’t ever write down anything, in any form, that uniquely describes it. And, it turns out, the vast majority of numbers are undescribable.

This leads to the representational issue. Many people insist that if you can’t represent a number, that number doesn’t really exist. It’s nothing but an artifact of a flawed definition. Therefore, by this argument, those numbers don’t exist; the only reason that we think that they do is because the real numbers are ill-defined.

This kind of crackpottery isn’t limited to stupid people. Professor Escultura isn’t a moron – but he is a crackpot. What he’s done is take the representational argument, and run with it. According to him, the only real numbers are numbers that are representable. What he proposes is very nearly a theory of computable numbers – but he tangles it up in the representational issue. And in a fascinatingly ironic turn-around, he takes the artifacts of representational limitations, and insists that they represent real mathematical phenomena – resulting in an ill-defined number theory as a way of correcting what he alleges is an ill-defined number theory.

His system is called the New Real Numbers. In the New Real Numbers, which he notates as $R^*$, the decimal notation is fundamental. The set of new real numbers consists exactly of the set of numbers with finite representations in decimal form. This leads to some astonishingly bizarre things. From his paper:

3) Then the inverse operation to multiplication called division; the result of dividing a decimal by another if it exists is called quotient provided the divisor is not zero. Only when the integral part of the devisor is not prime other than 2 or 5 is the quotient well defined. For example, 2/7 is ill defined because the quotient is not a terminating decimal (we interpret a fraction as division).

So 2/7ths is not a new real number: it’s ill-defined. 1/3 isn’t a real number: it’s ill-defined.
4) Since a decimal is determined or well-defined by its digits, nonterminating decimals are ambiguous or ill-defined. Consequently, the notion irrational is ill-defined since we cannot cheeckd all its digits and verify if the digits of a nonterminaing decimal are periodic or nonperiodic.

After that last one, this isn’t too surprising. But it’s still absolutely amazing. The square root of two? Ill-defined: it doesn’t really exist. e? Ill-defined, it doesn’t exist. $\pi$? Ill-defined, it doesn’t really exist. All of those triangles, circles, everything that depends on e? They’re all bullshit according to Escultura. Because if he can’t write them down on a piece of paper in decimal notation in a finite amount of time, they don’t exist.

Of course, this is entirely too ridiculous, so he backtracks a bit, and defines a non-terminating decimal number. His definition is quite peculiar. I can’t say that I really follow it. I think this may be a language issue – Escultura isn’t a native English speaker. I’m not sure which parts of this are crackpottery, which are linguistic struggles, and which are notational difficulties in reading math rendered as plain text.

5) Consider the sequence of decimals, (d)^na_1a_2…a_k, n = 1, 2, …, (1) where d is any of the decimals, 0.1, 0.2, 0.3, …, 0.9, a_1, …, a_k, basic integers (not all 0 simultaneously). We call the nonstandard sequence (1) d-sequence and its nth term nth d-term. For fixed combination of d and the a_j’s, j = 1, …, k, in (1) the nth term is a terminating decimal and as n increases indefinitely it traces the tail digits of some nonterminating decimal and becomes smaller and smaller until we cannot see it anymore and indistinguishable from the tail digits of the other decimals (note that the nth d-term recedes to the right with increasing n by one decimal digit at a time). The sequence (1) is called nonstandard d-sequence since the nth term is not standard g-term; while it has standard limit (in the standard norm) which is 0 it is not a g-limit since it is not a decimal but it exists because it is well-defined by its nonstandard d-sequence. We call its nonstandard g-limit dark number and denote by d. Then we call its norm d-norm (standard distance from 0) which is d > 0. Moreover, while the nth term becomes smaller and smaller with indefinitely increasing n it is greater than 0 no matter how large n is so that if x is a decimal, 0 < d < x.

I think that what he’s trying to say there is that a non-terminating decimal is a sequence of finite representations that approach a limit. So there’s still no real infinite representations – instead, you’ve got an infinite sequence of finite representations, where each finite representation in the sequence can be generated from the previous one. This bit is why I said that this is nearly a theory of the computable numbers. Obviously, undescribable numbers can’t exist in this theory, because you can’t generate this sequence.

Where this really goes totally off the rails is that throughout this, he’s working on the assumption that there’s a one-to-one relationship between representations and numbers. That’s what that “dark number” stuff is about. You see, in Escultura’s system, 0.999999… is not equal to one. It’s not a representational artifact. In Escultura’s system, there are no representational artifacts: the representations are the numbers. The “dark number”, which he notates as $d^*$, is (1 – 0.99999999…), and $d^*$ is the smallest number greater than 0.
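To see why that’s an artifact rather than a number, it helps to just compute the gaps. Here’s a quick sketch – my own illustration, not Escultura’s notation – using exact rational arithmetic:

```python
from fractions import Fraction

# The gap between 1 and a truncated 0.99...9 with n nines is exactly 1/10^n.
# Every term is positive, but the sequence converges to 0: there is no
# smallest positive "dark number" hiding at the end of it.
for n in (1, 2, 5, 10):
    nines = Fraction(10**n - 1, 10**n)  # 0.9, 0.99, 0.99999, ...
    print(n, 1 - nines)                 # 1/10, 1/100, 1/100000, 1/10000000000
```

Every term of that sequence is an honest positive rational; the limit is 0. Escultura’s $d^*$ amounts to insisting that the sequence itself is a number strictly between 0 and every positive decimal.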
And you can generate a complete ordered enumeration of all of the new real numbers: $\{0, d^*, 2d^*, 3d^*, \ldots, n - 2d^*, n - d^*, n, n + d^*, \ldots\}$.

Reading Escultura, every once in a while, you might think he’s joking. For example, he claims to have disproven Fermat’s last theorem. Fermat’s theorem says that for n > 2, there are no positive integer solutions for the equation $x^n + y^n = z^n$. Escultura says he’s disproven this:

The exact solutions of Fermat’s equation, which are the counterexamples to FLT, are given by the triples (x,y,z) = ((0.99…)10^T,d*,10^T), T = 1, 2, …, that clearly satisfies Fermat’s equation, x^n + y^n = z^n, (4) for n = NT > 2. Moreover, for k = 1, 2, …, the triple (kx,ky,kz) also satisfies Fermat’s equation. They are the countably infinite counterexamples to FLT that prove the conjecture false. One counterexample is, of course, sufficient to disprove a conjecture.

Even if you accept the reality of the notational artifact $d^*$, this makes no sense: the point of Fermat’s last theorem is that there are no integer solutions; $d^*$ is not an integer; $(1-d^*)10$ is not an integer. Surely he’s not that stupid. Surely he can’t possibly believe that he’s disproven Fermat using non-integer solutions? I mean, how is this different from just claiming that you can use $(2, 3, 35^{1/3})$ as a counterexample for n = 3? But… he’s serious. He’s serious enough that he’s actually published a real paper making the claim (albeit in crackpot journals, which are the only places that would accept this rubbish).

Anyway, jumping back for a moment… You can create a theory of numbers around this $d^*$ rubbish. The problem is, it’s not a particularly useful theory. Why? Because it breaks some of the fundamental properties that we expect numbers to have. The real numbers form a structure called a field, and a huge amount of what we really do with numbers is built on the fundamental properties of the field structure. One of the necessary properties of a field is that it has unique identity elements for addition and multiplication. If you don’t have unique identities, then everything collapses.

So… take $\frac{1}{9}$. That’s the multiplicative inverse of 9. So, by definition, $\frac{1}{9} * 9 = 1$ – the multiplicative identity. In Escultura’s theory, $\frac{1}{9}$ is a shorthand for the number that has a representation of 0.1111…. So, $\frac{1}{9} * 9 = 0.1111\ldots * 9 = 0.9999\ldots = (1 - d^*)$. So $(1 - d^*)$ is also a multiplicative identity. By a similar process, you can show that $d^*$ itself must be the additive identity. So either $d^* = 0$, or else you’ve lost the field structure, and with it, pretty much all of real number theory.
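For what it’s worth, the uniqueness that this argument leans on isn’t an extra assumption – it’s a one-line consequence of the axioms. If $e$ and $e'$ are both multiplicative identities, then $e = e \cdot e' = e'$ (the first equality because $e'$ is an identity, the second because $e$ is); the same argument works for additive identities. So if both $1$ and $(1 - d^*)$ act as multiplicative identities, then $1 = 1 - d^*$, which forces $d^* = 0$ – exactly the collapse described above.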