https://cseducators.stackexchange.com/questions/6221/explaining-event-driven-programming-in-the-context-of-the-structured-programming/6255
# Explaining Event-Driven Programming in the context of the Structured Programming Theorem

I'm a retired college teacher. I've been asked to "teach coding" to a dozen very bright seventh-graders using the Micro:Bit kits the school has on hand. I have a total of seven contact hours available. In three hours, we've learned about algorithms, variables, and operators. On Tuesday we'll write a program for "red light / green light" and learn the difference between sequence and selection. The following week we'll do something with a loop and I'll add "iteration" to the vocabulary. While I probably won't say "Structured Programming Theorem," I was going to tell the kids that sequence, selection, and iteration are "all you need."

The trouble is, the MakeCode block language has "on-blocks" which fire when events occur, e.g. "on Button A pressed." You can't avoid these. So, how do I explain events, or "on-blocks," in the context of sequence, selection, and iteration? (The best I've been able to come up with so far is that there's a hidden loop of if statements checking for events, which is probably how the on-blocks are actually implemented, but unlikely to be satisfying to twelve-year-olds.)

• Note that a Turing Machine has no notion of asynchronous action. A TM is deterministic. Event-driven programming isn't. I'll try to come up with a satisfying answer, but you are, at best, simulating events using a polling loop. – Buffy Mar 1 '20 at 16:42

• Event-driven programming is non-deterministic in the sense that a specification for an event-driven language generally doesn't specify the order that event handlers will be executed in. That doesn't mean that an implementation of an event-driven language must be non-deterministic in the sense of not always producing the same output via the same sequence of internal states when executing the same program. JavaScript, for instance, is event-driven but single-threaded, and most implementations are deterministic in the latter sense because to not be, they would have to intentionally not be. – kaya3 Mar 1 '20 at 17:56

• See the question on lying to students v. being pedantic: cseducators.stackexchange.com/questions/4717/…. My answer suggests "lying honestly" or something. – Buffy Mar 2 '20 at 0:15

• Sequence, if & while are "all you need" for sequential programming, giving possible sequences of states or events per the assignments & calls that they surround. But you have more to what you are doing than this--you have a sequential process & an event-generating process--so non-determinacy & concurrency. You need to give--just as when you only have a single sequential process--a "model of computation"--a system state structure & how it starts & changes per the evolution of some process(es). (Similarly, teaching iteration should include how to reason about a loop constantly advancing to a goal state.) – philipxy Mar 4 '20 at 23:52

• I had considered adding: unfortunately these fundamental semantics are typically not given when presenting programming, & then learners literally do not know what they are doing & the lucky ones sort of learn some stuff--though not to specify or justify programs. Now having read the kit documentation, alas, that is exactly the case with it. We are not told the semantics of "event handlers". E.g. what if one event happens during processing of another? Is the event lost? Is it enqueued? Does a device wait, or detect further events? Etc., etc. Sad. But typical. The blind blinding the blind. – philipxy Mar 5 '20 at 4:01

In the spirit of K.I.S.S., I believe that you can call this "selection" (it is), and no one will bat an eyelash. At this stage, selection within a program won't be fully differentiated from selection by a user, so no one will be expecting anything different in any case.

If you feel the need to explain further, just say that, if you think about it, these really seem like if statements, too. They just don't use the word for it. But if we were to go somewhere into the code that makes the whole program go, we'd discover that our instincts are correct, because there really are if statements that drive this, and they look something like: if button A gives a signal, find the "on Button A pressed" block and run from there.

• This is likely to be true, but one could imagine an event-driven implementation working differently. For example, there could be a data structure mapping event kinds to event-handler lists, and when an event is triggered it immediately looks up the right list and invokes each handler, instead of adding the event to a queue to be dispatched later. That is how triggering events works in e.g. JS/jQuery; so it's possible to do it with neither conditional statements nor unbounded iteration. The missing piece is that "sequence, selection and iteration" doesn't include dynamic dispatch of functions. – kaya3 Mar 2 '20 at 13:16

• @kaya3 Definitely, and that's an interesting point. Though I certainly wouldn't bring that up at this point with these kids. And I suppose if I were being pedantic, I would say that that would ultimately push the if down to another layer of abstraction, so it's still not inaccurate to say that that if is somewhere in the code that makes the whole program go. 😁 – Ben I. Mar 2 '20 at 13:39

• Yes, I wouldn't bring it up in a lesson. I think it's a matter of philosophy, but I argue that it is inaccurate to say the if must be in there somewhere. At the lowest level, the program is doing conditional jumps, sure; but a conditional jump forwards is not always logically an if statement, and a conditional jump backwards is not always logically a loop. The main benefit of "structured programming" is to avoid the direct manipulation of the instruction pointer required in assembly, so I think it would be perverse to count assembly programs as "structured programs" in that sense. – kaya3 Mar 2 '20 at 13:52

• @kaya3 I'm intrigued by your assertion that a conditional jump is not always an if statement, since I think of them as one and the same. I am open to learning something new, however. I agree that a large goal of structured programming is to prevent the human from having to work at a low level, but that doesn't make the low level less valid. It just means that we can step further away from the weeds. – Ben I. Mar 2 '20 at 18:50

• For example, a while loop might be compiled to (start); conditional jump to end; loop body; jump to start; (end). So the conditional jump is forwards but there was no if statement, and the repetition is achieved by an unconditional jump backwards. A conditional jump could also break out from the middle of a loop, which isn't possible with just "sequence, selection and iteration" in the structured program theorem. – kaya3 Mar 2 '20 at 18:59
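For readers who want to show the brighter students what that "hidden loop of if statements" could look like, here is a minimal sketch in TypeScript (the text language under MakeCode's blocks). It is my illustration, not the real MakeCode runtime: `onButtonAPressed`, `runForever`, and `readButtonA` are made-up names.

```typescript
type Handler = () => void;

// Bodies of "on Button A pressed" blocks, registered at program start.
const onButtonAHandlers: Handler[] = [];

function onButtonAPressed(handler: Handler): void {
    onButtonAHandlers.push(handler);
}

// The hypothetical hidden loop: sequence, selection, and iteration are
// enough to simulate events by polling the hardware over and over.
function runForever(readButtonA: () => boolean): void {
    let wasPressed = false;
    while (true) {                      // iteration
        const pressed = readButtonA();  // sequence: read the input
        if (pressed && !wasPressed) {   // selection: a new press arrived?
            for (const h of onButtonAHandlers) {
                h();                    // run each registered on-block
            }
        }
        wasPressed = pressed;
    }
}
```

Calling `onButtonAPressed(() => { /* score a point */ })` and then `runForever(...)` behaves, from the outside, like an event-driven program, which is exactly Buffy's "simulating events using a polling loop" point.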
I don't see that there's anything to explain, because there's no apparent contradiction between a set of language constructs being "all you need" and a language having more constructs than are "needed". Languages are designed to have constructs that are useful for human programmers to easily and concisely express what a program should do, not just a minimal set of constructs for it to be possible to express programs.

It's not even necessary to explain that event-driven languages are implemented using hidden conditional statements and loops; and as Buffy notes in the comments, they may not actually be implemented that way anyway. If you want to tell your students that they could be, then I see no problem with that. But consider this:

• Untyped lambda calculus is Turing complete, so "all you need" is function definitions and function application. Does that imply that conditional statements and loops must be implemented with hidden function definitions and hidden function applications? Well, no, it doesn't imply that.

• The logical NAND operation is functionally complete, so "all you need" is NAND gates (see the sketch after this answer thread). Does that imply that computers are made only of NAND gates? Again, no, it doesn't.

• As shown in this paper and amusingly demonstrated in this compiler, the x86 mov instruction is "all you need", i.e. any C program can be compiled to a sequence of unconditional writes to memory (at addresses read from memory). But the rest of the x86 instruction set is, of course, not implemented that way.

What I will suggest is that if you do want to mention that sequence, conditional statements and loops are "all you need", then you should add that real programming languages have more control-flow constructs than just those, because the additional ones (e.g. subroutines/function calls and return, try/catch, event loops…) are convenient.

Disclaimer: I am aware that this does not directly answer the question, but I think it is still useful to question the question itself. IMHO, it seems you are overthinking this issue. You are introducing a whole new world of programming to 12-year-olds in seven hours. There will be many huge gaps in their understanding and many things will remain partial knowledge. And that's OK. Make them passionate about the subject and they will search for more about the parts they are interested in.

The Micro:Bit allows for lots of interesting experiments using its inputs and outputs. Instead of worrying about "Selection, Iteration and Sequence" you could focus on, you know, building cool stuff. If it were me, I would rather have students finish the course with something of their own making that they can explain to their parents than worry about abstract CS concepts many college students struggle to understand.

• I disagree. The students should certainly have fun, and based on yesterday's game of Rock Paper Scissors Lizard Spock, with score keeping using the Micro:Bit buttons, they are. If I do this right, they'll also go away with at least some understanding of the science behind what they're doing. When they get to college, maybe they won't struggle with what you have called abstract concepts. – Bob Brown Mar 4 '20 at 13:24

• My reasoning was that college is years in the future. They may or may not choose CS as a major. They may or may not take a course in theoretical computing and, even if they do, they'll probably not remember a 16-hour course taken almost 10 years ago. IMHO, you are doing a good job. Students are having fun and this is what would matter if I were in your shoes. – igordsm Mar 4 '20 at 18:10

• The semester has rocked along, moving to a virtual environment, and "my" kids can explain sequence-selection-iteration. At this point, the brightest ones can also say "Turing complete" and equate that to the "effectively computable" requirement for a procedure to be an algorithm. I told you these kids were smart! – Bob Brown Mar 26 '20 at 13:30
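To make the NAND bullet in the answer above concrete, here is a tiny sketch (my own illustration, not from the thread) showing NOT, AND, and OR built from NAND alone:

```typescript
// NAND alone is functionally complete: NOT, AND and OR (and hence any
// Boolean function) can be expressed using nothing else.
const nand = (a: boolean, b: boolean): boolean => !(a && b);

const not = (a: boolean): boolean => nand(a, a);
const and = (a: boolean, b: boolean): boolean => not(nand(a, b));
const or  = (a: boolean, b: boolean): boolean => nand(not(a), not(b));

console.log(not(true));        // false
console.log(and(true, false)); // false
console.log(or(true, false));  // true
```

"All you need" here means expressible in principle, exactly as in the answer; it says nothing about how real hardware is actually built.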
You might first begin by explaining how a program begins running. The program is not always running. It is simply stored somewhere, dormant. Some event external to the program causes it to run. Generally this is a click on an icon. Some running program is notified of this click and does the work necessary to get the selected program running.

When you write an on-block you are defining some code to be run when a specific external event occurs. That is, you don't see the invocation of this code within your own program. But you have told the outside world that if an event occurs, notify me by running this block of code. If you never define such a block, then your program will not be notified that the event has happened. Since the on-block is inside your program, it can alter the state of the program. As Scott Rowe pointed out, your program is "listening".

Ultimately, it all begins with the hardware and operating system. Actions like keyboard presses or mouse movement cause an electrical signal. The interrupt handler looks at the signals and decides what to do with them. Eventually, it passes them on to the program(s) that have expressed interest in the event. But just because a program has expressed interest in a type of event doesn't mean it will receive that event. A drawing program is interested in mouse movement so you can "draw" things by moving the mouse. But even if the program is running, it will only receive the event when the mouse is in the window, and the window has focus. And when your program terminates, it will no longer receive any events until it is run again. If you turn off your computer, no signals are handled. You can type on the keyboard, but there is nothing listening.

Two ideas:

1. The recommendation for teaching is usually to explain things in a way that the students can understand. Usually this means giving an age-appropriate response to questions that young children ask. The key is to present your material, and only go outside it when someone asks a question.

2. It is fine to say that sequence, selection and iteration are all that you need to write a program. But to have a computer you also need interrupts, otherwise the computer would not respond to us at all.

When the subject of interrupts came up, I would say that originally there was an interrupt line that went straight into the CPU and which caused it to set aside what it was doing and invoke a different (small) program to handle the interrupt. Then I would explain that this is how the keyboard is read, the mouse handled, the screen redrawn, and basically almost everything else that makes a computer actually interact with people, rather than just calculating something invisibly.

Interrupts are on a different level than programming. They change a machine into a tool that responds. Otherwise we would still be using punched cards. Programming is necessary to get the computer to 'talk' to us, but not sufficient for it to 'listen'.
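Tying the "tell the outside world to notify me" answer above to kaya3's lookup-table comment, here is a small sketch of the registration-and-dispatch model. It is mine, with invented names (`on`, `dispatch`, `"buttonA.pressed"`), not the actual Micro:Bit runtime:

```typescript
type Handler = () => void;

// Registry: event name -> list of handlers. An on-block's only job at
// program start is to add its body here; nothing runs yet.
const handlers = new Map<string, Handler[]>();

function on(event: string, handler: Handler): void {
    const list = handlers.get(event) ?? [];
    list.push(handler);
    handlers.set(event, list);
}

// What a runtime could do when the hardware reports an event: look up
// the list and invoke each handler -- a lookup and dynamic dispatch,
// not a chain of if statements.
function dispatch(event: string): void {
    for (const handler of handlers.get(event) ?? []) {
        handler();
    }
}

on("buttonA.pressed", () => console.log("scored a point!"));
dispatch("buttonA.pressed"); // simulate the external event arriving
```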
http://texhacks.blogspot.com/2009/09/numbering-every-paragraph.html
## Friday, September 25, 2009

### Numbering every paragraph

One occasionally finds cause to number every graf in a document. Well, I never have, but I've seen it done. There's a neat little trick that is mentioned in passing in an exercise in the TeXbook that easily enables this. This relies on a TeX primitive, \everypar, which is essentially a token register, which is to say that it holds a list of tokens that can be used again and again. What exactly a token is is a topic for another post.

When TeX starts a new paragraph (or more precisely, when it enters horizontal mode), it does two things. First, it inserts an empty box of width \parindent (this is the indentation that occurs at the start of every graf), and then it will process the tokens defined by \everypar before going on to process the rest of the tokens that make up the graf. The upshot of this is that we can cause TeX to do something at the start of every graf, but after it inserts the indentation box. The way to use this is to do something like the following (the archived post is cut off mid-code here, so the completion of each block is a plausible reconstruction: step a counter and print it in the left margin):

```
\newcounter{grafcounter}
\setcounter{grafcounter}{0}
\everypar={\addtocounter{grafcounter}{1}%
  \llap{\thegrafcounter\quad}}
```

or, to append to whatever tokens are already in \everypar rather than clobbering them:

```
\everypar=\expandafter{\the\everypar
  \addtocounter{grafcounter}{1}%
  \llap{\thegrafcounter\quad}}
```
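To see it in action, here is a minimal complete LaTeX file (my own example, not from the original post) that puts each paragraph's number in the left margin:

```
\documentclass{article}
\newcounter{grafcounter}
\setcounter{grafcounter}{0}
\begin{document}
% Step the counter and print it in the margin at each paragraph start.
\everypar={\addtocounter{grafcounter}{1}%
  \llap{\thegrafcounter\quad}}
This is the first paragraph; a 1 appears in the margin.

This is the second paragraph; the counter has advanced to 2.
\end{document}
```

Note that LaTeX itself uses \everypar internally (for lists, section headings, and so on), so overwriting it globally like this is fine for a demo, but a robust version would append with the \expandafter trick shown above.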
http://math.stackexchange.com/questions/355301/non-degenerate-solutions-to-constant-hamiltonian-flow
# Non-degenerate solutions to constant Hamiltonian flow

As I'm trying to work my way through Dietmar Salamon's "Notes on Floer Homology", I'm having trouble with the very first exercise.

Let $(M, \omega)$ be a compact symplectic manifold. Let $H$ be a real function on $M$, and let $X$ be the vector field associated to $dH$ under the isomorphism $TM \simeq T^{*}M$ induced by $\omega$. Let $x \in M$ be a critical point of $H$, i.e. $dH(x) = 0$. If $\phi_{t} : M \rightarrow M$ is the flow associated to $X$ (that is, $\frac{\partial}{\partial t} \phi_{t} = X \circ \phi_{t}$), then $x$ is a fixed point of this flow. In particular $\phi_{1}(x) = x$. We will say that $x$ is non-degenerate if $\det(\mathrm{id} - d\phi_{1}(x)) \neq 0$. This happens exactly when $d\phi_{1}(x)$ has no nonzero fixed vectors.

The problem is as follows. Show that under these assumptions $x$ is also a non-degenerate critical point of $H$. One possible way to state that is that in any local coordinate system $x_{i}$ around $x$, the matrix $(\frac{\partial^{2} H}{\partial x_{i} \partial x_{j}})_{i, j}$ is non-singular.

I tried to reason as follows. Since $X(x) = 0$, infinitesimally at time $0$ and around $x$, points are (up to first order) not moved by $\phi$ at all. However, as I let the flow run for one unit of time I perform some form of "integration" that makes the second-order changes by $\phi$ "go up an order" and become visible in $d\phi_{1}$. Thus, I should interpret the condition that $d\phi_{1}$ has no fixed vectors as a sign that $X = dH$ moves all vectors around $x$ in a second-order change, i.e. $dH$ is non-degenerate.

I was trying to formalize the "implication" $(\frac{\partial}{\partial t} \phi_{t} = X \circ \phi_{t}) \ \Rightarrow \ \phi_{1}(x) - x = \int_{0}^{1} X(\phi_{t}(x)) \, dt$ so that later I could differentiate both sides with respect to $x$, but I quickly ran into trouble because I am not working in Euclidean space, where - up till now - I have always "gotten by" using coordinate-free descriptions, since the problems I worked on were rather simple-minded. I would be very interested in both a coordinate-free proof and a proof with coordinates, but it would be most useful if it were formal, because I think my problems stem from a large lack of experience with these kinds of problems. Feel free to retag the question as needed.

Your idea was good and here is a formal proof. However, it does use local charts and I would also be interested in seeing a coordinate-free solution.

Choose Darboux coordinates around the critical point $x_0$, so that its neighbourhood is identified with $\mathbb{R}^{n}$, the symplectic form $\omega$ is locally represented by a constant $n\times n$ antisymmetric matrix, and $X = \omega^{-1} \nabla H$. For fixed $x$ from a small neighbourhood of $x_0$ we have $$\psi_1(x) - x = \psi_1(x) - \psi_0(x) = \int_0^1 \frac{d}{dt}\left( \psi_t(x) \right) dt = \int_0^1 X(\psi_t(x))\, dt = \int_0^1 \omega^{-1}\nabla H (\psi_t(x))\, dt.$$ Differentiating both sides with respect to $x$ (note that $\omega$ is constant), we obtain $$d \psi_1(x) - I = \int_0^1 \omega^{-1}\, \text{Hess}(H)(\psi_t(x))\, \frac{d \psi_t(x)}{dx}\, dt,$$ where $\text{Hess}(H)$ denotes the Hessian matrix of $H$. Now put $x = x_0$. As it is a critical point, $\psi_t(x_0) = x_0$ for all $t$ and we get $$d \psi_1(x_0) - I = \omega^{-1}\, \text{Hess}(H)(x_0) \left( \int_0^1 \frac{d \psi_t(x_0)}{dx}\, dt \right).$$ Since the left-hand side is non-singular by the non-degeneracy assumption, every factor on the right must be non-singular as well; in particular the matrix $\text{Hess}(H)(x_0)$ is non-singular.
• It's nice, thank you! I accept it, of course, although I would also like to see a coordinate-free solution. – Piotr Pstrągowski Apr 9 '13 at 15:16

An alternative way of stating the non-degeneracy condition is that the following self-adjoint unbounded operator on $L^2(S^1, \gamma^*TM)$ does not have $0$ in its spectrum. (This formulation is actually what one uses in the proofs where the non-degeneracy assumption is used.) Let $\nabla$ be an arbitrary torsion-free connection on $M$. Then define the differential operator by $\cdot \mapsto -J(\nabla_t \cdot + \nabla_{\cdot} X_H)$. If I haven't made a mistake, this should be just $-J L_{\cdot} X_H$ (and thus doesn't depend on the connection). (I learned this formulation from Siefring's paper on asymptotics, but I don't believe it is an original observation of his.)

In particular, if we don't impose the condition that this operator act on periodic sections of $\gamma^*TM$, the linearized flow applied to any vector, i.e. $d\phi_t \cdot v$ for any $v$, is in the kernel of this operator. The kernel is then non-trivial if and only if there is some vector such that $d\phi_t \cdot v$ is of period 1 in $t$, which is precisely equivalent to $d\phi_1$ having an eigenvector of eigenvalue $1$.

With this formulation, it becomes immediate: if $x_0 = \gamma(t)$ is a constant solution, then this operator becomes $h \mapsto -J (\nabla_t h + \nabla_h X_H)$. However, we know that $X_H = J \nabla H$ and $X_H(x_0) = 0$, so $\nabla_h (J \nabla H) = (\nabla_h J) \nabla H(x_0) + J \nabla_h \nabla H = J \nabla_h \nabla H$. This means the differential operator just simplifies to $-J \nabla_t + \nabla^2 H$. Since the eigenvalues of the Hessian are real, this operator has non-trivial kernel if and only if $\nabla^2 H$ has a zero eigenvalue.
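As a quick cross-check connecting the two answers (my addition, not part of either one): at a critical point the linearized flow solves a constant-coefficient linear ODE, so in Darboux coordinates

$$d\phi_t(x_0) = \exp(tA), \qquad A = \omega^{-1}\,\mathrm{Hess}(H)(x_0).$$

The eigenvalues of $\exp(A)$ are $e^{\lambda}$ for the eigenvalues $\lambda$ of $A$, so

$$\det\left(I - d\phi_1(x_0)\right) \neq 0 \iff \lambda \notin 2\pi i\,\mathbb{Z} \text{ for every eigenvalue } \lambda \text{ of } A.$$

In particular $\lambda = 0$ is excluded, so $A$, and hence $\mathrm{Hess}(H)(x_0)$, is non-singular. (Note the converse fails: the Hessian can be non-singular while $A$ has an eigenvalue $2\pi i$, so non-degeneracy of the period-1 flow is strictly stronger than non-degeneracy of the critical point.)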
https://secure.sky-map.org/starview?object_type=1&object_id=1794&object_name=HIP+55560&locale=ZH
# 56 UMa

### Related articles

**Lithium abundances and rotational behavior for bright giant stars**
Aims. We study the links possibly existing between the lithium content of bright giant stars and their rotational velocity. Methods. We performed a spectral analysis of 145 bright giant stars (luminosity class II) spanning the spectral range from F3 to K5. All these stars have homogeneous rotational velocity measurements available in the literature. Results. For all the stars of the sample, we provide consistent lithium abundances (A_Li), effective temperatures (T_eff), projected rotational velocity (v sin i), mean metallicity ([Fe/H]), stellar mass, and an indication of the stellar multiplicity. The gradual decrease in lithium abundance with T_eff is confirmed for bright giant stars, and it points to a dilution factor that is at least as significant as in giant stars. From the F to K spectral types, the A_Li spans at least three orders of magnitude, reflecting the effects of stellar mass and evolution on dilution. Conclusions. We find that the behavior of A_Li as a function of v sin i in bright giant stars presents the same trend as is observed in giants and subgiants: stars with high A_Li are moderate or fast rotators, while stars with low A_Li show a wide range of v sin i values.

**Spectral Types for Four OGLE-III Transit Candidates: Could These Be Planets?**
We present spectral types for OGLE (Optical Gravitational Lensing Experiment) transiting planet candidates OGLE-TR-134 through 137 based on low-resolution spectra taken at Kitt Peak. Our main objective is to aid those planning radial velocity monitoring of transit candidates. We obtain spectral types with an accuracy of 2 spectral subtypes, along with tentative luminosity classifications. Combining the spectral types with light-curve fits to the OGLE transit photometry, and with Two Micron All Sky Survey counterparts in two cases, we conclude that OGLE-TR-135 and 137 are not planetary transits, while OGLE-TR-134 and 136 are good candidates and should be observed with precision radial velocity monitoring to determine whether the companions are of planetary mass. OGLE-TR-135 is ruled out chiefly because a discrepancy between the stellar parameters obtained from the transit fit and those inferred from the spectra indicates that the system is a blend. OGLE-TR-137 is ruled out because the depth of the transit combined with the spectral type of the star indicates that the transiting object is stellar. OGLE-TR-134 and 136, if unblended main-sequence stars, are each orbited by a transiting object with radius below 1.4 R_J. The caveats are that our luminosity classification suggests that OGLE-TR-134 could be a giant (and therefore a blend), while OGLE-TR-136 shows a (much smaller) discrepancy of the same form as OGLE-TR-135, which may indicate that the system is a blend. However, since our luminosity classifications are uncertain at best, and the OGLE-TR-136 discrepancy can be explained if the primary is a slightly anomalous main-sequence star, the stars remain good candidates.

**Synthetic Lick Indices and Detection of α-enhanced Stars. II. F, G, and K Stars in the -1.0 < [Fe/H] < +0.50 Range**
We present an analysis of 402 F, G, and K solar neighborhood stars, with accurate estimates of [Fe/H] in the range -1.0 to +0.5 dex, aimed at the detection of α-enhanced stars and at the investigation of their kinematical properties.
The analysis is based on the comparison of 571 sets of spectral indices in the Lick/IDS system, coming from four different observational data sets, with synthetic indices computed with solar-scaled abundances and with α-element enhancement. We use selected combinations of indices to single out α-enhanced stars without requiring previous knowledge of their main atmospheric parameters. By applying this approach to the total data set, we obtain a list of 60 bona fide α-enhanced stars and of 146 stars with solar-scaled abundances. The properties of the detected α-enhanced and solar-scaled abundance stars with respect to their [Fe/H] values and kinematics are presented. A clear kinematic distinction between solar-scaled and α-enhanced stars was found, although a one-to-one correspondence to "thin disk" and "thick disk" components cannot be supported with the present data.

**Empirically Constrained Color-Temperature Relations. II. uvby**
A new grid of theoretical color indices for the Strömgren uvby photometric system has been derived from MARCS model atmospheres and SSG synthetic spectra for cool dwarf and giant stars having -3.0 <= [Fe/H] <= +0.5 and 3000 <= Teff <= 8000 K. At warmer temperatures (i.e., 8000-2.0. To overcome this problem, the theoretical indices at intermediate and high metallicities have been corrected using a set of color calibrations based on field stars having well-determined distances from Hipparcos, accurate Teff estimates from the infrared flux method, and spectroscopic [Fe/H] values. In contrast with Paper I, star clusters played only a minor role in this analysis in that they provided a supplementary constraint on the color corrections for cool dwarf stars with Teff <= 5500 K. They were mainly used to test the color-Teff relations and, encouragingly, isochrones that employ the transformations derived in this study are able to reproduce the observed CMDs (involving u-v, v-b, and b-y colors) for a number of open and globular clusters (including M67, the Hyades, and 47 Tuc) rather well. Moreover, our interpretations of such data are very similar, if not identical, with those given in Paper I from a consideration of BV(RI)C observations for the same clusters, which provides a compelling argument in support of the color-Teff relations that are reported in both studies. In the present investigation, we have also analyzed the observed Strömgren photometry for the classic Population II subdwarfs, compared our "final" (b-y)-Teff relationship with those derived empirically in a number of recent studies, and examined in some detail the dependence of the m1 index on [Fe/H]. Based, in part, on observations made with the Nordic Optical Telescope, operated jointly on the island of La Palma by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. Based, in part, on observations obtained with the Danish 1.54 m telescope at the European Southern Observatory, La Silla, Chile.

**STELIB: A library of stellar spectra at R ~ 2000**
We present STELIB, a new spectroscopic stellar library, available at http://webast.ast.obs-mip.fr/stelib. STELIB consists of a homogeneous library of 249 stellar spectra in the visible range (3200 to 9500 Å), with an intermediate spectral resolution (~3 Å) and sampling (1 Å). This library includes stars of various spectral types and luminosity classes, spanning a relatively wide range in metallicity.
The spectral resolution, wavelength and spectral type coverage of this library represent a substantial improvement over previous libraries used in population synthesis models. The overall absolute photometric uncertainty is 3%. Based on observations collected with the Jacobus Kapteyn Telescope (owned and operated jointly by the Particle Physics and Astronomy Research Council of the UK, the Nederlandse Organisatie voor Wetenschappelijk Onderzoek of The Netherlands and the Instituto de Astrofísica de Canarias of Spain, and located in the Spanish Observatorio del Roque de Los Muchachos on La Palma, which is operated by the Instituto de Astrofísica de Canarias), the 2.3 m telescope of the Australian National University at Siding Spring, Australia, and the VLT-UT1 Antu Telescope (ESO). Tables 1 to 6 and A.1 to A.7 are only available in electronic form at http://www.edpsciences.org. The STELIB library is also available at the CDS, via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/402/433

**Chemical compositions of four barium stars**
We obtain abundances of alpha, iron peak and neutron capture (n-capture) process elements in four Ba stars, HD 26886, HD 27271, HD 50082 and HD 98839, based on high resolution, high signal-to-noise spectra. We find that all of these Ba stars are disk stars. Their alpha and iron peak elements are similar to the solar abundances. The n-capture process elements are overabundant relative to the Sun. In particular, the second peak slow neutron capture process (s-process) elements, Ba and La, are higher than the first peak s-process elements, Y and Zr. Analyzing the abundances of the four sample stars, the heavy-element abundances of the strong Ba star HD 50082 are higher than those of the other three mild Ba stars. The stellar mass of the strong Ba star HD 50082 is 1.32 Msun (+0.28, -0.22 Msun), which is consistent with the average mass of strong Ba stars (1.5 Msun). For the mild Ba star HD 27271, we derive 1.90 Msun (+0.25, -0.20 Msun), consistent with the average mass of mild Ba stars (1.9 Msun, with a 0.6 Msun white dwarf companion). For the mild Ba star HD 26886, the derived 2.78 Msun (+0.75, -0.78 Msun) is consistent with the average 2.3 Msun of mild Ba stars with 0.67 Msun companion white dwarfs within the errors. The mass of the mild Ba star HD 98839 is as high as 3.62 Msun, which inspires more thoughts on the formation of the Ba star phenomenon. Using our angular momentum conservation theoretical model of wind accretion in Ba binary systems, we obtain the theoretical heavy-element abundances of Ba stars that best fit our data. The results show that the observed abundances of the typical strong Ba star HD 50082 and the typical mild Ba star HD 27271 are consistent with the theoretical results very well. This suggests that their heavy-element abundances were caused by accreting the ejecta of AGB stars, the progenitors of the present white dwarf companions, through stellar wind. However, the wind accretion scenario cannot explain the observed abundance pattern of the mild Ba star HD 26886 with its shorter orbital period (P = 1263.2 d). The mild Ba star HD 98839, with high mass (up to 3.62 Msun) and very long orbital period (P > 11 000 d), may be either a star with the heavy elements enriched by itself or a "true" Ba star.
Table 3 and Tables 5(1) to 5(4) are only available in electronic form at http://www.edpsciences.org

**Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statistics**
The Catalogue, available at the Centre de Données Stellaires de Strasbourg, consists of 13 573 records concerning the results obtained from different methods for 7778 stars, reported in the literature. The following data are listed for each star: identifications, apparent magnitude, spectral type, apparent diameter in arcsec, absolute radius in solar units, method of determination, reference, remarks. Comments and statistics obtained from CADARS are given. The Catalogue is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521

**Sixth Catalogue of Fundamental Stars (FK6). Part III. Additional fundamental stars with direct solutions**
The FK6 is a suitable combination of the results of the HIPPARCOS astrometry satellite with ground-based data, measured over a long interval of time and summarized mainly in the FK5. Part III of the FK6 (abbreviated FK6(III)) contains additional fundamental stars with direct solutions. Such direct solutions are appropriate for single stars or for objects which can be treated like single stars. Part III of the FK6 contains in total 3272 stars. Their ground-based data stem from the bright extension of the FK5 (735 stars), from the catalogue of remaining Sup stars (RSup, 732 stars), and from the faint extension of the FK5 (1805 stars). From the 3272 stars in Part III, we have selected 1928 objects as "astrometrically excellent stars", since their instantaneous proper motions and their mean (time-averaged) ones do not differ significantly. Hence most of the astrometrically excellent stars are well-behaving "single-star candidates" with good astrometric data. These stars are most suited for high-precision astrometry. On the other hand, 354 of the stars in Part III are Δμ binaries in the sense of Wielen et al. (1999). Many of them are newly discovered probable binaries with no other hitherto known indication of binarity. The FK6 gives, besides the classical "single-star mode" solutions (SI mode), other solutions which take into account the fact that hidden astrometric binaries among "apparently single" stars introduce sizable "cosmic errors" into the quasi-instantaneously measured HIPPARCOS proper motions and positions. The FK6 gives, in addition to the SI mode, the "long-term prediction (LTP) mode" and the "short-term prediction (STP) mode". These LTP and STP modes are on average the most precise solutions for apparently single stars, depending on the epoch difference with respect to the HIPPARCOS epoch of about 1991. The typical mean error of an FK6(III) proper motion in the single-star mode is 0.59 mas/year. This is a factor of 1.34 better than the typical HIPPARCOS errors for these stars of 0.79 mas/year. In the long-term prediction mode, in which cosmic errors are taken into account, the FK6(III) proper motions have a typical mean error of 0.93 mas/year, which is by a factor of about 2 better than the corresponding error for the HIPPARCOS values of 1.83 mas/year (cosmic errors included).

**Evidence for Asphericity in the Type IIn Supernova SN 1998S**
We present optical spectropolarimetry obtained at the Keck II 10 m telescope on 1998 March 7 UT, along with total flux spectra spanning the first 494 days after discovery (1998 March 2 UT), of the peculiar Type IIn supernova (SN) 1998S.
The SN is found to exhibit a high degree of linear polarization, implying significant asphericity for its continuum-scattering environment. Prior to the removal of interstellar polarization, the polarization spectrum is characterized by a flat continuum (at p ~ 2%) with distinct changes in polarization associated with both the broad (symmetric, half-width near zero intensity >~ 10,000 km s^-1) and narrow (unresolved, full width at half-maximum less than 300 km s^-1) line emission seen in the total flux spectrum. When analyzed in terms of a polarized continuum with unpolarized broad-line recombination emission, an intrinsic continuum polarization of p ~ 3% results, suggesting a global asphericity of >~ 45% from the oblate, electron-scattering dominated models of Höflich. The smooth, blue continuum evident at early times is shown to be inconsistent with a reddened, single-temperature blackbody, instead having a color temperature that increases with decreasing wavelength. Broad emission-line profiles with distinct blue and red peaks are seen in the total flux spectra at later times, suggesting a disklike or ringlike morphology for the dense (n_e ~ 10^7 cm^-3) circumstellar medium, generically similar to what is seen directly in SN 1987A, although much denser and closer to the progenitor in SN 1998S. Implications of the circumstellar scattering environment on the spectropolarimetry are discussed, as are the effects of uncertainty in the removal of interstellar polarization; the importance of obtaining multiple spectropolarimetric epochs in future studies to help better constrain the interstellar polarization value is particularly stressed. Using information derived from the spectropolarimetry and the total flux spectra, an evolutionary scenario for SN 1998S and its progenitor is presented.

**The Vienna-KPNO search for Doppler-imaging candidate stars. I. A catalog of stellar-activity indicators for 1058 late-type Hipparcos stars**
We present the results from a spectroscopic Ca II H&K survey of 1058 late-type stars selected from a color-limited subsample of the Hipparcos catalog. Out of these 1058 stars, 371 stars were found to show significant H&K emission, most of them previously unknown; 23% with strong emission, 36% with moderate emission, and 41% with weak emission. These spectra are used to determine absolute H&K emission-line fluxes, radial velocities, and equivalent widths of the luminosity-sensitive Sr II line at 4077 Å. Red-wavelength spectroscopic and Strömgren y photometric follow-up observations of the 371 stars with H&K emission are used to additionally determine the absolute Hα-core flux, the lithium abundance from the Li I 6708 Å equivalent width, the rotational velocity v sin i, the radial velocity, and the light variations and their periodicity. The latter is interpreted as the stellar rotation period due to an inhomogeneous surface brightness distribution. 156 stars were found with photometric periods between 0.29 and 64 days; 11 additional systems showed quasi-periodic variations possibly in excess of ~50 days. A further 54 stars had variations but no unique period was found, and four stars were essentially constant. Altogether, 170 new variable stars were discovered. Additionally, we found 17 new SB1 (plus 16 new candidates) and 19 new SB2 systems, as well as one definite and two possible new SB3 systems. Finally, we present a list of 21 stars that we think are the most suitable candidates for a detailed study with the Doppler-imaging technique.
Tables A1-A3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

**A catalog of rotational and radial velocities for evolved stars**
Rotational and radial velocities have been measured for about 2000 evolved stars of luminosity classes IV, III, II and Ib covering the spectral region F, G and K. The survey was carried out with the CORAVEL spectrometer. The precision for the radial velocities is better than 0.30 km s^-1, whereas for the rotational velocity measurements the uncertainties are typically 1.0 km s^-1 for subgiants and giants and 2.0 km s^-1 for class II giants and Ib supergiants. These data will add constraints to studies of the rotational behaviour of evolved stars as well as solid information concerning the presence of external rotational brakes, tidal interactions in evolved binary systems and the link between rotation, chemical abundance and stellar activity. In this paper we present the rotational velocity v sin i and the mean radial velocity for the stars of luminosity classes IV, III and II. Based on observations collected at the Haute-Provence Observatory, Saint-Michel, France and at the European Southern Observatory, La Silla, Chile. Table 5 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

**The effective temperature scale of giant stars (F0-K5). I. The effective temperature determination by means of the IRFM**
We have applied the InfraRed Flux Method (IRFM) to a sample of approximately 500 giant stars in order to derive their effective temperatures with an internal mean accuracy of about 1.5% and a maximum uncertainty in the zero point of the order of 0.9%. For the application of the IRFM, we have used a homogeneous grid of theoretical model atmosphere flux distributions developed by Kurucz (1993). The atmospheric parameters of the stars roughly cover the ranges: 3500 K <= T_eff <= 8000 K; -3.0 <= [Fe/H] <= +0.5; 0.5 <= log(g) <= 3.5. The monochromatic infrared fluxes at the continuum are based on recent photometry with errors that satisfy the accuracy requirements of the work. We have derived the bolometric correction of giant stars by using a new calibration which takes the effect of metallicity into account. Direct spectroscopic determinations of metallicity have been adopted where available, although estimates based on photometric calibrations have been considered for some stars lacking spectroscopic ones. The adopted infrared absolute flux calibration, based on direct optical measurements of stellar angular diameters, puts the effective temperatures determined in this work on the same scale as those obtained by direct methods. We have derived up to four temperatures, T_J, T_H, T_K and T_L', for each star using the monochromatic fluxes at different infrared wavelengths in the photometric bands J, H, K and L'. They show good consistency over 4000 K, and there is no appreciable trend with wavelength, metallicity and/or temperature. We provide a detailed description of the steps followed for the application of the IRFM, as well as the sources of error and their effect on final temperatures. We also provide a comparison of the results with previous work.
**Absolute declinations with the photoelectric astrolabe at Calern Observatory (OCA)**
A regular observational programme with a photoelectric astrolabe has been performed at the "Observatoire du Calern" (Observatoire de la Côte d'Azur, OCA; φ = +43°44'55.011", λ = -0h27m42.44s; Calern, Caussols, France) for the last twenty years. It was almost fully automatized between 1984 and 1987. Since 1988 the photoelectric astrolabe has been used without any modification. In addition to determining the daily orientation of the local vertical, the yearly analysis of the residuals permits corrections to the star catalogue used to be derived (Vigouroux et al. 1992). A global reduction method was applied for the ASPHO observations. The new form of the equations (Martin & Leister 1997) gives us the possibility of using the entire observing program, with data taken at two zenith distances (30° and 45°). The program contains about 41648 star transits of 269 different stars taken at the "Observatoire du Calern" (OCA). The reduction was based on the HIPPARCOS system. We discuss the possibility of computing absolute declinations through stars belonging simultaneously to the 30° and 45° zenith distance programmes. The absolute declination corrections were determined for 185 stars with a precision of 0.027 arcsec, and the value of the determined equator correction is -0.018 arcsec +/- 0.005 arcsec. The instrumental effects were also determined. The mean epoch is 1995.29. Catalogue only available at CDS in electronic form via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

**Catalogs of temperatures and [Fe/H] averages for evolved G and K stars**
A catalog of mean values of [Fe/H] for evolved G and K stars is described. The zero point for the catalog entries has been established by using differential analyses. Literature sources for those entries are included in the catalog. The mean values are given with rms errors and numbers of degrees of freedom, and a simple example of the use of these statistical data is given. For a number of the stars with entries in the catalog, temperatures have been determined. A separate catalog containing those data is briefly described. Catalog only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

**Evolution of X-ray activity and rotation on G-K giants**
The recent availability of stellar parallaxes provided by the Hipparcos star catalogue (ESA 1997) enables an accurate determination of the positions of single field giants in a theoretical H-R diagram and a reliable estimate of their masses. The present study combines these new astrometric data with previously published X-ray fluxes and rotational velocities. The results confirm the existence of a sharp decrease of X-ray emission at spectral type K1 for 2.5 M_sun < M < 5 M_sun giants. The study shows that the rotational velocity of these stars reaches a minimum at the same location in the H-R diagram. However, no tight relationship between X-ray luminosities and projected equatorial velocities was found among the sample stars. I suggest that these results could reflect the importance of differential rotation in determining the level of coronal emission among >= 2.5 M_sun G and K giants. The restoration of rigid rotation at the bottom of the red giant branch could prevent the maintenance of large-scale magnetic fields, thus explaining the sharp decrease of coronal X-ray emission at spectral type K1.
**Broad-band JHK(L') photometry of a sample of giants with 0.5 > [Fe/H] > -3**
We present the results of a three-year campaign of broad-band photometry in the near-infrared J, H, K and L' bands for a sample of approximately 250 giant stars carried out at the Observatorio del Teide (Tenerife, Spain). Transformations of the Telescopio Carlos Sanchez system into/from several currently used infrared systems are extended to the redward part of the colour axis. The linearity of our photometric system in the range -3 mag [Fe/H] > -3. Data of comparable quality previously published have been added to the sample in order to increase the reliability of the relations to be obtained. We also provide mean IR colours for giant stars according to spectral type. Tables 1, 2 and 3 are only available in electronic form via the CDS (anonymous ftp 130.79.128.5 or http://cdsweb.u-strasbg.fr/Abstract.html).

**Insights into the formation of barium and Tc-poor S stars from an extended sample of orbital elements**
The set of orbital elements available for chemically-peculiar red giant (PRG) stars has been considerably enlarged thanks to a decade-long CORAVEL radial-velocity monitoring of about 70 barium stars and 50 S stars. When account is made for the detection biases, the observed binary frequency among strong barium stars, mild barium stars and Tc-poor S stars (respectively 35/37, 34/40 and 24/28) is compatible with the hypothesis that they are all members of binary systems. The similarity between the orbital-period, eccentricity and mass-function distributions of Tc-poor S stars and barium stars confirms that Tc-poor S stars are the cooler analogs of barium stars. A comparative analysis of the orbital elements of the various families of PRG stars, and of a sample of chemically-normal, binary giants in open clusters, reveals several interesting features. The eccentricity-period diagram of PRG stars clearly bears the signature of dissipative processes associated with mass transfer, since the maximum eccentricity observed at a given orbital period is much smaller than in the comparison sample of normal giants. The mass-function distribution is compatible with the unseen companion being a white dwarf (WD). This lends support to the scenario of formation of the PRG star by accretion of heavy-element-rich matter transferred from the former asymptotic giant branch progenitor of the current WD. Assuming that the WD companion has a mass in the range 0.60 +/- 0.04 Msun, the masses of mild and strong barium stars amount to 1.9 +/- 0.2 and 1.5 +/- 0.2 Msun, respectively. Mild barium stars are not restricted to long-period systems, contrary to what is expected if the smaller accretion efficiency in wider systems were the dominant factor controlling the pollution level of the PRG star. These results suggest that the difference between mild and strong barium stars is mainly one of galactic population rather than of orbital separation, in agreement with their respective kinematical properties. There are indications that metallicity may be the parameter blurring the period - Ba-anomaly correlation: at a given orbital period, increasing levels of heavy-element overabundances are found in mild barium stars, strong barium stars, and Pop. II CH stars, corresponding to a sequence of increasingly older, i.e., more metal-deficient, populations. PRG stars thus seem to be produced more efficiently in low-metallicity populations. Conversely, normal giants in barium-like binary systems may exist in more metal-rich populations.
HD 160538 (DR Dra) may be such an example, and its very existence indicates at least that binarity is not a sufficient condition to produce a PRG star. This paper is dedicated to the memory of Antoine Duquennoy, who contributed many of the observations used in this study.

**A catalogue of [Fe/H] determinations: 1996 edition**
A fifth edition of the Catalogue of [Fe/H] determinations is presented herewith. It contains 5946 determinations for 3247 stars, including 751 stars in 84 associations, clusters or galaxies. The literature is complete up to December 1995. The 700 bibliographical references correspond to [Fe/H] determinations obtained from high resolution spectroscopic observations and detailed analyses, most of them carried out with the help of model atmospheres. The Catalogue is made up of three formatted files: File 1: field stars; File 2: stars in galactic associations and clusters, and stars in the SMC, LMC, M33; File 3: numbered list of bibliographical references. The three files are only available in electronic form at the Centre de Données Stellaires in Strasbourg, via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), or via http://cdsweb.u-strasbg.fr/Abstract.html

**Barium stars, galactic populations and evolution**
In this paper HIPPARCOS astrometric and kinematical data together with radial velocities from other sources are used to calibrate both luminosity and kinematics parameters of Ba stars and to classify them. We confirm the results of our previous paper (where we used data from the HIPPARCOS Input Catalogue), and show that Ba stars are an inhomogeneous group. Five distinct classes have been found, i.e. some halo stars and four groups belonging to the disk population: roughly supergiants, two groups of giants (one on the giant branch, the other at the clump location) and dwarfs, with a few subgiants mixed with them. The confirmed or suspected duplicity, the variability and the range of known orbital periods found in each group give coherent results supporting the scenario that Ba stars are not-too-massive binary stars in various evolutionary stages, all of which were previously enriched with Ba from a more evolved companion. The presence in the sample of a certain number of "false" Ba stars is confirmed. The estimates of age and mass are compatible with models for stars with a strong Ba anomaly. The mild Ba stars with an estimated mass higher than 3 Msun may be either stars Ba enriched by themselves or "true" Ba stars, which imposes new constraints on models.

**Absolute magnitudes and kinematics of barium stars**
The absolute magnitude of barium stars has been obtained from kinematical data using a new algorithm based on the maximum-likelihood principle. The method allows a sample to be separated into groups characterized by different mean absolute magnitudes, kinematics and z-scale heights. It also takes into account, simultaneously, the censorship in the sample and the errors on the observables. The method has been applied to a sample of 318 barium stars. Four groups have been detected. Three of them show a kinematical behaviour corresponding to disk population stars. The fourth group contains stars with halo kinematics. The luminosities of the disk population groups span a large range. The intrinsically brightest one (M_v = -1.5 mag, σ_M = 0.5 mag) seems to be an inhomogeneous group containing barium binaries as well as AGB single stars. The most numerous group (about 150 stars) has a mean absolute magnitude corresponding to stars on the red giant branch (M_v = 0.9 mag, σ_M = 0.8 mag).
The third group contains barium dwarfs; the obtained mean absolute magnitude is characteristic of stars on the main sequence or on the subgiant branch (M_v = 3.3 mag, σ_M = 0.5 mag). The obtained mean luminosities as well as the kinematical results are compatible with an evolutionary link between barium dwarfs and classical barium giants. The highly luminous group is not linked with these last two groups. More high-resolution spectroscopic data will be necessary in order to better discriminate between barium and non-barium stars.

**Thirty years' radial velocities of 56 Ursae Majoris**
Not Available

**The Pulkovo Spectrophotometric Catalog of Bright Stars in the Range from 320 to 1080 nm**
A spectrophotometric catalog is presented, combining results of numerous observations made by Pulkovo astronomers at different observing sites. The catalog consists of three parts: the first contains the data for 602 stars in the spectral range of 320-735 nm with a resolution of 5 nm, the second contains 285 stars in the spectral range of 500-1080 nm with a resolution of 10 nm, and the third contains 278 stars combined from the preceding catalogs in the spectral range of 320-1080 nm with a resolution of 10 nm. The data are presented in absolute energy units W/m²·m, with a step of 2.5 nm and with an accuracy not lower than 1.5-2.0%.

**Strömgren four-color photometry of X-ray active late-type stars: Evidence for activity-induced deficiency in the m_1 index**
We present the results of a uvby-β photometric study of a sample of active late-type stars (F-K) selected from the Einstein Extended Medium Sensitivity Survey. Our work shows the presence in the sample of a star population with the photometric index c_1 typical of main sequence stars and an unexpected deficiency in the m_1 index. Stars with more anomalous values of m_1 also have very high values of f_X/f_V and X-ray surface fluxes, near the "saturation" limit observed in the most active stars and similar to the flux observed in solar active regions. We discuss these results in the light of similar results found in the Sun, comparing m_1 indices in quiet and active regions, and in other samples of active stars.

**Colour excesses of F-G supergiants and Cepheids from Geneva photometry**
A reddening scale for F-G supergiants and Cepheids is presented. Supergiants with low reddenings or in clusters form the basis of the calibration. In this sense, it is entirely empirical. The data have been obtained in the Geneva photometric system. Comparisons with other reddening scales show no disagreement. The only problem is with Fernie's scale for Cepheids (1990), where a systematic trend exists. Its origin is not clear. It is suggested to extend the number of supergiants with independently obtained colour excesses in order to test the existence of a possible luminosity dependence of the calibration. A period-colour relation for Cepheids is deduced on the basis of the present reddening corrections. It gives strong support to V473 Lyr being a second-overtone pulsator.

**Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue**
We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978), to which we have added the catalogue of spectroscopic binary systems (Batten et al. 1989). For each star, when possible, we give: 1) an acronym to enter SIMBAD (Set of Identifications, Measurements and Bibliography for Astronomical Data) of the CDS (Centre de Données Astronomiques de Strasbourg).
2) the HIC number of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number (Catalogue des Composantes des etoiles Doubles et Multiples) by Dommanget & Nys (1994). For the cluster stars, a precise study has been done on the identifier numbers. Numerous remarks point out the problems we have had to deal with.

A revised effective-temperature calibration for the DDO photometric system
A revised effective-temperature calibration for the David Dunlap Observatory (DDO) photometric system is presented. Recently published photometric and spectroscopic observations of field and open-cluster G and K stars allow a better definition of the solar-abundance fiducial relation in the DDO C0(45-48) vs. C0(42-45) diagram. The ability of the DDO system to predict MK spectral types of G and K giants is demonstrated. The new DDO effective temperature calibration reproduces satisfactorily the infrared temperature scale of Bell and Gustafsson (1989). It is shown that Osborn's (1979) calibration underestimates the effective temperatures of K giants by approximately 170 K and those of late-type dwarfs by approximately 150 K.

Stromgren Four-Colour uvby Photometry of G5-type HD Stars Brighter than mV = 8.6
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1993A&AS..102...89O&db_key=AST

A catalogue of Fe/H determinations - 1991 edition
A revised version of the catalog of Fe/H determinations published by G. Cayrel et al. (1985) is presented. The catalog contains 3252 Fe/H determinations for 1676 stars. The literature is complete up to December 1990. The catalog includes only Fe/H determinations obtained from high resolution spectroscopic observations based on detailed spectroscopic analyses, most of them carried out with model atmospheres. The catalog contains a good number of Fe/H determinations for stars from open and globular clusters and for some supergiants in the Magellanic Clouds.

Fifth fundamental catalogue. Part 2: The FK5 extension - new fundamental stars
The mean positions and proper motions for 3117 new fundamental stars, essentially in the magnitude range of about 4.5 to 9.5, are given in this FK5 extension. The mean apparent visual magnitude is 7.2, which is on average 2.5 magnitudes fainter than the basic FK5, which has a mean magnitude of 4.7. (The basic FK5 gives the mean positions and proper motions for the classical 1535 fundamental stars.) The following are discussed: the observational material, reduction of observations, star selection, and the system for the FK5 extension. An explanation and description of the catalog are given. The catalog of 3117 fundamental stars for the equinox and epoch J2000.0 and B1950.0 is presented. The parallaxes and radial velocities for 22 extension stars with large forecasting effects are given. Catalogs used in the compilation of the FK5 fundamental catalog are listed.

The correction in right ascension of 508 stars determined with the PMO photoelectric transit instrument.
Not Available

### Member of the following groups

#### Observational data

Constellation: Ursa Major
Right ascension: 11h22m49.60s
Declination: +43°28'58.0"
Apparent magnitude: 4.99
Distance: 150.83
Proper motion in right ascension: -36.8
Proper motion in declination: -13.5
B-T magnitude: 6.239
V-T magnitude: 5.092

Proper name: ???
Flamsteed: 56 UMa
HD 1989: HD 98839
TYCHO-2 2000: TYC 3015-2321-1
USNO-A2.0: USNO-A2 1275-07905480
BSC 1991: HR 4392
HIP: HIP 55560

→ Request more catalogues from VizieR
2019-11-21 04:00:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6368448734283447, "perplexity": 8524.029614892803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00105.warc.gz"}
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.18/share/doc/Macaulay2/Macaulay2Doc/html/___Singular_sp__Book_sp1.8.9.html
# Singular Book 1.8.9 -- radical membership

Recall that an element $f$ lies in the radical of an ideal $I \subset R$ if and only if $1 \in (I, tf-1) \subset R[t]$.

i1 : A = QQ[x,y,z];

i2 : I = ideal"x5,xy3,y7,z3+xyz";

o2 : Ideal of A

i3 : f = x+y+z;

i4 : B = A[t];

i5 : J = substitute(I,B) + ideal(f*t-1)

o5 = ideal (x^5, x*y^3, y^7, x*y*z + z^3, (x + y + z)t - 1)

o5 : Ideal of B

i6 : 1 % J

o6 = 0

o6 : B

The polynomial f is in the radical. Let's compute the radical to make sure.

i7 : radical I

o7 = ideal (z, y, x)

o7 : Ideal of A
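For comparison, here is a minimal sketch of the same radical-membership test using SymPy's Groebner-basis routines (my own addition, assuming SymPy is available; the Macaulay2 session above remains the authoritative version):

import sympy as sp
from sympy import groebner

x, y, z, t = sp.symbols('x y z t')

I_gens = [x**5, x*y**3, y**7, z**3 + x*y*z]   # generators of the ideal I
f = x + y + z

# f lies in rad(I) iff 1 is in the ideal (I, t*f - 1) in QQ[x,y,z,t],
# i.e. iff the reduced Groebner basis of that ideal collapses to {1}.
gb = groebner(I_gens + [t*f - 1], x, y, z, t, order='grevlex')
print(gb.exprs)          # expected: [1]
print(gb.exprs == [1])   # expected: True

If the basis is just [1], this confirms independently that f is in the radical.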
2021-09-26 19:29:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770395517349243, "perplexity": 997.8501673925704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057913.34/warc/CC-MAIN-20210926175051-20210926205051-00101.warc.gz"}
https://mathematica.stackexchange.com/questions/131671/define-tilde-as-a-postfix-operator
# Define Tilde as a postfix operator

I have read a lot of posts in this forum on defining infix symbols like CircleDot and Tilde. The Help pages say Tilde is often used either as an infix or a postfix operator, but I can find no example in the Mathematica documentation nor in this forum of how to define it as a postfix operator. I have several such operators I would like to define, but I am unable to define even a very simple one. It is straightforward to define Tilde as an infix operator. For example, input:

Foo[expr_] := -expr
expr1_ ∼ expr2_ := Foo[expr1]
x =.
100 ∼ x

-100

However, the operation I wish to define would be

expr1_∼ := Foo[expr1]
100∼

also with an answer of -100, but this is not accepted by Mathematica. I don't wish to invoke the Notation package. Is this possible to do? I hope the answer is trivial and, if so, I apologize for asking.

• Can you link to the help page that says that it can be used as a postfix operator? – Szabolcs Nov 21 '16 at 7:47
• I just spent an hour looking and I can't find it. I saw it again yesterday, just one tiny reference in the Documentation Center. The sentence said that it is usually used as an infix symbol but sometimes as postfix. I guess the upshot of this is that it is standard in Mathematica to easily define binary operators (using special symbols like Tilde) but not unary operators. I find that strange. I had posted here just to find out whether or not this can be done, so that I would know whether or not to stop trying to do this. – matrixbud Nov 21 '16 at 16:29

It is not possible to make \[Tilde] postfix. It is not possible to define new operators with new parsing rules in Mathematica in such a way that they work generally. It is only possible to do it with the Notation package, but it will only work in the notebook interface, not in plain-text source files or in terminal mode. Perhaps someone else will write an answer about that topic, as I don't use the Notation package. Instead I'll talk about how Mathematica parses plain-text input.

a \[Tilde] b is hard-wired to parse to Tilde[a, b]. You can assign any definition to the symbol Tilde, but you cannot change how a \[Tilde] b is parsed. You cannot change the fact that it is infix, you cannot change its precedence, and you cannot change the fact that it parses directly to Tilde[a, b].

Note 1: Parsing means converting a textual representation of an expression into an in-memory data structure. It is separate from evaluation. Example: f@x doesn't evaluate to f[x]. It directly parses to the same in-memory data structure as f[x]. f@x and f[x] are indistinguishable by the evaluator, as it never even sees their textual form, only their internal representation.

Note 2: Perhaps it is unfortunate to work with \[Tilde] because it looks very similar to ~, which has a completely different meaning: x ~f~ y is the same as f[x, y].

• @Szabolcs, thank you for this clear explanation. A follow-on question, if you have time. If I select a non-hard-wired special symbol, is it possible to give it a postfix attribute and maybe a precedence level (not sure I am using the correct terminology) and then define it as a unary operator? I am not locked in to Tilde, although it happens to be the mathematical standard for one of the operations I have in mind. If so, is there a list of non-hard-wired special symbols, or a way to query a special symbol to know if it is hard-wired? – matrixbud Nov 21 '16 at 19:50
• @matrixbud No, it is not possible to customize the parsing step.
One reason is that there are multiple parsers for Mathematica: the kernel has one, the front end has a different one, and even the Workbench has a Java implementation (I think). These should all ideally behave the same way, so customizing just one can't be allowed. (Yes, there are sometimes bugs stemming from the fact that they don't truly behave the same way...) – Szabolcs Nov 21 '16 at 19:59
• @matrixbud What you can do is restrict yourself to using a notebook interface. Things in the notebook are represented as "boxes". Search for this word in the documentation. You can customize how boxes are translated to Mathematica input or the reverse; see MakeExpression and MakeBoxes. This is not easy. The Notation package uses this feature to make it easier to set up new notations—this is what you want. But I am not experienced with this package. Start here and read the tutorials, also notation-package. – Szabolcs Nov 21 '16 at 20:01
• THAT answer is definitive. It explains a lot and I appreciate it. I'll just have to decide whether this feature is important enough to me to (1) learn the Notation package and (2) live with the overhead that comes with the package. – matrixbud Nov 21 '16 at 22:13
• I need to put in a feature request to add a few postfix symbols. Presumably the developers can coordinate the parsers. – matrixbud Nov 21 '16 at 22:34
2019-10-20 15:38:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5068311095237732, "perplexity": 645.6249907533555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986710773.68/warc/CC-MAIN-20191020132840-20191020160340-00141.warc.gz"}
https://physics.stackexchange.com/questions/412041/why-can-two-skew-forces-not-simplify-to-a-single-force?noredirect=1
# Why can two skew forces not simplify to a single force? In my textbook it states that: In 3 dimensions however there is a fourth possibility. For example, consider two forces whose lines of action are skew (non-intersecting, non-parallel). Such a pair of forces can not be equivalent to a single force, couple or be in equilibrium, but are equivalent to a force and a couple whose plane does not include the force. I understand the impossibility of equilibrium and it simplifying to a couple, but why can't it be equivalent to a single force acting through a specific point? A diagram to illustrate would be greatly appreciated. • In order for both forces to act on a single point, their lines of action must intersect at a point (or in the case of a rigid body, if their lines of action are parallel). If their lines of action are skew, no such point exists. – probably_someone Jun 16 '18 at 5:55 • :D you've simply restated the question. – Edward Garemo Jun 16 '18 at 9:32 • The translation part could be simplified to one force, but the torque would be different than the real one. You need both contributions. – FGSUZ Jun 17 '18 at 22:56 • What does your textbook say about the options for a pair of forces in 2 dimensions? – sammy gerbil Jun 18 '18 at 9:56 • Which point would you choose? – ja72 Jun 18 '18 at 22:12 Pick any point away from the line of action of a force and you need an equipollent moment to balance things out. With two forces any point along one of the lines of action requires a moment for the other line. The question then becomes, is there a point in space where the two moments needed for the two forces cancel each other out. The answer is yes, almost. What is going to be left is a component of the torque parallel to the combined line of action. Here is the procedure mathematically. 1. Two (non parallel) force vectors $\boldsymbol{F}_1$ and $\boldsymbol{F}_2$ that each passes through points $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ in space respectively. 2. The combined loading is simply $$\boldsymbol{F} = \boldsymbol{F}_1 + \boldsymbol{F}_2$$ 3. The combined moment about the origin is $$\boldsymbol{M} = \boldsymbol{r}_1 \times \boldsymbol{F}_1 + \boldsymbol{r}_2 \times \boldsymbol{F}_2$$ 4. The point closest to origin on the line of action of the combined loading is $$\boldsymbol{r} = \frac{ \boldsymbol{F} \times \boldsymbol{M} }{ \| \boldsymbol{F} \|^2 }$$ 5. The "pitch", ratio of combined parallel moment to combined force magnitude, is $$h = \frac{ \boldsymbol{F} \cdot \boldsymbol{M} } { \| \boldsymbol{F} \|^2 }$$ 6. The parallel moment at the point $\boldsymbol{r}$ is $$\boldsymbol{M}_\parallel = h \boldsymbol{F}$$ Proof that the parallel torque at $\boldsymbol{r}$ has the same equipollent moment about the origin $\boldsymbol{M}$ as the combined force. 
$$\begin{aligned} \boldsymbol{M} &= \boldsymbol{r} \times \boldsymbol{F} + \boldsymbol{M}_\parallel \\ & = \left( \frac{ \boldsymbol{F} \times \boldsymbol{M} }{ \| \boldsymbol{F} \|^2 } \right) \times \boldsymbol{F}+\left( \frac{ \boldsymbol{F} \cdot \boldsymbol{M} } { \| \boldsymbol{F} \|^2 } \right) \boldsymbol{F} \\ & = \frac{ ( \boldsymbol{F}\cdot \boldsymbol{M}) \boldsymbol{F} - \boldsymbol{F} \times ( \boldsymbol{F}\times \boldsymbol{M}) }{\| \boldsymbol{F} \|^2 } \\ & = \frac{ (\boldsymbol{F} \cdot \boldsymbol{M})\boldsymbol{F} - \boldsymbol{F} ( \boldsymbol{F} \cdot \boldsymbol{M}) + \boldsymbol{M} ( \boldsymbol{F} \cdot \boldsymbol{F}) }{\| \boldsymbol{F} \|^2 } \\ & = \frac{ \boldsymbol{M} \| \boldsymbol{F} \|^2}{\| \boldsymbol{F}\|^2} \equiv \boldsymbol{M} \end{aligned}$$

using the vector triple product $a \times (b \times c) = b (a \cdot c) - c (a \cdot b)$.

Summary: any force-and-moment combination can be expressed as a force along a specific line plus a moment parallel to that line. The direction of the line is parallel to the force, and the location of the line is found with step 4 above.

There are two equations to be considered. The first is F = ma, and there you can add forces (because there is only one mass, and only one real-world acceleration results). The second, though, is torque = (moment of inertia) × dω/dt, and there are in general three moments of inertia (around three axes) for a rigid body. Unless your system is constrained somehow, there are three equations to be solved there. So two skew applied forces pose the full problem, with four equations (three rotational, one translational) to be solved. Two forces and two points of application give you four data inputs, but one force and one point of application gives you only two. So it's not soluble. Many 'torque' treatments involve spinning mechanisms with only one axis, but those aren't fully three-dimensional; only one moment of rotation is exercised, and that special case IS soluble.

One force could be considered equivalent to two other forces if it would have the same effect on the surrounding world. If two forces are not applied to the same point, they could be applied to two different non-overlapping objects. Obviously, it would be impossible to come up with one force that would have the same effect on these two objects. In fact, in many cases, when the trajectories of the two objects do not overlap, one force would be able to have the same effect only on one of the two objects and have no effect on the other. The same test could be used for two skewed forces acting on the same object, i.e., we can show that a pair of such forces will create moments that one force would not, and therefore the movement of the object, or the stress in the object, would be different; one force would then not have the same effect on the world and, therefore, would not be equivalent to the two skewed forces. For instance, with one force acting on an object, the movement of each point of the object, translational and rotational, would be restricted to one plane. With two skewed forces that would not be the case.
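To make the reduction in steps 1-6 concrete, here is a small numerical sketch (my own illustration using NumPy, with made-up example forces; it is not from any of the answers above):

import numpy as np

# Two skew lines of action: F1 along x through the origin,
# F2 along y through the point (0, 0, 1).
F1, r1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0])
F2, r2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])

F = F1 + F2                              # combined force
M = np.cross(r1, F1) + np.cross(r2, F2)  # combined moment about the origin
r = np.cross(F, M) / F.dot(F)            # closest point on the combined line of action
h = F.dot(M) / F.dot(F)                  # pitch
M_par = h * F                            # moment parallel to the line

# Equipollence check: the force through r plus the parallel moment
# reproduces the original moment about the origin.
print(np.allclose(np.cross(r, F) + M_par, M))  # True
print(M_par)  # nonzero, so the pair cannot reduce to a single force

Because M_par comes out nonzero here, the two skew forces reduce to a wrench (a force plus a parallel couple), not to a single force.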
2019-11-20 22:12:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.977148175239563, "perplexity": 422.9500216868837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670635.48/warc/CC-MAIN-20191120213017-20191121001017-00132.warc.gz"}
http://www.askiitians.com/forums/Wave-Motion/11/25635/wat-is-the-general-equation-of-standing-wave.htm
What is the equation of a standing wave?

6 years ago

Dear Mohini,

Harmonic waves travelling in opposite directions can be represented by the equations below:

$y_1\; =\; y_0\, \sin(kx - \omega t)\,$ and $y_2\; =\; y_0\, \sin(kx + \omega t)\,$

where: y0 is the amplitude of the wave, ω (called the angular frequency, measured in radians per second) is 2π times the frequency (in hertz), k (called the wave number and measured in radians per metre) is 2π divided by the wavelength λ (in metres), and x and t are variables for longitudinal position and time, respectively.

So the resultant wave y will be the sum of y1 and y2:

$y\; =\; y_0\, \sin(kx - \omega t)\; +\; y_0\, \sin(kx + \omega t)\,$.

Using a trigonometric identity (the 'sum to product' identity for sin(u) + sin(v)) to simplify:

$y\; =\; 2\, y_0\, \cos(\omega t)\; \sin(kx)\,$.

This describes a wave that oscillates in time but has a spatial dependence that is stationary: sin(kx). At locations x = 0, λ/2, λ, 3λ/2, ..., called the nodes, the amplitude is always zero, whereas at locations x = λ/4, 3λ/4, 5λ/4, ..., called the anti-nodes, the amplitude is maximum. The distance between two consecutive nodes or anti-nodes is λ/2.

All the best.
AKASH GOYAL
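As a quick cross-check of the sum-to-product step, here is a short SymPy sketch (my own addition, assuming SymPy is installed; it is not part of the original answer):

import sympy as sp

x, t, k, w, y0 = sp.symbols('x t k omega y_0', real=True)

y = y0*sp.sin(k*x - w*t) + y0*sp.sin(k*x + w*t)
standing = 2*y0*sp.sin(k*x)*sp.cos(w*t)
print(sp.simplify(sp.expand_trig(y) - standing))  # prints 0

The difference simplifies to zero, confirming y = 2 y0 cos(ωt) sin(kx).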
2017-01-18 08:16:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42417556047439575, "perplexity": 2505.4071325268437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00020-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.math-only-math.com/worksheet-on-measurement-of-capacity.html
# Worksheet on Measurement of Capacity

In the worksheet on measurement of capacity, students of all grades can practice questions on units for measuring capacity. This exercise sheet on measurements can be practiced by students to get more ideas about measuring the capacity of the given quantities according to the questions.

1. Which kind of measure (litre or millilitre) would you use to measure the following things (liquids)? (i) Milk in a glass (ii) Medicine in a tea-spoon (iv) Cough syrup in a bottle (v) Petrol in a car tank (vi) Water in a can (vii) Kerosene oil in a jar (viii) Paint in a drum

2. What measures will you use to measure the following quantities of the given liquid? Write the answer in front of each liquid:

3. How many 200 ml measures of water will fill a: (i) 1 litre can? (ii) 4 litre drum? (iii) 2 litre can? (iv) 1 litre 400 ml vessel?

4. A jar contains 1200 ml of milk. How many litres and ml of milk are in the jar?

5. How many 100 ml measures of oil will fill the following vessels? (i) 200 ml of capacity (ii) 500 ml of capacity (iii) 700 ml of capacity (iv) 1 l of capacity

6. Change the following into ml: (i) 3 l (ii) 2 l 75 ml (iii) 5 l 390 ml

7. Change the following into litres and ml: (i) 4000 ml (ii) 65035 ml (iii) 32570 ml

8. There was 5 l 500 ml of milk in my house. In the evening there was only 2 l 750 ml of milk left. How much milk was consumed during the day?

9. A petrol pump had 15,900 litres of petrol in stock. During the day 5,950 litres were sold. How much petrol was left?

10. There is the following quantity of water in three vessels: (i) 54 l 80 ml (ii) 67 l 384 ml and (iii) 56 l 156 ml. Find the total quantity of water.

11. A bucket holds 25 litres of water. 17 litres 250 ml of it was taken out and then 3 litres 780 ml was poured in. How much water is in the bucket now?

12. A bottle has the capacity to contain 250 ml. How much oil can be filled in 20 such bottles?

13. Among 40 students, 20 litres of juice was distributed. How much juice did each student get?

14. The capacity of a drum is 200 litres. It contains 123 litres of water. How much more water is required to fill it?

15. Out of 350 litres of kerosene oil, 125 l 50 ml was sold. How much is left now?

If students have any queries regarding the questions given in the worksheet on measurement of capacity, please fill up the comment box below so that we can help you. Suggestions for further improvement, from all quarters, would be greatly appreciated.
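Worked example (added for illustration; not part of the original worksheet): for question 6 (ii), since 1 l = 1000 ml, we get 2 l 75 ml = 2 × 1000 ml + 75 ml = 2075 ml. Conversely, for question 7 (i), 4000 ml ÷ 1000 = 4, so 4000 ml = 4 l.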
2018-03-20 19:27:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4685077667236328, "perplexity": 2610.498113436217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647530.92/warc/CC-MAIN-20180320185657-20180320205657-00569.warc.gz"}
https://math.stackexchange.com/questions/2097755/solving-second-order-differential-equation-arising-from-a-wave-equation
# Solving second-order differential equation arising from a wave equation

I'm solving a scalar wave equation $$\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2}-\nabla^2\right)u(\vec{r},t)=0$$ under the assumption that $u$ only depends on the magnitude of the position vector: $u(\vec{r},t)=u(r,t)$. Assuming that $u(r,t)=T(t)\Psi(r)$, we have $$\frac{\Psi(r)}{c^2}T''(t)=T(t)\nabla^2\Psi(r) \implies\frac{T''(t)}{c^2T(t)}=\frac{1}{\Psi(r)}\nabla^2\Psi(r)=-k^2$$ for some (possibly complex) constant $k$. The differential equation for $T(t)$ is easily solved: we have $$T(t) = Ae^{i\omega t}+Be^{-i\omega t}$$ for arbitrary values of $A,B$ and $\omega=ck$. The spatial equation can be rewritten as $$\left(\nabla^2+k^2\right)\Psi(r) = 0$$ Now, since $\Psi(r)$ only depends on the radial distance, we can plug in the Laplacian in spherical coordinates to obtain an ODE for $\Psi(r)$: $$\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d}{dr}\Psi(r)\right)+k^2\Psi(r) = 0$$ Calculating the derivatives and rearranging, we have $$r^2\Psi''(r)+2r\Psi'(r)+k^2r^2\Psi(r) = 0$$ Initially, I thought that this would require Bessel functions, as it looks very similar to the Bessel differential equation. However, I was told that this can be solved using only elementary functions, and I was given the hint to write $\Psi(r) = r^af(r)$ for some other function $f(r)$ and an unknown power $a$. With this, we have the following expressions for the derivatives that appear in our ODE: \begin{align} \Psi'(r) &=r^af'(r)+ar^{a-1}f(r)\\ \Psi''(r) &= r^af''(r)+2ar^{a-1}f'(r)+a(a-1)r^{a-2}f(r) \end{align} Plugging this into our ODE gets quite messy, but we have \begin{align} r^{a+2}f''(r)+2ar^{a+1}f'(r)+a(a-1)r^af(r)+2r^{a+1}f'(r)+2ar^af(r)+k^2r^{a+2}f(r) = 0 \end{align} However, this is still a second-order differential equation that seems much more complicated than the one we had before, so I'm not sure what this accomplished. Could anybody give me a hint on how to proceed here?

Grouping by powers of $r$ gives $$r^{a}\left\{r^2f''(r)+2(a+1)rf'(r)+\left[a(a+1)+k^2r^2\right]f(r)\right\}=0$$ If we put $a=-1$, both the $f'$ term and the $a(a+1)$ term vanish, leaving $$f''(r)+k^2f(r) = 0,$$ so $f(r) = c_1\cos(kr)+c_2\sin(kr)$ and hence $$\Psi(r) = \frac{c_1\cos(kr)+c_2\sin(kr)}{r},$$ where the amplitude decays as $\dfrac{1}{r}$, which is consistent with spherical waves.

• Of course, thanks. I tried to group by order of derivatives and that got me nowhere. I should have thought of grouping by powers of $r$; that makes the whole thing rather obvious, once you do that. Thanks again. – Tom Jan 15 '17 at 13:09
• I'm curious, though: while it's pretty clear that $a=-1$ works in this way and $a=-1$ is the natural choice, are there solutions for other values of $a$, and if yes, how would one go about finding them? – Tom Jan 15 '17 at 13:19
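As a sanity check on the $a=-1$ substitution, here is a small SymPy sketch (my own addition, not from the original thread) verifying that $\sin(kr)/r$ solves the radial equation:

import sympy as sp

r, k = sp.symbols('r k', positive=True)
Psi = sp.sin(k*r) / r   # the a = -1 solution with f(r) = sin(k*r)

lhs = r**2*sp.diff(Psi, r, 2) + 2*r*sp.diff(Psi, r) + k**2*r**2*Psi
print(sp.simplify(lhs))  # prints 0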
2019-05-23 09:25:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9782532453536987, "perplexity": 117.5504089500637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257197.14/warc/CC-MAIN-20190523083722-20190523105722-00272.warc.gz"}
http://stackexchange.com/filters/27192/mathematica
Mathematica

## Learning Resources: Mathematica Notebooks

A while ago, I stumbled upon someone's list of Mathematica notebooks for learning various topics. I've since lost the link, but I'm wondering now if I could find the notebooks again. So my question is: …

## How to solve an integral in Mathematica?

I need the matrix H. G is working, but H is just the derivative?? How do I obtain it?

g = Integrate[x^(p + q - 2*(m + n + 1)), {x, -1, 1}];
h = Integrate[D[x^(p + q - 2*(m + n + 1)), {x, 2}], {x, -1, 1}]; …

10 hours ago

## How do you set the range for ListPlot (Mathematica)?

I have a large set (about 2050 values) of data that I would like to make a graph out of. The data is in two columns in Excel. I copied the first column and made the list x={0, 3.6, 7.1, 10.7...} and …

## Delete duplicated sublists within a list in Mathematica

For instance, say list = {{1, 2}, {3, 1}, {2, 1}, {2, 1}, {1, 1}}. I would like it to instead print {{1, 2}, {3, 1}, {2, 1}, {1, 1}}. I have tried DeleteDuplicates[list] and that worked to no ...

## Disk inside plot in Mathematica

I have a question about using Plot and Disk together in one Manipulate function in Mathematica. I have this piece of code right now:

Plot[h[t], {t, 0, ttot}, PlotRange -> {0, 30}]
Manipulate[ …

yesterday

## What is the recommended way to check that a list is a list of numbers in the argument of a function?

I've been looking at the ways to check arguments of functions. I noticed that MatrixQ takes 2 arguments, the second being a test to apply to each element. But ListQ only takes one argument. (Also for …

yesterday

## Exposing Symbols to $ContextPath

There are a variety of Internal` context functions that are useful, such as InheritedBlock, Bag and StuffBag, etc., as well as many useful Developer` functions. I wish to expose a selection of these …

Dec 7 at 19:28

## Rotate a rectangle around a certain point in canvas

I'm trying to rotate a rectangle around the center of its parent rectangle. The child's distance to its parent's borders must always stay the same. I almost succeeded, but there seems to be a small error …

## Function incomplete when plotting

I just came across a problem while displaying a function. Mathematica doesn't plot the middle of the function:

VoigtDistribution[\[Delta]_, \[Sigma]_] = ParameterMixtureDistribution[ ...

Dec 6 at 14:11

## Clean up a Wolfram response using XPath in a Java program

Trying to clean up a response from XPath to get rid of the XML. I have it working fine when I declare the string as follows: String xml = ...

Dec 6 at 4:53

## How to create vertical Bullet Graphs with PSTricks?

I was looking for a package to create bullet gauge graphs, but I couldn't find any. A good person in this forum showed me how he would make it in TikZ, with a few options. I have been waiting for months …

## Sum up numbers from 1 to 100000 after removing zeroes

OK, so I have numbers from 1 to 100000. I need to remove the zeroes from all numbers and sum up the numbers from 1 to 100000. So if the number is 405, consider it as 45; if the number is 20039, consider it 239. …

Dec 5 at 16:51

## How to stop Mathematica from giving a numerical answer?

I am very new to Mathematica. Everything is a nightmare to me. I am trying to calculate a product

Product[((1 - 0.9 z^-1 Power[E, -I k (11 \[Pi])/50]) (1 - 0.9 z^-1 Power[E, I k (11 \[Pi])/50])), …

## Mathematica: select elements from a list constrained by an analytic function

For some list of 2-d points, e.g. data = {{1, 2}, {3,4}, {3, 6}}, and some analytic function, e.g.
f[x] = x^2, I want to select out only those points that lie beneath the curve, i.e., test for ...

Dec 2 at 23:05

## NMinimize is very slow

You are my last hope. In my university there are no people able to answer my question. I've got a quite complex function depending on 6 parameters a0, a1, a2, b0, b1, b2 that minimizes the delta of ...

Dec 2 at 10:17

## What are the algorithm details of Mathematica's FindRoot when "method" is the default Newton?

When solving a three-variate nonlinear system, I tried different implementations: 1) Mathematica (FindRoot, default method); 2) Matlab programming (by using central finite differences to approximate …

## Set::write error when using For loop

Solving a complicated formula f(u,v)==0, where I assign some constant value to u and then solve for v. I can solve it without a For loop, but encounter errors when adding For[] around the code, where …

Dec 2 at 2:36

## How to improve the performance of this code

I have implemented a code in Mathematica 9 to simulate a scattering problem, and I got really disappointed about its performance when compared to Matlab. Since I am a newbie in Mathematica, would ...

## How to construct unequal-width histograms with Mathematica?

The 2010 census for Anoka County, Minnesota shows that from a total population of 330844 - 86031 were ages 0 through 17, 26671 were ages 18 through 24, 91927 were ages 25 through 44, 93983 were …

## Mathematica: solving a non-linear system of equations with lots of equations and variables

I need to find a square matrix A satisfying the equation A.L.A = -17/18A - 2(A.L.L + L.A.L + (L.L).A) + 3(A.L + L.A) - 4L.L.L + 8L.L - 44/9L + 8/9*(ID), where L is a diagonal matrix L = ...

## Mathematica: system of differential equations

I'm trying to run the next code and a series of errors appear. I can't find any errors in it. The purpose of the code is to plot the solution to a system of differential equations. Please help me find …

Dec 1 at 4:47

## Mathematica for plotting multiple functions

Does anyone know why the following Plot function is not plotting all four of the functions simultaneously?

Plot[{-(x/3)*(x - e)*(x - Pi)/((3 + e)*(3 + Pi)), ((x + 3)*(x - e)*(x - Pi))/(3*e*Pi), ...

## Coupled non-linear second-order PDE in Mathematica

I've been trying to solve this equation in Mathematica, to no avail up to now:

Btheta[x_,t_] := (-2 A/Ms*Sin[theta[x, t]]*Cos[theta[x, t]]*(D[phi[x, t], x])^2 - 2 Ku/Ms*Sin[theta[x, ...

## Why is the lower limit of this integral 1?

I solve this differential equation using Mathematica, but I don't understand the solution. Why is the lower limit of this integral 1? I run: $$\text{DSolve}\left[y'(x)+y(x)=Q(x),y(x),x\right]$$ the …

## Understanding a solution to a differential equation in Mathematica

I got a solution using DSolve as:

Function[{t}, -(( 1. (2. + k (-1. - 2. t) + k^2 (-0.5 + 1. t + 1. t^2)))/k^3) + E^(-1. k t) C[1]]

What does '.' mean, and '()'? For example, in the case of …

## Compressing HDF5 files in Mathematica

I am working with Mathematica 9 and exporting huge lists (a typical list will have dimensions of 182500, 4, 8, 42). Each file has about 6 lists of this size (all integers; not sure if this makes a ...

## Levenshtein Distance with Mathematica

I try to code Levenshtein distance with Mathematica, but the recursive part doesn't seem to work; can someone tell me why?

distanceLevenshtein[chaine1_, length1_, chaine2_, length2_] := ...
## Replacing Mathematica's QuasiMonteCarlo integration in C++

I have a Mathematica program which performs some integrals in 3 or 4 dimensions using the QuasiMonteCarlo method. The problem is, it takes an annoyingly long time to run, to the point where some of …

## Photo Mosaic in Mathematica: an example from 2008 doesn't work in Mathematica 9

I'm trying to get a Mathematica example working. It's the one on Theo Gray's blog. In Mathematica 9.0 it doesn't work. I have already searched for the answer on Stack Overflow for Mathematica 8.0. I use the …

Nov 27 at 2:03
2013-12-10 19:39:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18368835747241974, "perplexity": 1701.2461892621204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164023947/warc/CC-MAIN-20131204133343-00056-ip-10-33-133-15.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/118392/trigonometric-equation-2-cos2-theta-cos-theta-1-sin2%ce%b8
# Trigonometric equation $2\cos^2\theta -\cos\theta-1 = \sin^2\theta$

I have the following equation and have been stumped on it for a long time now. I was wondering if I could get some hints on attempting to solve it. $$2\cos^2\theta-\cos\theta-1 = \sin^2\theta$$

Solved! Use the hyperbolic function (Thanks svenkatr): $$\cos^2\theta - \sin^2\theta = 1$$

- Add $\cos^2(\theta)$ to both sides, and reduce it to a quadratic equation in $\cos(\theta)$. – Sasha Mar 9 '12 at 22:47
- @Sasha: Please read this for reasons why it might be better to add an answer rather than commenting: meta.math.stackexchange.com/questions/1559/… – Aryabhata Mar 9 '12 at 22:50
- You have a trigonometric equation, which you want to solve. An identity is an equality that holds for every possible value of the variable(s). – Américo Tavares Mar 9 '12 at 23:03
- @Alex - $\cos^2 \theta + \sin^2 \theta = 1$. We are dealing with trigonometric, not hyperbolic functions. – svenkatr Mar 9 '12 at 23:28
- @Alex The identity should be $\cos^2 \theta + \sin^2 \theta = 1$, with a plus. – TMM Mar 10 '12 at 1:37

Write $\sin^2 \theta = 1- \cos^2 \theta$, simplify to get a quadratic equation in $\cos \theta$, and solve the equation.

As Sasha pointed out, one can simply add $\cos^2\theta$ to both sides and solve the resulting quadratic equation. Here is what occurs: $$2\cos^2\theta-\cos\theta-1+\cos^2\theta=\sin^2\theta+\cos^2\theta$$ $$3\cos^2\theta-\cos\theta-1=1$$ $$3\cos^2\theta-\cos\theta-2=0$$ Let $w=\cos\theta$. Then we get: $$3w^2-w-2=0$$ $$3w^2-3w+2w-2=0$$ $$3w(w-1)+2(w-1)=0$$ $$(3w+2)(w-1)=0$$ So $w=\frac{-2}{3}$ or $w=1$, which means $\cos\theta=\frac{-2}{3}$ or $\cos\theta=1$. This gives us the answers $\theta=2k\pi, k\in\mathbb{Z}$ and $\theta=\pm\cos^{-1}\left(\frac{-2}{3}\right)+2k\pi, k\in \mathbb{Z}$

Not quite a complete answer: it omitted $-\cos^{-1}(-2/3)$, and also omitted to say $\cos^{-1}(-2/3)+2k\pi$ as in the case of $\theta=0$. – Michael Hardy Mar 10 '12 at 0:57
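Mirroring the substitution $w = \cos\theta$ used in the answer, here is a brief SymPy check (my own addition, assuming SymPy is available; not part of the original thread):

import sympy as sp

w = sp.symbols('w')
roots = sp.solve(3*w**2 - w - 2, w)
print(roots)                        # [-2/3, 1]
print([sp.acos(s) for s in roots])  # [acos(-2/3), 0]

The principal solutions θ = acos(-2/3) and θ = 0 then extend to the full solution sets ±acos(-2/3) + 2kπ and 2kπ given above.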
2015-07-07 11:51:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503820538520813, "perplexity": 315.7584582605069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099173.16/warc/CC-MAIN-20150627031819-00266-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/211814-definition-linear.html
1. ## definition of 'linear'

Of the above list of equations, a), c) and f) are said to be linear, but I thought linear equations were not allowed to have fractional exponents? Both a) and f) have the square root of a variable (x_3 and x_2 respectively), no?

2. ## Re: definition of 'linear'

Hey kingsolomonsgrave. I agree with you in that if the terms are raised to a non-unit power (i.e. not x^1) then they are not linear. If it's just the coefficient that is irrational, though, then this is OK.
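For example (an added illustration, not from the original thread): 3x_1 + sqrt(2)*x_2 = 5 is linear, because the irrational sqrt(2) only appears as a coefficient, whereas 3*sqrt(x_2) = 5 is not, because the variable itself is raised to the power 1/2.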
2016-12-10 04:10:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652338981628418, "perplexity": 1219.5672988757876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542938.92/warc/CC-MAIN-20161202170902-00418-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.math.unipd.it/news/lagrangian-and-conservative-solutions-of-the-2d-euler-equations/
## “Lagrangian and conservative solutions of the 2D Euler equations”

Wednesday 17 June 2020, 3:00 pm - Zoom - Gennaro Ciampa (University of Basel, CH)

Abstract

Smooth solutions of the 2D incompressible Euler equations enjoy two very natural properties: the first is that they are Lagrangian, namely the vorticity is advected by the flow of the velocity; the second is that smooth solutions conserve the kinetic energy. When we consider solutions in weaker classes, precisely when the initial vorticity is in $L^p$ with $1\leq p \leq \infty$, the existence of Lagrangian solutions and the conservation of energy may in general depend on the approximation scheme. Furthermore, a reasonable question is whether solutions constructed by a given method are unique. In this talk we prove the existence of solutions which enjoy the above properties, constructed via different methods. Moreover, we will show that already in the linear case a smooth approximation of the velocity field can produce different solutions in the limit. Based on joint works with G. Crippa (University of Basel) and S. Spirito (University of L'Aquila).

Seminars on differential equations and applications

10/07/2020 11:11
2020-10-25 19:37:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862750768661499, "perplexity": 645.7476821384644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889651.52/warc/CC-MAIN-20201025183844-20201025213844-00514.warc.gz"}
https://www.transtutors.com/questions/state-some-of-the-advantages-of-employing-thermal-derating-techniques-in-an-electron-765079.htm
# State some of the advantages of employing thermal derating techniques in an electronic design.
2021-01-19 05:38:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9741657376289368, "perplexity": 1940.9728150645058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00370.warc.gz"}
https://www.inceptiontechnology.net/2019/10/knitting-technology-by-david-j-spencer.html
# Knitting Technology by David J. Spencer

Knitting Technology (full title: Knitting Technology: A Comprehensive Handbook and Practical Guide; 3rd edition, ISBN 9781855733336) is the ideal textbook for a range of textile courses, from technician to degree level and the Textile Institute's examinations, as well as being an essential companion to all those involved in the knitting industry. A number of worked calculations are included to clarify the examples given.

David J. Spencer, C.Text. FTI, ACFI, recently retired as a senior lecturer in textile and knitting technology at De Montfort University, Leicester. He has been an examiner and moderator in the manufacture of hosiery and knitted goods for the City and Guilds of London Institute.
2021-05-13 17:57:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8126246333122253, "perplexity": 8628.288165524924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00556.warc.gz"}
https://eshinjolly.com/2019/02/18/rep_measures/
# What is this all about?

My hope with this post is to provide a conceptual overview of how to deal with a specific type of dataset commonly encountered in the social sciences (and very common in my own disciplines of experimental psychology and cognitive neuroscience). My goal is not to provide mathematical formalisms, but rather to build some intuitions while avoiding as much jargon as possible. Specifically, I'd like to compare some of the more common analysis strategies one can use and how they vary by situation, ending with some takeaways that hopefully guide your future decision-making. But to do so we need to start at the beginning…

# Repeated what?

Datasets come in all different shapes and sizes. In many introductory tutorials, classes, and even real-world examples, folks are usually dealing with datasets that are referred to as satisfying the “i.i.d. assumption” of many common statistical models. What does this mean in English? It refers to the fact that each data point is largely independent of the other data points in the complete dataset. More specifically, it means that the residuals of a model (i.e. what's left over that the model can't explain) are independent of each other and that they all come from the same distribution, which has a mean of 0 and a variance of $$\sigma^2$$. In other words, knowing something about one error that the model makes tells you little about any other error the model makes, and by extension, knowing something about one data point tells you little about any other data point.

However, many types of data contain “repeats” or “replicates,” such as measuring the same people over time or under different conditions. These data notably violate this assumption. In these cases, some data points are more similar to each other than other data points. Violations of these assumptions can lead to model estimates that are not as accurate as they could possibly be (Ugrinowitsch et al, 2004). The more insidious issue is that inferences made using these estimates (e.g. computing t-statistics and by extension p-values) can be wildly inaccurate and produce false positives (Vasey & Thayer, 1987).

Let's try to make this more concrete by considering two different datasets. In case 1 (left) we give 21 people a survey 1 time each and try to see if their survey responses share any relationship with some demographic about them. 21 total data points, pretty straightforward. In case 2 (right), we give 3 people a survey 7 times each and do the same thing. 21 total data points again, but this time each data point is not independent of every other one.

In the first case, each survey response is independent of any other. That is, knowing something about one person's response tells you little about another person's response. However, in the second case this is not true. Knowing something about person A's survey response the first time you survey them tells you a bit more about person A's survey response the second time you survey them, whereas it does not necessarily give you more information about any of person B's responses. Hence the non-independence. In the most extreme case, estimating a model while ignoring these dependencies in the data can completely reverse the resulting estimates, a phenomenon known as Simpson's Paradox.

# Analysis Strategies.

So what do we typically do? Well, there are a few different analysis “traditions” that have dealt with this in different ways. This isn't by any means an exhaustive list, but these are approaches that are reasonably common across many different literatures.
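To see the non-independence directly, here is a small simulation sketch of case 2 (my own illustration in Python/NumPy, not from the original post):

import numpy as np

rng = np.random.default_rng(0)
n_people, n_reps = 3, 7

person_offset = rng.normal(0, 2.0, size=n_people)   # person-specific effect
person = np.repeat(np.arange(n_people), n_reps)     # person id for each response
response = person_offset[person] + rng.normal(0, 1.0, n_people * n_reps)

# Responses within a person are much more alike than responses overall:
within_var = np.mean([response[person == p].var() for p in range(n_people)])
print(within_var, response.var())  # within-person variance << total variance

Knowing one of person A's responses tells you a lot about their others (they share person_offset), which is exactly the dependence that the i.i.d. assumption rules out.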
## Multi-level models

Like many other researchers in psychology/neuroscience, I was first taught that repeated-measures ANOVAs are the only way to analyze these types of data. However, this has fallen a bit out of practice in favor of the more flexible approach of multi-level/mixed-effects modeling (Baayen et al., 2008). I don't want to focus on why multi-level modeling is often far preferable, as that's a different discussion (e.g. better handling of missing data, different numbers of replicates per person, additional levels of replicates, etc.), but suffice it to say that once you start using this approach there's essentially no reason to ever run a repeated-measures ANOVA again. Going into all the details of how these models work is beyond the scope of this post, but I'll link to a few resources throughout.

Conceptually, multi-level modeling simultaneously estimates coefficients that describe a relationship across the entire dataset, as well as within each group of replicates. In our example above, this amounts to estimating the relationship between survey responses and demographics for the entire population of survey respondents, but also the degree to which individual people deviate from these estimates. This has the net effect of "pooling" estimates and their associated errors together, and works in a manner not entirely unlike using a prior if you are familiar with Bayesian terminology, or regularization/smoothing if machine learning is more your thing. Estimating a model this way means that estimates can "help each other out": we can impute values if some of our survey respondents didn't fill out the survey each time we asked them to, or we can "clean up" noisy estimates from specific individuals by assuming that individuals' estimates all come from the same population, thereby restricting wonky values they may take on.

In practice, using these models can be a bit tricky, because it's not immediately obvious how to set them up for estimation. For example, should we assume that each respondent has a different relationship between their survey results and demographics? Or should we simply assume that their survey results differ on average but vary with their demographics in the same way? Specifically, users have a variety of choices for how to specify what's referred to as the "random effects" (deviation estimates) part of the model. You may have come across terminology like "random intercepts" or "random slopes." In our example, this is the difference between allowing the model to learn a unique mean estimate for each individual's survey responses, and allowing it to learn a unique regression estimate for the relationship between each individual's survey responses and the demographic outcome measure. In many cases, estimating the complete set of coefficients one could compute (intercepts, slopes, and the correlations between them for every predictor; Barr et al., 2013) leads the model to fail to converge, leaving a user with unreliable estimates. This has led to suggestions to keep models relatively "simple" with respect to the inference one is trying to make (Bates et al., 2015), or to compare different model structures and use a model selection criterion to adjudicate between them before performing inferences (Matuschek et al., 2017). Pretty tricky huh? Try this guide to help you out if you venture down this path or check out this post for a nice visual treatment.
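To make the "random intercepts" vs. "random slopes" distinction concrete, here's a minimal sketch of how these choices look as lme4-style formulas in pymer4 (the package used later in this post). The data frame `df` and its column names are hypothetical stand-ins.

```python
# Hedged sketch: two common "random effects" structures in pymer4/lme4 syntax.
# Assumes a long-format DataFrame `df` with DV, IV1, and Group columns.
from pymer4.models import Lmer

# Random intercepts only: each person gets their own mean,
# but everyone shares a single IV1 slope.
m_intercepts = Lmer('DV ~ IV1 + (1 | Group)', data=df)

# Random intercepts AND slopes (plus their correlation): each person
# gets their own mean and their own IV1 slope.
m_slopes = Lmer('DV ~ IV1 + (IV1 | Group)', data=df)

m_intercepts.fit()
m_slopes.fit()
```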
There are a ton of resources available if multi-level models have got you excited; Brauer & Curtin, 2018 is a particularly good one-stop-shop for review, theory, practice, estimation issues, and code snippets.

## Robust/corrected standard errors

In other academic fields/areas, there is an entirely different tradition for handling these types of data. For example, in some economics disciplines "robust/sandwich/Huber-White" standard errors are computed for what is otherwise a standard linear regression model. This lecture provides a nice math-ish overview of what these techniques are, but the general takeaway is that this approach entails computing the regression coefficients in a "typical" manner using ordinary least squares (OLS) regression, but "correcting" the variance of these estimators (i.e. the standard errors) for how heteroscedastic they are. That is, how much their variances differ. There are several variants of this correction that incorporate things like small-sample and auto-correlation adjustments, but the one relevant here computes these robust estimates with respect to "clusters" or grouping factors in the data. In the example above, clusters would comprise survey respondents, and each survey response would comprise a data point within that cluster. Therefore, this approach completely ignores the fact that there are repeated measurements when computing the regression coefficients, but takes the repeated-measures structure into account when making inferences on these coefficients by adjusting their standard errors. For an overview of this calculation see this presentation, and for a more formal treatment see Cameron & Miller, 2015.

## Two-stage-regression/summary statistics approach*

Finally, a third approach we can use is what has sometimes been referred to as two-stage-regression (Gelman, 2005) or the summary statistics approach (Frison & Pocock, 1992; Holmes & Friston, 1998). This approach is routine in the analysis of functional MRI data (Mumford & Nichols, 2009). Conceptually, this looks like fitting a standard OLS regression model to each survey respondent separately, and then fitting a second OLS model to the coefficients from each individual subject's fit. In the simplest case this is equivalent to calculating a one-sample t-test over individuals' coefficients. You might notice that this approach "feels" similar to the multi-level approach, and in colloquial English there are in fact multiple levels of modeling going on. However, notice how each first-level model is estimated completely independently of every other model, and how their errors or the variance of their estimates are not aggregated in any meaningful way. This means that we lose out on some of the benefits we gain from the formal multi-level modeling framework described above. Yet what we lose in benefits we gain back in simplicity: there are no additional choices to be made, such as choosing an appropriate "random effects" structure. In fact, Gelman, 2005 notes that two-stage-regression can be viewed as a special case of multi-level modeling in which we assume that the distribution from which individual/cluster-level coefficients come has infinite variance.

# How do we decide?

Having all these tools at our disposal can sometimes make it tricky to figure out which approach is preferable for what situation and whether there is one approach that is always better than the others (spoiler: there isn't).
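As a concrete reference point before comparing them, here's a rough sketch of what the last two strategies can look like outside of any specialized package, using statsmodels and scipy. The data frame `df`, with columns `DV`, `IV1`, and `Group`, is a hypothetical stand-in.

```python
# Hedged sketch of cluster-robust errors and two-stage regression.
# Assumes a long-format DataFrame `df` with DV, IV1, and Group columns.
import statsmodels.formula.api as smf
from scipy import stats

# 1) Cluster-robust standard errors: one OLS fit, corrected inference.
ols_clustered = smf.ols('DV ~ IV1', data=df).fit(
    cov_type='cluster', cov_kwds={'groups': df['Group']})
print(ols_clustered.summary())

# 2) Two-stage regression: fit OLS per person, then test the coefficients.
first_stage = df.groupby('Group').apply(
    lambda d: smf.ols('DV ~ IV1', data=d).fit().params['IV1'])
t, p = stats.ttest_1samp(first_stage, 0)  # simplest possible second stage
print(f"Two-stage IV1 estimate: {first_stage.mean():.3f} (t={t:.2f}, p={p:.3f})")
```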
To better understand when we might use each approach, let's consider some of the most common situations we might encounter. I'll refer to these as the "dimensions" along which our data can vary.

## Dimension 1: Sample size of units we would like to make inferences about

The most common thing that varies about different datasets is simply their size, i.e. how many observations we're really dealing with. In the case of non-independent data, an analyst is most often interested in making inferences about a particular "level" of the data. In our survey example, this is generalizing to "people" rather than specific instances of the survey. So this dimension varies based on how many individuals we sampled, irrespective of how many times we sampled any given individual.

## Dimension 2: Sample size of units nested within units we would like to make inferences about

Another dimension in which our repeated-measures data may vary is how many repeats we're dealing with. In our example above, this is the number of observations we have about any given individual. Did each person fill out the survey 5 times? 10? 100? This dimension therefore varies based on how often we sample any given individual, irrespective of how many total individuals we sample.

## Dimension 3: Variability between units we would like to make inferences about

A key way in which each of these analysis approaches varies is how they handle (or don't) variability between clusters of replicates. In our example above, this is the variance between individuals. Do different people really respond differently from each other? At one extreme we can treat every individual survey response as entirely independent, ignoring the fact that we surveyed individuals multiple times and pretending each survey is totally unique. At the other extreme, we can assume that the relationships between survey responses and demographics come from a higher-level distribution and that specific people's estimates are instances of this distribution, preserving the fact that each person's own responses are more similar to each other than they are to anyone else's responses. I'll return to this a bit more below.

# Simulations can help us build intuitions.

Often in cases like this we can use simulated data, designed to vary in particular ways, to help us gain some insight as to how these things influence our different analysis strategies. So let's see how that looks. I'm going to be primarily using the pymer4 Python package that I wrote to simulate some data and compare these different models. I originally wrote this package so I could reduce the switch cost I kept experiencing bouncing between R and Python for my work. I quickly realized that my primary need for R was the fantastic lme4 package for multi-level modeling, and so I wrote this Python package as a way to use lme4 from within Python while playing nicely with the rest of the scientific Python stack (e.g. pandas, numpy, scipy, etc). Since then the package has grown quite a bit (Jolly, 2018), including the ability to fit the different types of models discussed above and simulate different kinds of data. Ok, let's get started:

```python
# Import what we need
import pandas as pd
import numpy as np
from pymer4.simulate import simulate_lmm, simulate_lm
from pymer4.models import Lm, Lmer, Lm2
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_context('poster')
sns.set_style("whitegrid")
%matplotlib inline
```

## Starting small

Let's start out with a single simulated dataset and fit each type of model discussed above.
Below I'm generating multi-level data similar to our toy example above. The dataset consists of 50 "people" with 50 "replicates" each. For each person, we measured 3 independent variables (e.g. 3 survey questions) and would like to relate them to 1 dependent variable (e.g. 1 demographic outcome).

```python
num_obs_grp = 50
num_grps = 50
num_coef = 3
# Not required for generating data, but just for convenience estimating models below
formula = 'DV ~ IV1 + IV2 + IV3'

data, blups, betas = simulate_lmm(num_obs_grp, num_coef, num_grps)
```

```
         DV       IV1       IV2       IV3  Group
0  0.962692  0.884919 -1.027279  1.267401    1.0
1  0.692995 -0.034800  1.487490 -0.623623    1.0
2 -0.227617 -0.841247  0.227976 -1.411721    1.0
3 -0.502931  1.466788 -1.332548 -1.336735    1.0
4  2.254925  0.675905 -0.400129  0.977755    1.0
```

We can see that the overall dataset is generated as described above. Simulating data this way also allows us to generate the best-linear-unbiased-predictions (BLUPs) for each person in our dataset. These are the coefficients for each individual person.

```
      Intercept       IV1       IV2       IV3
Grp1   0.334953  0.804446  1.273423  0.484458
Grp2   0.296504  0.533969  1.499689 -0.323965
Grp3   0.269028  0.748626  0.826473  0.494888
Grp4   0.489680  0.714883  1.073006  0.103898
Grp5   0.224353  0.898960  1.171890  0.034940
```

Finally, we can also check out the "true" coefficients that generated these data. These are the "correct answers" we hope that our models can recover. Since these data have been simulated with the addition of noise to each individual's data ($$\mu=0, \sigma^2=1$$) and with variance across individuals (pymer4's default is $$\sigma^2=0.25$$), we don't expect perfect recovery of these parameters, but something pretty close (we'll explore this more below).

```python
# Regression coefficients for the intercept, IV1, IV2, and IV3
print(f"True betas: {betas}")
```

```
True betas: [0.18463772 0.78093358 0.97054762 0.45977883]
```

## Evaluating performance

Ok, time to evaluate some modeling strategies. For each model type I'll fit the model to the data as described, and then compute 3 metrics:

1. Absolute Error of Coefficient Recovery - this is simply the sum of the absolute value differences between the real coefficients and the estimated ones. It gives us the total error of our model with respect to the data-generating coefficients. We could have computed the average instead of the sum, but since our simulated data are all on the same scale, the sum gives us the exact amount we're off from what we were expecting to recover.

2. Sum of Model Standard Errors - this and the next measure are more related to the inferences we want to make on our parameters. SEs and the associated confidence intervals tell us the total amount of variance around our estimates given this particular modeling strategy. Once again, we could have computed the average, but like above, the sum gives us the total variance across all our parameters.

3. Sum of Model T-statistics - this is the sum of the absolute values of the t-statistics of our model estimates. This gives us a sense of how likely we would be to walk away with the inference that there is a statistically significant relationship between our independent variables and dependent variable. All else being equal, larger t-stats generally mean smaller p-values, so we can build an intuition about how sensitive our modeling strategy is to tell us "yup, this is a statistically significant effect."

### Multi-level models

Let's begin by fitting a multi-level model specifying the complete set of all possible parameters we can estimate.
This has the effect of letting each individual have their own set of regression estimates, while still treating these estimates as coming from a common distribution. You can see below that we recover the parameters pretty well and, as we expect, all our results are "significant."

```python
# Fit lmer with random intercepts, slopes, and their correlations
lmer = Lmer(formula + ' + (IV1 + IV2 + IV3 | Group)', data=data)
lmer.fit(summarize=False)
lmer.coefs

print(f"Absolute Error of Coef Recovery: {diffs(betas, lmer.coefs['Estimate'])}")
print(f"Sum of Model Standard Errors: {lmer.coefs['SE'].sum()}")
print(f"Sum of Model T statistics: {lmer.coefs['T-stat'].abs().sum()}")
```

```
             Estimate    2.5_ci   97.5_ci        SE         DF     T-stat         P-val  Sig
(Intercept)  0.146146  0.072889  0.219402  0.037377  49.105624   3.910072  2.831981e-04  ***
IV1          0.800926  0.720934  0.880917  0.040813  49.575279  19.624452  5.065255e-25  ***
IV2          0.964310  0.874273  1.054347  0.045938  48.977731  20.991603  3.900389e-26  ***
IV3          0.418673  0.336092  0.501255  0.042134  49.064194   9.936621  2.449657e-13  ***

Absolute Error of Coef Recovery: 0.10582723675727804
Sum of Model Standard Errors: 0.16626160033359066
Sum of Model T statistics: 54.46274837271574
```

Next, let's see what happens when we fit a multi-level model with the simplest possible "random effects" structure. Notice that by not letting each individual be free to have their own estimates (aside from their own mean/intercept), our coefficient recovery drops a little bit, but our t-statistics increase dramatically. This looks to be driven by the fact that the variance estimates of the coefficients (standard errors) are quite a bit smaller. All else being equal, we would be much more likely to identify "significant" relationships using a simpler, or in this case "misspecified," multi-level model, since we know that the data were generated such that each individual did, in fact, have different BLUPs.

```python
# Fit lmer with random intercepts only
lmer_mis = Lmer(formula + ' + (1 | Group)', data=data)
lmer_mis.fit(summarize=False)
lmer_mis.coefs

print(f"Absolute Error of Coef Recovery: {diffs(betas, lmer_mis.coefs['Estimate'])}")
print(f"Sum of Model Standard Errors: {lmer_mis.coefs['SE'].sum()}")
print(f"Sum of Model T statistics: {lmer_mis.coefs['T-stat'].abs().sum()}")
```

```
             Estimate    2.5_ci   97.5_ci        SE           DF     T-stat          P-val  Sig
(Intercept)  0.153057  0.077893  0.228221  0.038350    49.009848   3.991084   2.195726e-04  ***
IV1          0.800550  0.757763  0.843338  0.021831  2477.919857  36.670958  1.443103e-235  ***
IV2          0.946433  0.902223  0.990644  0.022557  2473.820894  41.957498  4.899264e-291  ***
IV3          0.403981  0.361007  0.446954  0.021926  2465.536399  18.424992   3.968080e-71  ***

Absolute Error of Coef Recovery: 0.1311098578975485
Sum of Model Standard Errors: 0.10466304433776347
Sum of Model T statistics: 101.04453264690632
```

### Cluster-robust models

Next, let's evaluate the cluster-robust-error modeling approach. Remember, this involves estimating a single regression model to obtain coefficient estimates, but then applying a correction factor to the SEs, and thereby the t-statistics, to adjust our inferences. It looks like our coefficient recovery is about the same as with our simple multi-level model above, but our inferences are far more conservative due to the larger standard errors and smaller t-statistics. In fact, these are even a bit more conservative than the fully specified multi-level model we estimated first.
```python
# Fit an LM with cluster-robust (clustered) standard errors
lm = Lm(formula, data=data)
lm.fit(robust='cluster', cluster='Group', summarize=False)
lm.coefs

print(f"Absolute Error of Coef Recovery: {diffs(betas, lm.coefs['Estimate'])}")
print(f"Sum of Model Standard Errors: {lm.coefs['SE'].sum()}")
print(f"Sum of Model T statistics: {lm.coefs['T-stat'].abs().sum()}")
```

```
           Estimate    2.5_ci   97.5_ci        SE  DF     T-stat         P-val  Sig
Intercept  0.153013  0.075555  0.230470  0.038481  46   3.976365  2.453576e-04  ***
IV1        0.802460  0.712104  0.892815  0.044888  46  17.876851  0.000000e+00  ***
IV2        0.945528  0.851538  1.039518  0.046694  46  20.249556  0.000000e+00  ***
IV3        0.405163  0.313841  0.496485  0.045368  46   8.930504  1.307510e-11  ***

Absolute Error of Coef Recovery: 0.13278703905990247
Sum of Model Standard Errors: 0.17543089877808005
Sum of Model T statistics: 51.03327490657406
```

### Two-stage-regression

Lastly, let's use the two-stage-regression approach. We'll fit a separate regression to each of our 50 people and then compute another regression on those 50 coefficients. In this simple example, we're really just computing a one-sample t-test on these 50 coefficients. Notice that our coefficient recovery is a tiny bit better than our fully-specified multi-level model, and our inferences (based on t-stats and SEs) would be largely similar. This suggests that for this particular dataset we could have gone with either strategy and walked away with the same inference.

```python
# Fit two-stage OLS
lm2 = Lm2(formula, data=data, group='Group')
lm2.fit(summarize=False)
lm2.coefs

print(f"Absolute Error of Coef Recovery: {diffs(betas, lm2.coefs['Estimate'])}")
print(f"Sum of Model Standard Errors: {lm2.coefs['SE'].sum()}")
print(f"Sum of Model T statistics: {lm2.coefs['T-stat'].abs().sum()}")
```

```
             Estimate    2.5_ci   97.5_ci        SE  DF     T-stat         P-val  Sig
(Intercept)  0.144648  0.070338  0.218958  0.036978  49   3.911745  2.822817e-04  ***
IV1          0.796758  0.716781  0.876736  0.039798  49  20.019944  0.000000e+00  ***
IV2          0.971252  0.878498  1.064005  0.046156  49  21.042892  0.000000e+00  ***
IV3          0.424135  0.339132  0.509138  0.042299  49  10.027041  1.840750e-13  ***

Absolute Error of Coef Recovery: 0.09216204521686983
Sum of Model Standard Errors: 0.16523098907361963
Sum of Model T statistics: 55.00162203260664
```

# Simulating a universe.

Now, this was only one particular dataset with a particular size and a particular level of between-person variability. Remember the dimensions outlined above? The real question we want to answer is how these different modeling strategies vary with respect to each of those dimensions. So let's expand our simulation. Let's generate a "grid" of settings such that we simulate every combination of dimensions we can in a reasonable amount of time. Here's the grid we'll try to simulate:

- Going down the rows we'll vary dimension 1, the sample size of the units we're making inferences over (number of people), from 10 -> 100.
- Going across the columns we'll vary dimension 2, the sample size of the units nested within the units we're making inferences over (number of observations per person), from 5 -> 100.
- Going over the z-plane we'll vary dimension 3, the variance between the units we're making inferences over (between-person variability), from 0.10 -> 4 standard deviations.

Since varying dimension 1 and dimension 2 should make intuitive sense (they're different aspects of the sample size of our data), let's explore what varying dimension 3 looks like. Here are plots illustrating how changing our between-person variance influences coefficients.
Each figure below depicts a distribution of person-level coefficients; these are the BLUPs we discussed above. When simulating a dataset with two parameters, an intercept and a slope (IV1), notice how each distribution is centered on the true value of the parameter, but the width of the distribution increases as we increase the between-group variance. These are the distributions that our person-level parameters come from. So while they average out to the same value, they are increasingly dispersed around that value. As these distributions become wider it becomes more challenging to recover the true coefficients if a dataset is too small, as models need more data in order to stabilize their estimates. For the sake of brevity I've removed the plotting code for the figures below, but am happy to share it on request.

## Setting it up

The next code block sets up this parameter grid and defines some helper functions to compute the metrics defined above. Since this simulation took about 50 minutes to run on a 2015 quad-core Macbook Pro, I also defined some functions to save each simulation to a csv file.

```python
# Define the parameter grid
nsim = 50                           # Number of simulated datasets per parameter combination
num_grps = [10, 30, 100]            # Number of "clusters" (i.e. people)
obs_grp = [5, 25, 100]              # Number of observations per "cluster"
grp_sigmas = [.1, .25, 1., 2., 4.]  # Between-"cluster" variance
num_coef = 3                        # Number of terms in the regression equation
noise_params = (0, 1)               # Assume each cluster has normally distributed noise
seed = 0                            # To repeat this simulation
formula = 'DV ~ IV1 + IV2 + IV3'    # The model formula

# Define some helper functions. diffs() was used above when examining each model in detail
def diffs(a, b):
    """Absolute error"""
    return np.sum(np.abs(a - b))

def calc_model_err(model_type, formula, betas, data):
    """
    Fit a model type to data using pymer4. Return the absolute error of the
    model's coefficients, the sum of the model's standard errors, and the sum
    of the model's t-statistics. Also log if the model failed to converge in
    the case of lme4.
""" if model_type == 'lm': model = Lm(formula, data=data) model.fit(robust='cluster',cluster='Group',summarize=False) elif model_type == 'lmer': model = Lmer(formula + '+ (IV1 + IV2 + IV3 | Group)',data=data) model.fit(summarize=False, no_warnings=True) elif model_type == 'lmer_mis': model = Lmer(formula + '+ (1 | Group)',data=data) model.fit(summarize=False, no_warnings=True) elif model_type == 'lm2': model = Lm2(formula,data=data,group='Group') model.fit(n_jobs = 2, summarize=False) coef_diffs = diffs(betas, model.coefs['Estimate']) model_ses = model.coefs['SE'].sum() model_ts = model.coefs['T-stat'].abs().sum() if (model.warnings is None) or (model.warnings == []): model_success = True else: model_success = False return coef_diffs, model_ses, model_ts, model_success, model.coefs def save_results(err_params, sim_params, sim, model_type, model_coefs, df, coef_df, save=True): """Aggregate and save results using pandas""" model_coefs['Sim'] = sim model_coefs['Model'] = model_type model_coefs['Num_grp'] = sim_params[0] model_coefs['Num_obs_grp'] = sim_params[1] model_coefs['Btwn_grp_sigma'] = sim_params[2] coef_df = coef_df.append(model_coefs) dat = pd.DataFrame({ 'Model': model_type, 'Num_grp': sim_params[0], 'Num_obs_grp': sim_params[1], 'Btwn_grp_sigma': sim_params[2], 'Coef_abs_err': err_params[0], 'SE_sum': err_params[1], 'T_sum': err_params[2], 'Fit_success': err_params[3], 'Sim': sim }, index = [0]) df = df.append(dat,ignore_index=True) if save: df.to_csv('./sim_results.csv',index=False) coef_df.to_csv('./sim_estimates.csv') return df, coef_df # Run it results = pd.DataFrame() coef_df = pd.DataFrame() models = ['lm', 'lm2', 'lmer', 'lmer_mis'] for N in num_grps: for O in obs_grp: for S in grp_sigmas: for I in range(nsim): data, blups, betas = simulate_lmm(O, num_coef, N, grp_sigmas=S, noise_params=noise_params) for M in models: c, s, t, success, coefs, = calc_model_err(M, formula, betas, data) results, coef_df = save_results([c,s,t, success], [N,O,S], I, M, coefs, results, coef_df) # Results For the sake of brevity I’ve removed the plotting code for the figures below, but am happy to share them on request! ## Coefficient Recovery Ok, let’s take first take a look at our coefficient recovery. If we look from the top left of the grid to the bottom right the first thing to jump out is that when we increase our overall sample size (number of clusters * number of observations per cluster), and our between cluster variability is medium to low, all model types do a similarly good job of recovering the true data generating coefficients. In other words, under good conditions (lots of data that isn’t too variable) we can’t go wrong picking any of the analysis strategies. In the converse, going from bottom left to top right, when between cluster variability is high, we quickly see the importance of having more clusters rather than more observations per cluster; without enough clusters to observe, even a fully specified multi-level model does a poor job of recovering the true coefficients. When we have small to medium sized datasets and lots of between-cluster variability all models tend to do a poor job of recovering the true coefficients. Interestingly, having particularly few observations per cluster (left-most column) disproportionately affects two-stage-regression estimation (orange boxplots). 
This is consistent with Gelman, 2005, who suggests that with few per-cluster observations the first-level OLS estimates are pretty poor, with high variance, and there are none of the multi-level modeling benefits to help offset the situation. This situation also seems to favor fully-specified multi-level models the most (green boxplots), particularly when between-cluster variability is high. It's interesting to note that cluster-robust and misspecified (simple) multi-level models seem to perform similarly in this situation. In medium-data situations (middle column), cluster-robust models seem to do a slightly worse job across the board of recovering coefficients. This is most likely due to the fact that the estimates completely ignore the clustered nature of the data and have no smoothing/regularization applied to them, either through averaging (in the case of the two-stage-regression models) or through random-effects estimation (in the case of the multi-level models). Finally, in the high-observations-per-cluster situation (right-most column), all models seem to perform rather similarly, suggesting that each modeling strategy is about as good as any other when we densely sample the unit of interest (increasing the number of observations per cluster), even if the desire is to make inferences about the clusters themselves.

## Making Inferences (SEs + T-stats)

Next, let's look at both standard errors and t-statistics to see how our inferences might vary. Increased between-cluster variance has a very notable effect on SEs and t-stat values, generally making it less likely to identify a statistically significant relationship regardless of the size of the data. Interestingly, what two-stage-regression models exhibit in terms of poorer coefficient recovery in situations with few observations per cluster, they make up for with higher standard error estimates. We can see that their t-statistics are low in these situations, suggesting that this approach may tip the scales towards lower false-positive, higher false-negative inferences. However, unlike other model types, they do not necessarily benefit from more clusters overall (bottom left panel) and run the risk of an inflated level of false negatives. Misspecified multi-level models seem to have the opposite properties: they have higher t-stats and lower SEs in most situations with medium to high between-cluster variability, and they benefit the most from situations with a high number of observations per cluster. This suggests they might run the risk of introducing more false positives in situations where other models behave more conservatively, but also that they may be more sensitive to detecting true relationships in the face of high between-cluster variance. Inferences from cluster-robust and fully-specified multi-level models seem to be largely comparable, which is consistent with the widespread use of both these model types across multiple literatures.

## Bonus: When fully-specified multi-level models fail

Finally, we can take a brief look at what situations most often cause convergence failures for our fully-specified multi-level models (note: the simple multi-level models examined earlier never failed to converge in these simulations). In general, this seems to occur when between-cluster variability is low, or the number of observations per cluster is very small.
This makes sense because even though the data were generated in a multi-level manner, the clusters are quite similar, and simplifying models by discarding terms which try to model variance that isn't meaningfully exhibited by the data (e.g. dropping "random slopes") achieves better estimation overall. In other words, the model may be trying to fit a variance parameter that is small enough to cause it to run out of optimizer iterations before it reaches a suitably small change in error. This is like trying to find the lowest point on a "hill" that has a very shallow declining slope by comparing the height of your current step to the height of your previous step.

## Conclusions

So what have we learned? Here are some intuitions that I think this exercise has helped flesh out:

- Reserve two-stage-regression for situations where there are enough observations per cluster. This is because modeling each cluster separately, without any of the smoothing/regularization/prior imposed by multi-level modeling, produces poor first-level estimates when per-cluster samples are small.
- Be careful using misspecified/simple multi-level models. While they may remove some of the complexity involved in specifying the "random effects" part of the model, and they converge pretty much all the time, they are more likely to lead to statistically significant inferences relative to other approaches (all else being equal). This may be warranted if your data don't exhibit enough between-cluster variance. It may be generally preferable, then, to specify a model structure that accounts for variance confounded with predictors of interest (Barr et al., 2013), i.e. dropping the correlation term between a random intercept and random slope rather than dropping the random slope itself; in other words, the most "maximal" structure you can get away with, with respect to the inferences you want to make of your data (see the sketch after this list).
- Cluster-robust models appear to be an efficient solution if your primary goal is making inferences and you can live with coefficient estimates that are a bit less accurate than other approaches. These are harder to specify if there are multiple levels of clustering in the data (e.g. survey responses within-person, within-city, within-state, etc) or if accounting for item-level effects is important (Baayen et al., 2008). However, there are techniques to incorporate two-way or multi-way cluster-robust errors, and such approaches are reasonably common in economics. This lecture and this paper discuss these approaches further. Pymer4, used for this post, only implements one-way clustering.
- Consider using two-stage-regression or cluster-robust errors¹ instead of misspecified multi-level models, as your inferences may be largely similar to fully-specified multi-level models that converge successfully. This may not be true if item-level variance or multiple levels of clustering need to be taken into account, but for the relatively straightforward cases illustrated in this post, they seem to fare just fine.
- Generally, simulations can be a helpful way to build statistical intuitions, especially if the background mathematics feels daunting. This has been one of my preferred approaches for learning statistical concepts in more depth and has made reading literature heavy on mathematical formalisms far more approachable.
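As a concrete illustration of the second bullet, here's a hedged sketch of intermediate "random effects" structures written in lme4-style syntax, which pymer4 passes through to lme4; the data frame `df` is a hypothetical stand-in. The double-bar syntax keeps random slopes but drops the intercept-slope correlation:

```python
# Sketch: "random effects" structures from most to least complex.
# Formulas are standard lme4 syntax, passed through to R by pymer4.
from pymer4.models import Lmer

full     = Lmer('DV ~ IV1 + (IV1 | Group)', data=df)   # slopes + correlation
no_corr  = Lmer('DV ~ IV1 + (IV1 || Group)', data=df)  # drop only the correlation
int_only = Lmer('DV ~ IV1 + (1 | Group)', data=df)     # drop the slope too
```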
### Caveats and cautions

I don't want to end this post with the feeling that we've figured everything out and are expert analysts now, but rather with an appreciation that there are some limitations to this exercise worth keeping in mind. While we can build some general intuitions, there are conditions under which these intuitions may not always hold, and it's incredibly important to be aware of them:

- In many ways, the data generated in these simulations were "ideal" for playing around with these different approaches. Data points were all on the same scale, came from a normal distribution with known means and variances, contained no missing data points, and adhered to other underlying assumptions of these statistical approaches not discussed here.
- Similarly, I built "real relationships" into the data and then tried to recover them. I made mention of false positives and negatives throughout the post, but I did not formally estimate the false-positive or false-negative rate for any of these approaches. Again, this was by design, to leave you with some general intuitions, but there exist several papers that utilize this approach to more explicitly defend the use of certain inference techniques (e.g. Luke, 2017).
- The space of parameters we explored (i.e. the different "dimensions" of our data) spanned a range I thought reasonably covered a variety of datasets often collected in empirical social science laboratory studies. In the real world, data are far messier and, increasingly, far larger. More data is almost always better, particularly if it's of high quality, but what constitutes quality can be very different based on the inferences one wants to make. Sometimes high variability between clusters is desirable; other times densely sampling a small set of clusters is more important. These factors will vary based on the questions one is trying to answer.
- The metrics I chose to evaluate each model with are simply the ones that I wanted to know about. There are certainly other metrics that could be more or less informative based on what intuitions you would like to build. For example, what is the prediction accuracy of each model?

# Conclusion

I hope this was useful for some folks out there, and even if it wasn't, it certainly helped me build some intuitions about the different analysis strategies that are available. Moreover, I hope, if nothing else, this might motivate people who feel like they have limited formal training in statistics/machine-learning to take a more tinker/hacker approach to their own learning. I remember that as a kid, breaking things and taking them apart was one of my favorite ways to learn how things worked. With the mass availability of free and open-source tools like scientific Python and R, I see no reason why statistical education can't be the same.

#### Appendix

This is a nice quick guide that defines much of the terminology across different fields and reviews a lot of the concepts covered here (plus more) in a much more pithy way. For those interested, p-values for multi-level models were computed using the lmerTest R package with the Satterthwaite approximation for the degrees-of-freedom calculation; note that based on the random-effects structure specified, these degrees of freedom can change dramatically. P-values for other model types were computed using a standard t-distribution, but pymer4 also offers non-parametric permutation testing and bootstrapped confidence intervals for other styles of inference.
At the time of this writing, fitting two-stage-regression models is only available in the development branch on GitHub, but should be incorporated into a new release in the future.

##### Notes and Corrections

*In a previous version of this post, this approach was mistakenly called two-stage-least-squares (2SLS). 2SLS is the formal name for a completely different technique which falls under the broader scope of instrumental variable estimation. The confusion arises because the two-stage-regression approach discussed above technically does employ "two stages" of ordinary-least-squares estimation, yet this is not what 2SLS refers to in the literature. Thanks to Jonas Oblesser for pointing this out and to Stephen John Senn for terminology that is in fact consistent across the medical and fMRI literatures.

1. While in this post (and often in the literature) two-stage or even multi-level modeling and cluster-robust inference are treated as two different possible analytic strategies, another possibility involves combining these approaches. That is, using a multi-level model or two-stage-regression to obtain coefficient estimates, and then computing robust standard errors on the highest-level coefficients when performing inference. Thanks to James E. Pustejovsky for bringing up this often overlooked option.
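For the curious, here's a rough sketch of what the combined approach in footnote 1 could look like, using statsmodels and a hypothetical long-format data frame `df` with `DV`, `IV1`, and `Group` columns; this illustrates the idea rather than prescribing an implementation:

```python
# Rough sketch of footnote 1: two-stage estimation, but with robust (HC3)
# standard errors at the second stage instead of a plain one-sample t-test.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Stage 1: per-cluster OLS slopes
slopes = df.groupby('Group').apply(
    lambda d: smf.ols('DV ~ IV1', data=d).fit().params['IV1'])

# Stage 2: intercept-only regression on the slopes with robust errors
second = sm.OLS(slopes.values, np.ones(len(slopes))).fit(cov_type='HC3')
print(second.params, second.bse)  # highest-level estimate and its robust SE
```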
https://docs.analytica.com/index.php/IndexValue
# IndexValue

## IndexValue(I)

Returns the index value for the given variable or index «I». Some variables have both an index value and a result value. Examples include: a self-indexed array; a variable or index defined as a list of identifiers or a list of expressions; and a Choice list with a self-domain. IndexValue(I) returns the index value of «I», where «I» alone would return its result value.

## Details

The IndexValue function, if it weren't built-in, could easily be defined as:

Function IndexValue(I: IndexType) := I

## Examples

Index L := [I, J, K, "value"]
Index rows := 1..Size(A)
Variable Flat_A := MdArrayToTable(A, rows, IndexValue(L))
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-7-review-page-525/46
## Prealgebra (7th Edition)

Published by Pearson

# Chapter 7 - Review: 46

Answer: 165

#### Work Step by Step

x = 33% $\times$ 500 (translate to an equation)
x = 0.33 $\times$ 500 (write 33% as 0.33 and multiply)
165 = x
http://biomechanical.asmedigitalcollection.asme.org/article.aspx?articleid=1475640
Research Papers

# Three-Dimensional Computational Modeling of Subject-Specific Cerebrospinal Fluid Flow in the Subarachnoid Space

Author and Article Information

Sumeet Gupta, Laboratory of Thermodynamics in Emerging Technologies, Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich, Switzerland

Michaela Soellinger, Peter Boesiger, Institute for Biomedical Engineering, University of Zurich, CH-8006 Zurich, Switzerland; ETH Zurich, 8092 Zurich, Switzerland

Dimos Poulikakos, Laboratory of Thermodynamics in Emerging Technologies, Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich, [email protected]

Vartan Kurtcuoglu¹, Laboratory of Thermodynamics in Emerging Technologies, Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich, [email protected]

¹Corresponding author.

J Biomech Eng 131(2), 021010 (Dec 10, 2008) (11 pages) doi:10.1115/1.3005171

History: Received October 23, 2007; Revised August 25, 2008; Published December 10, 2008

## Abstract

This study aims at investigating three-dimensional subject-specific cerebrospinal fluid (CSF) dynamics in the inferior cranial space, the superior spinal subarachnoid space (SAS), and the fourth cerebral ventricle using a combination of a finite-volume computational fluid dynamics (CFD) approach and magnetic resonance imaging (MRI) experiments. An anatomically accurate 3D model of the entire SAS of a healthy volunteer was reconstructed from high resolution T2 weighted MRI data. Subject-specific pulsatile velocity boundary conditions were imposed at planes in the pontine cistern, cerebellomedullary cistern, and in the spinal subarachnoid space. Velocimetric MRI was used to measure the velocity field at these boundaries. A constant pressure boundary condition was imposed at the interface between the aqueduct of Sylvius and the fourth ventricle. The morphology of the SAS with its complex trabecula structures was taken into account through a novel porous media model with anisotropic permeability. The governing equations were solved using finite-volume CFD. We observed a total pressure variation from −42 Pa to 40 Pa within one cardiac cycle in the investigated domain. Maximum CSF velocities of about 15 cm/s occurred in the inferior section of the aqueduct, 14 cm/s in the left foramen of Luschka, and 9 cm/s in the foramen of Magendie. Flow velocities in the right foramen of Luschka were found to be significantly lower than in the left, indicating three-dimensional brain asymmetries. The flow in the cerebellomedullary cistern was found to be relatively diffusive with a peak Reynolds number (Re) of 72, while the flow in the pontine cistern was primarily convective with a peak Re of 386. The net volumetric flow rate in the spinal canal was found to be negligible despite CSF oscillation with substantial amplitude, with a maximum volumetric flow rate of 109 ml/min. The observed transient flow patterns indicate a compliant behavior of the cranial subarachnoid space. Still, the estimated deformations were small owing to the large parenchymal surface. We have integrated anatomic and velocimetric MRI data with computational fluid dynamics incorporating the porous SAS morphology for the subject-specific reconstruction of cerebrospinal fluid flow in the subarachnoid space. This model can be used as a basis for the development of computational tools, e.g., for the optimization of intrathecal drug delivery and computer-aided evaluation of cerebral pathologies such as syrinx development in syringomyelia.
## Figures

- Figure 1: (a) CSF space anatomy (T2 weighted MRI image) and pathways in the intracranial cavities. (b) Schematic of trabeculae bridging the SAS between arachnoid and pia layers.
- Figure 2: 3D reconstruction of the SAS using anatomic MRI. (a) Anatomical MRI slices were segmented to produce (b) a 3D model of the SAS. (c) The current investigation domain. (d) Detailed anatomy of the superior cranial SAS.
- Figure 3: Velocity profiles at the boundaries as obtained using velocimetric MRI at (a) the pontine cistern, (b) the cerebellomedullary cistern, and (c) the spinal SAS. (d) Measured volumetric flow rates at each boundary. (e) Magnitude image at pontine cistern with segmented basilar artery.
- Figure 4: Porous media representation of the SAS. (a) Representative porous model for CFD. (b) Representative unit cell and permeability directions. (c) Permeability variation with porosity. (The number of "RUCs" across a channel can be more than one.)
- Figure 5: CSF volumetric flow rates from CFD simulations at LFL, RFL, and FM.
- Figure 6: Normal velocity (m/s) contours at LFL.
- Figure 7: Stream traces colored by velocity magnitude (m/s). Particles are injected at Plane A intersecting the basal pontine and cerebellomedullary cisterns.
- Figure 8: Velocity profiles at different cross sections within the domain. α is the Womersley number. (Cross sections are scaled differently at different time steps for better representation of the vectors.)
- Figure 9: Velocity magnitude (m/s) contours in the SAS during one complete cardiac cycle. (The velocity range in this figure has been chosen in order to best visualize the flow field.)
- Figure 10: Relative pressure (Pa) contours in the SAS during one complete cardiac cycle. The pressure values are given with respect to zero reference pressure at the superior end of the fourth ventricle.
- Figure 11: Transient deformation and deformation rate of cranial SAS during one complete cardiac cycle.
- Figure 12: Results of the independence studies. Pressure along a critical path in the treated domain as calculated (a) with different meshes, (b) at different time steps, and (c) with different time periods. (d) Pressure contours on a critical plane in the domain as calculated with different meshes.
https://tjyj.stats.gov.cn/CN/10.19343/j.cnki.11-1302/c.2019.02.008
# Economic Effects of Environmental Regulations: "Emission Reduction" or "Efficiency Enhancement"

Yu Binbin et al.

Online: 2019-02-25; Published: 2019-03-07

Abstract: This paper constructs a theoretical and analytical framework for the economic effects of environmental regulations, and tests the "emission reduction" and "efficiency enhancement" effects of environmental regulations using Chinese urban panel data and a dynamic spatial panel model. It is found that environmental regulation in China has the economic and spatial spillover effect of "emission reduction only, without economic efficiency," and this conclusion remains unchanged under a series of robustness tests. A further heterogeneity study reveals that the "emission reduction only, without economic efficiency" effects hold in all three parts of China, i.e., East, Central, and West. After the international financial crisis, the "emission reduction" effects brought about by environmental regulations intensified significantly, but only through an increased "compliance cost," with no "innovation effects." The effects of "emission reduction only, without economic efficiency" can only be improved efficiently by speeding up the restructuring of industries. The relationship between economic development and energy efficiency presents a U-shaped trend, and China is on the left side of the "U"; so far, the environmental Kuznets curve has not been confirmed by Chinese city data.
https://www.strath.ac.uk/science/mathematicsstatistics/events/
# Mathematics & Statistics Seminars and colloquia

### Department Colloquia

18th October: Dr Colin Torney (University of Glasgow)

Title: Cues and decision-making in collective systems
Date: 3.30pm Wednesday 18th October
Venue: Livingstone Tower, 9th floor, room LT908

Abstract: Animal groups in nature are a classic example of a complex system in which individual behavior and social interaction scale to produce a collective response to external stimuli. In these systems there is an interplay between leadership, imitation, and environmental cues that determines the accuracy of group decisions. In this talk I will present some stylized models of information flow in interacting systems and show how evolution may drive these systems to unresponsive states. I will also discuss the methods we're using to investigate these questions in the field and lab, including tools to collect video footage, computational methods to locate animals within images, and statistical techniques to infer behavioral rules from movement data.

1st November: Dr Youcef Mammeri (Université de Picardie Jules Verne, France)

Title: Multiscale modelling of Lithium batteries
Date: 3.30pm Wednesday 1st November
Venue: Livingstone Tower, 9th floor, room LT908

Abstract: The development of theoretical methods to correlate the chemical and structural properties of materials in energy storage devices is of crucial importance for a coherent interpretation of the experimental data and for their optimization. I will present how multiscale mathematical models, which combine microstructures, reaction kinetics and mass transport, can predict battery performance.

15th November: Prof Jeremy Levesley (University of Leicester)

Title: To approximate or not to approximate, that is the question
Date: 3.30pm Wednesday 15th November
Venue: Livingstone Tower, 9th floor, room LT908

Abstract: I will consider practical approximation in high dimensions and ask when we should approximate. I will give a quick overview of ideas in neural networks related to concentration of measure which are being developed by Gorban and Tyukin in Leicester. I will then talk about sparse grid approximation using smooth kernels, with some theoretical results related to interpolation and quasi-interpolation with Gaussians. As a byproduct of this work a new set of polynomials related to Hermite polynomials have been invented. This work is in collaboration with Xingping Sun, Alex Kushpel and more recently Simon Hubbert. I will make reference to applications of the sparse grid technology to solution of PDEs in 4 dimensions, with the question - Is this high?

22nd November: Prof Paul Milewski (University of Bath)

Title: Understanding the Complex Dynamics of Faraday Pilot Waves
Date: 3.30pm Wednesday 22nd November
Venue: Livingstone Tower, 9th floor, room LT908

Abstract: Faraday pilot waves are a newly discovered hydrodynamic structure that consists of a bouncing droplet which creates, and is propelled by, a Faraday wave. These pilot waves can behave in extremely complex ways and result in dynamics mimicking quantum mechanics. I will show some of this fascinating behaviour and will present a surface wave-droplet fluid model that captures many of the features observed in experiments, focussing on the statistical emergence of complex states.
29th November: Prof Alexander Korobkin (University of East Anglia)

Title: Diffraction of hydroelastic waves by a vertical cylinder
Date: 3.30pm Wednesday 29th November
Venue: Livingstone Tower, 9th floor, room LT908

Abstract: The linear problem of wave diffraction is studied for a circular vertical cylinder mounted at the sea bed and piercing the fluid surface covered by an ice plate of infinite extent. The ice plate is modeled by a thin elastic plate of constant thickness clamped to the surface of the cylinder. A one-dimensional incident hydroelastic wave of small amplitude propagates towards the cylinder and is diffracted on the cylinder. Deflection of the ice plate and the bending stresses in it are determined by two methods: (a) using the integral Weber transform in the radial direction, (b) using the vertical modes for the fluid of constant depth with a rigid bottom and elastic upper boundary. The solution by the second method is straightforward, but we cannot prove that the solution is complete because the properties of the vertical modes are not known yet. The solution by the Weber transform is more complicated but this solution is unique. In this talk we will show that these two solutions are identical. This result justifies the method of vertical modes in hydroelastic wave diffraction problems.

17th January: Prof Dirk Pauly (University of Duisberg-Essen)

Title: Electro-Magneto Statics by a Functional Analysis Toolbox
Date: 3.30pm Wednesday 17th January
Venue: Livingstone Tower, 9th floor, room LT908

Abstract: We will give a simple introduction to Maxwell equations. Concentrating on the static case, we will present a proper L^2-based solution theory for bounded weak Lipschitz domains in three dimensions. The main ingredients are a functional analysis toolbox and a sound investigation of the underlying operators gradient, rotation, and divergence. This FA-toolbox is useful for all kinds of partial differential equations as well.

28th January: Dr Paweł Dłotko (University of Swansea)

Title: TBA
Date: 3.30pm Wednesday 28th February
Venue: Livingstone Tower, 9th floor, room LT908

Abstract: TBA

### Applied Analysis

17th October: Prof Ernesto Estrada (Department of Mathematics and Statistics)

Title: Communicability geometry and transport in networks
Date: 3pm Tuesday 17th October
Venue: Livingstone Tower, 9th floor, room LT907

Abstract: I will show how a geometry emerges from the communicability function of a network (graph). Then, I will study some examples in which "information" is claimed to flow through the shortest path but for which we show that it seems to flow through the shortest communicability path. Such communicability paths are considered as the shortest paths in a communicability distance-weighted graph. The examples we will discuss include the flow of water in brain networks and the flow of cars in rush hour in different world cities. In both cases I will present theoretical and empirical results based on real-world situations.

24th October: Damien Allen (Department of Mathematics and Statistics)

Title: Characterising Submonolayer Deposition via the Visibility Graph
Date: 3pm Tuesday 24th October
Venue: Livingstone Tower, 9th floor, room LT907

Abstract: Submonolayer deposition (SD) is a term used to describe the initial stages of processes, such as molecular beam epitaxy, in which particles are deposited onto a surface, diffuse and form large-scale structures.
We discuss a mean-field model of the process under the assumption of fixed-rate deposition by investigating the effects of variations in the critical island size on an SD model using the visibility graph. Using methods from network theory and spectral graph theory, we derive results that combine the information contained in the island size distributions and spatial distributions.

21st November: Lyndsay Kerr (Department of Mathematics and Statistics)
Title: The Discrete Coagulation-Fragmentation System
Date: 3pm Tuesday 21st November
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: In many situations in nature and industrial processes, clusters of particles can combine into larger clusters or fragment into smaller clusters. The evolution of these particles can be described by differential equations known as coagulation-fragmentation equations. In the discrete size case it is assumed that the mass of each cluster is a natural number and a cluster of mass n consists of n identical units. The main part of the talk will concentrate on the case of pure discrete fragmentation. Here, the theory of substochastic C_0-semigroups can be used to obtain results relating to the existence of a unique, positive, mass-conserving solution. The full coagulation-fragmentation system, where the coagulation coefficients may be time-dependent, will also be briefly examined.

28th November: Prof José Tadeu Lunardi (State University of Ponta Grossa, Brazil)
Title: A distributional approach to point interactions in one-dimensional quantum mechanics
Date: 3pm Tuesday 28th November
Venue: Livingstone Tower, 4th floor, room LT412
Abstract: Physicists often use regularization and renormalization procedures to deal with singular potentials. Though this approach is intuitive, it generally lacks mathematical consistency, sometimes leading to ambiguous results. Although these procedures are common in quantum field theory, they also arise in quantum mechanics. Typical examples are the singular point interactions associated with a Dirac delta potential or its derivative in one dimension. When the potential is regular, the interaction term in the Schrödinger (or Dirac) equation is usually given by the product of the potential function and the wave function. However, when the potential is singular, this product is sometimes not well defined, and the interaction term may not make sense. Mathematically this problem can be solved by using the theory of self-adjoint extensions of symmetric operators (SAE), from which one finds a well-defined self-adjoint hamiltonian. In one dimension, the self-adjoint extensions of the hamiltonian for a point interaction are members of a 4-parameter family, and are completely characterized by the boundary conditions the wave function satisfies at the singular point. One disadvantage of this approach, from a physicist's point of view, is that the self-adjoint hamiltonian is not given as a sum of two well-defined operators corresponding to the kinetic and potential energies; the hamiltonian is given "as a whole", and one lacks intuition about the specific properties of the "potential". In this seminar I will present a formal approach to this problem based on the theory of distributions. In this approach the ill-defined product forming the interaction term in the Schrödinger equation is replaced by a well-defined distribution concentrated at a single point.
By imposing on this distribution some simple mathematical requirements, besides probability conservation across the singular point, one finds that the allowable interaction terms are described by a 4-parameter family, which is related to the boundary conditions at the singular point in exactly the same way as found in the theory of SAE. I intend to discuss the relationships between the theory of SAE and this distributional approach, as well as some possibilities for formulating the latter (still formal) approach in a mathematically rigorous way.

### Continuum Mechanics and Industrial Mathematics

26th September: Dr Alex Wray (University of Strathclyde)
Title: The evaporative behaviour of asymmetric drops
Date: 1.00pm Tuesday 26th September
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: The evaporation of liquid drops has received extensive attention over time due to its fundamental significance in a variety of industrial contexts, not to mention the widespread consideration given to the so-called 'coffee-stain effect'. Of particular interest are drops that are in some way asymmetric: it is known that the flow inside such drops is itself asymmetric as a result of non-uniformities in the evaporative flux, but the exact mechanism was not previously understood. Unfortunately the system is not amenable to the standard method described in the seminal 1997 paper of Deegan et al., but I discuss how the system may nonetheless be modelled. The finer details, especially in situations where the drop is non-slender, prove to be rather challenging, and much remains as yet unknown. I discuss what progress has been made so far, and outline promising avenues.

28th September: Marc Calvo Schwarzwälder (Centre de Recerca Matemàtica, Barcelona)
Title: Phase change at the nanoscale
Date: 4.00pm Thursday 28th September
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: Nanotechnology has been a very important research topic due to the wide range of applications it offers in fields such as industry and medicine. Many of these applications involve high temperatures which can even lead to a phase change, and it is therefore crucial to understand how these processes occur at small length scales. It is widely known that heat transport at the nanoscale cannot be described in the same manner as for macroscopic objects. A large number of experimental observations show that many thermodynamic properties, such as the melt temperature or the thermal conductivity, become highly size-dependent at the nanoscale, and thus developing mathematical models that can describe this dependence accurately is very important. In addition, most mathematical models describing heat transfer processes are based on Fourier's law, which states that the heat flux is proportional to the temperature gradient. However, it has been shown that the classical equations break down at the nanoscale, and thus other approaches are necessary to describe heat conduction correctly at small length or short time scales. The Guyer-Krumhansl equation is a very popular extension of the classical Fourier law that incorporates memory and non-local effects, which become significant at the nanoscale. In this talk we will discuss the mathematical modelling of phase change and how nanoscale effects have been incorporated into the mathematical description. We will show that the widely accepted equations are incorrect and we will provide a new system.
A mathematical model for the size-dependent melt temperature will also be presented, and we will show that there is excellent agreement with experimental observations. Finally, we will discuss how the Guyer-Krumhansl equation affects a solidification process in a simple geometry.

10th October: Tony Mulholland (University of Strathclyde)
Title: Analysis of a Fractal Ultrasonic Transducer
Date: 1.00pm Tuesday 10th October
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: Ultrasonic transducers are an essential tool in medical imaging, in imaging cracks in nuclear plants, and in a wide range of inverse problems. This talk will provide some theorems which can be used to predict the dynamics of a fractal ultrasound transducer whose piezoelectric components span a range of length scales. As far as we know this is the first study of waves in the complement of the Sierpinski gasket. This is an important mathematical development, as the complement is formed from a broad distribution of length scales whereas the Sierpinski gasket is formed from triangles of equal size. A finite element method is used to discretise the model and a renormalisation approach is then used to develop a recursion scheme that analytically describes the key components of the discrete matrices that arise. It transpires that the fractal device has a significantly higher reception sensitivity and a significantly wider bandwidth than an equivalent Euclidean (standard) device. So much so that our engineering colleagues have built the world's first fractal ultrasonic transducer, which I will try to bring along!

24th October: Marcus Waurick (University of Strathclyde)
Title: On the dependence of solutions of PDEs on the coefficients
Date: 1.00pm Tuesday 24th October
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: In the setting of so-called evolutionary equations invented by Rainer Picard in 2009, we study a certain type of continuity property of solution operators. We will describe homogenisation theory in the framework of this continuity property. In fact, it can be shown that $G$-convergence of matrix coefficients is equivalent to convergence of certain inverses in the weak operator topology. With this, one can show various homogenisation results for a wide class of standard linear equations in mathematical physics. Furthermore, the genericity of memory effects arising from the homogenisation process in the context of Maxwell's equations can be explained by operator-theoretic means.

31st October: Larry Forbes (University of Tasmania)
Title: tbc
Date: 1.00pm Tuesday 31st October
Venue: Livingstone Tower, 9th floor, room LT907

7th November: Carlos Alberto da Costa Filho (University of Edinburgh)
Title: Marchenko Methods for Seismics: Improving images without a detailed model
Date: 1.00pm Tuesday 7th November
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: Seismic methods, which rely on emitting, recording and processing seismic waves, are widely used to locate subsurface resources and monitor known reservoirs. They are part of any hydrocarbon exploration or geological carbon capture and storage project. One of the most powerful tools used in seismics is migration, a method of imaging which provides high-resolution details of the subsurface. The first-order Born methods which have traditionally been used for most migration algorithms fail to accurately map subsurface interfaces, and create a number of artifacts, the most pernicious of which are "phantom" reflectors.
These "phantom" reflectors are coherent forms of noise which are caused by the presence of higher-order scattering in the data (multiples). Recently, Marchenko methods have been developed which, among other uses, can provide images almost devoid of any multiple-related artifacts. This is possible because, even without a detailed model of the subsurface, Marchenko methods can obtain estimates of these multiples, something conventional methods lack. This talk will introduce Marchenko methods, contextualized from a geophysical and mathematical point of view, and show some of its recent applications which have been developed at the University of Edinburgh. 21st November: David Fairhurst (Nottingham Trent) Title: The Jellycopter: Stable Levitation using a Magnetic Stirrer Date:  1.00pm Tuesday 21st November 2017 Venue: Livingstone Tower, 9th floor, room LT907 Title: In laboratories around the world, scientists use magnetic stirrers to mix solutions and dissolve powders. It is well known that at high drive rates the stir bar jumps around erratically with poor mixing, leading to its nick-name 'flea'. Investigating this behaviour, we discovered a state in which the flea levitates stably above the base of the vessel, supported by magnetic repulsion between flea and drive magnet. The vertical motion is oscillatory and the angular motion a superposition of rotation and oscillation. By solving the coupled vertical and angular equations of motion, we characterised the flea’s behaviour in terms of two dimensionless quantities: (i) the normalized drive speed and (ii) the ratio of magnetic to viscous forces. However, Earnshaw’s theorem states that levitation via any arrangement of static magnets is only possible with additional stabilising forces. In our system, we find that these forces arise from the flea’s oscillations which pump fluid radially outwards, and are only present for a narrow range of Reynold's numbers. At slower, creeping flow speeds, only viscous forces are present, whereas at higher speeds, the flow reverses direction and the flea is no longer stable. We also use both the levitating and non-levitating states to measure rheological properties of the system. 28th November: Prash Valluri (University of Edinburgh) Title: Watching Sessile Droplets Evaporate: Beautiful (and never boring) phenomena! Date:  1.00pm Tuesday 28th November 2017 Venue: Livingstone Tower, 9th floor, room LT907 Abstract: The evaporation of a liquid drop on a solid substrate is a remarkably common phenomenon. Yet, the complexity of the underlying mechanisms has constrained previous studies to spherically-symmetric configurations. We recently demonstrated [1] detailed evolution of thermocapillary instabilities during evaporation of hemispherical and non-hemispherical sessile droplets and iii) non-hemispherical sessile droplets. Rigorous DNS (using our in house TPLS2 solver [2]) showed for the first time, breakage of symmetry and the consequent development of a preferential direction for thermocapillary convection. This results in counter-rotating whirling currents in the drop playing a critical role in regulating the interface thermal and fluid dynamics. We will also present our recent-most investigations of well-defined, non-spherical evaporating drops of pure liquids and binary mixtures. We recently deduced a new universal scaling law for the evaporation rate valid for any shape and demonstrated that more curved regions lead to preferential localized depositions in particle-laden drops [3]. 
Furthermore, geometry induces well-defined flow structures within the drop that change according to the driving mechanism and spatially-dependent thresholds for thermocapillary instabilities. In the case of binary mixtures, geometry dictates the spatial segregation of the more volatile component as it is depleted. In the light of our results, we believe that drop geometry can be exploited to facilitate precise local control over the particle deposition and evaporative dynamics of pure drops and the mixing characteristics of multicomponent drops.

5th December: Peter Stewart (University of Glasgow)
Title: Fracture phenomena in foams: upscaling to PDE models
Date: 1.00pm Tuesday 5th December 2017
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: Injection of a gas into a gas/liquid foam is known to give rise to instability phenomena on a variety of time and length scales. Macroscopically, one observes a propagating gas-filled structure that can display properties of liquid finger propagation as well as of fracture in solids. Using a discrete network model, which incorporates the underlying film instability as well as viscous resistance from the moving liquid structures, we describe both large-scale ductile finger-like cracks and brittle cleavage phenomena, in line with experimental observations. Based on this discrete model, we then derive a continuum-limit PDE description of both the ductile and brittle modes and draw an analogy with Saffman--Taylor fingering in non-Newtonian continuum fluids and molecular dynamics simulations of fracture in crystalline atomic solids.

23rd January: Can Evren Yarman (Schlumberger)
Title: More with less for seismic imaging
Date: 1.00pm Tuesday 23rd January 2018
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: In seismic exploration, a medium is excited and the medium response is measured at the receivers. The medium properties and measurements are related by the wave equation. Given the medium, computation of the measurements is referred to as the forward problem. Consequently, the inverse problem is the estimation of medium properties from the given measurements. Advances in microprocessor, computer memory and storage technologies, miniaturization and improved accuracy of sensors, combined with operational advancements, have enabled exponential growth in the number of measurement channels in seismic surveys since the 1970s. With current systems we easily collect 10-20 TB/day, which leads to petabytes or more of data per survey. The challenge is to design acquisition systems with a reduced number of sensors and measurements that provide data information or inversion quality comparable to existing acquisition systems. We formulate this sampling problem in the form of an inverse problem. This talk discusses two ways we formulated the problem and the necessary ingredients in the formulation. An efficient way to address this problem is still under question and will be open to discussion.
### Numerical Analysis and Scientific Computing

10th October: Dr Prashanth Nadukandi (University of Manchester)
Title: Stable computation of the trigonometric matrix functions cos(sqrt(A)) and sinc(sqrt(A))
Date: 4.00pm Tuesday 10th October
Venue: Livingstone Tower, 9th floor, room LT907
Abstract:

24th October: Dr Francesco Tudisco (Department of Mathematics and Statistics)
Title: Perron-Frobenius theorem for multi-homogeneous maps and some applications
Date: 4.00pm Tuesday 24th October
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: The nonlinear Perron-Frobenius theory addresses existence, uniqueness and maximality of positive eigenpairs for order-preserving homogeneous functions. This is an important and relatively recent generalization of the famous results for nonnegative matrices. In this talk I present a further generalization of this theory to "multi-dimensional" order-preserving and homogeneous maps, which we briefly call multi-homogeneous maps. The results presented are then used to discuss some nonlinear matrix and tensor eigenvalue problems and some of their applications.

7th November: Dr Victorita Dolean (Department of Mathematics and Statistics)
Title: An introduction to multitrace formulations and associated domain decomposition solvers
Date: 4.00pm Tuesday 7th November
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: Multitrace formulations (MTFs) are based on a decomposition of the problem domain into subdomains, and thus domain decomposition solvers are of interest. The fully rigorous mathematical MTF can however be daunting for the non-specialist. In this work we introduce MTFs on a simple model problem using concepts familiar to researchers in domain decomposition. This allows us to get a new understanding of MTFs and a natural block Jacobi iteration, for which we determine optimal relaxation parameters. We then show how iterative multitrace formulation solvers are related to a well-known domain decomposition method called the optimal Schwarz method: a method which uses Dirichlet-to-Neumann maps in the transmission condition. We finally show that the insight gained from the simple model problem leads to remarkable identities for Calderón projectors and related operators, and that the convergence results and optimal choice of the relaxation parameter we obtained are independent of the geometry, the space dimension of the problem, and the precise form of the spatial elliptic operator, as for optimal Schwarz methods. We illustrate our analysis with numerical experiments. This is joint work with X. Claeys and M.J. Gander.

14th November: Prof Iain Duff (Rutherford Appleton Laboratory)
Title: Direct Solution of Sparse Linear Equations on Parallel Computers
Date: 4.00pm Tuesday 14th November
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: As part of the H2020 FET-HPC Project NLAFET (http://www.nlafet.eu/), we are studying the scalability of algorithms and software for using direct methods for solving large sparse equations. In this talk we briefly discuss the structure of NLAFET and the scope of the Project. We then focus on algorithmic approaches for solving sparse systems: positive definite, symmetric indefinite, and unsymmetric. An important aspect of most of our algorithms is that although we are solving sparse equations, most of the kernels are for dense linear algebra. We show why this is the case with a simple example before illustrating the various levels of parallelism available in the sparse case.
The work described in this talk has been conducted by the STFC NLAFET Team, who comprise: Florent Lopez, Stojce Nakov, and Philippe Gambron.

21st November: Dr Lyonell Boulton (Heriot-Watt University)
Title: The p-Laplacian on a segment: spectral and time evolution problems
Date: 4.00pm Tuesday 21st November
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: The non-linear spectral and time evolution problems associated to the p-Laplacian have attracted significant attention in recent years. In this talk we will examine various analytical properties of these two problems, when posed on a segment of finite length and subject to homogeneous Dirichlet boundary conditions at the end points. An explicit expression for the eigenfunctions can be found in terms of special functions. These eigenfunctions are naturally called p-sine functions, a terminology introduced by Elbert, Otani and others in the 1980s. The p-sine functions play a fundamental role in the theory of Sobolev embeddings, yet many questions about them remain open. During the talk we will discuss partial answers and challenges associated to some of these open questions. We only know, for example, that the p-sine functions form a Riesz basis of the Hilbert space L^2(0,1) for all p larger than or equal to a threshold p_1, where p_1 is the solution of a transcendental equation and is approximately equal to 1.043817. The confirmation of this threshold relies on the Beurling representation of the change of coordinate operator in terms of Dirichlet series, and the answer to the basis question remains completely open for $1 \lt p \lt p_1$.

30th January: Prof Georgios Akrivis (University of Ioannina)
Title:
Date: 4.00pm Tuesday 30 January 2018
Venue: Livingstone Tower, 9th floor, room LT907
Abstract:

20th February: Dr Ivan Tyukin (University of Leicester)
Title:
Date: 4.00pm Tuesday 20 February 2018
Venue: Livingstone Tower, 9th floor, room LT907
Abstract:

27th February: Dr Melina Kazakidi (Department of Biomedical Engineering)
Title:
Date: 4.00pm Tuesday 27 February 2018
Venue: Livingstone Tower, 9th floor, room LT907
Abstract:

6th March: Dr Mariarosa Mazza (Max Planck Institute for Plasma Physics)
Title:
Date: 4.00pm Tuesday 6th March 2018
Venue: Livingstone Tower, 9th floor, room LT907
Abstract:

27th March: Asst Prof Kirk Soodhalter (Trinity College Dublin)
Title:
Date: 4.00pm Tuesday 27th March 2018
Venue: Livingstone Tower, 9th floor, room LT907
Abstract:

### Population Modelling and Epidemiology

30th March: Dr Robert Wilson (Mathematics and Statistics, University of Strathclyde)
Title: Zooplankton Diapause in a Warmer World: Modelling the Impact of 21st Century Climate Change on Calanus Finmarchicus
Date: 1pm Wednesday 30th March 2016
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: To avoid starving in winter, many zooplankton species spend over six months dormant in deep waters. The time animals can remain dormant will likely be reduced by global warming. We therefore modelled changes in potential dormancy duration in the key species Calanus finmarchicus under 21st century climate change. Climate change impacts varied markedly. Western Atlantic populations see large reductions in potential dormancy duration, but the Norwegian Sea experiences only marginal change. The reductions in the Western Atlantic will likely cause important changes to the populations of C. finmarchicus and the species that prey on it.
6th April: Dr Amanda Weir (Health Protection Scotland)
Title: TBA
Date: 1pm Wednesday 6th April 2016
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: TBA

26th October: Dr Gary Napier (University of Glasgow)
Title: A General Methodological Framework for Identifying Disease Risk Spatial Clusters Based Upon Mixtures of Temporal Trends
Date: 1pm Wednesday 26th October 2016
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: We present a novel general Bayesian hierarchical mixture model for clustering areas based on their temporal trends. Our approach is general in that it allows the user to choose the shape of the temporal trends to include in the model; examples include linear, general monotonic, and changepoint trends. Inference from the model is based on Metropolis coupled Markov chain Monte Carlo (MC)^3 techniques in order to prevent issues pertaining to multimodality often associated with mixture models. The effectiveness of (MC)^3 is demonstrated in a simulation study, before applying the model to hospital admission rates due to respiratory disease in the city of Glasgow between 2002 and 2011. Software for implementing this model will be made freely available as part of the R package CARBayesST.

25th May: Raphael Ximenes (Department of Mathematics & Statistics, University of Strathclyde)
Title: The Risk of Dengue for Non-Immune Foreign Visitors to the 2016 Summer Olympic Games in Rio de Janeiro, Brazil
Date: 1.00pm, Wednesday 25th May 2016
Venue: Livingstone Tower, 9th Floor, LT9.07
Abstract: Dengue is a viral infection caused by four dengue serotypes transmitted by mosquitoes; it is an increasing problem in Brazil and other countries in the tropics and subtropics, and Brazil is the country with the highest number of dengue cases worldwide. Rio de Janeiro, the venue for the 2016 Olympic Games, has been of major importance for the epidemiology of dengue in Brazil. After the DENV 1-4 introductions in 1986, 1990, 2000 and 2011, respectively, the city has suffered explosive outbreaks. Properly quantifying the risk of dengue for foreign visitors to the Olympics is therefore important. A mathematical model to calculate the risk of developing dengue for foreign tourists attending the Olympic Games in Rio de Janeiro in 2016 is proposed. A system of differential equations models the spread of dengue amongst the resident population, and a stochastic approximation is used to assess the risk to tourists.

2nd November: Sandra Maier (University of Strathclyde)
Title: Optimal Vaccination Age for Dengue in Brazil with a Tetravalent Dengue Vaccine
Date: 1pm Wednesday 2nd November 2016
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: With the first vaccine against dengue licensed in several endemic countries, an important aspect that needs to be considered is the age at which it should be administered. If vaccination is done too early it is ineffective, as individuals are protected by maternal antibodies, but if it is done later the infection may spread in the younger age groups. Moreover, the risks of hospitalisation and mortality change with the age of infection, which is influenced by vaccination. However, to find the optimal vaccination age, the possible coexistence of up to four distinct dengue serotypes and the cross-reactions between these serotypes and dengue antibodies need to be taken into account.
We adapt a method previously applied to other infectious diseases and define the lifetime expected risk due to dengue with respect to two different risk measures (hospitalization and lethality), which we then seek to minimize for a given three-dose vaccination strategy. Our results show that the optimal vaccination age depends not only on the risk measure but also on the number and combination of serotypes in circulation, as well as on underlying assumptions about cross-immunity and antibody-dependent enhancement (ADE).

16th November: Dr Laura Hobbs (University of Strathclyde)
Title: Dancing in the Moonlight: Vertical Migration of Arctic Zooplankton during the Polar Night
Date: 1pm Wednesday 16th November 2016
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: This talk will focus on the results from my PhD, which I completed this year at the Scottish Association for Marine Science before starting here at Strathclyde. In recent years, evidence has been found of Diel Vertical Migration (DVM) in zooplankton during the Polar Night in the Arctic Ocean. However, the drivers of this light-mediated behaviour during an apparent lack of illumination and food are poorly understood. A novel dataset comprising 58 deployments of moored Acoustic Doppler Current Profilers is used in this study to observe the vertical migratory behaviour of zooplankton on a pan-Arctic scale. Methods of circadian rhythm analysis are applied to detect synchronous activity. During the Polar Night, the moon is seen to control the vertical positioning of zooplankton, and a new type of migratory behaviour is described: Lunar Vertical Migration (LVM). This exists as LVM-day (24.8 hour periodicity) and LVM-month (29.5 day periodicity), and is observed throughout the Arctic Ocean. The results presented here show continuous activity throughout winter, and challenge assumptions of a quiescent Polar Night.

3rd May: Dr Emanuele Giorgi (CHICAS, Lancaster University)
Title: Disease Mapping and Visualization using Data from Spatio-Temporally Referenced Prevalence Surveys
Date: 1pm Wednesday 3rd May 2017
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: We set out general principles and develop statistical tools for the analysis of data from spatio-temporally referenced prevalence surveys. Our objective is to provide a tutorial guide that can be used in order to identify parsimonious geostatistical models for prevalence mapping. A general variogram-based Monte Carlo procedure is proposed to check the validity of the modelling assumptions. We describe and contrast likelihood-based and Bayesian methods of inference, showing how to account for parameter uncertainty under each of the two paradigms. We also describe extensions of the standard model for disease prevalence that can be used when stationarity of the spatio-temporal covariance function is not supported by the data. We discuss how to define predictive targets and argue that exceedance probabilities provide one of the most effective ways to convey uncertainty in prevalence estimates. We describe statistical software for the visualization of spatio-temporal predictive summaries of prevalence through interactive animations. Finally, we illustrate an application to historical malaria prevalence data from 1334 surveys conducted in Senegal between 1905 and 2014.

1st June: Dr Luigi Sedda (CHICAS, Lancaster University)
Title: Including biology in spatial statistical models: examples from vector-borne disease studies
Date: 12.30pm, Thursday 1st June 2017
Venue: Livingstone Tower, 9th floor, room LT907
Abstract: Vector-borne diseases (e.g. malaria, dengue, leishmaniasis) account for 20% of all infectious diseases, causing several million infections and more than 1 million deaths annually. The majority of the vectors are insects (e.g. mosquitoes, midges and flies) and ticks, whose biology and epidemiology are often not fully understood. Biological and statistical models are used for mapping and modelling vector-borne diseases; however, these methods are rarely combined to produce maps and tools for disease surveillance and control (e.g. vector hot spots). In this talk I will present some techniques that can make data biologically meaningful, and the use of geo-bio-statistical models for tsetse fly (sleeping sickness) surveillance and control in Zambia. We show how mapping tsetse fly immigration, emigration, mortality and fertility can be the key element for successful disease eradication.

### Stochastic Analysis

15th March: Dr Joszef Lorinczi (Loughborough University)
Title: Non-local Schrödinger Operators and Related Jump Processes
Date: 3pm Wednesday 15th March 2017
Abstract: Classical Schrödinger operators have been the object of much research involving functional analysis, probability and mathematical physics in the past decades. The recent interest in non-local Schrödinger operators consisting of the sum of a pseudo-differential operator and a multiplication operator greatly extended the range of applications, and inspired much new research in pure mathematics too. I will discuss how Feynman-Kac-type representations can be derived for the non-local cases and which random processes they give rise to.
Then I will consider various sample path properties of these jump processes in terms of spectral properties of the generating non-local operators, and will contrast them with diffusions and classical Schrödinger operators.

19th April: Dr Alexandru Hening (Imperial College London)
Title: Stochastic Lotka-Volterra Food Chains
Date: 3.30pm Wednesday 19th April 2017
Abstract: We study the persistence and extinction of species in a simple food chain that is modelled by a Lotka-Volterra system with environmental stochasticity. There exist sharp results for deterministic Lotka-Volterra systems in the literature but few for their stochastic counterparts. The food chain we analyze consists of one prey and $n-1$ predators for $n\in\{2,3,4,\dots\}$. The $j$th predator eats the $j-1$th species and is eaten by the $j+1$th predator; this way each species only interacts with at most two other species - the ones that are immediately above or below it in the trophic chain. We show that one can classify, based on an explicit quantity depending on the interaction coefficients of the system, which species go extinct and which converge to their unique invariant probability measure. Our work can be seen as a natural extension of the deterministic results of Gard and Hallam '79 to a stochastic setting. A novelty of our analysis is the fact that we can describe the behavior of the system when the noise is degenerate. This is relevant because of the possibility of strong correlations between the effects of the environment on the different species. This is joint work with Dang H. Nguyen.

19th May: Dr Fengzhong Li (Shandong University, China)
Title: Time-Varying Feedback and its Control Ability
Date: 3.00pm Friday 19th May 2017
Abstract: Compared to pure feedback control, time-varying feedback control has distinct advantages, e.g., in handling system nonlinearities, counteracting system uncertainties and achieving prescribed performance. But because of the time variations, time-varying feedback tends to be avoided, and its potential has been far from fully investigated. Here I shall illustrate some of the advantages and capabilities of time-varying feedback, and introduce some applications in SDEs, as well as several problems to be investigated further.

14th June: Dr Nicos Georgiou (University of Sussex)
Title: Last Passage Percolation Models in a Bernoulli Environment
Date: 3.00pm Wednesday 14th June 2017
Venue: Livingstone Tower, LT9.07
Abstract: We will discuss two different last passage percolation models in an i.i.d. Bernoulli random environment. In particular, I will show explicit laws of large numbers and the order of fluctuations for the last passage time - the maximum number of Bernoulli points one can collect by following a sequence of admissible steps that ends in a predetermined lattice site. I will show how the behaviour of these models changes depending on the set of admissible steps (e.g. the LLN changes, directions that belong in a "percolation cluster" change) and also show how the order of fluctuations changes if the direction of the path endpoint changes. This is joint work with Janosch Ortmann and Federico Ciech (Univ. of Sussex).

16th June: Dr Gongfei Song (Nanjing University of Information Science and Technology, China)
Title: Quantized Feedback Control for Control Systems with Saturation Nonlinearity
Date: 3.30pm Friday 16th June 2017
Venue: Livingstone Tower, LT9.07
Abstract: In control systems, every physical actuator or sensor is subject to saturation owing to its maximum and minimum limits.
Common examples of such limits are the deflection limits in aircraft actuators and the voltage limits in electrical actuators. Saturation nonlinearities are also purposely introduced into engineering systems such as control systems and neural network systems. In addition, one of the most important research areas in control theory is quantized control. Quantized feedback is found in many engineering systems, including mechanical systems and networked systems, since the communication channels that transmit the feedback information from the sensor to the controller may become less reliable as the bandwidth is limited. Here, I shall investigate quantized feedback control problems for systems subject to saturation nonlinearity.

5th July: Professor Qian Guo (Shanghai Normal University, China)
Title: Stability of Two Kinds of Stochastic Runge-Kutta Methods for Stochastic Differential Equations
Date: 3.30pm Wednesday 5th July 2017
Venue: Livingstone Tower, LT9.07
Abstract: We present two kinds of explicit Runge-Kutta methods for solving stochastic differential equations by using the three-term recurrence relations of Chebyshev and Legendre polynomials. The almost sure stability and mean-square stability of the numerical solutions generated by the two kinds of methods are investigated respectively. Numerical examples are provided to confirm the theoretical results.

24th August: Dr Leila Setayeshgar (Providence College, USA)
Title: Bayes' Rule and the Law
Date: 3.00pm Thursday 24th August 2017
Venue: Livingstone Tower, LT9.07
Abstract: Bayesian inference is an approach in mathematical statistics where the probability of a hypothesis is updated as more evidence and data become available. It has wide applications in many areas such as machine learning, evolutionary biology, medicine and even the judicial system. This talk will explore how Bayesian inference can be used in a specific court case to assist jurors in the process of legal decision making, demonstrating the power of mathematics in the courtroom.

19th September: Dr Abdul-Lateef Haji-Ali (Oxford University)
Title: MLMC for Value-At-Risk
Date: 4.00pm Tuesday 19th September 2017
Venue: Livingstone Tower, LT9.07
Abstract: In this talk, I explore Monte Carlo methods to estimate the Value-At-Risk (VaR) of a portfolio, which is a measure of the risk of the portfolio over some short time horizon. It turns out that estimating VaR involves approximating a nested expectation, where the outer expectation is taken with respect to stock values at the risk horizon and the inner expectation is taken with respect to the option index and stock values at some final time. Following (Giles, 2015), our approach is to use MLMC to approximate the outer expectation, where deeper levels use more samples in the Monte Carlo estimate of the inner expectation. We look at various control variates to reduce the variance of such an estimate. We also explore using an adaptive strategy (Broadie et al., 2011) to determine the number of samples used in estimating the inner expectation. Finally, we discuss using unbiased MLMC (Rhee et al., 2015) when simulating stocks requires time discretization. Our results show that, using MLMC to approximate a probability of large loss with an error tolerance of order $\epsilon$, we are able to get an optimal complexity of order $\epsilon^{-2}(\log \epsilon^{-1})^2$ that is independent of the number of options, for a large enough number of options.
18th October: Dr Eyal Neuman (Imperial College London)
Title: On Uniqueness and Blowup Properties for a Class of Second Order SDEs
Date: 2.30pm Wednesday 18th October 2017
Venue: Livingstone Tower, LT9.07
Abstract: As the first step for approaching the uniqueness and blowup properties of the solutions of the stochastic wave equations with multiplicative noise, we analyze the conditions for the uniqueness and blowup properties of the solution $(X_t, Y_t)$ of the equations $dX_t = Y_t\,dt$, $dY_t = |X_t|^\alpha\,dB_t$, $(X_0, Y_0) = (x_0, y_0)$. In particular, we prove that solutions are nonunique if $0 \lt \alpha \lt 1$ and $(x_0, y_0) = (0, 0)$, and unique if $1/2 \lt \alpha$ and $(x_0, y_0) \neq (0, 0)$. We also show that blowup in finite time holds if $\alpha \gt 1$ and $(x_0, y_0) \neq (0, 0)$.
https://math.stackexchange.com/questions/453113/how-to-merge-two-gaussians
# How to merge two Gaussians

I have two multivariate Gaussians, each defined by mean vectors and covariance matrices (diagonal matrices). I want to merge them to have a single Gaussian, i.e. I assume there is only one Gaussian but I separated observations randomly into two groups to get two different Gaussians which are not too different from each other. Since I know the number of observations in each of the two Gaussians, the combined mean estimation is straightforward: $\frac{n_1\mu_1 + n_2\mu_2}{n_1+n_2}$ But what about the covariance matrix? Thanks

EDIT: The question was confusing in the original post, especially the "merging Gaussians" part. Maybe the following paragraph would be a better choice. I have two sets of observations drawn from two multivariate Gaussians, each defined by mean vectors and covariance matrices (diagonal matrices). I want to merge the observations to have a single sample, and I assume it to follow another Gaussian (i.e. I assume initially there was only a single Gaussian, and observations were separated into two groups to get two different Gaussians).

• Ok I solved it :) Since the covariance matrix is diagonal we can assume having multiple univariates. And then the variance combination is: mu = (n1*mu1 + n2*mu2) / (n1+n2); sigma^2 = (((sigma1^2 + mu1^2)*n1 + (sigma2^2 + mu2^2)*n2) / (n1+n2)) - mu^2. PS: I used the equation sigma^2 = E[x^2] - E[x]^2. Thanks again – ahmethungari Jul 26 '13 at 21:59
• You could post this as an answer (preferably formatted in $\LaTeX$) and accept it. This is encouraged by this discussion on meta – Ross Millikan Jul 26 '13 at 22:04
• Unfortunately, it did not let me do so since I do not have enough reputation. I was required to wait some time. – ahmethungari Jul 28 '13 at 1:04

Ok I solved it :) Since the covariance matrix is diagonal we can assume having multiple univariates. And then the combined mean and variance are $$\hat{\mu} = \frac{n_1\mu_1 + n_2\mu_2}{n_1+n_2}$$ $$\hat{\sigma}^2 = \frac{(\sigma_1^2 + \mu_1^2)n_1 + (\sigma_2^2 + \mu_2^2)n_2}{ (n_1+n_2)} - \hat{\mu}^2$$ Here, I used $\sigma^2 = E[x^2] - E[x]^2$. Thanks again

I might be wrong or have misinterpreted the question, but trying to reproduce the result in the accepted and upvoted answer, I get a different result: Let $x \sim N(\mu, \sigma^2)$. From the definition of the variance follows $$\sigma^2 = E[x^2] - E[x]^2 = E[x^2] - \mu^2$$ or $$E[x^2] = \sigma^2 + \mu^2$$ Now let $x$ be the random variable defined as the weighted average $x = \frac{n_1x_1 + n_2x_2}{n_1 + n_2}$, where $x_1 \sim N(\mu_1, \sigma_1^2)$ and $x_2 \sim N(\mu_2, \sigma_2^2)$ are independent. We easily have $$E[x] = \frac{n_1\mu_1 + n_2\mu_2}{n_1 + n_2} := \mu$$ By the above formula for $E[x_1^2]$ and $E[x_2^2]$, and since $x_1$ and $x_2$ are independent ($E[x_1x_2] = E[x_1]E[x_2]$), we have \begin{align} E[x^2] &= E[(\frac{n_1x_1 + n_2x_2}{n_1 + n_2})^2] \\ &= \frac{1}{(n_1 + n_2)^2} E[n_1^2 x_1^2 + n_2^2 x_2^2 + 2n_1n_2x_1x_2] \\ &= \frac{1}{(n_1 + n_2)^2} (n_1^2 E[x_1^2] + n_2^2 E[x_2^2] + 2n_1n_2E[x_1]E[x_2]) \\ &= \frac{(\sigma_1^2 + \mu_1^2)n_1^2 + (\sigma_2^2 + \mu_2^2)n_2^2 + 2n_1n_2\mu_1\mu_2}{(n_1+n_2)^2} \\ &= \frac{\sigma_1^2 n_1^2 + \sigma_2^2 n_2^2 + (n_1\mu_1 + n_2\mu_2)^2}{(n_1+n_2)^2} \end{align} We can use this to calculate the pooled variance: $$\sigma^2 = E[x^2] - E[x]^2 = \frac{\sigma_1^2 n_1^2 + \sigma_2^2 n_2^2 + (n_1\mu_1 + n_2\mu_2)^2}{(n_1+n_2)^2} - \mu^2$$ In the multivariate case, if the covariance matrix is diagonal, we can apply this formula on each dimension separately.
Otherwise, let $X \sim N(\mu, \Sigma)$, where $\mu \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n \times n}$. Again start with the general definition of the covariance matrix, which gives $$\Sigma = E[(X-\mu)(X-\mu)^T] = E[XX^T] - \mu \mu^T$$ or $$E[XX^T] = \Sigma + \mu\mu^T$$ Let $X$ be the random variable defined as the weighted average $X = \frac{n_1X_1 + n_2X_2}{n_1 + n_2}$, where $X_1 \sim N(\mu_1, \Sigma_1)$ and $X_2 \sim N(\mu_2, \Sigma_2)$ are independent. To simplify, let $D = \frac{1}{(n_1 + n_2)^2}$. We now have \begin{align*} E[XX^T] &= D \cdot E[(n_1X_1 + n_2X_2)(n_1X_1 + n_2X_2)^T] \\ &= D \cdot E[n_1^2 X_1X_1^T + n_2^2 X_2X_2^T + 2n_1n_2X_1X_2^T] \\ &= D \cdot (n_1^2 E[X_1X_1^T] + n_2^2 E[X_2X_2^T] + 2n_1n_2E[X_1]E[X_2^T]) \\ &= D \cdot (n_1^2 (\Sigma_1 + \mu_1\mu_1^T) + n_2^2 (\Sigma_2 + \mu_2\mu_2^T) + 2n_1n_2\mu_1\mu_2^T) \\ &= D \cdot (n_1^2 \Sigma_1 + n_2^2 \Sigma_2 + (n_1\mu_1 + n_2\mu_2)(n_1\mu_1 + n_2\mu_2)^T) \end{align*} Finally the pooled covariance matrix is: $$\Sigma = E[XX^T] - E[X]E[X]^T = \frac{n_1^2 \Sigma_1 + n_2^2 \Sigma_2 + (n_1\mu_1 + n_2\mu_2)(n_1\mu_1 + n_2\mu_2)^T}{(n_1 + n_2)^2} - \mu\mu^T$$

• Welcome to MSE. Nice first post! – José Carlos Santos Jul 20 '17 at 10:53
• Sorry for the unclear question. With merge, I had simply meant that we concatenate (put together) two sets of observations (with sizes $n_1$ and $n_2$) and recalculate the mean and variance. So there is no assumption of defining a new variable as the weighted average of the older two. Hence, you simply take the average of $E[x_1^2]$ and $E[x_2^2]$ when calculating the new $E[x^2]$. Again, sorry for the confusion. – ahmethungari Aug 17 '17 at 0:20
• @ahmethungari I disagree: you give a formula for E[x^2] - E[x]^2 but you never say what "x" is, and it is not obvious. In particular, with your result, x cannot be the intuitive weighted average I used above. – JulienD Aug 18 '17 at 7:38
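As a quick numerical sanity check (my own sketch, not part of the original thread), the accepted answer's pooled-moment formulas can be verified against direct concatenation of the two samples. Note that, as the comments stress, this is the mean and variance of the combined sample, not of the weighted-average random variable analyzed in the second answer:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(2.0, 1.5, size=300)   # first group,  n1 = 300
x2 = rng.normal(2.3, 1.2, size=500)   # second group, n2 = 500
n1, n2 = len(x1), len(x2)
mu1, mu2 = x1.mean(), x2.mean()
v1, v2 = x1.var(), x2.var()           # population (1/n) variances

# Pooled moments of the concatenated sample, per the accepted answer:
mu = (n1 * mu1 + n2 * mu2) / (n1 + n2)
var = ((v1 + mu1**2) * n1 + (v2 + mu2**2) * n2) / (n1 + n2) - mu**2

merged = np.concatenate([x1, x2])
print(np.allclose(mu, merged.mean()))   # True
print(np.allclose(var, merged.var()))   # True
```

For diagonal covariance matrices the same check applies coordinate-wise, which is exactly the reduction both answers rely on.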
https://www.dcode.fr/function-equation-finder
# Function Equation Finder

Tool to find the equation of a function from its points, i.e. its coordinates x, y=f(x), according to interpolation methods and equation-finder algorithms.

### How to find the equation of a curve?

To find the equation from a graph:

Method 1 (fitting): analyze the curve (by looking at it) in order to determine what type of function it is (linear, exponential, logarithmic, periodic, etc.), indicate some values in the table, and dCode will find the function which comes closest to these points.

Method 2 (interpolation): from a finite number of points, there are formulas allowing the creation of a polynomial which passes exactly through those points (see Lagrange interpolation); indicate the values of certain points and dCode will calculate the polynomial passing through these points.

### How to find an equation from a set of points?

To derive the equation of a function from a table of values (or a curve), there are several mathematical methods.

Method 1: detect remarkable solutions, such as remarkable identities; it is sometimes easy to find the equation by analyzing the values (by comparing two successive values or by identifying certain precise values).

Example: a function has for points (couples $(x,y)$) the coordinates $(1,2), (2,4), (3,6), (4,8)$; the ordinates increase by 2 while the abscissas increase by 1, so the solution is trivial: $f(x) = 2x$

Method 2: use an interpolation function. More complicated, this method requires the use of mathematical algorithms that can find polynomials passing through any set of points. The best-known interpolations are Lagrange interpolation, Newton interpolation and Neville interpolation.

NB: for a given set of points there are infinitely many solutions, because infinitely many functions pass through those points. dCode tries to propose the most simplified solution possible, based on an affine function or a polynomial of low degree (degree 2 or 3).

### How to find the equation of a line?

To find the equation of a straight line, see the page: linear equation.
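dCode's own implementation is not public, so as an independent illustration of Method 2, here is a minimal Lagrange interpolation sketch in Python; run on the example points above, it recovers $f(x) = 2x$ exactly:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def lagrange_coeffs(points):
    """Coefficients of the unique polynomial of degree < n through n points."""
    n = len(points)
    result = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]     # build the Lagrange basis polynomial L_i(x)
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul(basis, [Fraction(-xj), Fraction(1)])  # times (x - xj)
                denom *= xi - xj
        for k, c in enumerate(basis):
            result[k] += Fraction(yi) * c / denom
    return result

print(lagrange_coeffs([(1, 2), (2, 4), (3, 6), (4, 8)]))
# [Fraction(0, 1), Fraction(2, 1), Fraction(0, 1), Fraction(0, 1)]  ->  f(x) = 2x
```

Exact rational arithmetic is used so that the coefficients come out as clean fractions rather than floating-point approximations; this also makes the "infinitely many solutions" caveat concrete, since the algorithm returns the one interpolant of minimal degree.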
https://bcphysics180.wordpress.com/2016/01/
## Day 89 – Projectiles

Physics 11 – One of the best things I have changed in physics is the order in which I teach projectiles. From what I’ve seen across Vancouver, most teachers do projectiles at the end of kinematics. However, I now have students learn about them after we’ve done forces. At this point the students are about as good at drawing force diagrams and recognizing balanced forces as they will ever be. Therefore, the students in general have no problem understanding why a projectile accelerates toward the Earth but not horizontally. They have also done some vector analysis with force diagrams, so the idea of vertical/horizontal components is not a new concept.

This is not to say that projectiles are easy for students. These questions often involve multiple small steps, and while the students can do each small step individually, putting them together into a complete solution is challenging. As well, this is a topic that only motivated students seem to do well in. I don’t have students hand in homework, so if they slack off, their chances of success are not that great.

## Day 88 – Forces Follow-Up

Science 8 – Following from last day’s lesson, today students formalized their ideas on forces by reading through the textbook and answering some questions. The focus of this work was the differences between contact and non-contact forces, and what kind of motion results from unbalanced forces.

For the last 20 minutes of class, students were asked to put together a concept map for forces. I gave them keywords such as: balanced, unbalanced, constant, contact, at-a-distance, gravity, electrostatic, etc. Most of the class nailed the concept map – they had sufficiently complex connections that made sense (indicating they knew how the concepts were related) and links and descriptors that were accurate and applicable.

## Day 87 – Exit Slip

Science 9 – Trying hard not to get mired in the smaller details of reproduction, students did some practice with small concept maps of sexual reproduction and a slide show overview of asexual vs sexual reproduction. The class ended with my asking for an Exit Slip. As many teachers know, this is a great formative assessment tool.

## Day 86 – Forces

Science 8 – Today’s lesson has students pushing a hover disk back and forth along a desk. While doing this, students are asked to make observations and try to answer the question, “What forces are acting on the soccer disk, and what is the result of the forces?”

There are a few key things at play in this activity. First, I ask students to identify the forces acting on the disk. Kids will have a variety of ideas, from 1 to 4 different forces. The two most common forces identified are the push and gravity. We then break it down a bit by looking at a stationary disk on the desk. From this, students are convinced that there must be a force pushing the disk up (otherwise gravity would pull it down to the ground). Eventually we get to the point that maybe there are 3 forces acting on the disk.

Next, I put some restrictions on when we are looking at the forces – I deliberately set it up so that we are looking at the forces while the disk is moving between two markers, and no one is touching the disk at this point. Students will say there is a push on the disk. The next question then is: if there is a push, who or what is doing the pushing? Obviously the kid, they’ll say. But wait, how can this be if the kid isn’t touching the disk? This causes some serious reflection from the kids.
Eventually we get to the idea that there are only two forces acting on the disk (gravity and the normal force), that these forces are balanced, and that the result is constant motion. Whew.

This can be one of my favorite lessons because it forces students to really think about where forces come from, and what it means to have balanced forces. It is also a good challenge for getting the students to make useful observations. However, there is a big downside to the lesson. With a class of 30 students, it can be very hard for every student to have a voice. In particular, students that aren’t as quick as others to catch on will not truly participate.

## Day 85 – Meiosis

Science 9 – The above picture is a snapshot of my reproduction unit plan. As the science 9 classes worked their way through reproduction, I sort of got bogged down in details again. I managed to avoid the small bits of meiosis and focused on how it results in genetic diversity. However, I don’t think the students had enough meaningful tasks by which they could really assimilate these new ideas. What I should have done is get them started on a transfer task or some intermediate project that focuses on the Understandings and Essential Questions. Passively reading through some material isn’t much of a learning experience for them. Next year?….?

## Day 84 – Drawing Conclusions

Science 9 – This is a result from one group’s experiment with yeast budding. Their experimental variable was water pH, and it looks like they may have made a mistake. This turns out to be excellent for everyone, because it will give the class a chance to apply reasoning to experimental results.

E: pH 3
F: pH 5
G: pH 7
H: pH 11

If maximum gas was at pH 7, then clearly the pH 3 solution should have less gas than the pH 5. It’s hard to tell from the picture, but balloon E was the 2nd fullest. A few students were able to apply CER (Claim, Evidence, Reasoning) to find this mistake.

## Day 83 – Universal Gravitation, oh my!

Physics 11 – Today was the kids’ introduction to calculating gravitational forces. It was supposed to be a pretty straightforward skill, since the students had already covered the concepts of gravity. I even anticipated one problem from last year: students mistook the units for G as a formula. So I mentioned this to the kids.

And then chaos ensued. Basically it came down to how (un)comfortable students are with using symbolic representations. Students can do this: $10=\frac{160}{x^2}$ but they can’t do this: $F_g=G\frac{m_1m_2}{r^2}$

They are fundamentally the same mathematical operation. For today, the one idea I gave students was to group the terms in their equation into a coefficient and a variable. For example, when finding $m_2$: $F_g = \left(\frac{Gm_1}{r^2}\right)m_2$ That helps to some extent, but this is still confusing when finding r: $F_g = \left(Gm_1m_2\right)\frac{1}{r^2}$ The solution to the above is to multiply both sides of the equation by the reciprocal of $Gm_1m_2$ and then take the inverse of $\frac{1}{r^2}$, at which point I might as well be speaking Latin. Somewhere in mathematics education we need to really, really emphasize that mathematics isn’t just about numbers, and that mathematical rules apply to all sorts of things. This is a huge cognitive gap with most students.

To finish things off, I had over a dozen students ask me what N and m stand for in the equation for G, and tell me that they didn’t know what to do with the equation.
So my question for readers of this blog: next year do I omit the units for G, or simply suffer through more questions about the “equation for G”?
http://cms.math.ca/cmb/msc/42
Canadian Mathematical Society (www.cms.math.ca), Publications → journals
Search: MSC category 42 (Fourier analysis). Results 1 - 25 of 72.

1. CMB Online first. Gurbuz, Ferit. Some estimates for generalized commutators of rough fractional maximal and integral operators on generalized weighted Morrey spaces.
In this paper, we establish $BMO$ estimates for generalized commutators of rough fractional maximal and integral operators on generalized weighted Morrey spaces, respectively.
Keywords: fractional integral operator, fractional maximal operator, rough kernel, generalized commutator, $A(p,q)$ weight, generalized weighted Morrey space. Categories: 42B20, 42B25.

2. CMB Online first. Liu, Feng; Wu, Huoxiong. Endpoint Regularity of Multisublinear Fractional Maximal Functions.
In this paper we investigate the endpoint regularity properties of the multisublinear fractional maximal operators, which include the multisublinear Hardy-Littlewood maximal operator. We obtain some new bounds for the derivative of the one-dimensional multisublinear fractional maximal operators acting on a vector-valued function $\vec{f}=(f_1,\dots,f_m)$ with all $f_j$ being $BV$-functions.
Keywords: multisublinear fractional maximal operators, Sobolev spaces, bounded variation. Categories: 42B25, 46E35.

3. CMB Online first. Liao, Fanghui; Liu, Zongguang. Some Properties of Triebel-Lizorkin and Besov Spaces Associated with Zygmund Dilations.
In this paper, using Calderón's reproducing formula and almost orthogonality estimates, we prove the lifting property and the embedding theorem of the Triebel-Lizorkin and Besov spaces associated with Zygmund dilations.
Keywords: Triebel-Lizorkin and Besov spaces, Riesz potential, Calderón's reproducing formula, almost orthogonality estimate, Zygmund dilation, embedding theorem. Categories: 42B20, 42B35.

4. CMB Online first. Jahan, Qaiser. Characterization of low-pass filters on local fields of positive characteristic.
In this article, we give necessary and sufficient conditions on a function to be a low-pass filter on a local field $K$ of positive characteristic associated to the scaling function for multiresolution analysis of $L^2(K)$. We use probability and martingale methods to provide such a characterization.
Keywords: multiresolution analysis, local field, low-pass filter, scaling function, probability, conditional probability and martingales. Categories: 42C40, 42C15, 43A70, 11S85.

5. CMB Online first. De Carli, Laura; Samad, Gohin Shaikh. One-parameter groups of operators and discrete Hilbert transforms.
We show that the discrete Hilbert transform and the discrete Kak-Hilbert transform are infinitesimal generators of one-parameter groups of operators in $\ell^2$.
Keywords: discrete Hilbert transform, groups of operators, isometries. Categories: 42A45, 42A50, 41A44.

6. CMB Online first. Hare, Kathryn; Ramsey, L. Thomas. The relationship between $\epsilon$-Kronecker sets and Sidon sets.
A subset $E$ of a discrete abelian group is called $\epsilon$-Kronecker if all $E$-functions of modulus one can be approximated to within $\epsilon$ by characters. $E$ is called a Sidon set if all bounded $E$-functions can be interpolated by the Fourier transform of measures on the dual group. As $\epsilon$-Kronecker sets with $\epsilon \lt 2$ possess the same arithmetic properties as Sidon sets, it is natural to ask if they are Sidon. We use the Pisier net characterization of Sidonicity to prove this is true.
Keywords: Kronecker set, Sidon set. Categories: 43A46, 42A15, 42A55.
7. CMB 2015 (vol 59 pp. 62). Feng, Han. Uncertainty Principles on Weighted Spheres, Balls and Simplexes.
This paper studies the uncertainty principle for spherical $h$-harmonic expansions on the unit sphere of $\mathbb{R}^d$ associated with a weight function invariant under a general finite reflection group, which is in full analogy with the classical Heisenberg inequality. Our proof is motivated by a new decomposition of the Dunkl-Laplace-Beltrami operator on the weighted sphere.
Keywords: uncertainty principle, Dunkl theory. Categories: 42C10, 42B10.

8. CMB 2015 (vol 59 pp. 104). He, Ziyi; Yang, Dachun; Yuan, Wen. Littlewood-Paley Characterizations of Second-Order Sobolev Spaces via Averages on Balls.
In this paper, the authors characterize second-order Sobolev spaces $W^{2,p}({\mathbb R}^n)$, with $p\in [2,\infty)$ and $n\in\mathbb N$ or $p\in (1,2)$ and $n\in\{1,2,3\}$, via the Lusin area function and the Littlewood-Paley $g_\lambda^\ast$-function in terms of ball means.
Keywords: Sobolev space, ball means, Lusin-area function, $g_\lambda^*$-function. Categories: 46E35, 42B25, 42B20, 42B35.

9. CMB 2015 (vol 58 pp. 877). Zaatra, Mohamed. Generating Some Symmetric Semi-classical Orthogonal Polynomials.
We show that if $v$ is a regular semi-classical form (linear functional), then the symmetric form $u$ defined by the relation $x^{2}\sigma u = -\lambda v$, where $(\sigma f)(x)=f(x^{2})$ and the odd moments of $u$ are $0$, is also a regular semi-classical form for every complex $\lambda$ except for a discrete set of numbers depending on $v$. We give explicitly the three-term recurrence relation and the structure relation coefficients of the orthogonal polynomial sequence associated with $u$, and the class of the form $u$ knowing that of $v$. We conclude with an illustrative example.
Keywords: orthogonal polynomials, quadratic decomposition, semi-classical forms, structure relation. Categories: 33C45, 42C05.

10. CMB 2015 (vol 59 pp. 211). Totik, Vilmos. Universality Under Szegő's Condition.
This paper presents a theorem on universality of orthogonal polynomials/random matrices under a weak local condition on the weight function $w$. With a new inequality for polynomials and with the use of fast decreasing polynomials, it is shown that an approach of D. S. Lubinsky is applicable. The proof works at all points which are Lebesgue points both for the weight function $w$ and for $\log w$.
Keywords: universality, random matrices, Christoffel functions, asymptotics, potential theory. Categories: 42C05, 60B20, 30C85, 31A15.

11. CMB 2015 (vol 58 pp. 757). Han, Yanchang. Embedding Theorem for Inhomogeneous Besov and Triebel-Lizorkin Spaces on RD-spaces.
In this article we prove the embedding theorem for inhomogeneous Besov and Triebel-Lizorkin spaces on RD-spaces. The crucial idea is to use the geometric density condition on the measure.
Keywords: spaces of homogeneous type, test function space, distributions, Calderón reproducing formula, Besov and Triebel-Lizorkin spaces, embedding. Categories: 42B25, 46F05, 46E35.

12. CMB 2015 (vol 58 pp. 507). Hsu, Ming-Hsiu; Lee, Ming-Yi. VMO Space Associated with Parabolic Sections and its Application.
In this paper we define the $VMO_\mathcal{P}$ space associated with a family $\mathcal{P}$ of parabolic sections and show that the dual of $VMO_\mathcal{P}$ is the Hardy space $H^1_\mathcal{P}$. As an application, we prove that almost everywhere convergence of a bounded sequence in $H^1_\mathcal{P}$ implies weak* convergence.
Keywords: Monge-Ampère equation, parabolic section, Hardy space, BMO, VMO. Category: 42B30.

13. CMB 2015 (vol 58 pp. 808). Liu, Feng; Wu, Huoxiong. On the Regularity of the Multisublinear Maximal Functions.
This paper is concerned with the study of the regularity of the multisublinear maximal operator. It is proved that the multisublinear maximal operator is bounded on first-order Sobolev spaces. Moreover, two key pointwise inequalities for the partial derivatives of the multisublinear maximal functions are established. As an application, the quasi-continuity of the multisublinear maximal function is also obtained.
Keywords: regularity, multisublinear maximal operator, Sobolev spaces, partial derivative, quasicontinuity. Categories: 42B25, 46E35.

14. CMB 2014 (vol 58 pp. 432). Yang, Dachun; Yang, Sibei. Second-order Riesz Transforms and Maximal Inequalities Associated with Magnetic Schrödinger Operators.
Let $A:=-(\nabla-i\vec{a})\cdot(\nabla-i\vec{a})+V$ be a magnetic Schrödinger operator on $\mathbb{R}^n$, where $\vec{a}:=(a_1,\dots, a_n)\in L^2_{\mathrm{loc}}(\mathbb{R}^n,\mathbb{R}^n)$ and $0\le V\in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ satisfy some reverse Hölder conditions. Let $\varphi\colon \mathbb{R}^n\times[0,\infty)\to[0,\infty)$ be such that $\varphi(x,\cdot)$ for any given $x\in\mathbb{R}^n$ is an Orlicz function, $\varphi(\cdot,t)\in {\mathbb A}_{\infty}(\mathbb{R}^n)$ for all $t\in (0,\infty)$ (the class of uniformly Muckenhoupt weights) and its uniformly critical upper type index $I(\varphi)\in(0,1]$. In this article, the authors prove that the second-order Riesz transforms $VA^{-1}$ and $(\nabla-i\vec{a})^2A^{-1}$ are bounded from the Musielak-Orlicz-Hardy space $H_{\varphi,\,A}(\mathbb{R}^n)$, associated with $A$, to the Musielak-Orlicz space $L^{\varphi}(\mathbb{R}^n)$. Moreover, the authors establish the boundedness of $VA^{-1}$ on $H_{\varphi, A}(\mathbb{R}^n)$. As applications, some maximal inequalities associated with $A$ in the scale of $H_{\varphi, A}(\mathbb{R}^n)$ are obtained.
Keywords: Musielak-Orlicz-Hardy space, magnetic Schrödinger operator, atom, second-order Riesz transform, maximal inequality. Categories: 42B30, 42B35, 42B25, 35J10, 42B37, 46E30.

15. CMB 2014 (vol 58 pp. 19). Chen, Jiecheng; Hu, Guoen. Compact Commutators of Rough Singular Integral Operators.
Let $b\in \mathrm{BMO}(\mathbb{R}^n)$ and $T_{\Omega}$ be the singular integral operator with kernel $\frac{\Omega(x)}{|x|^n}$, where $\Omega$ is homogeneous of degree zero, integrable and has mean value zero on the unit sphere $S^{n-1}$. In this paper, by Fourier transform estimates and approximation of the operator $T_{\Omega}$ by integral operators with smooth kernels, it is proved that if $b\in \mathrm{CMO}(\mathbb{R}^n)$ and $\Omega$ satisfies a certain minimal size condition, then the commutator generated by $b$ and $T_{\Omega}$ is a compact operator on $L^p(\mathbb{R}^n)$ for appropriate index $p$. The associated maximal operator is also considered.
Keywords: commutator, singular integral operator, compact operator, maximal operator. Category: 42B20.

16. CMB 2014 (vol 58 pp. 144). Olevskii, Victor. Localization and Completeness in $L_2({\mathbb R})$.
We give a necessary and sufficient condition for a sequence to be a localization set for a determining average sampler.
Keywords: localization, completeness, average sampling. Categories: 42C30, 94A20.
17. CMB 2014 (vol 57 pp. 834). Koh, Doowon. Restriction Operators Acting on Radial Functions on Vector Spaces Over Finite Fields.
We study $L^p-L^r$ restriction estimates for algebraic varieties $V$ in the case when restriction operators act on radial functions in the finite field setting. We show that if the varieties $V$ lie in odd-dimensional vector spaces over finite fields, then the conjectured restriction estimates are possible for all radial test functions. In addition, assuming that the varieties $V$ are defined in even-dimensional spaces and have few intersection points with the sphere of zero radius, we also obtain the conjectured exponents for all radial test functions.
Keywords: finite fields, radial functions, restriction operators. Categories: 42B05, 43A32, 43A15.

18. CMB 2013 (vol 57 pp. 463). Bownik, Marcin; Jasper, John. Constructive Proof of Carpenter's Theorem.
We give a constructive proof of Carpenter's Theorem due to Kadison. Unlike the original proof, our approach also yields the real case of this theorem.
Keywords: diagonals of projections, the Schur-Horn theorem, the Pythagorean theorem, the Carpenter theorem, spectral theory. Categories: 42C15, 47B15, 46C05.

19. CMB 2013 (vol 57 pp. 254). Christensen, Ole; Kim, Hong Oh; Kim, Rae Young. On Parseval Wavelet Frames with Two or Three Generators via the Unitary Extension Principle.
The unitary extension principle (UEP) by Ron and Shen yields a sufficient condition for the construction of Parseval wavelet frames with multiple generators. In this paper we characterize the UEP-type wavelet systems that can be extended to a Parseval wavelet frame by adding just one UEP-type wavelet system. We derive a condition that is necessary for the extension of a UEP-type wavelet system to any Parseval wavelet frame with any number of generators, and prove that this condition is also sufficient to ensure that an extension with just two generators is possible.
Keywords: Bessel sequences, frames, extension of wavelet Bessel system to tight frame, wavelet systems, unitary extension principle. Categories: 42C15, 42C40.

20. CMB 2013 (vol 56 pp. 729). Currey, B.; Mayeli, A. The Orthonormal Dilation Property for Abstract Parseval Wavelet Frames.
In this work we introduce a class of discrete groups containing subgroups of abstract translations and dilations, respectively. A variety of wavelet systems can appear as $\pi(\Gamma)\psi$, where $\pi$ is a unitary representation of a wavelet group and $\Gamma$ is the abstract pseudo-lattice. We prove a condition in order that a Parseval frame $\pi(\Gamma)\psi$ can be dilated to an orthonormal basis of the form $\tau(\Gamma)\Psi$, where $\tau$ is a super-representation of $\pi$. For a subclass of groups that includes the case where the translation subgroup is Heisenberg, we show that this condition always holds, and we cite familiar examples as applications.
Keywords: frame, dilation, wavelet, Baumslag-Solitar group, shearlet. Categories: 43A65, 42C40, 42C15.

21. CMB 2013 (vol 56 pp. 745). Fu, Xiaoye; Gabardo, Jean-Pierre. Dimension Functions of Self-Affine Scaling Sets.
In this paper, the dimension function of a self-affine generalized scaling set associated with an $n\times n$ integral expansive dilation $A$ is studied. More specifically, we consider the dimension function of an $A$-dilation generalized scaling set $K$ assuming that $K$ is a self-affine tile satisfying $BK = (K+d_1) \cup (K+d_2)$, where $B=A^t$, $A$ is an $n\times n$ integral expansive matrix with $\lvert \det A\rvert=2$, and $d_1,d_2\in\mathbb{R}^n$.
We show that the dimension function of $K$ must be constant if either $n=1$ or $2$ or one of the digits is $0$, and that it is bounded by $2\lvert K\rvert$ for any $n$.
Keywords: scaling set, self-affine tile, orthonormal multiwavelet, dimension function. Category: 42C40.

22. CMB 2012 (vol 56 pp. 801). Oberlin, Richard. Estimates for Compositions of Maximal Operators with Singular Integrals.
We prove weak-type $(1,1)$ estimates for compositions of maximal operators with singular integrals. Our main object of interest is the operator $\Delta^*\Psi$, where $\Delta^*$ is Bourgain's maximal multiplier operator and $\Psi$ is the sum of several modulated singular integrals; here our method yields a significantly improved bound for the $L^q$ operator norm when $1 \lt q \lt 2$. We also consider associated variation-norm estimates.
Keywords: maximal operator, Calderón-Zygmund. Category: 42A45.

23. CMB 2011 (vol 56 pp. 326). Erdoğan, M. Burak; Oberlin, Daniel M. Restricting Fourier Transforms of Measures to Curves in $\mathbb R^2$.
We establish estimates for restrictions to certain curves in $\mathbb R^2$ of the Fourier transforms of some fractal measures.
Keywords: Fourier transforms of fractal measures, Fourier restriction. Categories: 42B10, 28A12.

24. CMB 2011 (vol 56 pp. 3). Aïssiou, Tayeb. Semiclassical Limits of Eigenfunctions on Flat $n$-Dimensional Tori.
We provide a proof of a conjecture by Jakobson, Nadirashvili, and Toth stating that on an $n$-dimensional flat torus $\mathbb T^{n}$, the Fourier transforms of squares of the eigenfunctions $|\varphi_\lambda|^2$ of the Laplacian have uniform $l^n$ bounds that do not depend on the eigenvalue $\lambda$. The proof is a generalization of an argument by Jakobson et al. for the lower-dimensional cases. These results imply uniform bounds for semiclassical limits on $\mathbb T^{n+2}$. We also prove a geometric lemma that bounds the number of codimension-one simplices satisfying a certain restriction on an $n$-dimensional sphere $S^n(\lambda)$ of radius $\sqrt{\lambda}$, and we use it in the proof.
Keywords: semiclassical limits, eigenfunctions of Laplacian on a torus, quantum limits. Categories: 58G25, 81Q50, 35P20, 42B05.

25. CMB 2011 (vol 55 pp. 646). Zhou, Jiang; Ma, Bolin. Marcinkiewicz Commutators with Lipschitz Functions in Non-homogeneous Spaces.
Under the assumption that $\mu$ is a nondoubling measure, we study certain commutators generated by the Lipschitz function and the Marcinkiewicz integral whose kernel satisfies a Hörmander-type condition. We establish the boundedness of these commutators on the Lebesgue spaces, Lipschitz spaces, and Hardy spaces. Our results are extensions of known theorems in the doubling case.
Keywords: non-doubling measure, Marcinkiewicz integral, commutator, ${\rm Lip}_{\beta}(\mu)$, $H^1(\mu)$. Categories: 42B25, 47B47, 42B20, 47A30.
https://kulturforum-metzingen.de/7nfknc/9559d2-what-does-a-95%25-confidence-interval-mean
Actually, the chosen family will hardly contain the true measure; moreover, this true measure may not even exist. That is, after observing the data $x$, we cannot employ probabilistic reasoning anymore.

If you are only guessing your friend's coin flips with 50% heads/tails, then you are not doing it right. These are the upper and lower bounds of the confidence interval.

What is the fundamental confidence interval fallacy? I can also pick any number I want for b. I will pick 3. Your overall guess might be, if you cheat, right x > 50% of the time, but that does not necessarily mean that the probability for every particular throw was constantly x% heads.

This will only be accurate if our prior is accurate (and other assumptions, such as the form of the likelihood, hold). I just ran the script a bunch of times, and it's actually not too uncommon to find that less than 94% of the CIs contained the true mean. As degrees of freedom increase, the shape of the t distribution approaches that of the normal z distribution. Probably the thing that makes the CI confusing is its name.

That is to say, given the same model and data, achieving 99% confidence would require a wider interval than would achieving 95% confidence. But it might not be! Because σ was not known, she used a Student's t distribution.

Let the parameter be $\mathfrak{p}$ and the statistic be $\mathfrak{s}$. But we can explicitly express these probabilities by using a … For example, if you are estimating a 95% confidence interval around the mean proportion of female babies born every year based on a random sample of babies, you might find an upper bound of 0.56 and a lower bound of 0.48. On the other hand, after observing the data $x$, $C_\alpha(x)$ is just a fixed set, and the probability that "$C_\alpha(x)$ contains the mean $\mu_\theta$" should be in $\{0,1\}$ for all $\theta \in \Theta$.

Which of the following descriptions of confidence intervals is correct? As others have said, for frequentists you can't assign a probability to an event having occurred, but rather you can describe the probability of an event occurring in the future using a given process. Like you in 2012, I'm struggling to see how this doesn't imply that a 95% confidence interval has a 95% probability of containing the mean (compared with a Bayesian central posterior interval, with $\mu$ the "true mean") as a function of $P(L_1'<\bar{X}-\mu$ …
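The "script" referred to above is not shown anywhere in this discussion, but the kind of coverage simulation it describes is easy to sketch. The following is a minimal illustration, not the original code; the population parameters, sample size, and trial count are all arbitrary demo choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, trials = 10.0, 2.0, 20, 10_000   # arbitrary demo values
tcrit = stats.t.ppf(0.975, df=n - 1)           # two-sided 95% critical value

covered = 0
for _ in range(trials):
    x = rng.normal(mu, sigma, size=n)
    half = tcrit * x.std(ddof=1) / np.sqrt(n)  # half-width of the t-interval
    covered += (x.mean() - half <= mu <= x.mean() + half)

print(covered / trials)  # long-run proportion of intervals covering mu, ~0.95
```

Any single batch can easily come out below 0.94 or above 0.96; the 95% is a statement about the procedure's long-run behaviour, not about any one computed interval.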
http://math.stackexchange.com/questions/580775/how-to-prove-trigonometry-equation
# How to prove Trigonometry equation?

How do I prove the following equation?
$$\tan^{-1}\left(\frac{1}{4}\right) + \tan^{-1}\left(\frac{1}{9}\right) = \cos^{-1}\left(\frac{3}{5}\right)$$

- Have you attempted anything? – Zhoe Nov 25 '13 at 17:04
- I was trying with the tan a + tan b formula but was not able to convert it – subodh joshi Nov 25 '13 at 17:05
- One problem is of course that the equation is wrong. – Daniel Fischer Nov 25 '13 at 17:10
- There may be a mistake in the equation – Dutta Nov 25 '13 at 17:10
- *Disprove* – Alizter Nov 25 '13 at 17:35

Comments on an answer:
- U mean above equation wrong? – user110715 Nov 25 '13 at 17:28
- @user110715 Yes. – Felix Marin Nov 25 '13 at 17:28

Answer: From this, or Ex #5 of page #276 of this,
$$\tan^{-1}x+\tan^{-1}y=\tan^{-1}\frac{x+y}{1-xy} \quad\text{if } xy<1.$$
Now, as the principal value of $\tan^{-1}$ lies in $\left[-\frac\pi2,\frac\pi2\right]$: if $\tan^{-1}z=\theta$, then $\tan\theta=z$ and $\cos\theta=+\frac1{\sqrt{1+z^2}}$.

- how u made cosQ like that? – user110715 Nov 25 '13 at 17:14
- @user110715, $\sec^2\theta=1+\tan^2\theta$, right? – lab bhattacharjee Nov 25 '13 at 17:16

Another answer: After you apply the formula, let $\cos\theta = \frac{3}{5}$. Now convert this to $\tan\theta$, which is trivial, and compare both sides.
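A quick check of the commenters' claim, using the addition formula from the answer (this computation is mine, not from the thread): with $x=\frac14$ and $y=\frac19$, so that $xy=\frac1{36}<1$,
$$\tan^{-1}\frac14+\tan^{-1}\frac19=\tan^{-1}\frac{\frac14+\frac19}{1-\frac1{36}}=\tan^{-1}\frac{13/36}{35/36}=\tan^{-1}\frac{13}{35},$$
while $\cos^{-1}\frac35=\tan^{-1}\frac43$ (a 3-4-5 right triangle). Since $\frac{13}{35}\neq\frac43$, the two sides differ, so the stated identity is indeed false.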
https://www.shaalaa.com/question-paper-solution/maharashtra-state-board-hsc-mathematics-statistics-arts-science-12th-board-exam-2016-2017_10312
# Mathematics and Statistics 2015-2016 HSC Arts 12th Board Exam Question Paper Solution

Mathematics and Statistics. Marks: 80. Academic Year: 2015-2016. Date: July 2016.

[12] 1
[6] 1.1 | Select and write the correct answer from the given alternatives in each of the following sub-questions:

[2] 1.1.1 The inverse of the statement pattern (p ∨ q) → (p ∧ q) is
(A) (p ∧ q) → (p ∨ q)
(B) ∼(p ∨ q) → (p ∧ q)
(C) (∼p ∨ ∼q) → (∼p ∧ ∼q)
(D) (∼p ∧ ∼q) → (∼p ∨ ∼q)
Concept: Mathematical Logic - Sentences and Statement in Logic. Chapter: [0.01] Mathematical Logic.

[2] 1.1.2 If the vectors $2\hat{i}-q\hat{j}+3\hat{k}$ and $4\hat{i}-5\hat{j}+6\hat{k}$ are collinear, then the value of q is
(A) 5 (B) 10 (C) $\frac{5}{2}$ (D) $\frac{5}{4}$
Concept: Vectors - Collinearity and Coplanarity of Vectors. Chapter: [0.07] Vectors.

[2] 1.1.3 If in ∆ABC, with usual notations, a = 18, b = 24, c = 30, then $\sin\frac{A}{2}$ is equal to
(A) $\frac{1}{\sqrt{5}}$ (B) $\frac{1}{\sqrt{10}}$ (C) $\frac{1}{\sqrt{15}}$ (D) $\frac{1}{2\sqrt{5}}$
Concept: Trigonometric Functions - Solution of a Triangle. Chapter: [0.03] Trigonometric Functions.

[6] 1.2 | Attempt any THREE of the following:

[2] 1.2.1 Find the angle between the lines $\bar{r}=3\hat{i}+2\hat{j}-4\hat{k}+\lambda(\hat{i}+2\hat{j}+2\hat{k})$ and $\bar{r}=5\hat{i}-2\hat{k}+\mu(3\hat{i}+2\hat{j}+6\hat{k})$.
Concept: Acute Angle Between the Lines. Chapter: [0.04] Pair of Straight Lines.

[2] 1.2.2 If p, q, r are statements with truth values T, F, T respectively, then find the truth value of (r ∧ q) ↔ ∼p.
Concept: Mathematical Logic - Truth Value of Statement in Logic. Chapter: [0.01], [0.011] Mathematical Logic.

[2] 1.2.3 If $A=\begin{bmatrix}2&-3\\3&5\end{bmatrix}$, then find $A^{-1}$ by the adjoint method.
Concept: Determinants - Adjoint Method. Chapter: [0.02] Matrices.

[2] 1.2.4 By the vector method, show that the quadrilateral with vertices A(1, 2, –1), B(8, –3, –4), C(5, –4, 1), D(–2, 1, 4) is a parallelogram.
Concept: Vectors - Diagonals of a Parallelogram Bisect Each Other and Converse. Chapter: [0.07] Vectors.

[2] 1.2.5 Find the general solution of the equation sin x = tan x.
Concept: Trigonometric Functions - General Solution of Trigonometric Equation of the Type. Chapter: [0.03] Trigonometric Functions.

[14] 2
[6] 2.1 | Attempt any TWO of the following:

[3] 2.1.1 Find the joint equation of the pair of lines passing through the origin and perpendicular to the lines represented by $ax^2+2hxy+by^2=0$.
Concept: Pair of Straight Lines - Pair of Lines Passing Through Origin - Homogenous Equation. Chapter: [0.04] Pair of Straight Lines.

[3] 2.1.2 Find the principal value of $\sin^{-1}\left(\frac{1}{\sqrt2}\right)$.
Concept: Basic Concepts of Trigonometric Functions. Chapter: [0.03] Trigonometric Functions.

[3] 2.1.3 Find the cartesian form of the equation of the plane $\bar{r}=(\hat{i}+\hat{j})+s(\hat{i}-\hat{j}+2\hat{k})+t(\hat{i}+2\hat{j}+\hat{k})$.
Concept: Vector and Cartesian Equation of a Plane. Chapter: [0.1] Plane.

[8] 2.2 | Attempt any TWO of the following:

[4] 2.2.1 Simplify the following circuit so that the new circuit has the minimum number of switches. Also draw the simplified circuit.
Concept: Mathematical Logic - Application of Logic to Switching Circuits, Switching Table. Chapter: [0.01], [0.011] Mathematical Logic.

[4] 2.2.2 A line makes angles of measures 45° and 60° with the positive directions of the y and z axes respectively. Find the d.c.s. of the line, and also find the vector of magnitude 5 along the direction of the line.
Concept: Concept of Line - Equation of Line Passing Through Given Point and Parallel to Given Vector. Chapter: [0.09] Line.

[4] 2.2.3 Solve the following LPP by the graphical method:
Maximize: z = 3x + 5y
Subject to: x + 4y ≤ 24; 3x + y ≤ 21; x + y ≤ 9; x ≥ 0, y ≥ 0.
Concept: Graphical Method of Solving Linear Programming Problems. Chapter: [0.017] Linear Programming, [0.11] Linear Programming Problems.

[14] 3
[6] 3.1 | Attempt any TWO of the following:

[3] 3.1.1 Find the shortest distance between the lines $\frac{x+1}{7}=\frac{y+1}{-6}=\frac{z+1}{1}$ and $\frac{x-3}{1}=\frac{y-5}{-2}=\frac{z-7}{1}$.
Concept: Shortest Distance Between Two Lines. Chapter: [0.09] Line.

[3] 3.1.2 Show that the points (1, –1, 3) and (3, 4, 3) are equidistant from the plane 5x + 2y – 7z + 8 = 0.
Concept: Distance of a Point from a Plane. Chapter: [0.016] Line and Plane, [0.1] Plane.

[3] 3.1.3 In any triangle ABC, with usual notations, prove c = a cos B + b cos A.
Concept: Trigonometric Functions - General Solution of Trigonometric Equation of the Type. Chapter: [0.03] Trigonometric Functions.

[8] 3.2 | Attempt any TWO of the following:

[4] 3.2.1 Find p and k if the equation $px^2-8xy+3y^2+14x+2y+k=0$ represents a pair of perpendicular lines.
Concept: Concept of Line - Equation of Line Passing Through Given Point and Parallel to Given Vector. Chapter: [0.09] Line.

[4] 3.2.2 The cost of 4 dozen pencils, 3 dozen pens and 2 dozen erasers is Rs. 60. The cost of 2 dozen pencils, 4 dozen pens and 6 dozen erasers is Rs. 90, whereas the cost of 6 dozen pencils, 2 dozen pens and 3 dozen erasers is Rs. 70. Find the cost of each item per dozen by using matrices.
Concept: Elementary Operation (Transformation) of a Matrix. Chapter: [0.012] Matrices, [0.02] Matrices.

[4] 3.2.3 Find the volume of the parallelopiped whose coterminus edges are given by the vectors $2\hat{i}+5\hat{j}-4\hat{k}$, $5\hat{i}+7\hat{j}+5\hat{k}$ and $4\hat{i}+5\hat{j}-2\hat{k}$.
Concept: Scalar Triple Product of Vectors. Chapter: [0.015] Vectors, [0.07] Vectors.

[12] 4
[6] 4.1 | Select and write the correct answer from the given alternatives in each of the following sub-questions:

[2] 4.1.1 The order and degree of the differential equation $\left[1+\left(\frac{dy}{dx}\right)^3\right]^{7/3}=7\frac{d^2y}{dx^2}$ are respectively
(A) 2, 3 (B) 3, 2 (C) 7, 2 (D) 3, 7
Concept: Order and Degree of a Differential Equation. Chapter: [0.026] Differential Equations, [0.17] Differential Equation.

[2] 4.1.2 $\int_4^9 \frac{1}{\sqrt{x}}\,dx=$ _____
(A) 1 (B) –2 (C) 2 (D) –1
Concept: Properties of Definite Integrals. Chapter: [0.15] Integration.

[2] 4.1.3 If the p.d.f. of a continuous random variable X is given as $f(x)=\frac{x^2}{3}$ for $-1<x<2$ and $=0$ otherwise, then the c.d.f. of X is
(A) $\frac{x^3}{9}+\frac{1}{9}$ (B) $\frac{x^3}{9}-\frac{1}{9}$ (C) $\frac{x^2}{4}+\frac{1}{4}$ (D) $\frac{1}{9x^3}+\frac{1}{9}$
Concept: Probability Distribution - Probability Density Function (P.D.F.). Chapter: [0.19] Probability Distribution.

[6] 4.2 | Attempt any THREE of the following:

[2] 4.2.1 If $y=\sec\sqrt{x}$, then find $\frac{dy}{dx}$.
Concept: Derivative - Derivative of Functions in Product of Function Form. Chapter: [0.13] Differentiation.

[2] 4.2.2 Evaluate: $\int\frac{x+1}{(x+2)(x+3)}\,dx$.
Concept: Methods of Integration - Integration Using Partial Fractions. Chapter: [0.023] Indefinite Integration, [0.15] Integration.

[2] 4.2.3 Find the area of the region lying in the first quadrant bounded by the curve $y^2=4x$, the X axis and the lines x = 1, x = 4.
Concept: Area of the Region Bounded by a Curve and a Line. Chapter: [0.16] Applications of Definite Integral.

[2] 4.2.4 For the differential equation, find the general solution: sec²x tan y dx + sec²y tan x dy = 0.
Concept: Methods of Solving First Order, First Degree Differential Equations - Differential Equations with Variables Separable Method. Chapter: [0.17] Differential Equation.

[2] 4.2.5 Given X ~ B(n, p). If E(X) = 6 and Var(X) = 4.2, find the value of n.
Concept: Bernoulli Trial - Calculation of Probabilities. Chapter: [0.2] Bernoulli Trials and Binomial Distribution.

[14] 5
[6] 5.1 | Attempt any TWO of the following:

[2] 5.1.1 If the function $f(x)=\frac{(4^{\sin x}-1)^2}{x\log(1+2x)}$ for x ≠ 0 is continuous at x = 0, find f(0).
Concept: Continuity of Some Standard Functions - Trigonometric Function. Chapter: [0.12] Continuity.

[2] 5.1.2 Evaluate: $\int\frac{1}{3+2\sin x+\cos x}\,dx$.
Concept: Methods of Integration - Integration by Substitution. Chapter: [0.023] Indefinite Integration, [0.15] Integration.

[2] 5.1.3 If y = f(x) is a differentiable function of x such that the inverse function x = f⁻¹(y) exists, then prove that x is a differentiable function of y and $\frac{dx}{dy}=\frac{1}{\left(\frac{dy}{dx}\right)}$, where $\frac{dy}{dx}\neq 0$.
Concept: Derivative - Derivative of Inverse Function. Chapter: [0.13] Differentiation.

[8] 5.2 | Attempt any TWO of the following:

[4] 5.2.1 A point source of light is hung 30 feet directly above a straight horizontal path on which a man 6 feet in height is walking. How fast will the man's shadow lengthen, and how fast will the tip of the shadow move, when he is walking away from the light at the rate of 100 ft/min?
Concept: Rate of Change of Bodies Or Quantities. Chapter: [0.14] Applications of Derivative.

[4] 5.2.2 The probability mass function for X = number of major defects in a randomly selected appliance of a certain type is:
X = x: 0, 1, 2, 3, 4
P(X = x): 0.08, 0.15, 0.45, 0.27, 0.05
Find the expected value and variance of X.
Concept: Variance of Binomial Distribution (P.M.F.). Chapter: [0.028] Binomial Distribution, [0.2] Bernoulli Trials and Binomial Distribution.

[4] 5.2.3 Prove that $\int_0^a f(x)\,dx=\int_0^a f(a-x)\,dx$, and hence evaluate $\int_0^{\pi/2}\frac{\sin x}{\sin x+\cos x}\,dx$.
Concept: Properties of Definite Integrals. Chapter: [0.15] Integration.

[14] 6
[6] 6.1 | Attempt any TWO of the following:

[3] 6.1.1 If $y=e^{\tan x}+(\log x)^{\tan x}$, then find $\frac{dy}{dx}$.
Concept: General and Particular Solutions of a Differential Equation. Chapter: [0.17] Differential Equation.

[3] 6.1.2 If the probability that a fluorescent light has a useful life of at least 800 hours is 0.9, find the probability that among 20 such lights at least 2 will not have a useful life of at least 800 hours.
[Given: $(0.9)^{19} = 0.1348$]
Concept: Random Variables and Its Probability Distributions. Chapter: [0.027] Probability Distributions, [0.19] Probability Distribution.

[3] 6.1.3 Find a and b so that the function f(x) defined by
f(x) = –2 sin x, for –π ≤ x ≤ –π/2
= a sin x + b, for –π/2 ≤ x ≤ π/2
= cos x, for π/2 ≤ x ≤ π
is continuous on [–π, π].
Concept: Definition of Continuity - Continuity of a Function at a Point. Chapter: [0.12] Continuity.

[8] 6.2 | Attempt any TWO of the following:

[4] 6.2.1 Find the equation of a curve passing through the point (0, 2), given that the sum of the coordinates of any point on the curve exceeds the slope of the tangent to the curve at that point by 5.
Concept: Area of the Region Bounded by a Curve and a Line. Chapter: [0.16] Applications of Definite Integral.

[4] 6.2.2 If u and v are two functions of x, then prove that $\int uv\,dx=u\int v\,dx-\int\left[\frac{du}{dx}\int v\,dx\right]dx$. Hence evaluate $\int xe^x\,dx$.
Concept: Methods of Integration - Integration by Parts. Chapter: [0.023] Indefinite Integration, [0.15] Integration.

[4] 6.2.3 Find the approximate value of $\log_{10}(1016)$, given that $\log_{10}e = 0.4343$.
Concept: Approximations. Chapter: [0.022] Applications of Derivatives, [0.14] Applications of Derivative.

## Maharashtra State Board previous year question papers 12th Board Exam Mathematics and Statistics with solutions 2015 - 2016

Maharashtra State Board 12th Board Exam Maths question paper solutions are key to scoring more marks in final exams. Students who have used our past year paper solutions have significantly improved in speed and boosted their confidence to solve any question in the examination. Our Maharashtra State Board 12th Board Exam Maths question paper for 2016 serves as a catalyst in preparing for your Mathematics and Statistics board examination. Previous year question papers for Maharashtra State Board 12th Board Exam Maths 2016 are solved by experts. Solved question papers give you the chance to check yourself after your mock test. By referring to the question paper solutions for Mathematics and Statistics, you can scale your preparation level and work on your weak areas. They will also help candidates develop time-management skills. Practice makes perfect, and there is no better way to practice than to attempt previous years' question paper solutions for the Maharashtra State Board 12th Board Exam.

How do Maharashtra State Board 12th Board Exam question paper solutions help students?
• Question paper solutions for Mathematics and Statistics help students prepare for the exam.
• Question papers with answers boost students' confidence at exam time and give an idea of the important questions and topics to prepare for the board exam.
• With solved question papers, there is no need to refer to multiple sources such as textbooks or guides.
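As a worked instance of the level of computation the one- and two-mark questions above expect (my own working, not part of the linked solutions), take Q4.1.2:
$$\int_4^9 \frac{1}{\sqrt{x}}\,dx=\Big[2\sqrt{x}\Big]_4^9=2(3)-2(2)=2,$$
so the correct alternative is (C).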
https://mathshistory.st-andrews.ac.uk/Extras/Lusztig_citation/
# Steele Prize citation for Lusztig In 2008 the American Mathematical Society awarded George Lusztig their Leroy P Steele Prize for Lifetime Achievement. What follows is the citation for the award: The work of George Lusztig has entirely reshaped representation theory and in the process changed much of mathematics. Here is how representation theory looked before Lusztig entered the field in 1973. A central goal of the subject is to describe the irreducible representations of a group. The case of reductive groups over locally compact fields is classically one of the most difficult and important parts. There were three more or less separate subjects, corresponding to groups over $\mathbb{R}$ (Lie groups), $\mathbb{Q}_{p}$ (p-adic groups), and finite fields (finite Chevalley groups). Lusztig's first great contribution was to the representation theory of groups over finite fields. In a 1974 book he showed how to construct "standard" representations - the building blocks of the theory - in the case of general linear groups. Then, working with Deligne, he defined standard representations for all finite Chevalley groups. This was mathematics that had been studied for nearly a hundred years; Lusztig and Deligne did more in one paper than everything that had gone before. With the standard representations in hand (in the finite field case), Lusztig turned to describing irreducible representations. The first step is simply to get a list of irreducible representations. This he did almost immediately for the "classical groups", like the orthogonal groups over a finite field. The general case required deep new ideas about connections among three topics: irreducible representations of reductive groups, the representations of the Weyl group, and the geometry of the unipotent cone. Although some key results were contributed by other (great!) mathematicians like T Springer, the deepest new ideas about these connections came from Lusztig, sometimes in work with Kazhdan. Lusztig's results allowed him to translate the problem of describing irreducible representations of a finite Chevalley group into a problem about the Weyl group. This allowed results about the symmetric group (like the Robinson-Schensted algorithm and the character theory of Frobenius and Schur) to be translated into descriptions of the irreducible representations of finite classical groups. For the exceptional groups, Lusztig was asking an entirely new family of questions about the Weyl groups, and considerable insight was needed to arrive at complete answers, but eventually he did so. Lusztig's new questions about Weyl groups originate in his 1979 paper with Kazhdan. The little that was known about irreducible representations first becomes badly behaved in some very specific examples in $SL(4, \mathbb{C})$. Kazhdan and Lusztig noticed that their new questions about Weyl groups first had nontrivial answers in exactly these same examples (for the symmetric group on four letters). In an incredible leap of imagination, they conjectured a complete and detailed description of singular irreducible representations (for reductive groups over the complex numbers) in terms of their new ideas about Weyl groups. This (in its earliest incarnation) is the Kazhdan-Lusztig conjecture. The first half of the proof was given by Kazhdan and Lusztig themselves, and the second half by Beilinson-Bernstein and Brylinski-Kashiwara independently. 
The structure of the proof is now a paradigm for representation theory: use combinatorics on a Weyl group to calculate some geometric invariants, relate the geometry to representation theory, and draw conclusions about irreducible representations. Lusztig has used this paradigm in an unbelievably wide variety of settings. One striking case is that of groups over p-adic fields. In that setting Langlands formulated a conjectural parametrization of irreducible representations around 1970. Deligne refined this conjecture substantially, and many more mathematicians have worked on it. Lusztig (jointly with Kazhdan) showed how to prove the Deligne-Langlands conjecture in an enormous family of new cases. This work has given new direction to the representation theory of p-adic groups. There is much more to say: about Lusztig's work on quantum groups, on modular representation theory, and on affine Hecke algebras, for instance. His work has touched widely separated parts of mathematics, reshaping them and knitting them together. He has built new bridges to combinatorics and algebraic geometry, solving classical problems in those disciplines and creating exciting new ones. This is a remarkable career and as exciting to watch today as it was at the beginning more than thirty years ago. George Lusztig began his response as follows: When writing a response it is very difficult to say something that has not been said before. Therefore, I thought that I might give some quotes from responses of previous Steele Prize recipients which very accurately describe my sentiments. "What a pleasant surprise!" (Y Katznelson, 2002). "I feel honored and pleased to receive the Steele prize - with a small nuance, that it is awarded for work done up to now" (D Sullivan, 2006). "I always thought this prize was for an old person, certainly someone older than I, and so it was a surprise to me, if a pleasant one, to learn that I was chosen a recipient" (G Shimura, 1996). "But if ideas tumble out in such a profusion, then why aren't they here now when I need them to write this little acceptance?" (J H Conway, 2000). Now, I thank the Steele Prize Committee for selecting me for this prize. It is an unexpected honor, and I am delighted to accept it. I am indebted to my teachers, collaborators, colleagues at MIT, and students for their encouragement and inspiration over the years. Last Updated July 2011
http://math.stackexchange.com/questions/57315/positive-definite-function-zoo
Positive definite function zoo

A positive definite function $\varphi: G \rightarrow \mathbb{C}$ on a group $G$ is a function that arises as a coefficient of a unitary representation of $G$. For a definition and discussion of positive definite functions, see here.

I've often wished I had a collection of diverse examples of positive definite functions on groups, for the purpose of testing various conjectures. I hope the diverse experience of the participants of this forum can help me collect a list of such examples.

To clarify what I'd like to see: What is an example of a positive definite function on a group $G$ that is not easily seen to be a coefficient of a unitary representation of $G$? What are some positive definite functions that arise in contexts sufficiently removed from studying the coefficients of unitary representations? Also, the weirder the group $G$ the better.

Edit: There is now a version of this question on MO.

- As per request, I've made the question CW. – Zev Chonoles Aug 13 '11 at 23:42
- Thank you, Zev! – Jon Bannon Aug 13 '11 at 23:45
- I guess my favourite example is $\frac{1}{1+x^2}$ on $\mathbb{R}$. But it's not quite clear what exactly you're looking for: explicit positive elements of the Fourier algebra, or would a function of the form $f \ast \tilde{f}$ for $f \in L^2(G)$ already be satisfactory for you? I find the exposition on positive definite functions in appendix C of Bekka-de la Harpe-Valette quite nice, but sect 13.4 and the following chapters of Dixmier's book on $C^\ast$-algebras is still the best source in my opinion (few explicit examples, though). – t.b. Aug 14 '11 at 10:06
- let me clarify, Theo. – Jon Bannon Aug 14 '11 at 11:43
- Thanks for the tag edits, someone! – Jon Bannon Aug 14 '11 at 11:54
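None of the following is from the thread itself, but a small numerical sketch may make the definition concrete. A continuous function $\varphi$ on $\mathbb{R}$ is positive definite exactly when every matrix $[\varphi(x_i-x_j)]_{ij}$ is positive semidefinite, so a candidate such as the $\frac{1}{1+x^2}$ mentioned in the comments can be spot-checked by sampling points and inspecting eigenvalues; the sample points and sizes below are arbitrary demo choices:

```python
import numpy as np

def phi(x):
    # candidate positive definite function on the group (R, +)
    return 1.0 / (1.0 + x ** 2)

rng = np.random.default_rng(1)
pts = rng.uniform(-5.0, 5.0, size=8)      # arbitrary group elements x_1, ..., x_n
K = phi(pts[:, None] - pts[None, :])      # Gram-type matrix [phi(x_i - x_j)]
print(np.linalg.eigvalsh(K).min())        # should be >= 0 up to rounding
```

A genuinely negative smallest eigenvalue for some choice of points would certify that a candidate is not positive definite; passing the check for many random choices is, of course, only evidence rather than a proof.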
http://mathoverflow.net/questions/58099/what-is-known-about-higgs-bundles-with-sections?sort=oldest
# What is known about Higgs bundles with sections?

Let $C$ be a complex curve. Recall that a Higgs bundle on $C$ is a vector bundle $E$ on $C$ equipped with a morphism $E \to E \otimes K_C$. The space of (stable) Higgs bundles is much studied, and is in particular known to be smooth. Moreover there is a "nonabelian Hodge theorem" giving a diffeomorphism between the moduli of Higgs bundles and a certain character variety of $\pi_1(C)$.

What is known about the moduli space of Higgs bundles with a section, i.e., the space parameterizing triples $(E, s \in \mathrm{H}^{0}(E), \phi: E\to E \otimes K_C)$? Is it smooth (after imposing some appropriate stability condition)? Is there an analogue of the "nonabelian Hodge theorem"?

- "..and is in particular known to be smooth": I believe this is only true when the rank and degree of the vector bundle are assumed to be coprime. (In this case, stable and semistable mean the same thing.) – user5395 Mar 10 '11 at 18:44
- Do you have a reference for the diffeomorphism in the third sentence? – Peter Samuelson Mar 14 '11 at 23:44
- It is a homeomorphism, and a good reference is Simpson: Moduli of representations of the fundamental group of a smooth projective variety. I. Inst. Hautes Études Sci. Publ. Math. No. 79 (1994), 47–129; and Moduli of representations of the fundamental group of a smooth projective variety. II. Inst. Hautes Études Sci. Publ. Math. No. 80 (1994), 5–79 (1995). – Richard Wentworth Apr 12 '11 at 2:08

## 2 Answers

There is a fundamental difference between the case of Higgs bundles (where the section lies in a twisted adjoint representation) and the case of a section of the bundle itself (where the section is in the vector representation). In the former, the notion of stability is rigid, whereas in the latter the definition of stability depends on a parameter. This was discovered by Bradlow (J. Differential Geom. 33 (1991), no. 1, 169-213) and Bradlow-Daskalopoulos (Internat. J. Math. 2 (1991), no. 5, 477-513) and exploited by Thaddeus (Invent. Math. 117, no. 2 (1994), 317-353). These papers will point you in the direction of a definition of stability/semistability for the case you're interested in. I guess it will be true that stable points are smooth, though for certain values of the parameter the compactifications will contain strictly semistable (non-smooth) points. To answer your question, there is apparently no relationship between Bradlow pairs and representations of the fundamental group. Rather, these spaces are more closely related to higher-rank generalizations of symmetric products of the Riemann surface (see also J. Amer. Math. Soc. 9 (1996), 529-571).

For stable $E$ of degree $0$ there are no nonzero holomorphic sections. The interesting things must happen at non-stable bundles. As far as I know, there is a desingularization procedure for the space of semistable holomorphic bundles $V$ similar to your situation: one takes $E=\operatorname{End} V$ together with a holomorphic section of $E$. Again, in the case of stable bundles there are only multiples of the identity, but for nonstable bundles there are more endomorphisms. For details, see Tyurin's 'red book' on vector bundles over surfaces (Quantization, Classical and Quantum Field Theory and Theta Functions) and the references therein. Maybe you can apply these ideas to your situation.
https://mathoverflow.net/questions/188975/applications-of-c-algebras-in-the-field-of-pdes/189010
# applications of C$^*$-algebras in the field of PDEs

I know only a little bit about C$^*$-algebras, and I want to know whether you know of a nice application or influence of them in the field of partial differential equations (ideally one understandable for graduate students), or perhaps you can explain to me why they are important for pseudo-differential operators. A similar question I found here: https://math.stackexchange.com/questions/798342/application-of-calgebras but there are not enough answers there to convince me. I'm not sure if it is ok to ask this here, but maybe someone knows more about C$^*$-algebras and partial differential equations. Regards

• Following the answer by Nik Weaver below, you might want to take a look at "Analytic K-homology" by Higson and Roe. – Michael Dec 6 '14 at 4:36
• I disagree with the decision to close this question. I'm not aware of a large body of applications of C*-algebras to PDEs. Could any of those who voted to close suggest other examples besides mine? – Nik Weaver Dec 9 '14 at 16:36
• @NikWeaver Not so much an alternative to your answer as a mention of subsequent work that sounds in similar vein: work of Monthubert, Nistor and their collaborators math.univ-toulouse.fr/~monthube/research.php – Yemon Choi Dec 10 '14 at 1:00

I think the canonical connection between C*-algebras and differential operators is Connes' index theorem for foliated manifolds. I don't know if that counts as PDEs, but it's certainly related. Every foliated manifold $M$ has an associated C*-algebra $A$ which is noncommutative (except in trivial cases) but in some way embodies the idea of "the continuous functions on $M$ that are constant on leaves". Any pseudodifferential operator $D$ on $M$ which is elliptic on each leaf has an "index" which belongs to the $K_0$ group of $A$. There is an analytic definition and a topological definition of the index, and Connes' index theorem says that they agree. It is a profound generalization of the Atiyah-Singer index theorem. Connes' notes on the subject can be found here.
https://learn.careers360.com/ncert/questions/cbse_11_class-maths-conic_section/?page=2
## NCERT Class 11 Maths – Conic Sections (solutions by Pankaj Sanodiya)

The first three entries analyse given hyperbola equations (the equations were rendered as images and did not survive extraction). Each follows the same method: rewrite the equation in the standard form $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$ (transverse axis along the x-axis) or $\frac{y^2}{a^2}-\frac{x^2}{b^2}=1$ (along the y-axis), read off $a$ and $b$, use the hyperbola relation $c^2=a^2+b^2$, and report the coordinates of the foci $(\pm c, 0)$ or $(0, \pm c)$, the vertices $(\pm a, 0)$ or $(0, \pm a)$, the eccentricity $e=c/a$, and the length of the latus rectum $\frac{2b^2}{a}$.

The next entries construct the equation of an ellipse, $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ with semi-major axis $a$ and semi-minor axis $b$, from the given data, using the ellipse relation $c^2=a^2-b^2$:

- Major axis on the x-axis, passing through (4, 3) and (6, 2): substituting both points gives $\frac{16}{a^2}+\frac{9}{b^2}=1$ and $\frac{36}{a^2}+\frac{4}{b^2}=1$; solving, $a^2=52$ and $b^2=13$, so the equation is $\frac{x^2}{52}+\frac{y^2}{13}=1$.
- $b = 3$, $c = 4$, centre at the origin, foci on the x-axis: $a^2=b^2+c^2=25$, giving $\frac{x^2}{25}+\frac{y^2}{9}=1$.
- Foci $(\pm 3, 0)$, $a = 4$: $b^2=16-9=7$, giving $\frac{x^2}{16}+\frac{y^2}{7}=1$.
- Length of minor axis 16, foci $(0, \pm 6)$: $b=8$, $c=6$, so $a^2=64+36=100$ with the major axis on the y-axis, giving $\frac{x^2}{64}+\frac{y^2}{100}=1$.
- Length of major axis 26, foci $(\pm 5, 0)$: $a=13$, $c=5$, $b^2=169-25=144$, giving $\frac{x^2}{169}+\frac{y^2}{144}=1$.
- Ends of the major axis $(0, \pm\,\cdot\,)$ (the value was an image and is lost), ends of the minor axis $(\pm 1, 0)$: the major axis is on the y-axis, so with $b=1$ the equation has the form $x^2+\frac{y^2}{a^2}=1$.
- Ends of the major axis $(\pm 3, 0)$, ends of the minor axis $(0, \pm 2)$: $a=3$, $b=2$, giving $\frac{x^2}{9}+\frac{y^2}{4}=1$.
- Vertices $(\pm 6, 0)$, foci $(\pm 4, 0)$: $b^2=36-16=20$, giving $\frac{x^2}{36}+\frac{y^2}{20}=1$.
- Vertices $(0, \pm 13)$, foci $(0, \pm 5)$: $b^2=169-25=144$, major axis on the y-axis, giving $\frac{x^2}{144}+\frac{y^2}{169}=1$.
- Vertices $(\pm 5, 0)$, foci $(\pm 4, 0)$: $b^2=25-16=9$, giving $\frac{x^2}{25}+\frac{y^2}{9}=1$.

The remaining five entries analyse given ellipse equations (again lost in extraction): identify whether the major axis lies along the x- or y-axis, compare with the standard form to read off $a$ and $b$, compute $c$ from $c^2=a^2-b^2$, and report the coordinates of the foci, the vertices, the lengths of the major axis $2a$ and minor axis $2b$, the eccentricity $e=c/a$, and the length of the latus rectum $\frac{2b^2}{a}$.
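To make the recipe concrete, here is a minimal Python sketch (not part of the original page; the function name and output layout are mine) that computes the reported quantities for an ellipse with the major axis on the x-axis:

    from math import sqrt

    def ellipse_params(a, b):
        """Quantities for x^2/a^2 + y^2/b^2 = 1 with a > b > 0 (major axis on x-axis)."""
        c = sqrt(a * a - b * b)          # ellipse relation: c^2 = a^2 - b^2
        return {
            "foci": [(-c, 0), (c, 0)],
            "vertices": [(-a, 0), (a, 0)],
            "major axis": 2 * a,
            "minor axis": 2 * b,
            "eccentricity": c / a,       # e = c/a
            "latus rectum": 2 * b * b / a,
        }

    # Checks the 'vertices (±5, 0), foci (±4, 0)' entry: a = 5, b = 3 gives c = 4.
    print(ellipse_params(5, 3))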
http://openstudy.com/updates/55725561e4b0bffbfefe3f66
## Afrodiddle, one year ago

A store had 235 MP3 players in the month of January. Every month, 30% of the MP3 players were sold and 50 new MP3 players were stocked in the store. Which recursive function best represents the number of MP3 players in the store f(n) after n months?

A. f(n) = 0.7 × f(n - 1) + 50, f(0) = 235, n > 0
B. f(n) = 237 - 0.7 × f(n - 1) + 50, f(0) = 235, n > 0
C. f(n) = 0.3 × f(n - 1) + 50, f(0) = 235, n > 0
D. f(n) = 237 + 0.7 × f(n - 1) + 50, f(0) = 235, n > 0

1. Afrodiddle: I believe it is A), but I am not entirely sure. Can someone walk me through the steps?

2. anonymous: I agree, it is A. The first term, $0.7\times f(n-1)$, represents 70% of the total MP3 players of the previous month, that is, the quantity remaining after 30% were sold. The other term, +50, represents the newly added MP3 players in the stock. And if January is indexed by n=0, then f(0) should be 235, as it is in answer A. So answer A matches all the requirements!
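A quick way to convince yourself is to simulate the store directly and compare against option A's recursion; this little Python check (mine, not from the thread) does exactly that:

    def f(n):
        """Option A: f(n) = 0.7 * f(n - 1) + 50, with f(0) = 235."""
        return 235 if n == 0 else 0.7 * f(n - 1) + 50

    stock = 235
    for month in range(1, 13):
        stock = 0.7 * stock + 50          # 30% sold, then 50 new players stocked
        assert abs(stock - f(month)) < 1e-9
    print(f(1))  # 214.5 -> the recursion matches the direct simulation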
https://conferences.famnit.upr.si/event/4/contributions/39/
# Graphs, groups, and more: celebrating Brian Alspach's 80th and Dragan Marušič's 65th birthdays

28 May 2018 to 1 June 2018, UP FHS, Titov trg 5, Koper (UTC timezone)

## Hamilton decompositions of one-ended Cayley graphs

Not scheduled, 15m, UP FHS (Koper)

### Speaker

Florian Lehner (University of Warwick)

### Description

In 1984, Alspach asked whether every Cayley graph of a finite Abelian group admits a Hamilton decomposition. The conjectured answer is yes, but except in some special cases the question remains wide open. In this talk we study an analogous question for infinite, finitely generated groups, using spanning double rays as an infinite analogue of Hamilton cycles. We show that if $G$ is a one-ended Abelian group and $S$ is a generating set only containing non-torsion elements, then the corresponding Cayley graph admits a decomposition into spanning double rays. In particular, any Cayley graph of $\mathbb Z^d$ has such a decomposition. Related results for two-ended groups will also be discussed.

### Primary authors

Florian Lehner (University of Warwick), Joshua Erde (Universität Hamburg), Max Pitz (Universität Hamburg)
https://class12chemistry.com/tag/packing-efficiency-chemistry-notes/
## Packing Efficiency Chemistry Notes

→ As we know, the constituent particles in a crystal lattice are arranged in close packing. Some spaces remain vacant in this arrangement; these are called voids. The percentage of the total space filled by the particles is called the packing efficiency, or the fraction of total space filled is called the packing fraction.

% Packing efficiency = (volume occupied by the spheres in the unit cell / total volume of the unit cell) × 100

Packing Efficiency in hcp or ccp or fcc Structures:

Length of edge of a unit cell = $$a$$. Volume of one sphere = $$\frac{4}{3}\pi r^3$$.

∵ The fcc structure is formed from four spheres.
∴ Volume of four spheres = $$4 \times \frac{4}{3}\pi r^3 = \frac{16}{3}\pi r^3$$

In ∆ABC, $$AC^2 = AB^2 + BC^2 = a^2 + a^2$$, so $$AC = a\sqrt{2}$$ … (1)

If we look along AC, the spheres touch each other across the face diagonal, so $$AC = 4r$$, and from (1), $$a\sqrt{2} = 4r$$, i.e. $$a = 2\sqrt{2}\,r$$.

% Packing efficiency $$= \frac{\frac{16}{3}\pi r^3}{(2\sqrt{2}\,r)^3} \times 100 = \frac{\frac{16}{3}\pi r^3}{16\sqrt{2}\,r^3} \times 100 = \frac{\pi}{3\sqrt{2}} \times 100 \approx 74\%$$

Hence, the total volume occupied by spheres or particles in the fcc or ccp or hcp structure is 74%, while the empty space, i.e. the volume of the total voids, is 26%.

Packing Efficiency in Body Centred Cubic Structure (bcc):

Edge length of unit cell = $$a$$. Since the bcc structure is formed from two spheres,
Volume of two spheres = $$2 \times \frac{4}{3}\pi r^3 = \frac{8}{3}\pi r^3$$

In ∆ABC, $$AC^2 = AB^2 + BC^2 = a^2 + a^2 = 2a^2$$.
In ∆ACD, $$AD^2 = AC^2 + CD^2 = 2a^2 + a^2 = 3a^2$$, so $$AD = a\sqrt{3}$$ … (i)

If we look along AD, the spheres touch each other across the body diagonal, so $$AD = 4r$$, and putting this in (i), $$a = \frac{4r}{\sqrt{3}}$$.

% Packing efficiency $$= \frac{\frac{8}{3}\pi r^3}{\left(\frac{4r}{\sqrt{3}}\right)^3} \times 100 = \frac{\sqrt{3}\,\pi}{8} \times 100 \approx 68\%$$

Hence, the total volume occupied by spheres or particles in the bcc structure is 68%, while the empty space, i.e. the volume of the total voids, is 32%.

Packing Efficiency in Simple Cubic Unit Cell (scc):

Volume of one sphere = $$\frac{4}{3}\pi r^3$$. In a simple cubic cell the spheres touch along the edge, so $$a = 2r$$, and the cell contains one sphere in total.

% Packing efficiency $$= \frac{\frac{4}{3}\pi r^3}{(2r)^3} \times 100 = \frac{\pi}{6} \times 100 \approx 52.4\%$$

so the voids occupy about 47.6% of the space.
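The three results can be checked numerically; below is a minimal Python sketch (not part of the notes) that encodes each sphere-touching condition and prints the packing efficiencies:

    from math import pi, sqrt

    r = 1  # sphere radius (the efficiency is independent of r)
    cells = {
        "fcc/ccp/hcp":  (4, 2 * sqrt(2) * r),  # face diagonal: a*sqrt(2) = 4r
        "bcc":          (2, 4 * r / sqrt(3)),  # body diagonal: a*sqrt(3) = 4r
        "simple cubic": (1, 2 * r),            # edge: a = 2r
    }
    for name, (spheres, a) in cells.items():
        efficiency = spheres * (4 / 3) * pi * r**3 / a**3 * 100
        print(f"{name}: {efficiency:.1f}%")    # 74.0%, 68.0%, 52.4%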
https://codegolf.stackexchange.com/questions/216985/generalised-taxicab-numbers
# Generalised Taxicab Numbers

Let's generalise this by defining a related function: $\mathrm{Ta}(n, x, i)$, which returns the smallest positive integer that can be expressed as the sum of $x$ $i$-th powers of positive integers in $n$ different ways. In this notation, $T(n) = \mathrm{Ta}(n, 2, 3)$ (note: this is the same function described here, with $\mathrm{Ta}(n, x, i) = \text{Taxicab}(i, x, n)$).

Your task is to take 3 positive integers $n, x$ and $i$ and return $\mathrm{Ta}(n, x, i)$. This is code-golf, so the shortest code in bytes wins.

In case $x = 1$ and $n > 1$, your program can do anything short of summoning Cthulhu, and for other cases where $\mathrm{Ta}(n, x, i)$ is not known to exist (e.g. $\mathrm{Ta}(n, 2, 5)$), the same applies.

## Test cases

n, x, i -> out
1, 1, 2 -> 1
1, 2, 3 -> 2
2, 2, 2 -> 50
2, 2, 3 -> 1729
3, 3, 2 -> 54
3, 3, 3 -> 5104
2, 6, 6 -> 570947
2, 4, 4 -> 259
6, 4, 4 -> 3847554
2, 5, 2 -> 20
2, 7, 3 -> 131
2, 5, 7 -> 1229250016
5, 8, 4 -> 4228

### Properties of $\mathrm{Ta}(n, x, i)$

• $\forall i : \mathrm{Ta}(1, x, i) = x$, as $x = \underbrace{1^i + \cdots + 1^i}_{x \text{ times}}$
• $\mathrm{Ta}(n, 1, i)$ does not exist for all $n > 1$
• $\mathrm{Ta}(n, 2, 5)$ is not known to exist for any $n \ge 2$

This is a table of results $\{\mathrm{Ta}(n, x, i) \mid 1 \le n, x, i \le 3\}$, ignoring $\mathrm{Ta}(2, 1, i)$ and $\mathrm{Ta}(3, 1, i)$:

$$\begin{array}{ccc|c} n & x & i & \mathrm{Ta}(n,x,i) \\ \hline 1 & 1 & 1 & 1 \\ 1 & 1 & 2 & 1 \\ 1 & 1 & 3 & 1 \\ 1 & 2 & 1 & 2 \\ 1 & 2 & 2 & 2 \\ 1 & 2 & 3 & 2 \\ 1 & 3 & 1 & 3 \\ 1 & 3 & 2 & 3 \\ 1 & 3 & 3 & 3 \\ 2 & 2 & 1 & 4 \\ 2 & 2 & 2 & 50 \\ 2 & 2 & 3 & 1729 \\ 2 & 3 & 1 & 5 \\ 2 & 3 & 2 & 27 \\ 2 & 3 & 3 & 251 \\ 3 & 2 & 1 & 6 \\ 3 & 2 & 2 & 325 \\ 3 & 2 & 3 & 87539319 \\ 3 & 3 & 1 & 6 \\ 3 & 3 & 2 & 54 \\ 3 & 3 & 3 & 5104 \\ \end{array}$$

• Sandbox. Imaginary brownies for beating my 14 byte Jelly answer – Dec 28 '20 at 20:34
• I was wondering why you used $i$ and $x$... but now I realise it spells "taxi" ha – Dec 28 '20 at 20:40
• @pxeger Thank Adám for that :P – Dec 28 '20 at 20:42

# 05AB1E, 15 bytes

∞.ΔL¹m²ã€{ÙOy¢Q

Try it online! This is really slow; inserting ¹zmï after .Δ limits the search space significantly: Try it online!

Commented:

    # Inputs: ¹=i, ²=x, ³=n
    ∞.Δ  # find the first natural number k that satisfies:
     L   #   each of the range [1..k]
     ¹m  #   raised to the i-th power
     ²ã  #   take the list to the x-th cartesian power
     €{  #   sort each x-tuple
     Ù   #   deduplicate the tuples
     O   #   sum each x-tuple
     y¢  #   count the number of k's
     Q   #   is this equal to the last input?

ã€{Ù does the same as Python's itertools.combinations_with_replacement, but there doesn't seem to be a builtin for this in 05AB1E.

• I replaced the second $n$ with $k$ as it's clearer that way. – Dec 29 '20 at 2:24

# R + gtools, 84 bytes

function(n,x,i){while(sum(rowSums(gtools::combinations(+T,x,re=1)^i)==T)<n)T=T+1;+T}

Try it online!

Counts T up from 1, calculating the number of combinations of x numbers ≤ T whose i-th powers sum to T. Stops at the first number that gives (at least) n combinations.

    function(n,x,i){      # variable T (TRUE) is initialized by default to 1
      while( ... )T=T+1   # while the condition (see below) is met, keep incrementing T
      # the condition, unrolled:
      # a=gtools::combinations(+T,x,re=1)  # all combinations of x numbers from 1..T (as rows of a matrix)
      # b=a^i                              # raised to the i-th power
      # c=rowSums(b)                       # the row sums are the sums of each combination
      # d=sum(c==T)                        # row sums equal to T are combinations of x i-th powers that equal T
      # d<n                                # the condition: we didn't get n combinations yet
      +T                  # finally, the condition fails, so we've got n combinations: return this value of T
    }

For a base-R solution (without the gtools library), swap gtools::combinations(+T,x,re=1) for unique(t(apply(expand.grid(rep(list(1:T),x)),1,sort))) (+23 bytes).

# Wolfram Language (Mathematica), 79 bytes

(n=1;While[Length@Select[PowersRepresentations[n++,#2,#3],#~FreeQ~0&]!=#];n-1)&

Try it online!

# Jelly, 12 bytes

œċ³*⁴§ċ⁼⁵µ1#

A full program which (given enough time!) prints the result if it exists. Inputs are $x$, $i$, $n$.

Try it online! (Too slow for most of the test cases.)

### How?

Brute-force (and an inefficient one at that):

    œċ³*⁴§ċ⁼⁵µ1# - Main Link: x
             µ1# - count up starting with k=x until 1 truthy result, then yield k, using:
      ³          - 1st program argument, x
    œċ           - all combinations (of x elements from [1..k]), allowing repeats
       ⁴         - 2nd program argument, i
      *          - exponentiate (vectorises)
        §        - sums
         ċ       - count occurrences (of k)
           ⁵     - 3rd program argument, n
          ⁼      - equal?

Note œċ does not give unordered repeats, for example 2œċ3 yields: [[1, 1, 1], [1, 1, 2], [1, 2, 2], [2, 2, 2]]

• Nice! I had this for my 14 bytes – Dec 30 '20 at 0:09

# Wolfram Language, 110 108 bytes

(i=0;While[Length@FindInstance[i++==Tr[(v=Unique[]~Table~#2)^#3]&&LessEqual@@v,v,PositiveIntegers,#]!=#];i)&

Try it online!

# JavaScript (ES7), 89 bytes

(n,x,i)=>(F=q=>(g=(k,p,j)=>j?p**i>k?0:g(k-p**i,p,j-1)+g(k,p+1,j):!k)(++q,1,x)-n?F(q):q)

Try it online!

### Commented

We recursively look for the smallest $q$ such that there exist exactly $n$ sums of the form:

$${p_1}^i+{p_2}^i+\ldots+{p_x}^i=q,\:p_{k+1}\ge p_k \ge 1$$

    (n, x, i) => (            // input variables, as described in the challenge
      F = q =>                // F is a recursive function taking a candidate solution q
        (
          g = (               // g is a recursive function taking:
            k,                //   k = current sum
            p,                //   p = number to be raised to the power of i
            j                 //   j = remaining number of terms in the sum
          ) =>                //
            j ?               // if there's at least one more term to compute:
              p ** i > k ?    //   if p ** i is greater than k:
                0             //     abort
              :               //   else:
                g(            //     do a 1st recursive call with:
                  k - p ** i, //       k = k - p ** i
                  p,          //       p unchanged
                  j - 1       //       one less term to compute
                ) +           //
                g(            //     do a 2nd recursive call with:
                  k,          //       k unchanged
                  p + 1,      //       p incremented
                  j           //       j unchanged
                )             //
            :                 // else:
              !k              //   increment the final result if k = 0
        )(++q, 1, x)          // initial call to g with k = ++q, p = 1 and j = x
        - n ?                 // if the result is not equal to n:
          F(q)                //   try again
        :                     // else:
          q                   //   success: return q
    )                         // initial call to F with q zero'ish

# Scala, 125 bytes

(n,x,i)=>1 to 1<<30 find(c=>Seq.fill(x)(1 to c map(Math.pow(_,i))takeWhile(_<=c)).flatten.combinations(x).count(_.sum==c)==n)

Try it online!

Approach: for each candidate integer c, generate a list of x copies of the i-th-powered integers not exceeding c, and count whether n (distinct!) combinations of length x sum up to c.

Performance: the trade-off between performance and code length is made such that most test cases finish within some seconds. The following variant would save 14 bytes, but would be much less performant:

(n,x,i)=>0 to 1<<30 find(c=>Seq.fill(x)(1 to c).flatten.combinations(x).count(_.map(Math.pow(_, i)).sum==c)==n)
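For readers who want a reference implementation rather than golfed code, here is an ungolfed Python sketch of the same brute force the answers above use (the names are mine); like the golfed versions, it is far too slow for the larger test cases:

    from itertools import combinations_with_replacement, count

    def Ta(n, x, i):
        """Smallest positive integer expressible as a sum of x i-th powers
        of positive integers in exactly n different (unordered) ways."""
        for k in count(1):
            ways = sum(
                sum(v ** i for v in combo) == k
                for combo in combinations_with_replacement(range(1, k + 1), x)
            )
            if ways == n:
                return k

    print(Ta(2, 2, 3))  # 1729, the classic taxicab number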
https://ask.sagemath.org/questions/9724/revisions/
# Revision history [back]

### Factorizing a polynomial with two variables

Hi, I would like to factorize the following polynomial in $t$, with an integer indeterminate $n$,

$2^n ((n-1)t^n+1)^{n-1} - n^{n-1} t^{(n-1)(n-2)} (t^{n-1} + 1)(t+1)^{n-1}$

I expect it to have a factor of $(t-1)^2$ and hope that after division the polynomial in $t$ would have positive coefficients. Is there any way I can verify this by Sage?

### Factorizing a polynomial with two variables

Hi, I would like to factorize the following polynomial in $t$, with an integer indeterminate $n$,

$f_n(t) = 2^n ((n-1)t^n+1)^{n-1} - n^{n-1} t^{(n-1)(n-2)} (t^{n-1} + 1)(t+1)^{n-1}$

I expect it to have a factor of $(t-1)^2$ and hope that after division the polynomial in $t$ would have positive coefficients.

1. Is there any way I can verify this by Sage?
2. I have tried to check that $f_n(t)$ has a factor of $(t-1)^2$ by checking that $f_n(1) = f_n'(1) = 0$. I named my polynomial $f$, let $h$ be f.derivative(t), and tried to find $h(t=1)$. Here's what I got:

(n - 1)^2*2^n*n*n^(n - 2) - (n - 1)*2^(n - 1)*n^(n - 1) - 2*(n - 1)*2^(n - 2)*n^(n - 1) - 2*(n^2 - 3*n + 2)*2^(n - 1)*n^(n - 1)

which turns out to be 0 when I verify by hand. However, it seems like Sage is unable to detect the redundancy in the expression (e.g. $n \times n^{n-2} = n^{n-1}$). Did I do the computation in the "wrong way"? If so, what's the proper way to do it? Thanks!
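One way to sidestep the symbolic-simplification issue is to avoid a symbolic $n$ altogether and check the claims for a range of concrete values; a rough Sage sketch along those lines (my own, to be run in a Sage session; it only tests small $n$ and is not a proof):

    # Run inside Sage: for each small n, divide f_n by (t-1)^2 and inspect the quotient.
    for n in range(2, 9):
        R = PolynomialRing(QQ, 't')
        t = R.gen()
        f = (2**n * ((n - 1) * t**n + 1)**(n - 1)
             - n**(n - 1) * t**((n - 1) * (n - 2)) * (t**(n - 1) + 1) * (t + 1)**(n - 1))
        q, rem = f.quo_rem((t - 1)**2)
        # rem == 0 checks the (t-1)^2 factor; the second flag checks positivity
        # of the nonzero coefficients of the quotient.
        print(n, rem == 0, all(c > 0 for c in q.coefficients()))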
http://humancommunications.wikia.com/wiki/SDMA
SDMA (space-time division multiple access) is a multiple-access technique in which, for the downlink, the base station transmits different information signals to multiple users simultaneously using spatial pre-processing at the transmitter (the base station), and, for the uplink, one or more users transmit information simultaneously to the base station, where the aggregated signals are decoded using spatial post-processing. Depending on the antenna configuration, beamforming, precoding, or a hybrid of the two is applied to the antenna array as the effective spatial pre-processing. In a wide sense, "beamforming" covers both beamforming in the original sense, used for correlated channels, and precoding, used for uncorrelated channels. For the downlink, the beamformed signal is described mathematically as

$\mathbf{x} = \sum_{k=1}^{K} \mathbf{w}_k s_k$

where $\mathbf{w}_k$ and $s_k$ are the beamforming vector and the input signal for user $k$, respectively. Recently, several multiuser MIMO techniques such as ZFBF and PU2RC have been proposed as effective SDMA solutions in terms of achievable performance and implementation complexity.
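As a concrete illustration of the formula above, the following NumPy sketch (illustrative only; the sizes $M$ and $K$ and all values are made up) forms the transmitted vector as the superposition of the beamformed user signals:

    import numpy as np

    rng = np.random.default_rng(0)
    M, K = 4, 2                        # assumed: 4 transmit antennas, 2 users
    W = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))  # columns are w_k
    s = rng.standard_normal(K) + 1j * rng.standard_normal(K)            # user symbols s_k
    x = W @ s                          # x = sum_k w_k * s_k
    print(x.shape)                     # (4,) -- one sample per transmit antenna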
https://www.esaral.com/q/if-the-median-of-the-following-frequency-distribution-is-28-5-find-the-missing-frequencies-72702
# If the median of the following frequency distribution is 28.5, find the missing frequencies

Question: If the median of the following frequency distribution is 28.5, find the missing frequencies:

Solution: Given: Median = 28.5. We prepare the cumulative frequency table, as given below. Now, we have

$N=60$

$45+f_{1}+f_{2}=60$

$f_{2}=15-f_{1}$ .....(1)

Also, $\frac{N}{2}=30$.

Since the median $=28.5$, the median class is $20-30$. Here, $l=20, f=20, F=5+f_{1}$ and $h=10$.

We know that

Median $=l+\left\{\frac{\frac{N}{2}-F}{f}\right\} \times h$

$28.5=20+\left\{\frac{30-\left(5+f_{1}\right)}{20}\right\} \times 10$

$8.5=\frac{\left(25-f_{1}\right) \times 10}{20}$

$8.5 \times 20=250-10 f_{1}$

$10 f_{1}=250-170=80$

$f_{1}=8$

Putting the value of $f_{1}$ in (1), we get

$f_{2}=15-8=7$

Hence, the missing frequencies are 7 and 8.
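The arithmetic can be double-checked by solving the median formula for $f_1$ directly; a small Python sketch (mine, using only the values given in the solution):

    l, f, h, N, median = 20, 20, 10, 60, 28.5   # median class 20-30
    # median = l + (N/2 - F) / f * h,  with F = 5 + f1  and  f1 + f2 = 15
    F = N / 2 - (median - l) * f / h
    f1 = F - 5
    f2 = 15 - f1
    print(f1, f2)   # 8.0 7.0 -- matching the answer above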
https://indico.desy.de/event/27991/contributions/101476/
# ICRC 2021

Jul 12 – 23, 2021, Online, Europe/Berlin timezone

## A novel trigger based on neural networks for radio neutrino detectors

Jul 14, 2021, 12:00 PM, 1h 30m, Poster, NU | Neutrinos & Muons

Speaker: Astrid Anker

### Description

The ARIANNA experiment is a proposed Askaryan detector designed to record radio signals induced by neutrino interactions in the Antarctic ice. Because of the low neutrino flux at high energies, the physics output is limited by statistics. Hence, an increase in sensitivity will significantly improve the interpretation of data and will allow us to probe new parameter spaces. The trigger thresholds are limited by the rate of triggering on unavoidable thermal noise fluctuations. Here, we present a real-time thermal noise rejection algorithm that will allow us to lower the thresholds substantially and increase the sensitivity by up to a factor of two compared to the current ARIANNA capabilities. A deep learning discriminator, based on a Convolutional Neural Network (CNN), was implemented to identify and remove a high percentage of thermal events in real time while retaining most of the neutrino signal. We describe a CNN that runs efficiently on the current ARIANNA microcomputer and retains 94% of the neutrino signal at a thermal rejection factor of $10^5$. Finally, we report on the experimental verification from lab measurements.

### Keywords

Askaryan; UHE neutrinos; in-ice radio detection; trigger optimization; radio; deep learning

Collaboration: ARIANNA

Experimental Methods & Instrumentation
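The abstract does not spell out the network, but for orientation, a thermal-noise discriminator of this kind can be prototyped as a small 1D CNN over waveform samples. The sketch below is purely illustrative (the layer sizes, the 256-sample input length, and everything else are my assumptions, not the ARIANNA architecture):

    import tensorflow as tf

    # Binary classifier: waveform (256 samples, 1 channel) -> P(signal, not thermal noise)
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(8, 16, activation="relu", input_shape=(256, 1)),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(16, 8, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()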
http://eprint.iacr.org/2015/689
## Cryptology ePrint Archive: Report 2015/689

Counting Keys in Parallel After a Side Channel Attack

Daniel P. Martin and Jonathan F. O'Connell and Elisabeth Oswald and Martijn Stam

Abstract: Side channels provide additional information to skilled adversaries that reduces the effort to determine an unknown key. If sufficient side channel information is available, identification of the secret key can even become trivial. However, if not enough side information is available, some effort is still required to find the key in the key space (which now has reduced entropy). To understand the security implications of side channel attacks it is then crucial to evaluate this remaining effort in a meaningful manner. Quantifying this effort can be done by looking at two key questions: first, how 'deep' (at most) is the unknown key in the remaining key space, and second, how 'expensive' is it to enumerate keys up to a certain depth? We provide results for these two challenges. Firstly, we show how to construct an extremely efficient algorithm that accurately computes the rank of a (known) key in the list of all keys, when ordered according to some side channel attack scores. Secondly, we show how our approach can be tweaked such that it can also be utilised to enumerate the most likely keys in a parallel fashion. We are hence the first to demonstrate that a smart and parallel key enumeration algorithm exists.

Category / Keywords: key enumeration, key rank, side channels

Original Publication (with minor differences): IACR-ASIACRYPT-2015

Date: received 9 Jul 2015, last revised 4 Sep 2015

Contact author: j oconnell at bris ac uk

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2015/689

[ Cryptology ePrint archive ]
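To fix intuition for what 'rank' means here, the following toy Python sketch (my own; emphatically not the paper's efficient algorithm) computes the rank of a known key by exhaustively scoring every candidate, which is only feasible for tiny key spaces:

    from itertools import product

    def key_rank(scores, known_key):
        """scores[b][v] = side-channel score for value v of subkey b (higher = more likely).
        Rank = number of full keys whose total score beats the known key's."""
        known = sum(s[v] for s, v in zip(scores, known_key))
        return sum(
            sum(s[v] for s, v in zip(scores, cand)) > known
            for cand in product(*(range(len(s)) for s in scores))
        )

    # Example: two 2-bit subkeys.
    scores = [[0.1, 0.9, 0.4, 0.2], [0.3, 0.2, 0.8, 0.1]]
    print(key_rank(scores, (1, 2)))  # 0 -> the known key is the attack's top candidate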
https://gamedev.stackexchange.com/questions/179064/alternative-path-not-working
# Alternative path not working

I am using Android Studio (libGDX), and I can't figure out why alternative paths don't work. I've tried it as it is written in the book, but it doesn't do anything: it shows a black window and then the program terminates. Even when the .png file is in the .../android/assets/ folder, it doesn't work. This is how the book wants me to do it:

    public Spaceship(float x, float y, Stage s){
        super(x,y,s);
        setBoundaryPolygon(8);
        setAcceleration(200);
        setMaxSpeed(100);
        setDeceleration(10);
    }

This is the shortest form that works for me:

    public Spaceship(float x, float y, Stage stage) {
        super(x, y, stage);
        setBoundaryPolygon(8);
        setAcceleration(400);
        setMaxSpeed(200);
        setDeceleration(20);
    }

This is what my path looks like:

For a Desktop Application

If you are working on a desktop app, it has to do with the Run/Debug Configuration: the important part is to set the "Working directory" to YourGameName\android\assets
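If you prefer not to depend on the IDE's run configuration, the working directory can also be set in the desktop module's Gradle build; this snippet is an assumption on my part (it presumes the default libGDX layout with assets in android/assets), not from the book:

    // desktop/build.gradle -- make 'gradlew desktop:run' start from the asset folder
    run {
        workingDir = rootProject.file('android/assets').path
    }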
https://aimsciences.org/article/doi/10.3934/dcds.2019182
# American Institute of Mathematical Sciences

August 2019, 39(8): 4455-4469. doi: 10.3934/dcds.2019182

## Weak periodic solutions and numerical case studies of the Fornberg-Whitham equation

1 Fakultät für Mathematik, Universität Wien, Austria
2 Department of Mathematics, Gakushuin University, Tokyo, Japan

This paper is part of the special theme "Mathematical Aspects of Physical Oceanography", organized by Adrian Constantin.

Received July 2018; Revised October 2018; Published May 2019

Spatially periodic solutions of the Fornberg-Whitham equation are studied to illustrate the mechanism of wave breaking and the formation of shocks for a large class of initial data. We show that these solutions can be considered to be weak solutions satisfying the entropy condition. By numerical experiments, we show that the breaking waves become shock-wave type in the time evolution.

Citation: Günther Hörmann, Hisashi Okamoto. Weak periodic solutions and numerical case studies of the Fornberg-Whitham equation. Discrete & Continuous Dynamical Systems - A, 2019, 39 (8): 4455-4469. doi: 10.3934/dcds.2019182

Figure captions: the solution from data1, $0 \le t \le 0.65$; the traveling wave $U$ from data2, $c = 0.025, 0.0255, 0.026, 0.0269$; the time-dependent solution with the traveling wave as the initial data; the time-dependent solution with $d$ as in the second case of (12), where the points $(u_{300}^n, u_{600}^n)$ with $n$ corresponding to $0 \le t \le 300$ are plotted.
https://www.infoq.com/news/2018/10/the-road-to-micronaut-1.0
# The Road to Micronaut 1.0: A JVM-Based Full-Stack Framework

A year in the making, the road to Micronaut 1.0 has accelerated within the past three weeks as Object Computing (OCI) published release candidates RC1, RC2 and RC3. Formerly known as Project Particle, Micronaut is a full-stack JVM-based framework for creating microservice-based, cloud-native and serverless applications that can be written in Java, Groovy, and Kotlin.

Micronaut was introduced earlier this year by Graeme Rocher, principal software engineer and Grails and Micronaut product lead at OCI, at the Greach Conference. Micronaut was subsequently open-sourced in late May.

New features in all three release candidates include: GraalVM native image support; compile-time support for Swagger (OpenAPI); compile-time validation; and mapping annotations at compile-time. Micronaut Test 1.0 RC1, Micronaut's testing framework, was also released in RC3.

Micronaut uses dependency injection and Ahead-of-Time (AOT) compilation. As defined on the website:

Reflection-based IoC frameworks load and cache reflection data for every single field, method, and constructor in your code, whereas with Micronaut, your application startup time and memory consumption are not bound to the size of your codebase.

Built on top of Netty, Micronaut ships with its own non-blocking web server. Designed to reduce memory consumption, Micronaut reactive clients can be built declaratively and are implemented at compile-time.

### Profiles

Micronaut includes several built-in profiles that generate skeleton, yet working, applications as a building block for developing web or command-line applications. Each profile consists of a template and additional commands specific to that profile. For example, create-app initiates the service profile that includes additional commands for building controller (create-controller) and client (create-client) classes that may not be available in other profiles.

### Getting Started

After downloading and installing Micronaut, applications are created via the command line or the Micronaut shell. Inspired by the familiar command-line interface in Grails, Micronaut follows the same concept for creating applications. Consider the following command:

$ mn create-app org.redlich.demo

As shown below, this will create a Java application and Gradle project under a root folder named demo and the package name org.redlich. Notice the inclusion of a test directory structure, a Dockerfile, and YAML configuration files. The micronaut-cli.yml file provides specific information about the project:

profile: service
defaultPackage: org.redlich
---
testFramework: junit
sourceLanguage: java

The generated Java source file, Application.java, launches a Micronaut server:

package org.redlich;

import io.micronaut.runtime.Micronaut;

public class Application {
    public static void main(String[] args) {
        Micronaut.run(Application.class);
    }
}

To build and run this initial application:

$ ./gradlew run

To add a controller to the above project:

$ mn create-controller HelloController

As shown below, the files HelloController.java and HelloControllerTest.java are added to the project. In the generated HelloController.java file, notice how the endpoint in the @Controller annotation is named "/hello" based on the name of the controller, HelloController.
package org.redlich;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.HttpStatus;

@Controller("/hello")
public class HelloController {
    @Get("/")
    public HttpStatus index() {
        return HttpStatus.OK;
    }
}

Java and Gradle are the default language and build tool. To generate Groovy and Kotlin applications:

$ mn create-app org.redlich.demo --lang groovy
$ mn create-app org.redlich.demo --lang kotlin

There is also support for generating a Maven project:

$ mn create-app org.redlich.service --build maven
$ ./mvnw compile exec:exec

In Part 1 of this Micronaut tutorial, Sergio del Amo Caballero, software engineer at OCI, demonstrates how to create three microservices in three languages: Java, Groovy, and Kotlin.

InfoQ: What inspired OCI to develop this new microservices framework?

Graeme Rocher: The technology landscape has changed drastically over the past few years. In particular, if you look at systems like Docker, Kubernetes and the Serverless movement, they are really optimised for low-memory microservices and applications that have low overhead when it comes to cold starts. The result of this is that languages like Go and Node are getting significant traction on those platforms over Java due to superior cold-start performance and memory consumption.

A good question to ask yourself is: given the technology choices available to the Docker and Kubernetes teams, why choose Go over Java to implement those platforms? In my opinion, the answer is simple: if they had written those technology stacks in Java, with the technology choices available today, we would all need a supercomputer for a laptop to run them locally.

The reasons for this are varied; on the one hand you have the language-level limitations. The JVM is an amazing technological achievement, but for short-lived operations like Serverless functions the optimisations it provides are often lost, yet you still have to drag along this entire JVM to run your application. Projects like GraalVM have the potential to resolve these limitations by allowing Java applications to be compiled to a native image, but frankly framework designers have a large role to play when it comes to making Java applications more efficient.

At the framework level, traditional JVM frameworks (like Spring and Java/Jakarta EE) are over 10 years old, from an era when everyone was deploying monolithic applications, and are primarily built around the use of reflection and runtime analysis of annotations. The issue with this approach is that, due to a variety of issues from type erasure, to a limited annotation API, to the relative slowness of reflective logic, it makes it nearly impossible to build a Java framework that includes both ultra-fast startup and low memory consumption. The burden placed on the framework runtime is enormous. If you look at what Spring does at runtime it is quite remarkable, from literally parsing your byte code with ASM to produce annotation metadata, to aggressively caching reflection information to avoid the inevitable slowdown repeatedly reading it would cause. There exists an irreconcilable conflict between the need to cache all of this runtime-produced information and the goal of achieving fast startup and low memory consumption.
We believe Micronaut is the basis for a framework for the future, resolving this tension by eliminating all use of reflection and producing all annotation metadata, proxies and framework infrastructure at compilation time through a set of annotation processors and AST transformations that perform Ahead-of-Time (AOT) compilation. What this allows Micronaut to achieve is blazing-fast startup time, low memory consumption and, crucially, improved compatibility with GraalVM native image.

Of course the Java ecosystem is huge, with massive projects based on Java like Spring, Kafka, Cassandra, Hadoop, Grails etc. and a rich language ecosystem with Groovy, Scala, Kotlin, Java, Clojure etc., so it is not all about low-memory-footprint microservices and Serverless applications, and there are many, many workloads that still benefit massively from the JVM and the JIT. However, even for those workloads we believe that Micronaut has a lot to offer, by simply being more efficient than other frameworks, both in terms of startup time and memory consumption.

InfoQ: Are there plans to include Scala and/or Clojure as supported JVM-based languages in Micronaut?

Rocher: Micronaut is already built with multiple languages in mind, and in fact we support Java, Kotlin and Groovy today by creating a common AST produced for each language. We have plans to at some point add Scala support through a Scala compiler plugin (see https://github.com/micronaut-projects/micronaut-core/issues/675), although if there are folks in the Scala community who wish to help accelerate that, we would love to hear from them. Clojure is an interesting one; we would certainly need input from the Clojure community on how that could be made to happen.

InfoQ: Since GraalVM supports non-JVM-based languages, would it be possible to one day build Micronaut applications with languages supported by GraalVM?

Rocher: I can certainly imagine it enabling sidecars and easier integrations with other languages into a Micronaut application.

InfoQ: When do you anticipate a GA release of Micronaut?

Rocher: Micronaut 1.0 GA will be released on the 23rd of October.

InfoQ: What's on the horizon for Micronaut, especially after the GA release?

Rocher: Micronaut 1.0 is all about establishing a stable baseline to build on. Since Micronaut uses AOT compilation, the pre-compiled metadata format needs a stable 1.0 release. Once at 1.0, we plan to build integration with a lot more technologies such as RabbitMQ, Kubernetes, GRPC, GraphQL etc.

## Comments

##### How is Micronaut different from e.g. Ratpack, Netty or Vert.x? — by Jakob Jenkov
I am not trying to be condescending here - just trying to figure out what it is about these other platforms that did not address the itch Micronaut is scratching. I am myself working on a distributed systems toolkit - albeit it is not designed for microservices or HTTP-like communication etc., so we had to write it ourselves.

##### Re: How is Micronaut different from e.g. Ratpack, Netty or Vert.x? — by Graeme Rocher

Hi Jakob, the Java framework world is divided into frameworks that take an opinionated approach (Spring Boot, Grails, Jakarta EE) using an annotation-driven model and those that take an unopinionated approach (Ratpack, Netty, Vert.x etc.) where you have to piece together your application from various components manually. Whilst it is true that some aspects of Micronaut can be achieved with Ratpack/Vert.x etc. by manually configuring components yourself, the popularity of Spring Boot, Grails etc. in the Java community is testament to the fact that developers overwhelmingly prefer a framework to have opinions and don't want to have to manually figure out how to set up MongoDB, or how to integrate service discovery, or how to achieve retry and failover etc. The problem is that today's frameworks featuring out-of-the-box experiences come with the performance and memory-consumption compromises I mentioned in the article. Micronaut is trying to provide the opinionated, out-of-the-box, annotation-driven experience you get with Spring Boot and Grails, which is the most popular with the vast majority of Java developers today, but with the memory consumption and startup performance of the piece-it-together-yourself toolkits like Netty, Vert.x, Ratpack etc.

##### Re: How is Micronaut different from e.g. Ratpack, Netty or Vert.x? — by Jakob Jenkov

Hi Graeme, thank you for your elaboration. Could you provide an example of a "feature" / "aspect" that you have done differently than Spring, so I can understand in what way Micronaut is different? I understand that it *is* different - and within the realms of memory consumption etc. - but not how. Regarding opinionated / unopinionated frameworks, I think they appeal to different types of developers. I now have 20 years of experience. In my first 5-8 years I loved frameworks like Spring. However, as I gained more experience, I felt I needed a framework's "opinion" less and less. Now I avoid them. What is your take on this?

##### Re: How is Micronaut different from e.g. Ratpack, Netty or Vert.x? — by Graeme Rocher

The primary difference is that Spring/Jakarta EE/MicroProfile etc. perform DI, AOP and configuration management at runtime using reflection; Micronaut does it at compile time. You can see a more detailed explanation here: objectcomputing.com/news/2018/10/08/micronaut-1...

Regarding opinionated / unopinionated frameworks, the diversity available in the Java ecosystem is great: you have a range of frameworks appealing to a range of tastes and use cases. It is certainly true that as you gain experience you are more capable of stepping outside opinions; however, IMO you and developers like you are most certainly in the minority when you look at framework adoption / marketshare stats. Java developers as a whole overwhelmingly prefer annotation-based, opinionated frameworks.
As for myself I regard myself as a pretty experienced developer and most certainly see the value in frameworks like Spring and Jakarta EE. Sure you could roll your own framework, and some developers go down that path, but you will inevitably end up repeated tried and tested patterns already available. • ##### Re: How is Micronaut different from e.g. Ratpack, Netty or Vert.x ? Your message is awaiting moderation. Thank you for participating in the discussion. Opinions are good. Or rather right opinions are good. It is the fundamental principles underlying "convention over configuration" or the design patterns lets say. It builds on knowledge built overtime and hence helps in not reinventing the wheel. The hard part is to get the opinions right. • ##### Re: How is Micronaut different from e.g. Ratpack, Netty or Vert.x ? by Jakob Jenkov, Your message is awaiting moderation. Thank you for participating in the discussion. Hi Graeme, thanks again! ... this time I think it is I that has to clarify a bit :-) I don't think that using "no frameworks" and writing everything myself is the solution. There is no reason to implement my own JSON parser, DI container etc. when there are excellent implementations available, that does everything I need. The types of frameworks I tend to avoid are those that try to be a "one-stop-solution" for all kinds of different problems. Quite often some of these solutions are not good enough for what you need, and you need to introduce another solution, or work around the solution the framework provides etc. In many cases I would have preferred "no solution". I liked Spring when it was primarily a DI container that helped you integrate many different toolkits into a coherent application. Then Spring tried to come bundled with solutions for everything under the sun - most of which weren't that great. Now you have the fight on every team about whether to "do it the Spring way", or "use your common sense" - constantly having to debate whether you really think you are smarter than the Spring team or not. I agree, many developers tend to prefer this one-stop-solution type of frameworks, that seem to have a solution for every problem they might encounter. My attitude is definitely not mainstream. Anyways, this was never an argument against Micronaut. I like small toolkits - not big all-encompassing frameworks. I guess I was just trying to figure out in which category Micronaut fits, and how it is different from other players in that category :-) Open source development takes a lot of work. I have great respect for anyone taking that on. Best of luck with Micronaut and any other open source project you are working on! Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p
https://study.com/academy/answer/1-compute-x-2-y-2-da-over-an-annulus-1-x-2-y-2-4-2-compute-0-1-0-1-x-2-x-2-y-2-dy-dx-3-find-the-volume-of-the-solid-bounded-below-by-the-xy-plane.html
# Compute $\iint \cos(x^2 + y^2)\, dA$ over an annulus, a related iterated integral, and a volume

## Question:

1. Compute $\displaystyle \iint \cos (x^2 + y^2) \, dA$ over the annulus $1 \leq x^2 + y^2 \leq 4$.

2. Compute $\displaystyle \int_0^1 \left( \int_0^{\sqrt{1 - x^2}} \sin \left( \sqrt{x^2 + y^2} \right) \, dy \right) dx$.

3. Find the volume of the solid bounded below by the xy-plane, above by $z = 2x$, and on the sides by the cylinder $(x - 1)^2 + y^2 = 1$.

## Double Integrals in Polar Coordinates:

If we have a double integral $\displaystyle \iint_R f(x, y) \, dA$, where the region $R$ is a circle, annulus, or portion of a circle, and the integrand $f(x, y)$ is a function of $x^2 + y^2$, then quite often it is easier to first convert the problem to polar coordinates, then evaluate. An annulus centered at the origin can be described as $0 \leq \theta \leq 2\pi$, $a \leq r \leq b$, and a circle or portion of a circle is similarly described. Next use the fact that $r^2 = x^2 + y^2$ to convert the integrand, and replace $dA$ with $r \, dr \, d\theta$. The new integral is ready to go!

1. The annulus $1 \leq x^2 + y^2 \leq 4$ can be...
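Following the recipe above, part 1 works out as follows (a worked sketch added here; the original page's full answer is truncated):

$$\iint_R \cos(x^2+y^2)\,dA = \int_0^{2\pi}\!\int_1^2 \cos(r^2)\, r \, dr \, d\theta = 2\pi \left[ \tfrac{1}{2}\sin(r^2) \right]_1^2 = \pi\,(\sin 4 - \sin 1).$$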
https://www.greaterwrong.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion
# The Kelly Criterion

Epistemic Status: Reference Post / Introduction

The Kelly Criterion is a formula to determine how big one should wager on a given proposition when given the opportunity. It is elegant, important and highly useful. When considering sizing wagers or investments, if you don't understand Kelly, you don't know how to think about the problem. In almost every situation, reasonable attempts to use it will be somewhat wrong, but superior to ignoring the criterion.

What Is The Kelly Criterion?

The Kelly Criterion is defined as (from Wikipedia): For simple bets with two outcomes, one involving losing the entire amount bet, and the other involving winning the bet amount multiplied by the payoff odds, the Kelly bet is:

$$f^* = \frac{bp - q}{b}$$

where:

• f* is the fraction of the current bankroll to wager, i.e. how much to bet;
• b is the net odds received on the wager ("b to 1"); that is, you could win $b (on top of getting back your $1 wagered) for a $1 bet;
• p is the probability of winning;
• q is the probability of losing, which is 1 − p.

As an example, if a gamble has a 60% chance of winning (p = 0.60, q = 0.40), and the gambler receives 1-to-1 odds on a winning bet (b = 1), then the gambler should bet 20% of the bankroll at each opportunity (f* = 0.20), in order to maximize the long-run growth rate of the bankroll. (A bankroll is the amount of money available for a gambling operation or series of wagers, and represents what you are trying to grow and preserve in such examples.)

For quick calculation, you can use this rule: bet such that you are trying to win a percentage of your bankroll equal to your percent edge. In the above case, you win 60% of the time and lose 40% on a 1:1 bet, so on average you make 20%, so try to win 20% of your bankroll by betting 20% of your bankroll.

Also worth remembering: if you bet twice the Kelly amount, on average the geometric size of your bankroll will not grow at all, and anything larger than that will on average cause it to shrink. If you are trying to grow a bankroll that cannot be replenished, Kelly wagers are an upper bound on what you can ever reasonably wager, and 25%-50% of that amount is the sane range. You should be highly suspicious if you are considering wagering anything above half that amount.

(Almost) never go full Kelly.

Kelly betting, or betting full Kelly, is correct if all of the following are true:

1. You care only about the long-term geometric growth of your bankroll.
2. Losing your entire bankroll would indeed be infinitely bad.
3. You do not have to worry about fixed costs.
4. When opportunities to wager arise, you never have a size minimum or maximum.
5. There will be an unlimited number of future opportunities to bet with an edge.
6. You have no way to meaningfully interact with your bankroll other than wagers.
7. You can handle the swings.
8. You have full knowledge of your edge.

At least seven of these eight things are almost never true. In most situations:

1. Marginal utility is decreasing, but in practice falls off far less than geometrically.
2. Losing your entire bankroll would end the game, but that's life. You'd live.
3. Fixed costs, including time, make tiny bankrolls only worthwhile for the data and experience.
4. There is always a maximum, even if you'll probably never hit it. Long before that, costs go up and people start adjusting the odds based on your behavior. If you're a small fish, smaller ponds open up that are easier to win in.
5. There are only so many opportunities. Eventually we are all dead.
6. At some cost you can usually earn money and move money into the bankroll.
7. You can't handle the swings.
8. You don't know your edge.

There are two reasons to preserve one's bankroll. A bankroll provides opportunity to get data and experience. One can use the bankroll to make money.

Executing real trades is necessary to get worthwhile data and experience. Tiny quantities work. A small bankroll with this goal must be preserved and variance minimized. Kelly is far too aggressive.

If your goal is profit, $0.01 isn't much better than $0.00. You'll need to double your stake seven times to even have a dollar. That will take a long time with 'responsible' wagering. The best thing you can do is bet it all long before things get that bad. If you lose, you can walk away. Stop wasting time.

Often you should do both simultaneously. Take a small amount and grow it. Success justifies putting the new larger amount at risk; failure justifies moving on. One can say that this can't possibly be optimal, but it is simple and psychologically beneficial, and a limit that is easy to justify to oneself and others. This is often more important.

The last reason, #8, is the most important reason to limit your size. If you often have less edge than you think, but still have some edge, reliably betting too much will often turn you from a winner into a loser. Whereas if you have more edge than you think, and you end up betting too little, that's all right. You're gonna be rich anyway.

For compactness, I'll stop here for now.

• A bit more explanation on what the Kelly Criterion is, for those who haven't seen it before: suppose you're making a long series of independent bets, one after another. They don't have to be IID, just independent. The key insight is that the long-run payoff will be the product of the payoff of each individual bet. So, from the central limit theorem, the logarithm of the long-run payoff will converge to the average logarithm of the individual payoffs times the number of bets. This leads to a simple statement of the Kelly Criterion: to maximize long-run growth, maximize the expected logarithm of the return of each bet. It's quite general: all we need is multiplicative returns and some version of the central limit theorem.

• That's a really accessible (to me) explanation! Thank you.

• Seconding shminux, found this explanation really helpful.

• "(Almost) never go full Kelly. Kelly betting, or betting full Kelly, is correct if all of the following are true:" Just to clarify, the first two points that followed are actually reasons you might want to be *more* risk-seeking than Kelly, no? At least as they're described in the "most situations" list: "Marginal utility is decreasing, but in practice falls off far less than geometrically. Losing your entire bankroll would end the game, but that's life. You'd live." If your utility is linear in money, you should just bet it all every time. If it's somewhere between linear and logarithmic, you should do something in between Kelly and betting it all.

• Being linear in utility is insufficient to make betting it all correct; you also need to be able to place bets of unlimited size (or not have future opportunities for advantage bets). Otherwise, even if your utility outside of the game is linear, inside of the game it is not. And yes, some of these points are towards being *more* risk-loving than Kelly, at which point you consider throwing the rules out the window.

• "even if your utility outside of the game is linear, inside of the game it is not." Are there any games where it's a wise idea to use the Kelly criterion even though your utility outside the game is linear?

• Yes. If the game has many opportunities for betting, you should focus on the instrumental use of the money, which is via compounding; thus the instrumental value is geometric, and so you should use the Kelly criterion. In particular, if your edge is small (but can be repeated), the only way you can make a lot of money is by compounding, so you should use the Kelly criterion.

• I think this comment is incorrect (in the stated generality). Here is a simple counterexample. Suppose you have a starting endowment of $1, and that you can bet any amount at 0.50001 probability of doubling your bet and 0.49999 probability of losing everything you bet. You can bet whatever amount of your money you want a total of n times. (If you lost everything in some round, we can think of this as you still being allowed to bet 0 in remaining future rounds.) The strategy that maximizes expected linear utility is the one where you bet everything every time.

• It depends on n. If n is small, such as n=1, then you should bet a lot. In the limit of n large, you should use the Kelly criterion. The crossover is about n=10^5. Which is why I said that it depends on having many opportunities.

• You can prove e.g. by (backwards) induction that you should bet everything every time. With the odds being p>0.5 and 1-p, if the expectation of whatever strategy you are using after n-1 steps is E, then the maximal expectation over all things you could do on the n'th step is 2pE (you can see this by writing the expectation as a conditional sum over the outcomes after n-1 steps), which corresponds uniquely to the strategy where you bet everything in any situation on the n'th step. It then follows that the best you can do on the (n-1)th step is also to maximize the expectation after it, and the same argument gives that you should bet everything, and so on. (Where did you get n=10^5 from? If it came from some computer computation, then I would wager that there were some overflow/numerical issues.)

• Sorry, I'm confused. I got 10^5 from 1/(p-1/2).

• If you have money x after n-1 steps, then betting a fraction f on the n'th step gives you expected money (1-f)x + f·2px. Given p>0.5, this is maximized at f=1, i.e. betting everything, which gives the expectation 2px. So conditional on having money x after n-1 steps, to maximize expectation after n steps, you should bet everything. Letting X_i be the random variable that is the amount of money you have after i steps given your betting strategy, we have

$$E[X_n] = \sum_x P(X_{n-1}=x)\, E[X_n \mid X_{n-1}=x]$$

(one could also write down a continuous version of the same conditioning, but it is a bit easier to read if we assume that the set of possible amounts of money after n-1 steps is discrete, which is what I did here). From this formula, it follows that for any given strategy up to step n-1, hence given values for P(X_{n-1}=x), the thing to do on step n that maximizes E[X_n] is the same as the thing to do that maximizes E[X_n|X_{n-1}=x] for each x. So to maximize E[X_n], you should bet everything on the n'th step. If you bet everything, then the above formula gives

$$E[X_n] = \sum_x P(X_{n-1}=x)\cdot 2px = 2p\,E[X_{n-1}].$$

To recap what we showed so far: we know that given any strategy for the first n-1 steps, the best thing to do on the last step gives E[X_n] = 2pE[X_{n-1}]. It follows that the strategy with maximal E[X_n] is the one with maximal 2pE[X_{n-1}], or equivalently the one with maximal E[X_{n-1}]. Now repeat the same argument for step n-1 to conclude that one should bet everything on step n-1 to maximize the expectation after it, and so on.

• Or maybe to state a few things a bit more clearly: we first showed that E[X_n|X_{n-1}=x] <= 2px, with equality iff we bet everything on step n. Using this, note that

$$E[X_n] = \sum_x P(X_{n-1}=x)\, E[X_n \mid X_{n-1}=x] \le 2p\,E[X_{n-1}],$$

with equality iff we bet everything on step n conditional on any value of X_{n-1}. So regardless of what you do for the first n-1 steps, what you should do on step n is to bet everything, and this gives you the expectation E[X_n] = 2pE[X_{n-1}]. Then finish as before.

• Can you give a concrete example of such a game?

• You start with 10 bucks, I start with 10 bucks. You wager any amount up to a hundred times, each time doubling it 60% of the time and losing it 40% of the time, until one of us is bankrupt or you stop. If you wager it all, I have a 40% chance to win. If you wager one buck at a time, you win almost certainly.

• "If you wager one buck at a time, you win almost certainly." But that isn't the Kelly criterion! Kelly would say I should open by betting two bucks. In games of that form, it seems like you should be more and more careful as the number of bets gets larger. The optimal strategy doesn't tend to Kelly in the limit. EDIT: In fact my best opening bet is $0.64, leading to expected winnings of $19.561. EDIT2: I reran my program with higher precision, and got the answer $0.58 instead. This concerned me, so I reran again with infinite precision (rational numbers) and got that the best bet is $0.21. The expected utilities were very similar in each case, which explains the precision problems. EDIT3: If you always use Kelly, the expected utility is only $18.866.

• Does your program assume that the Kelly bet stays a fixed size, rather than changing? Here's a program you can paste in your browser that finds the expected value from following Kelly in Gurkenglas' game (it finds EV to be 20): https://pastebin.com/iTDK7jX6 (You can also fiddle with the first argument to experiment to see some of the effects when 4 doesn't hold.)

• I believe you missed one of the rules of Gurkenglas' game, which was that there are at most 100 rounds. (Although it's possible I misunderstood what they were trying to say.) If you assume that play continues until one of the players is bankrupt, then in fact there are lots of winning strategies, in particular betting any constant proportion less than 38.9%. The Kelly criterion isn't unique among them. My program doesn't assume anything about the strategy. It just works backwards from the last round and calculates the optimal bet and expected value for each possible amount of money you could have, on the basis of the expected values in the next round which it has already calculated. (Assuming each bet is a whole number of cents.)

• I just discovered this now, Zvi. It's such a great heuristic! I whipped up an interactive calculator version in Desmos for my own future reference, but others might find it useful too: https://www.desmos.com/calculator/pf74qjhzuk

• I'm trying to think of real-life situations where this is useful, but I don't think I am doing too good a job. For people who are trying to use their bankroll to make money:

• People who are in charge of spending money in a business. When there is money available for a business, you want to invest it to grow it, but you don't want to risk too much and go bankrupt.

• People who play the stock market for profit. (Although from what I understand, the stock market is so competitive that trying to outperform is almost always a bad idea.)

• People who gamble for a living. Poker is the big example I can think of where this can make sense. With most other forms of gambling, like blackjack, the odds are against you. And with things like sports betting, where you could get an edge if you're smart enough, my understanding is that the house takes a big enough cut that it's really, really hard to get an edge.

For people who are looking to gamble for the sake of intellectual growth, like OP says, tiny quantities should work. And even if you need to bet in larger quantities so that you "feel the pain", 1) these quantities are still probably going to be a small fraction of your overall bankroll, and 2) if you are gambling for the sake of intellectual growth, you are probably smart enough to get a job that pays a lot of money, and can thus replenish your bankroll easily.

• "Marginal utility is decreasing, but in practice falls off far less than geometrically." I think this is only true if you're planning to give the money to charity or something. If you're just spending the money on yourself, then I think marginal utility is literally zero after a certain point.

• I think this doesn't take into account two things: 1. The line between "spending on yourself" and "giving to charity" is thin. 2. Signaling arms races. For example, suppose I am a billionaire. I've got my mansions in every city, my helicopter, my private island, and a vault full of gold that I can dive into, Scrooge McDuck style. What more can I want? In a word: intangibles. I've purchased joy, security, and freedom; now I want respect. I want people everywhere to love me, and my name to be synonymous with goodness and benevolence. So I give to charity. But so does everyone, right? My friend/rival Bob (also a billionaire) has just given five hundred million dollars to save cute puppies in war-torn countries. TIME magazine man of the year! Me? I'm a footnote on page 40. The more money I make, the more I can give. The more I can give (and/or spend on other things! there's more than one way to reap the intangible benefits of fame, after all), the more I gain, in real benefit, to myself, personally. My marginal utility of money, then, is far greater than zero.

• Consider the parallel to the AI whose goal is to bring you coffee, so it takes over the world to make sure no one can stop it from bringing you coffee: the fact that one might need or want more money makes it nonzero. The more serious issue here is something I call the Uncanny Valley of Money, which I hope to write about at some point soon, where you have to move from spending on yourself (at as little as 1:1, in some sense) to spending on everyone (at up to 7000000000:1, or even more if you count the future, in some sense), in order to actually make any progress even for yourself.
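To make the headline formula concrete, here is a minimal Python sketch (added here, not from the post; the function name is arbitrary) of the Kelly fraction for a simple two-outcome bet:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly bet f* = (b*p - q) / b for win probability p and net odds
    b-to-1, where q = 1 - p. Returns the fraction of bankroll to wager."""
    q = 1.0 - p
    return (b * p - q) / b

# The post's example: 60% to win at 1:1 odds -> wager 20% of the bankroll.
print(kelly_fraction(0.60, 1.0))  # 0.2
# Per the post's practical advice, (almost) never go full Kelly:
# 25%-50% of this amount is the sane range when your edge is uncertain.
```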
https://lapkb.github.io/Pmetrics/reference/NM2PM.html
NM2PM will convert NONMEM .csv data files to Pmetrics .csv data files.

## Usage

NM2PM(data, ctl)

## Arguments

data The name and extension of a NONMEM data (e.g. .csv) file in the working directory, or the full path to a file.

ctl The name and extension of a NONMEM control (e.g. .ctl) file in the working directory, or the full path to a file.

## Value

A Pmetrics-style PMmatrix data.frame.

## Details

The formats of NONMEM and Pmetrics data .csv files are similar, but not quite identical. A major difference is that the order of the columns is fixed in Pmetrics (not including covariates), while it is user-determined in NONMEM and specified in a control (.ctl) file. A list of other differences follows by data item.

• ID This item is the same in both formats and is required.
• EVID This is the same in both formats but is not required in NONMEM. Doses have an EVID of 1 and observations 0. EVID=4 (dose/time reset) is the same in Pmetrics and NONMEM. EVID=2 (other event) and EVID=3 (dose reset) are not directly supported in Pmetrics, but if included in a NONMEM file, will be converted into covariate values. Specifically, the value in the CMT variable will be the covariate value for EVID=2, while for EVID=3 the covariate will be 1 at the time of the EVID=3 entry and 0 otherwise. This allows for handling of these events in the Pmetrics model file using conditional statements.
• DATE Pmetrics does not use dates, but will convert all NONMEM dates and times into relative times.
• TIME Pmetrics uses relative times (as does NONMEM), but the NONMEM pre-processor will convert clock times to relative times, as does NM2PM.
• RATE NONMEM RATE items are converted by this function to Pmetrics DURation values.
• AMT becomes DOSE in Pmetrics.
• ADDL is supported in both formats. However, if NONMEM files contain an SS flag, it will be incorporated as ADDL=-1 according to Pmetrics style.
• II is the same in both formats.
• INPUT in Pmetrics is similar to CMT in NONMEM for doses.
• DV in NONMEM becomes OUT in Pmetrics. Ensure that the units of OUT are consistent with the units of DOSE.
• OUTEQ In Pmetrics, this is roughly equivalent to CMT in NONMEM for observation events. The lowest CMT value for any observation becomes OUTEQ=1; the next lowest becomes OUTEQ=2, etc.
• MDV Missing DV values in NONMEM become OUT=-99 in Pmetrics.
• Covariates These are copied from NONMEM to Pmetrics. Note that Pmetrics does not allow missing covariates at time 0 for each subject.
• DROP Items marked as DROP in the NONMEM control file will not be included in the Pmetrics data file.

It is strongly suggested to run PMcheck on the returned object for final adjusting.

## See also

PMcheck, PMwriteMatrix, PMwrk2csv
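A short usage sketch based only on the signature documented above (the file names are hypothetical, and the argument order of PMwriteMatrix is assumed):

```r
# Convert a NONMEM dataset/control pair to a Pmetrics data frame.
# File names below are hypothetical examples.
mdata <- NM2PM(data = "drug_nonmem.csv", ctl = "run1.ctl")

# As the Details section suggests, check the returned object first.
PMcheck(mdata)

# Write the converted data out in Pmetrics format (argument order assumed).
PMwriteMatrix(mdata, "drug_pmetrics.csv")
```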
https://gamedev.stackexchange.com/questions/15942/2d-platformer-collision-physics-problems
# 2D Platformer Collision/Physics Problems

I'm making a 2D platformer similar to Terraria, although I'm having some problems with some collision detection code. I'm using the code from the AppHub Platformer sample, tweaked so it fits my character size and tile size. I am having a few problems:

If I box myself in and then jump, instead of just canceling the jump it will move me through a wall (most likely to the right), and then when it can jump, it performs the jump. The next problem is that if I have a 1x1 gap above the character, I am freely able to jump through it. The character size is 2 blocks wide by 3 high.

I don't want to use a physics engine because I will not be using 95% of its features, and I don't really want to add it if I'm not going to use most of it. I just require basic collision detection and basic physics. Honestly, I can't see anything wrong with my code, and it's pretty much the same as the sample. Just changing the tile size must have added these bugs, and I'm not exactly sure how to fix them. This is my existing code. Basically, what I'm asking is: how could I modify my current system to eliminate these bugs, or what new system could I implement to have a better, fully functional physics system without these bugs (without implementing a whole engine)?

• Can you post your code? – user159 Aug 13 '11 at 17:02
• @Joe - The physics part? I included a link to it in the OP: pastebin.com/fT5mKkfE Aug 13 '11 at 17:11
• Unless this is reformulated as a general question about implementing 2D tile physics, I think it's too localized. – user744 Aug 14 '11 at 15:56
• Collision and basic physics are the core of any physics engine. I'd reconsider using an outside physics engine, especially with all the weird edge cases that are bound to show up in a game with customizable scenery. Writing your own physics for this complex a game really is reinventing the wheel. Aug 14 '11 at 16:36
• @Gregory Weir, I completely agree. Aug 19 '11 at 20:14

Well, there is no solution to debug, so let's play a guessing game :)

• The first problem seems to be caused by resolving the jump collision on the X axis alone, thanks to absDepthX always being bigger in the if (absDepthY < absDepthX ...) condition.
• The second problem seems to be caused by either bad intersection detection or a velocity bigger than a single block dimension; then the collision logic would fail to detect any collisions.
• Not really a big problem, just a strange thing: it looks like velocity is scaled by frame delta time twice.

I would fix those problems by creating a strong mental image of how I want my collision system to work, imagining its progression over time, drawing several images of the main hero colliding with walls and the ceiling to keep weird cases in mind, and then writing code from scratch, using another developer's code sample only for insight into some concrete feature implementation details or general inspiration. Well, it's possible to tweak the code of others, but then it is a must to understand the said code even better than its original author, or esoteric behavior will emerge sooner or later.

• Thank you, I didn't realize that the jump power would be the cause of not only the second problem, but the first too. Aug 14 '11 at 11:56

What I usually do in my code is quite simple: first of all, you must calculate how much your character has penetrated inside the other object, on each axis. int penX = (character.Left < object.Right) ? character.Left - object.Right : character.Right - object.Left; Then, one axis at a time, subtract this distance, which will be smaller than the character's delta, from the character position and check whether the collision is gone. If the collision still exists, undo the subtraction and test the next axis (or both at the same time). If, on the other hand, the collision is gone, your character is now collision-free, right beside/above/below the object it collided with. Since it is a one-step system, there is no way you'll end up appearing above the object you collided with. That is, unless the character is completely inside a wall or something like that. In this case, adding a couple more calculations to take the movement delta into account (knowing where you are coming from) would fix the problem.

Note: it only works when calculating collisions of one moving object against a static one. For collisions of two moving objects it requires a precedence system or some tweaks (e.g. distinguishing dynamic from kinematic objects, as Unity does). Otherwise, strange things happen.

P.S.: I'm sorry I am not adding the code. It's in an old project in my SVN, and finding it would take quite some time.

• Thank you, but this is a more basic version of the current system. Aug 14 '11 at 11:58

A simple solution is:

1. Calculate the new position for the character (based on velocity, button presses, platforms, etc.), but don't move them just yet.
2. Check for collisions with the character at the new position.
3. If there are no collisions at the new position, then move the character; otherwise, keep the character where they are.

Apply these steps separately to the X and Y axes so that the character can slide along the unconstrained axis. By decoupling the movement of the character from the actual position, the character will only move until it collides with an object. A Python sketch of this per-axis approach follows below.

Additional info: if the character is moving at high speed, then you may end up with a problem where the character stops a few pixels before a boundary. In that case you can apply the steps by recursively sub-dividing the distance in halves until the distance between the colliding and non-colliding positions is one pixel.
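Here is a minimal Python sketch (added here; the names Rect, intersects and try_move are mine, not from the sample) of the per-axis move-then-test approach described in the last answer:

```python
class Rect:
    """Axis-aligned bounding box."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def intersects(self, other):
        # Standard AABB overlap test.
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def try_move(player, solids, dx, dy):
    """Apply X and Y displacement separately; revert an axis on collision,
    so the character slides along the unconstrained axis."""
    for attr, delta in (("x", dx), ("y", dy)):
        old = getattr(player, attr)
        setattr(player, attr, old + delta)
        if any(player.intersects(s) for s in solids):
            setattr(player, attr, old)  # blocked on this axis: stay put
            # For high speeds, sub-divide delta in halves here instead,
            # as the answer above suggests.
```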
http://openstudy.com/updates/502df63ee4b0e5faaccebe15
• anonymous: The vector $e_v$ is the unit vector in the direction of $v$. Here $\vec v = \langle 3, -1 \rangle$, so $||v|| = \sqrt{9+1} = \sqrt{10}$, and therefore $e_v = \frac{1}{||v||} \vec v = \left< \frac{3}{\sqrt{10}}, \frac{-1}{\sqrt{10}} \right>$.
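As a quick check (added here, not in the original thread), $e_v$ indeed has unit length:

$$||e_v|| = \sqrt{\left(\tfrac{3}{\sqrt{10}}\right)^2 + \left(\tfrac{-1}{\sqrt{10}}\right)^2} = \sqrt{\tfrac{9}{10} + \tfrac{1}{10}} = 1.$$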
https://math.stackexchange.com/questions/3688709/proving-the-following-matrix-is-diagonalizable/3688766
# Proving the following matrix is diagonalizable

I'm asked to prove that a matrix $A \in M_{n}(\mathbb C)$ satisfying $A^8 + A^2 = I$ is diagonalizable. I've tried looking at the equation $x^8 + x^2 - 1 = 0$ and determining whether $M_A$ has any repeated roots, but this got me nowhere. Afterwards, I thought about trying to determine whether its Jordan form is diagonal (I know such a form exists since $\mathbb C$ is algebraically closed, so $P_A$ splits into linear factors), but still got nowhere. Is there a right approach to the question, or is there something I'm missing?

Is it not enough to check if this polynomial has a double root? Its derivative is $8x^7 + 2x = x(8x^6 + 2) = 8x(x^6 + \frac 14)$. Now $0$ is not a common root, so the double roots would satisfy $x^6 + \frac14 = 0$. Plugging this into our original polynomial, a double root would satisfy $0 = x^8 + x^2 - 1 = x^2(x^6 + \frac14) + \frac34 x^2 - 1 = \frac34 x^2 - 1$, but the roots of $x^2 - \frac43$ are not roots of $x^6 + \frac14$, since if $x^2 = \frac 43$, then $x^6 + \frac14 = \left(\frac43\right)^3 + \frac14 \neq 0$.

• I do deem this as enough. Thank you very much. – GBA May 24 '20 at 20:03

I think your first idea, to check the roots of $f(x) = x^8 + x^2 - 1$, is good. With some careful arguments you can show that $f$ has distinct roots. Let $\alpha$ be a root and let $\beta = \alpha^2$, so we have $\beta^4 + \beta - 1 = 0$. The polynomial $x^4 + x - 1$ has two distinct real roots $a < 0 < b$ and a pair of complex conjugate roots $c, d$. So there are four possible choices for $\beta$. Since $b > 0$, we have that $\sqrt{b}$ and $-\sqrt{b}$ are real roots of $f$. Since $a < 0$, we have that $i\sqrt{-a}$ and $-i\sqrt{-a}$ are also distinct roots of $f$. When we take the square roots of $c$ and $d$, we need to be sure that none of these numbers coincides with any of the roots of $f$ we've found so far. Since $c$ and $d$ are complex conjugates, let's write them as $c = re^{i\theta}$ and $d = re^{-i\theta}$ for some $r > 0$ and $\theta \in (0, \pi)$. Note that this choice of $\theta$ is possible because $c, d$ are not real. Taking their square roots we get the following roots of $f$: $\sqrt r e^{i(\theta/2)}, \sqrt r e^{-i(\theta/2)}, \sqrt r e^{i(\pi/2 + \theta/2)}, \sqrt r e^{-i(\pi/2 + \theta/2)}$. If you plot these out in the complex plane you will notice they all lie in different quadrants and none of them is purely real or imaginary. So all the roots are distinct; this shows that the minimal polynomial has distinct roots, and so $A$ is diagonalisable.

• The roots of your $f$ are not eigenvalues. May 23 '20 at 22:20
• Thanks for the comment; for some reason I had thought $f$ was the characteristic polynomial. May 23 '20 at 22:35

Look at $g(x) = x^4 + x - 1$. Observe that $g'(x) = 4x^3 + 1$ has only one real root, and deduce that $g$ has no repeated root in $\mathbb R$. Moreover, we have $g(0) = -1$ and $g(x) \to \infty$ for $x \to \infty$ and $x \to -\infty$, so $g$ has exactly two distinct real roots. The other two roots are complex, distinct, and occur as a conjugate pair. Now we have $f(x) = x^8 + x^2 - 1 = g(x^2)$, and therefore $\{\alpha \in \mathbb C \mid f(\alpha) = 0\} = \{\alpha \in \mathbb C \mid \alpha^2 \text{ is a root of } g\}$. As the roots of $g$ are distinct, so are the roots of $f$, and $f$ is a product of distinct linear factors. Now, you know your minimal polynomial $M_A$ divides $f$ in $\mathbb C[x]$, as $f$ annihilates $A$. Hence $M_A$ is a product of distinct linear factors, and hence $A$ is diagonalizable.
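As a quick computational sanity check (added here, not from the thread): $f$ has no repeated roots iff $\gcd(f, f') = 1$, which SymPy confirms.

```python
# f = x^8 + x^2 - 1 is squarefree iff gcd(f, f') = 1.
from sympy import symbols, gcd, diff

x = symbols("x")
f = x**8 + x**2 - 1
print(gcd(f, diff(f, x)))  # prints 1: f is squarefree, so A is diagonalizable
```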
https://www.physicsforums.com/threads/a-work-problem.165868/
A Work problem

1. Apr 15, 2007 – rootX

This question asks what the work done by the tension in the cable is. The book answered that it is equal to the work done by gravity. But shouldn't it be more than the work done by gravity (because there is also a horizontal displacement)? See the attached image (lastscan.jpg).

2. Apr 15, 2007 – Andrew Mason

I can't see your attachment yet. What is the direction of the force? So what is $\vec{F} \cdot \vec{d}$?

AM
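The point Andrew's hint is driving at (added here; the attached figure is unavailable): work is the dot product of force and displacement,

$$W = \vec F \cdot \vec d = |\vec F|\,|\vec d| \cos\theta,$$

so only the component of the displacement along the force contributes. A horizontal displacement at right angles to the force adds no work, which is why extra horizontal motion does not by itself make the tension's work exceed the work done against gravity.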
https://ask.libreoffice.org/en/question/90264/formula-to-value-greyed-out/
# Formula to Value greyed out

Version: 5.2.5.1

Created a formula in a cell. Did: Data > Calculate > Formula to Value. Worked fine to display the value. However, now I wish to modify the formula slightly, but when I try to reverse this to display the formula, I find it greyed out. What to do?

Any cell can contain exactly one of the following: a number, a text, or a formula. As soon as you replace a formula by its result (either number or text), the formula is no longer present. The only way to get it back is the 'Edit' > 'Undo' tool (also Ctrl+Z) of the user interface, and only as long as the undo stack was big enough and the document was not saved.

Thanks. There must be a way to retain the (an) original formula, perhaps in a "protected" cell, and have the results show in another cell? (2017-03-15 15:23:59 +0100)

You can always refer to the formula cell in another cell to display its value; e.g. with your formula in A1, just enter =A1 into any other cell. (2017-03-15 15:31:09 +0100)

By default the result of the formula should be shown anyway in the cell containing the formula. If the formula is shown there permanently (not only during editing), you must have the setting 'Tools' > 'Options' > 'LibreOffice Calc' > 'View' > 'Display' > 'Formulae' switched on (wrongly). This is an option only (rarely) useful during the design/debugging of complicated sheets. (2017-03-15 15:41:00 +0100)

Thanks to all. (2017-03-15 17:01:30 +0100)

@joea: Would you mind telling us in what way you got rid of your problem? (2017-03-15 19:17:35 +0100)
https://engineeringprep.com/problems/284
## Hyperbola

What is the eccentricity of the conic section represented by the below equation?

Hint: The equation for a hyperbola has the below format:

$$\frac{(x-h)^2}{a^2}-\frac{(y-k)^2}{b^2}=1$$

Hint 2: Eccentricity:

$$e=\sqrt{1+(b^2/a^2)}$$

A conic section is a curve obtained from the intersection of a cone's surface and a flat plane. The eccentricity, $e$, of a conic section indicates how close its shape is to a circle; the larger the eccentricity, the less the shape resembles a circle. The problem's equation is a hyperbola because it has the above format. Note the similarity to an ellipse equation, except that the two terms are subtracted instead of added.

To solve for the eccentricity:

$$e=\sqrt{1+(b^2/a^2)}$$

Thus,

$$e=\sqrt{1+(64/49)}=\sqrt{1+1.306}=\sqrt{2.306}=1.52$$

Answer: 1.52
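The page does not reproduce the equation itself, but the arithmetic above implies $a^2 = 49$ and $b^2 = 64$, i.e. a hyperbola of the form

$$\frac{(x-h)^2}{49}-\frac{(y-k)^2}{64}=1.$$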
https://codinghero.ai/how-to-make-a-bar-graph/
• Home • / • Blog • / • How to Make a Bar Graph – Definition, Advantages & Examples # How to Make a Bar Graph – Definition, Advantages & Examples October 18, 2022 This post is also available in: हिन्दी (Hindi) A graphical representation is a visual display of data and statistical results. It is more effective than presenting the data in the form of tables. There are many ways to represent the data graphically such as bar graph, double bar graph, pictograph, line graph, linear graph, histogram, pie chart, etc. A bar chart or bar graph is a chart or graph that presents the data with rectangular bars with heights or lengths proportional to the values that they represent. For example, if you want to present information on sales of cars during a period, you draw rectangles of the same width and height corresponding to the numbers representing the sales figures. Let’s understand what is a bar graph and how is it read and how to make a bar graph using examples. ## What is a Bar Graph? A bar graph or a bar chart is a pictorial representation of grouped data. A bar graph shows the data with rectangular bars of the same width and the heights proportional to the values that they represent. The bars in the graph can be shown vertically or horizontally. A bar graph is an excellent tool to represent data that are independent of one another and that do not need to be in any specific order while being represented. The bars give a visual display for comparing quantities in different categories. The bar graphs have two lines, horizontal and vertical axis, also called the $x$ and $y$-axis along with the title, labels, and scale range. Let’s consider an example of how does a bar graph looks like. The bar graph shown above represents the sales of cars in a town during the period from 2000 to 2004. You can see the bar graph is drawn between two axes. The horizontal axis(or $x$-axis) shows the years and the vertical axis(or $y$-axis) shows the number of cars sold. The rectangular bars drawn of the same width represent the sales of cars during the year corresponding to the bars. The length (or height) of each of the bars corresponds to the value(or the number of cars sold) during a year. Let’s consider another example of the number of games played by tennis players in a particular year. This bar graph also represents the same information related to sales of cars in a town during the period from 2000 to 2004. This bar graph is also drawn between two axes. The horizontal axis(or $x$-axis) shows the number of cars sold and the vertical axis(or $y$-axis) shows the years. The only difference between these two bar graphs is that the first one is a vertical bar graph and the second one is a horizontal bar graph. ## Types of Bar Graphs Bar Graphs are broadly classified into two types depending on the orientation of the bars(rectangles): • Vertical Bar Graph • Horizontal Bar Graph The bars in bar graphs can be plotted horizontally or vertically, but the most commonly used bar graph is the vertical bar graph. Either of these two categories can be of the following two types. • Grouped Bar Graph • Stacked Bar Graph ### Vertical Bar Graphs Vertical bar graphs are bar graphs where data is represented vertically in a graph or chart with the help of rectangular bars that show the measure of data. The rectangular bars are vertically drawn on the $x$-axis, and the $y$-axis shows the value of the height of the rectangular bars which represents the quantity of the variables written on the $x$-axis. 
### Horizontal Bar Graphs Horizontal bar graphs are bar graphs where data is represented horizontally in a graph or chart with the help of rectangular bars that show the measure of data. In this type, the variables or the categories of the data have to be written and then the rectangular bars are horizontally drawn on the y-axis and the x-axis shows the length of the bars equal to the values of different variables present in the data. ### Stacked Bar Graph The stacked bar graph(or composite bar graph) divides the whole bar into different parts. In this, each part of a bar is represented using different colors to easily identify the different categories. It requires specific labeling to indicate the different parts of the bar. Thus, in a stacked bar graph every rectangular bar represents the whole, and each segment in the rectangular bar shows the different parts of the whole. ### Grouped Bar Graph The grouped bar graph(or clustered bar graph) is used to show the discrete value for two or more categorical data. In this, rectangular bars are grouped by position for levels of one categorical variable, with the same colors showing the secondary category level within each group. It can be shown both vertically and horizontally. ## Properties of Bar Graph These are some properties that make a bar graph unique and different from other types of graphs. • All rectangular bars should have equal width and should have equal space between them. • The rectangular bars can be drawn horizontally or vertically. • The height of the rectangular bar is equivalent to the data they represent. • The rectangular bars must be on a common base. These are the advantages of bar graphs over the other types of graphs. • Display relative numbers or proportions of multiple categories • Summarize a large amount of data in a visual, easily interpretable form • It makes trends easier to highlight than tables do • Bar graphs help in studying patterns over a long period of time • It is used to compare data sets. data sets are independent of each other • It is the most widely used method of data representation. therefore, it is used in various fields • Bar graphs estimates can be made quickly and accurately • It permits visual guidance on the accuracy and reasonableness of calculations • Bar graphs are very efficient in comparing two or three data sets ## How to Read a Bar Graph The different steps to read a bar graph are given below: Step 1: Check whether the given bar graph is a horizontal bar graph or a vertical bar graph. Step 2: In the case of a vertical bar graph, the categories are present on the horizontal axis (or x-axis) and the data values are represented by the vertical bars(or rectangles). In the case of a horizontal bar graph, the categories are present on the vertical axis (or y-axis) and the data values are represented by the horizontal bars(or rectangles). Step 3: For each category, In the case of a vertical bar graph note down the point on the $y$-axis corresponding to the height of a vertical bar In the case of a horizontal bar graph note down the point on the $x$-axis corresponding to the length of a horizontal bar ### Examples Let’s consider an example to understand how a bar graph is read. Ex 1: Observe the bar graph and answer the following questions • In which year maximum number of trees were planted by the eco-club? • In which year the number of trees planted by the eco-club was minimum? • In which two years, the same number of trees were planted. How many total trees were planted in those two years? 
• How many total trees were planted by the eco-club between 2005 and 2010? The longest bar(or rectangle) corresponds to the year 2008. Therefore, the maximum number of trees planted in the year 2008, and was 300. The smallest bar(or rectangle) corresponds to the year 2009. Therefore, the minimum number of trees planted in the year 2009, and was 100. The height of the bars corresponding to the years 2005 and 2010 is the same and is equal to 150. Therefore, between 2005 and 2010, each year 150 trees were planted each. The total number of trees planted in these two years was 150 + 150 = 300. The number of trees planted each year is During 2005, the number of trees planted = 150 During 2006, the number of trees planted = 250 During 2007, the number of trees planted = 200 During 2008, the number of trees planted = 300 During 2009, the number of trees planted = 100 During 2010, the number of trees planted = 150 Total number of trees planted by the eco-club between 2005 and 2010 were 150 + 250 + 200 + 300 + 100 + 150 = 1150 ## How to Make a Bar Graph The different steps to make a bar graph are given below: Step 1: First, decide the title of the bar graph. Step 2: Draw the horizontal axis and vertical axis. (For example, Favourite Fruit) Step 3: Now, label the horizontal axis. Step 4: Write the names on the horizontal axis, such as Apple, Mango, Pineapple, Orange, and Grapes. Step 5: Now, label the vertical axis. (For example, the Number of Students) Step 6: Finalise the scale range for the given data. Step 7: Finally, draw the bar graph that should represent each category of the fruit with their respective numbers. ### Examples Let’s consider an example to understand how a bar graph is drawn for the given data. Ex 1: A group of students was asked about their favourite activities. The following table shows the data regarding the hobbies of the students. Draw a bar graph to represent the given data. The first step is deciding the title of the bar graph. Let’s name our bar graph “Favourite Activities of Students”. Next, draw the axes. The horizontal axis represents the activity and the vertical axis represents the number of students. To choose an appropriate scale, note down the minimum and maximum values in the data set. The minimum value (Dance – Number of students = 5) The maximum value (Drawing – Number of students = 30) Since the maximum value is 30 and the minimum 5, so let’s mark the maximum point on the vertical axis as 35 starting from 0 with equal intervals of 5. This will fit all the values properly in the bar graph. Finally, draw the bar graph that should represent each category of the activity with their respective numbers. ## Practice Problems In the library of a school, there are books on the following subjects in given numbers: English — 150, History — 300, Math — 500, Science — 325. Draw a bar graph for the given data. Draw a bar graph for the given data The number of students studying in each of the five classes of a school is given below. Draw a bar graph to represent the numerical data. From a pond, the following numbers of fish were caught on different days. Draw the bar graph of it. ## FAQs ### What is a bar graph explain with an example. A bar graph can be defined as a graphical representation of data, quantities, or numbers using bars or strips. They are used to compare and contrast different types of data, frequencies, or other measures of distinct categories of data. 
For example, the graph in the reading example above shows the number of trees planted by the eco-club during the years 2005 – 2010.

### What is the difference between a bar graph and a histogram?

The major difference between a bar chart and a histogram is that the bars of a bar chart are separated by spaces, whereas in a histogram the bars are adjacent to each other. In statistics, bar charts and histograms are both important for presenting large amounts of data.

### How do you represent a bar graph?

The rectangular bars in a bar graph can be drawn horizontally or vertically. In a bar graph, the horizontal (or vertical) rectangular bars should have equal width and equal space between them. The height of each rectangular bar is proportional to the data it represents.

## Conclusion

A bar graph shows data with rectangular bars of the same width and with heights proportional to the values that they represent. The bars in the graph can be drawn vertically or horizontally. Bar graphs are very helpful in displaying relative numbers or proportions of multiple categories.
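The construction steps above can also be followed programmatically. Below is a minimal sketch in Python with matplotlib; Dance (5) and Drawing (30) are the values given in the worked example, while the remaining activities and counts are hypothetical placeholders, since the original data table did not survive.

```python
import matplotlib.pyplot as plt

# Step 1: title; Steps 2-5: axes and labels; Step 6: scale; Step 7: draw bars.
activities = ["Dance", "Drawing", "Singing", "Cricket", "Reading"]  # categories
students = [5, 30, 15, 25, 10]  # Dance and Drawing from the example; rest assumed

fig, ax = plt.subplots()
ax.bar(activities, students, width=0.5)           # equal-width vertical bars
ax.set_title("Favourite Activities of Students")  # Step 1
ax.set_xlabel("Activity")                         # Step 3
ax.set_ylabel("Number of Students")               # Step 5
ax.set_yticks(range(0, 36, 5))                    # Step 6: 0 to 35 in steps of 5
plt.show()
```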
2023-02-01 12:00:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6030889749526978, "perplexity": 627.2154690784155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499934.48/warc/CC-MAIN-20230201112816-20230201142816-00462.warc.gz"}
https://web2.0calc.com/questions/using-the-lcd-to-eliminate-the-fractions-from-this-equation
# Using the LCD to eliminate the fractions from this equation?

$${x - 1 \over 3} - {2x - 5 \over 4} = {5 \over 12} + {x \over 6}$$

I'm extremely lost... I'm not really sure how to do this. I know that the LCD is (3)(4)(6)... right? Am I just basically multiplying each fraction by the LCD? The text got this: $${4(x - 1) - 3(2x - 5) = 5 + 2x}$$ How do I get this answer? Guest Apr 3, 2018

#1 The LCD is actually just 3·4 = 12, because 12 is already divisible by 6 (12/6 = 2). When you multiply each fraction by it, you get the answer that the text gave. Mathhemathh Apr 3, 2018

#2 $$\frac{x-1}{3}-\frac{2x-5}{4}=\frac{5}{12}+\frac{x}{6}\\ \text{Multiply BOTH sides by the lowest common denominator which is 12}\\ 12\left[\frac{x-1}{3}-\frac{2x-5}{4}\right]=12\left[\frac{5}{12}+\frac{x}{6}\right]\\ \frac{12(x-1)}{3}-\frac{12(2x-5)}{4}=\frac{12*5}{12}+\frac{12x}{6}\\ 4(x-1)-3(2x-5)=5+2x\\ 4x-4-6x+15=5+2x\\ etc$$ Melody Apr 3, 2018
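Continuing Melody's working past the "etc", the remaining steps are:

$$4x-4-6x+15=5+2x\\ -2x+11=5+2x\\ 11-5=2x+2x\\ 6=4x\\ x=\frac{6}{4}=\frac{3}{2}$$

As a check, substituting $x=\frac{3}{2}$ into the original equation gives $\frac{2}{3}$ on both sides.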
2018-04-22 14:06:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9355947375297546, "perplexity": 2698.86623802612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945604.91/warc/CC-MAIN-20180422135010-20180422155010-00367.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-2-section-2-2-the-slope-of-a-line-2-2-exercises-page-159/74
## Intermediate Algebra (12th Edition) $\bf{\text{Solution Outline:}}$ To graph the line with the given characteristics: $\begin{array}{l}\require{cancel} \text{Through } (5,3) \\ m=0 ,\end{array}$ draw a horizontal line passing through the given point. $\bf{\text{Solution Details:}}$ Since lines with a slope of $0$ are horizontal lines, the graph is a horizontal line passing through the point $(5,3)$; that is, the line $y=3$.
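A minimal matplotlib sketch reproduces this graph:

```python
import matplotlib.pyplot as plt

# Slope m = 0 means a horizontal line; through (5, 3) its equation is y = 3.
fig, ax = plt.subplots()
ax.axhline(y=3, color="tab:blue")  # the horizontal line y = 3
ax.plot(5, 3, "ko")                # the given point (5, 3)
ax.annotate("(5, 3)", (5, 3), textcoords="offset points", xytext=(5, 5))
ax.set_xlim(0, 10)
ax.set_ylim(0, 6)
plt.show()
```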
2018-07-19 15:50:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992019534111023, "perplexity": 596.2996733470605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591140.45/warc/CC-MAIN-20180719144851-20180719164851-00522.warc.gz"}
https://deepai.org/publication/an-equilibrated-a-posteriori-error-estimator-for-arbitrary-order-nedelec-elements-for-magnetostatic-problems
# An equilibrated a posteriori error estimator for arbitrary-order Nédélec elements for magnetostatic problems

We present a novel a posteriori error estimator for Nédélec elements for magnetostatic problems that is constant-free, i.e. it provides an upper bound on the error that does not involve a generic constant. The estimator is based on equilibration of the magnetic field and only involves small local problems that can be solved in parallel. Such an error estimator is already available for the lowest-degree Nédélec element [D. Braess, J. Schöberl, Equilibrated residual error estimator for edge elements, Math. Comp. 77 (2008)] and requires solving local problems on vertex patches. The novelty of our estimator is that it can be applied to Nédélec elements of arbitrary degree. Furthermore, our estimator does not require solving problems on vertex patches, but instead requires solving problems on only single elements, single faces, and very small sets of nodes. We prove reliability and efficiency of the estimator and present several numerical examples that confirm this.

## 1. Introduction

We consider an a posteriori error estimator for finite element methods for solving equations of the form $\nabla\times(\mu^{-1}\nabla\times u) = j$. These equations are related to magnetostatics but also appear in eddy current models for non-conductive media. The first a posteriori error estimator in this context was introduced and analysed in [4]. It is a residual-type estimator and provides bounds of the form

$$c_0\,\mathrm{estimator} \le \mathrm{error} \le c_1\,\mathrm{estimator},$$

up to some higher-order data oscillation terms, where $c_0$ and $c_1$ are positive constants that do not depend on the mesh resolution. Similar bounds can be obtained by hierarchical error estimators; see, e.g., [5], under the assumption of a saturation condition, and by Zienkiewicz–Zhu-type error estimators; see, e.g., [17]. A drawback of these estimators is that the constants $c_0$ and $c_1$ are usually unknown, resulting in significant overestimation or underestimation of the real error.
Equilibration-based error estimators can circumvent this problem. Often attributed to Prager and Synge [18], these estimators have become a major research topic; for a recent overview, see, for example, [13] and the references therein. An equilibration-based error estimator was introduced for magnetostatics in [7] and provides bounds of the form

$$c_0\,\mathrm{estimator} \le \mathrm{error} \le \mathrm{estimator}$$

up to some higher-order data oscillation terms. In other words, it provides a constant-free upper bound on the error. A different equilibration-based error estimator for magnetostatics was introduced in [19] and, for an eddy current problem, in [11, 10]. Constant-free upper bounds are also obtained by the functional estimate in [16], when selecting a proper function in their estimator, and by the recovery-type error estimator in [8], in case the equations contain an additional zeroth-order term. A drawback of the estimators in [19, 11, 10, 16] is that they require solving a global problem. The estimator in [7], on the other hand, only involves solving local problems related to vertex patches. However, the latter estimator is defined for Nédélec elements of the lowest degree only. In this paper, we present a new equilibration-based constant-free error estimator that can be applied to Nédélec elements of arbitrary degree. Furthermore, our estimator involves solving problems on only single elements, single faces, and very small sets of nodes.

The paper is constructed as follows: We firstly introduce a finite element method for solving magnetostatic problems in Section 2. We then derive our error estimator step by step in Section 3, with a summary given in Section 3.3, and prove its reliability and efficiency in Section 3.4. Numerical examples confirming the reliability and efficiency of our estimator are presented in Section 4, and an overall summary is given in Section 5.

## 2. A finite element method for magnetostatic problems

Let $\Omega \subset \mathbb{R}^3$ be an open, bounded, simply connected, polyhedral domain with a connected Lipschitz boundary $\partial\Omega$. In the case of a linear, isotropic medium and a perfectly conducting boundary, the static magnetic field $H$ satisfies the equations

$$\nabla\times H = j \quad\text{in } \Omega,\qquad \nabla\cdot(\mu H) = 0 \quad\text{in } \Omega,\qquad \hat n\cdot(\mu H) = 0 \quad\text{on } \partial\Omega,$$

where $\nabla$ is the vector of differential operators $(\partial_x, \partial_y, \partial_z)$, $\times$ and $\cdot$ denote the outer and inner product, respectively (therefore, $\nabla\times$ and $\nabla\cdot$ are the curl and divergence operators, respectively), $\hat n$ denotes the outward-pointing unit normal vector, $\mu$, with $0 < \mu_{\min} \le \mu \le \mu_{\max}$ for some positive constants $\mu_{\min}$ and $\mu_{\max}$, is a scalar magnetic permeability, and $j$ is a given divergence-free current density. The first equality is known as Ampère's law and the second as Gauss's law for magnetism. These equations can be solved by writing $H = \mu^{-1}\nabla\times u$, where $u$ is a vector potential, and by solving the following problem for $u$:

$$\begin{aligned} \nabla\times(\mu^{-1}\nabla\times u) &= j &&\text{in } \Omega, &&\text{(1a)}\\ \nabla\cdot u &= 0 &&\text{in } \Omega, &&\text{(1b)}\\ \hat n\times u &= 0 &&\text{on } \partial\Omega. &&\text{(1c)} \end{aligned}$$

The second condition is only added to ensure uniqueness of $u$ and is known as Coulomb's gauge. Now, for any domain $D$, let $L^2(D)$ denote the standard Lebesgue space of square-integrable (vector-valued) functions equipped with norm $\|\cdot\|_D$ and inner product $(\cdot,\cdot)_D$, and define the following Sobolev spaces:

$$\begin{aligned} H^1(\Omega) &:= \{\phi\in L^2(\Omega) \mid \nabla\phi\in L^2(\Omega)^3\},\\ H^1_0(\Omega) &:= \{\phi\in H^1(\Omega) \mid \phi = 0 \text{ on } \partial\Omega\},\\ H(\mathrm{curl};\Omega) &:= \{u\in L^2(\Omega)^3 \mid \nabla\times u\in L^2(\Omega)^3\},\\ H_0(\mathrm{curl};\Omega) &:= \{u\in H(\mathrm{curl};\Omega) \mid \hat n\times u = 0 \text{ on } \partial\Omega\},\\ H(\mathrm{div};\Omega) &:= \{u\in L^2(\Omega)^3 \mid \nabla\cdot u\in L^2(\Omega)\},\\ H(\mathrm{div}^0;\Omega) &:= \{u\in H(\mathrm{div};\Omega) \mid \nabla\cdot u = 0\}. \end{aligned}$$

The weak formulation of problem (1) is finding $u \in H_0(\mathrm{curl};\Omega) \cap H(\mathrm{div}^0;\Omega)$ such that

$$(\mu^{-1}\nabla\times u, \nabla\times w)_\Omega = (j, w)_\Omega \qquad \forall w\in H_0(\mathrm{curl};\Omega), \tag{2}$$

which is a well-posed problem [15, Theorem 5.9]. The solution of the weak formulation can be approximated using a finite element method.
Let be a tetrahedron and define to be the space of polynomials on of degree or less. Also, define the Nédélec space of the first kind and the Raviart-Thomas space by Rk(T) Dk(T) Finally, let denote a tessellation of into tetrahedra with a diameter smaller than or equal to , let , , and denote the discontinuous spaces given by P−1k(Th) :={ϕ∈L2(Ω)|ϕ|T∈Pk(T) for all% T∈Th}, R−1k(Th) :={u∈L2(Ω)3|u|T∈Rk(T) for all T∈Th}, D−1k(Th) :={u∈L2(Ω)3|u|T∈Dk(T) for all T∈Th}, and define Pk(Th) :=P−1k(Th)∩H1(Ω), Pk,0(Th) :=P−1k(Th)∩H10(Ω), Rk(Th) :=R−1k(Th)∩H(curl;Ω), Rk,0(Th) :=R−1k(Th)∩H0(curl;Ω), Dk(Th) :=D−1k(Th)∩H(div;Ω). We define the finite element approximation for the magnetic vector potential as the vector field that solves (3a) (μ−1∇×uh,∇×w)Ω =(j,w)Ω ∀w∈Rk,0(Th), (3b) (uh,∇ψ)Ω =0, ∀ψ∈Pk,0(Th). The approximation of the magnetic field is then given by Hh:=μ−1∇×uh, which converges quasi-optimally as the mesh width tends to zero [15, Theorem 5.10]. In the next section, we show how we can obtain a reliable and efficient estimator for . ## 3. An equilibration-based a posteriori error estimator We follow [7] and present an a posteriori error estimator that is based on the following result. ###### Theorem 3.1 ([7, Thm. 10]). Let be the solution to (2), let be the solution of (3), and set and . If satisfies the equilibrium condition (4) ∇×~H =j, then (5) ∥μ1/2(H−Hh)∥Ω ≤∥μ1/2(~H−Hh)∥Ω. ###### Proof. The result follows from the orthogonality of and : =(μ1/2(~H−H),μ−1/2∇×(u−uh))Ω =(~H−H,∇×(u−uh))Ω =(∇×(~H−H),u−uh)Ω =(j−j,u−uh)Ω =0 and Pythagoras’s theorem (6) ∥μ1/2(~H−Hh)∥2Ω =∥μ1/2(~H−H)∥2Ω+∥μ1/2(H−Hh)∥2Ω. ###### Remark 3.2. Equation (6) is also known as a Prager–Synge type equation and obtaining an error estimator from such an equation is also known as the hypercircle method. Furthermore, equation (4) is known as the equilibrium condition and using the numerical approximation to obtain a solution to this equation is called equilibration of . ###### Corollary 3.3. Let be the solution to (2), let be the solution of (3), set and , and let be the discrete current distribution. If satisfies the (residual) equilibrium condition (7) ∇×~HΔ =j−jh, which is an identity of distributions, then (8) ∥μ1/2(H−Hh)∥Ω ≤∥μ1/2~HΔ∥Ω. ###### Proof. Since (7) is an identity of distributions, we can equivalently write ⟨∇×~HΔ,w⟩ =⟨j−jh,w⟩ ∀w∈C∞0(Ω)3, where denotes the application of a distribution to a function in . Now, set . Using the definition , we obtain ⟨∇×~H,w⟩ =⟨∇×~HΔ+∇×Hh,w⟩ =⟨j−jh+∇×Hh,w⟩ =⟨j,w⟩ ∀w∈C∞0(Ω)3. From this, it follows that , so is in and satisfies equilibrium condition (4). Inequality (8) then follows from Theorem 3.1. ∎ From Corollary 3.3, it follows that a constant-free upper bound on the error can be obtained from any field that satisfies (7). An error estimator of this type was first introduced in [7], where it is referred to as an equilibrated residual error estimator. There, is decomposed into a sum of local divergence-free current distributions that have support on only a single vertex patch. The error estimator is then obtained by solving local problems of the form for each vertex patch and by then taking the sum of all local fields . It is, however, not straightforward to decompose into local divergence-free current distributions. An explicit expression for is given in [7] for the lowest-degree Nédélec element, but this expression cannot be readily extended to basis functions of arbitrary degree. 
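As a concrete illustration of the finite element method of Section 2, the discrete problem (3) can be assembled and solved as in the following minimal sketch. It uses legacy FEniCS/DOLFIN (an assumption; the paper does not prescribe any software), imposes the divergence constraint (3b) through a scalar Lagrange multiplier (a standard reformulation of the constrained system), and uses toy placeholders for the mesh, the permeability, and the current density.

```python
from dolfin import *  # legacy FEniCS/DOLFIN, assumed available

mesh = UnitCubeMesh(8, 8, 8)            # hypothetical domain and mesh
k = 1                                   # Nedelec degree (any k >= 1 works)
cell = mesh.ufl_cell()
Ned = FiniteElement("N1curl", cell, k)  # R_k(T_h), Nedelec of the first kind
Lag = FiniteElement("Lagrange", cell, k)
W = FunctionSpace(mesh, MixedElement([Ned, Lag]))

(u, p) = TrialFunctions(W)              # vector potential, Lagrange multiplier
(w, q) = TestFunctions(W)

mu = Constant(1.0)                      # piecewise-constant permeability (toy)
j = Constant((0.0, 0.0, 1.0))           # divergence-free current density (toy)

# (3a) with a multiplier term, plus the weak divergence constraint (3b)
a = (1.0 / mu) * inner(curl(u), curl(w)) * dx \
    + inner(grad(p), w) * dx + inner(u, grad(q)) * dx
b = inner(j, w) * dx

bcs = [DirichletBC(W.sub(0), Constant((0.0, 0.0, 0.0)), "on_boundary"),  # n x u = 0
       DirichletBC(W.sub(1), Constant(0.0), "on_boundary")]

sol = Function(W)
solve(a == b, sol, bcs)
u_h, p_h = sol.split()
H_h = (1.0 / mu) * curl(u_h)            # UFL expression for the discrete field H_h
```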
Here, we instead present an error estimator based on equilibration condition (7) that can be applied to elements of arbitrary degree. Furthermore, instead of solving local problems on vertex patches, our estimator requires solving problems on only single elements, single faces, and small sets of nodes. The assumptions and a step-by-step derivation of the estimator are given in Sections 3.1 and 3.2 below, a brief summary is given in Section 3.3, and reliability and efficiency are proven in Section 3.4. ### 3.1. Assumptions In order to compute the error estimator, we use polynomial function spaces of degree , where denotes the degree of the finite element approximation , and assume that: • The magnetic permeability is piecewise constant. In particular, the domain can be partitioned into a finite set of polyhedral subdomains such that is constant on each subdomain . Furthermore, the mesh is assumed to be aligned with this partition so that is constant within each element. • The current density is in . Although assumption A2 does not hold in general, we can always replace by a suitable projection by taking, for example, as the standard quasi-interpolation operator of the space. The error is in that case bounded by ∥μ1/2(H−Hh)∥2Ω ≤∥μ1/2~HΔ∥2Ω+C∥j−πhj∥Ω, where is some positive constant that does not depend on the mesh width . If is sufficiently smooth, then the data oscillation term is of order , so if , then this term converges with a higher rate than and we may assume that it is negligible. ### 3.2. Derivation of the error estimator Before we derive the error estimator, we first write in terms of element and face distributions. For every , we can write ⟨jh,w⟩ =⟨∇×Hh,w⟩ =(Hh,∇×w)Ω =∑T∈Th[(∇×Hh,w)T+(Hh,^nT×w)∂T] =∑T∈Th[(∇×Hh,w)T+(Hh,^nT×w)∂T∖∂Ω] =∑T∈Th[(∇×Hh,w)T+(−^nT×Hh,w)∂T∖∂Ω] =∑T∈Th(∇×Hh,w)T+∑f∈FIh(−[[Hh]]t,w)f =:∑T∈Th(jh,T,w)T+∑f∈FIh(jh,f,w)f, where denotes the application of a distribution to a function, denotes the set of all internal faces, denotes the normal unit vector to pointing outward of , and denotes the tangential jump operator, with , , and and the two adjacent elements of . Since and is piecewise constant, we have that . Therefore, if and if , and , where is a normal unit vector of and is given by Dk(f):= {u∈L2(f)|u(x)=^nf×(v(x)+xw(x)) for some v∈Pk−1(f)3,w∈Pk−1(f)}. In other words, can be represented by functions on the elements and face distributions on the internal faces. We define and can write (9) ⟨jΔ,w⟩ =∑T∈Th(jΔT,w)T+∑f∈FIh(jΔf,w)f ∀w∈C∞0(Ω)3∪Rk,0(Th), where (10) jΔT:=j|T−jh,T=j|T−∇×Hh|TandjΔf:=−jh,f=[[Hh]]t|f. We look for a solution of (7) of the form , with and and where denotes the element-wise gradient operator. The term will take care of the element distributions of and the term will take care of the remaining face distributions. In the following, we firstly describe how to compute in Section 3.2.1 and characterize the remainder in Section 3.2.2. We then describe how to compute the jumps of on internal faces in Section 3.2.3 and explain how to reconstruct from its jumps in Section 3.2.4. #### 3.2.1. Computation of ^HΔ We compute by solving the local problems (11a) ∇×^HΔ|T =jΔT, (11b) (μ^HΔ,∇ψ)T =0 ∀ψ∈Pk′(T), for each element . This problem is well-defined and has a unique solution due to the discrete exact sequence property Pk′(T)\xlongrightarrow∇Rk′(T)\xlongrightarrow∇× Dk′(T)\xlongrightarrow∇⋅ Pk′−1(T), and since and . This last property follows from the fact that due to assumption A2 and due to assumption A1. #### 3.2.2. 
Representation of the remainder jΔ−∇×^HΔ Set ^\jΔ :=jΔ−∇×^HΔ. For every , we can write ⟨^\jΔ,w⟩ =⟨jΔ−∇×^HΔ,w⟩ =⟨jΔ,w⟩−⟨∇×^HΔ,w⟩ =⟨jΔ,w⟩−(^HΔ,∇×w)Ω ∑T∈Th(jΔT,w)T+∑f∈FIh(jΔf,w)f−∑T∈Th[(^HΔ,^nT×w)∂T+(∇×^HΔ,w)T] =∑T∈Th[(jΔT−∇×^HΔ,w)T−(^HΔ,^nT×w)∂T]+∑f∈FIh(jΔf,w)f ∑T∈Th[(0,w)T−(^HΔ,^nT×w)∂T∖∂Ω]+∑f∈FIh([[Hh]]t,w)f =∑T∈Th(^nT×^HΔ,w)∂T∖∂Ω+∑f∈FIh([[Hh]]t,w)f =∑f∈FIh([[^HΔ]]t,w)f+([[Hh]]t,w)f =∑f∈FIh([[Hh+^HΔ]]t,w)f, so (12) ⟨^\jΔ,w⟩ =∑f∈FIh([[Hh+^HΔ]]t,w)f=:∑f∈FIh(^\jΔf,w)f for all . This means that can be represented by only face distributions, and since and , we have that and therefore . #### 3.2.3. Computation of the jumps of ϕ on internal faces It now remains to find a such that ∇×∇hϕ =^\jΔ. For every , we can write ⟨∇×∇hϕ,w⟩ =(∇hϕ,∇×w)Ω =∑T∈Th[(∇×∇ϕ,w)T+(∇ϕ,^nT×w)∂T] =∑T∈Th[(0,w)T+(∇ϕ,^nT×w)∂T∖∂Ω] =∑T∈Th−(^nT×∇ϕ,w)∂T∖∂Ω Therefore, we need to find a such that −[[∇ϕ]]t|f =^\jΔf ∀f∈FIh. To do this, we define, for each internal face , the scalar jump with , two orthogonal unit tangent vectors and such that , differential operators , and the gradient operator restricted to the face: . We can then write −[[∇ϕ]]t|f =−(^n+×∇ϕ++^n−×∇ϕ−)|f =−(^nf×∇ϕ+−^nf×∇ϕ−)|f =−(^nf×∇fϕ+−^nf×∇fϕ−)|f =−^nf×∇f[[ϕ]]f for all . We therefore introduce an auxiliary variable and solve (13a) −^nf×∇fλf =^\jΔf, (13b) (λf,1)f =0, for each , where (13b) is only added to ensure a unique solution. In the next section, we will show the existence of and how to construct a such that for all . Now, we will prove that problem (13) uniquely defines . We start by showing that (13) corresponds to a 2D curl problem on a face. To see this, note that . If we take the inner product of (13a) with and , we obtain (14a) ∂t2λf =^\jΔf,t1, (14b) −∂t1λf =^\jΔf,t2, where , which is equivalent to a 2D curl problem on . To show that (13) is well-posed, we use the discrete exact sequence in 2D: R\xlongrightarrow⊂ Pk′(f)\xlongrightarrowcurlfDk′(f)\xlongrightarrow∇⋅ Pk′−1(f), where . Since , it suffices to show that . To prove this, we use that, for every , ⟨^\jΔ,∇ψ⟩ =⟨jΔ−∇×^HΔ,∇ψ⟩ =⟨∇×(H−Hh−^HΔ),∇ψ⟩ =(H−Hh−^HΔ,∇×∇ψ)Ω =(H−Hh−^HΔ,0)Ω =0. Then, for every , we can write 0 =⟨^\jΔ,∇ψ⟩ =∑f∈FIh(^\jΔf,∇fψ)f =∑f∈FIh[(^n∂f⋅^\jΔf,ψ)∂f−(∇f⋅^\jΔf,ψ)f] =∑f∈FIh[(^n∂f⋅^\jΔf,ψ)∂f∖∂Ω−(∇f⋅^\jΔf,ψ)f] =∑f∈FIh∑e:e⊂∂f∖∂Ω(^ne,f⋅^\jΔf,ψ)e−∑f∈FIh(∇f⋅^\jΔf,ψ)f =∑e∈EIh∑f:∂f⊃e(^ne,f⋅^\jΔf,ψ)e−∑f∈FIh(∇f⋅^\jΔf,ψ)f =∑e∈EIh⎛⎝
2022-05-25 10:36:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9130159616470337, "perplexity": 870.365718703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00076.warc.gz"}
https://www.biostars.org/p/183338/
Why are GATK and bcftools SNP calls different? 5.4 years ago fernardo ▴ 160

Hi all,

I am trying to find SNPs/indels in a dataset using both "GATK" and "bcftools call". I find two big differences in the VCF files generated with these tools and I appreciate your help and experience:

1- The "GATK" result has a higher "DP" value than "bcftools", any clue? e.g.

GATK.vcf ==> NC_000 747 . A G 4054 . AC=1;AF=1.00;AN=1;DP=113;FS=0.000;MLEAC=1;MLEAF=1.00;MQ=59.98;MQ0=0;QD=28.10;SOR=0.769 GT:AD:DP:GQ:PL 1:0,108:108:99:4084,0

bcftools.vcf ==> NC_000913.3 747 . A G 228 . DP=76;VDB=0.987864;SGB=-0.693147;MQSB=1;MQ0F=0;AC=2;AN=2;DP4=0,0,20,56;MQ=59 GT:PL 1/1:255,229,0

2- Why does GATK take a much longer time (30-60 mins difference) compared to bcftools? Is it normal, or is something wrong with my usage?

Pipelines I used:

GATK pipeline ==> java -jar ./GATK3/GenomeAnalysisTK.jar -T HaplotypeCaller -R reference.fasta -I sample.bam -stand_call_conf 30 -stand_emit_conf 10 -o GATK.vcf

bcftools pipeline ==> samtools mpileup -uf reference.fasta sample.bam | ./bcftools/bcftools call -m -v -o bcftools.vcf

SNP next-gen GATK bcftools variant calling • 5.9k views

• Yes, HaplotypeCaller is quite slow. You could try to use a target file (in case you're not doing whole-genome sequencing, depending on the project): use -L targets.bed -ip 50 (ip: interval padding).

• Well, I am working on a bacterial genome, which is why dbSNP doesn't work for it. And I read that GATK is not well suited for bacterial genomes. But still, it is strange to me that GATK shows more read depth (DP) than the other tool, bcftools.

• I agree with you, it is weird that they have different DPs for the same SNP. Did you use BWA MEM for both alignments, or different aligners?

• Yes, I used BWA MEM for both alignments. Do you see something wrong? I also updated the post to add the whole two lines of the VCF files in case it is more informative. One more thing that seems strange: for a diploid, the GT has two values, like (0/1, 0/0), but for a haploid the GT has one value. My data is bacterial, which is haploid, and that holds for GATK, which reports one value, while in bcftools it is two values! That might tell us something too.

4 • 5.4 years ago kirannbishwa01 ★ 1.3k

HaplotypeCaller from GATK is a haplotype-based caller. It takes the aligned *.bam file and then realigns it while calling the polymorphisms at each locus simultaneously; by default it calls a maximum of 6 alternate alleles at any locus and outputs the 2 best ones. Bcftools isn't a haplotype-based caller; it takes the aligned bam file and outputs polymorphisms without doing any realignment. If you happen to use UnifiedGenotyper from GATK, it's way faster since it's not a haplotype-based caller. For the samples I have worked with, HaplotypeCaller took almost 3 days and UnifiedGenotyper would take on average 30-50 minutes to complete the jobs. But polymorphism data from HaplotypeCaller are considered to be of greater confidence.

Thanks,

• Thanks. Yes, HaplotypeCaller takes much longer, but based on the GATK documentation it is the newer tool and the recommended one. And I think it can also be used for both diploid and non-diploid organisms (data). Which one do you personally use, GATK or bcftools?
2 • 5.4 years ago kirannbishwa01 ★ 1.3k

GATK reports DP values twice: one under INFO, which generally is the filtered DP value, and one under FORMAT, which generally is the unfiltered DP value for that allele. You get different DPs due to the different statistical methods employed by different variant callers. They may also have different filtering parameters by default. If you ran the variant calling using default parameters, please check what the default filtering parameters are for GATK and bcftools to understand your DP values more clearly.

• Thanks again. The answer makes sense. As I also asked above: if one needs to use the DP value for a type of research, and, as we discussed, there are differences between the DP values of the two tools, which one should be used in the research? E.g. for a SNP the DP from GATK is 125 but from bcftools it is 97. Just in case you have an idea.

2 • 5.3 years ago fernardo ▴ 160

Solved!! After researching around this case, shortly, I came up with the solution as follows:

Doing "samtools tview" you can see three types of reads:
1- "." : positive strands.
2- "," : negative strands.
3- orphan reads: reads whose mate did not map in a proper pair.

GATK ==> sums all three of them, including orphan reads.
Bcftools ==> sums only 1 and 2; it does not count orphan reads.

A question comes up which I will be asking in a different post:
- Which one is recommended, removing orphan reads or not?

Note: to remove orphan reads, samtools and bamtools can be used. bamtools filter -isProperPair true -in File.bam -out File_no_orphan.bam

• Well, some searching around paid off. My suggestion has always been GATK, due to its good capability to detect a greater number of true positives. I don't think you will need to remove orphan reads, the reason being that the other mate may not have mapped due to large indels, which can actually happen when you align datasets from a population other than the one the reference was generated from.

• Yes, right. But I also saw suggestions that it is recommended to retain only properly paired reads and remove orphans. I have to think about this to make a decision :)

1 • 5.4 years ago skbrimer ▴ 650

I think I know why; this is why I was wondering in my first comment whether the aligners were the same. I am going to assume that they are the same aligner, and if you did your workflow like this it would explain the difference: Clean Reads ==> either [1. samtools mpileup ==> 2. bcftools call] or [1. GATK ==> 2. GATK HaplotypeCaller]. This is because the GATK pipeline realigns around its indels, so you could have less error in those regions and more reads would be remapped back to them, whereas the first mapping is all you work with in the samtools pipeline.

• Thanks. Yes, I used the same aligner. In the experiment, both results were the same, is that also fine? So is it possible to sometimes have the same DP value and sometimes not? One more important question, based on your experience: if one needs to use the DP value for a type of project, and, as we discussed, there are differences between the DP values of the two tools, which one should be used? Just in case you have an idea.

1 • 5.4 years ago kirannbishwa01 ★ 1.3k

For more true-positive data, GATK is my preference; actually, I have almost always used GATK for calling variants. Regarding the DP value, you will need to do some research by yourself.
You will need to check the global coverage, the local coverage, and the coverage (DP value) you want to use as a cutoff to pull variants from that particular locus.

Hope that helps,

1 • 5.4 years ago chen ★ 2.1k

Different tools can give very different results for low-frequency mutations.
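As a practical follow-up on the DP differences discussed above, a minimal sketch with pysam (assuming it is installed; the file names are the ones from the question) lists the sites where the two callers report different depths:

```python
from pysam import VariantFile

def dp_by_site(path):
    """Map (chrom, pos) -> INFO/DP for every record in a VCF."""
    return {(rec.chrom, rec.pos): rec.info.get("DP") for rec in VariantFile(path)}

gatk = dp_by_site("GATK.vcf")
bcf = dp_by_site("bcftools.vcf")

# Compare depths at sites called by both tools
for site in sorted(set(gatk) & set(bcf)):
    if gatk[site] != bcf[site]:
        chrom, pos = site
        print(f"{chrom}:{pos}\tGATK DP={gatk[site]}\tbcftools DP={bcf[site]}")
```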
2021-08-06 00:28:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24606944620609283, "perplexity": 4908.107338909928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152085.13/warc/CC-MAIN-20210805224801-20210806014801-00603.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-8-test-page-951/8
# Chapter 8 - Test - Page 951: 8 a) The required form is, $\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right]\left[ \begin{matrix} x \\ y \\ \end{matrix} \right]=\left[ \begin{array}{*{35}{r}} 9 \\ -13 \\ \end{array} \right]$ b) The inverse of matrix $A=\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right]$ is, ${{A}^{-1}}=\frac{1}{19}\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right]$ c) The solution is, $x=-2,y=3$ #### Work Step by Step (a) Consider the system of equations \begin{align} & 3x+5y=9 \\ & 2x-3y=-13 \end{align} Therefore, in matrix form, the system of equations can be written as $AX=B$ Where $A=\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right];B=\left[ \begin{array}{*{35}{r}} 9 \\ -13 \\ \end{array} \right];X=\left[ \begin{matrix} x \\ y \\ \end{matrix} \right]$ The representation of the system is $\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right]\left[ \begin{matrix} x \\ y \\ \end{matrix} \right]=\left[ \begin{array}{*{35}{r}} 9 \\ -13 \\ \end{array} \right]$. (b) Consider the given matrix $A=\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right]$ Consider the determinant of the matrix \begin{align} & \left| A \right|=\left| \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right| \\ & =-9-10 \\ & =-19 \end{align} Consider the adjoint of the matrix \begin{align} & \text{adj}\left( A \right)={{\left[ \begin{array}{*{35}{r}} +\left( -3 \right) & -\left( 2 \right) \\ -\left( 5 \right) & +3 \\ \end{array} \right]}^{t}} \\ & ={{\left[ \begin{array}{*{35}{r}} -3 & -2 \\ -5 & 3 \\ \end{array} \right]}^{t}} \\ & =\left[ \begin{array}{*{35}{r}} -3 & -5 \\ -2 & 3 \\ \end{array} \right] \end{align} Therefore, the inverse of the matrix is given by: \begin{align} & {{A}^{-1}}=\frac{1}{\left| A \right|}\text{adj}\left( A \right) \\ & =-\frac{1}{19}\left[ \begin{array}{*{35}{r}} -3 & -5 \\ -2 & 3 \\ \end{array} \right] \\ & =\frac{1}{19}\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right] \end{align} The inverse matrix is ${{A}^{-1}}=\frac{1}{19}\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right]$. (c) Consider the given system of equations in matrix form $AX=B$ Therefore, the solution of the equations is $X={{A}^{-1}}B$ Consider \begin{align} & X=\frac{1}{19}\left[ \begin{array}{*{35}{r}} 3 & 5 \\ 2 & -3 \\ \end{array} \right]\left[ \begin{array}{*{35}{r}} 9 \\ -13 \\ \end{array} \right] \\ & =\frac{1}{19}\left[ \begin{array}{*{35}{r}} 27-65 \\ 18+39 \\ \end{array} \right] \\ & =\frac{1}{19}\left[ \begin{array}{*{35}{r}} -38 \\ 57 \\ \end{array} \right] \\ & =\left[ \begin{array}{*{35}{r}} -2 \\ 3 \\ \end{array} \right] \end{align} The solution of the system is $x=-2,y=3$.
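A quick numeric check of parts (b) and (c) with NumPy; note the coincidence that $A^2 = 19I$ here, which is why ${{A}^{-1}}=\frac{1}{19}A$:

```python
import numpy as np

A = np.array([[3.0, 5.0],
              [2.0, -3.0]])
B = np.array([9.0, -13.0])

print(np.allclose(A @ A, 19 * np.eye(2)))  # True: A^2 = 19*I, hence A^{-1} = A/19
print(np.linalg.inv(A) * 19)               # [[ 3.  5.] [ 2. -3.]] = A
print(np.linalg.solve(A, B))               # [-2.  3.]  ->  x = -2, y = 3
```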
2020-05-28 16:14:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 417.636944031199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00415.warc.gz"}
https://proofwiki.org/wiki/Definition:Random_Variable
Definition:Random Variable Informal Definition A random variable is a number whose value is determined unambiguously by an experiment. Formal Definition Definition 1 Let $\left({\Omega, \Sigma, \Pr}\right)$ be a probability space, and let $\left({X, \Sigma'}\right)$ be a measurable space. A random variable (on $\left({\Omega, \Sigma, \Pr}\right)$) is a $\Sigma \, / \, \Sigma'$-measurable mapping $f: \Omega \to X$. Definition 2 Let $\mathcal E$ be an experiment with a probability space $\left({\Omega, \Sigma, \Pr}\right)$. A random variable on $\left({\Omega, \Sigma, \Pr}\right)$ is a mapping $X: \Omega \to \R$ such that: $\forall x \in \R: \left\{{\omega \in \Omega: X \left({\omega}\right) \le x}\right\} \in \Sigma$ Definition 3 Alternatively (and meaning exactly the same thing), the above condition can be written as: $\forall x \in \R: X^{-1} \left({\left({-\infty \,.\,.\, x}\right]}\right) \in \Sigma$ where: $\left({-\infty \,.\,.\, x}\right]$ denotes the unbounded closed interval $\left\{{y \in \R: y \le x}\right\}$; $X^{-1} \left({\left({-\infty \,.\,.\, x}\right]}\right)$ denotes the preimage of $\left({-\infty \,.\,.\, x}\right]$ under $X$. Discrete Random Variable Let $\mathcal E$ be an experiment with a probability space $\left({\Omega, \Sigma, \Pr}\right)$. A discrete random variable on $\left({\Omega, \Sigma, \Pr}\right)$ is a mapping $X: \Omega \to \R$ such that: $(1): \quad$ The image of $X$ is a countable subset of $\R$ $(2): \quad$ $\forall x \in \R: \left\{{\omega \in \Omega: X \left({\omega}\right) = x}\right\} \in \Sigma$ Alternatively (and meaning exactly the same thing), the second condition can be written as: $(2)': \quad$ $\forall x \in \R: X^{-1} \left({x}\right) \in \Sigma$ where $X^{-1} \left({x}\right)$ denotes the preimage of $x$. Note that if $x \in \R$ is not the image of any elementary event $\omega$, then $X^{-1} \left({x}\right) = \varnothing$ and of course by definition of event space as a sigma-algebra, $\varnothing \in \Sigma$. Note that a discrete random variable also fulfils the conditions for it to be a random variable. Continuous Random Variable Let $\mathcal E$ be an experiment with a probability space $\left({\Omega, \Sigma, \Pr}\right)$. A continuous random variable on $\left({\Omega, \Sigma, \Pr}\right)$ is a random variable $X: \Omega \to \R$ whose cumulative distribution function is continuous for all $x \in \R$. Also known as The word variate is often encountered which means the same thing as random variable. The image $\operatorname{Im} \left({X}\right)$ of $X$ is often denoted $\Omega_X$.
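Example: a simple instance of Definition 2 is a single toss of a fair coin. Take $\Omega = \left\{{\text{H}, \text{T}}\right\}$, take $\Sigma$ to be the power set of $\Omega$, and define $X \left({\text{H}}\right) = 1$ and $X \left({\text{T}}\right) = 0$. For every $x \in \R$, the set $\left\{{\omega \in \Omega: X \left({\omega}\right) \le x}\right\}$ equals $\varnothing$ (for $x < 0$), $\left\{{\text{T}}\right\}$ (for $0 \le x < 1$) or $\Omega$ (for $x \ge 1$), each of which belongs to $\Sigma$, so $X$ is a random variable. Since its image $\left\{{0, 1}\right\}$ is countable, $X$ is moreover a discrete random variable.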
2019-07-19 10:45:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9690052270889282, "perplexity": 163.86074713398355}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526210.32/warc/CC-MAIN-20190719095313-20190719121313-00523.warc.gz"}
https://www.zbmath.org/?q=an%3A1121.32007
# zbMATH — the first resource for mathematics

Fixed point indices and invariant periodic sets of holomorphic systems. (English) Zbl 1121.32007

The aim of the paper under review is to propose a qualitative study of the holomorphic ODE $$\frac{dx}{dt}=F(x)$$ with $$F$$ being a holomorphic map from a neighborhood of $$O$$ in $$\mathbb C^n$$ to $$\mathbb C^n$$ with only one isolated fixed point at $$O$$. The main result is that if $$dF_O$$ has an eigenvalue $$i\omega$$ with $$\omega\in \mathbb R\setminus\{0\}$$ then the time $$2\pi/\omega$$ flow $$\Phi(z)$$ has $$O$$ as an accumulation fixed point. Such a theorem is proved by looking at the fixed point index of $$F$$. As a consequence of the previous theorem the author proves several interesting results. For instance, he proves that $$dF_O$$ has a nonzero purely imaginary eigenvalue if and only if there exists a germ of complex variety of dimension at least one consisting of $$O$$ and the periodic orbits of the same period of the system. Other results in terms of the length of the period and resonances of the pure imaginary eigenvalue are also given.

##### MSC:

32H50 Iteration of holomorphic maps, fixed points of holomorphic maps and related problems for several complex variables

32M25 Complex vector fields, holomorphic foliations, $$\mathbb{C}$$-actions

37C25 Fixed points and periodic points of dynamical systems; fixed-point index theory; local dynamics
2021-08-01 19:18:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6237568259239197, "perplexity": 944.7214522411421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154219.62/warc/CC-MAIN-20210801190212-20210801220212-00422.warc.gz"}
http://www.show-my-homework.com/2013/05/electricity-exam-questions.html
# Electricity Exam Questions

1. a) State Faraday's law. b) Calculate the magnetic flux for the case of a circular loop of 40 cm radius. The magnetic field vector makes an angle of 30 degrees with the loop's plane and its magnitude is 2 Tesla. Provide a sketch. Consider the fact that the loop has a resistance (R) of 2.5 Ohms. Calculate i) the induced current and ii) indicate (on your drawing) its direction at t = 1 s if the magnetic field is changing in time as follows: $B = B_0 + B_1 t$, $B_1 = 0.3$ T/s, $B_0 = 2$ T.

2. A circular loop has a radius of R = 2.5 cm and carries a 0.5 A current ($I_L$) flowing in the counter-clockwise direction. The center of the loop is a distance $d_1 = 10$ cm above a long straight wire carrying current $I_1 = 2$ A flowing to the right. In addition, another long wire with current $I_2 = 3$ A flowing in the downward direction is placed at a distance $d_2 = 15$ cm to the left of the center. a) What is the resulting (total) magnetic field (magnitude and direction) at the center of the loop? b) Calculate the magnitudes and indicate the directions of the forces acting on short ($dl = 1$ mm) segments of the loop at point a. c) Calculate the same for point b.

3. In the circuit shown below $E = 9$ V, $R_1 = 15$ Ohms, and $L = 0.1$ H. Switch S1 is closed at t = 0 s. Just after the switch is closed: Part I. a) What is the potential difference ($V_{bc} = V_b - V_c$) across the inductance? b) The potentials at points a and b are in which of the following relationships: i) point a is at a higher potential than b; ii) point b is at a higher potential than a; iii) the potentials are equal; iv) not enough information. Part II. c) What is the maximum value of the current through $R_1$ at sufficiently long times (i.e. a very long time after the switch was closed)? d) How much time ($t_{60\%}$) is needed to reach 60% of that maximum value? e) What is the rate of current change when the current has reached the target value (i.e. 60% of the maximum)?

a) The induced voltage in a circuit located in a changing magnetic field is equal to the rate of change of the magnetic flux through the surface of the circuit, taken with a minus sign.

b) Surface area $S = \pi r^2 = \pi \cdot 0.4^2 = 0.503\ m^2$; flux $\phi = B S \cos(90°-30°) = 2 \cdot 0.503 \cdot \cos(60°) = 0.503\ Wb$; see the first figure attached for the sketch.

c) $B = B_0 + B_1 t$. If the field is perpendicular to the surface, $\phi_0 = (B_0 + B_1 t) S$, so $V_{ind,0} = -d\phi_0/dt = -B_1 S$, with magnitude $0.3 \cdot 0.503 = 0.1509\ V$, and $I_{ind,0} = V_{ind,0}/R = 0.1509/2.5 = 0.06036\ A = 60.36\ mA$. If the field makes a 30 degree angle with the loop's plane, $\phi = \phi_0 \cos(90°-30°)$, so $V_{ind} = V_{ind,0} \cos(90°-30°) = 0.07545\ V$ and $I_{ind} = V_{ind}/R = 0.07545/2.5 = 0.03018\ A = 30.18\ mA$ (the induced current opposes the variation of the magnetic field, see the attached picture).

2. a) $\mu = 4\pi \cdot 10^{-7}\ H/m$. The field of the loop alone is $B(loop) = \mu I_L/(2R) = 1.256 \cdot 10^{-5}\ T$, out of the page. The field of current $I_1$ is $B(I_1) = \mu I_1/(2\pi d_1) = 4 \cdot 10^{-6}\ T$, out of the page. The field of current $I_2$ is $B(I_2) = \mu I_2/(2\pi d_2) = 4 \cdot 10^{-6}\ T$, out of the page. The total field is $B = (12.56 + 4 + 4) \cdot 10^{-6} = 20.56 \cdot 10^{-6}\ T$, out of the page.

b) $B(I_1) = \mu I_1/(2\pi(d_1 - R)) = 5.33 \cdot 10^{-6}\ T$ and $F(I_1) = B(I_1)\, dl\, I_L = 5.33 \cdot 10^{-6} \cdot 0.001 \cdot 0.5 = 2.67 \cdot 10^{-9}\ N$, directed from point a to the center of the loop. $B(I_2) = 4 \cdot 10^{-6}\ T$ and $F(I_2) = B(I_2)\, dl\, I_L = 4 \cdot 10^{-6} \cdot 0.001 \cdot 0.5 = 2 \cdot 10^{-9}\ N$, also directed from point a to the center of the loop. $F = F(I_1) + F(I_2) = 4.67 \cdot 10^{-9}\ N$, directed from point a to the center of the loop.
c) $B(I_1) = 4 \cdot 10^{-6}\ T$ and $F(I_1) = B(I_1)\, dl\, I_L = 2 \cdot 10^{-9}\ N$, directed from point b to the center of the loop. $B(I_2) = \mu I_2/(2\pi(d_2 + R)) = 3.42 \cdot 10^{-6}\ T$ and $F(I_2) = B(I_2)\, dl\, I_L = 1.71 \cdot 10^{-9}\ N$, directed from point b to the center of the loop. $F = (1.71 + 2) \cdot 10^{-9} = 3.71 \cdot 10^{-9}\ N$, directed from point b to the center of the loop.

3. Part I) Just after the switch is closed:

a) $V_{bc} = V_b - V_c = E = +9\ V$ (the induced voltage in the coil opposes the increase of the current in the circuit).

b) At the first moment there is no current in the circuit. Hence $V_a = V_b > V_c$.

Part II)

c) At very long times the coil acts like a short circuit, so $I(R_1) = E/R_1 = 9/15 = 0.6\ A$.

d) $I = I_{max}(1 - \exp(-t/\tau))$ with $\tau = L/R_1 = 0.1/15 = 6.67\ ms$. Then $I/I_{max} = 1 - \exp(-t/0.00667)$, so $0.6 = 1 - \exp(-t/0.00667)$, $0.4 = \exp(-t/0.00667)$, $t/0.00667 = 0.916$, and $t = 0.00611\ s = 6.11\ ms$.

e) $dI/dt = (I_{max}/\tau)\exp(-t/\tau)$; at $t = 6.11\ ms$, $dI/dt = (0.6/0.00667) \cdot \exp(-6.11/6.67) = 36\ A/s$.
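The numbers in Part II are easy to verify programmatically; here is a minimal Python check (the values are the ones given in the problem statement):

```python
import math

# Quick numeric check of problem 3, part II
E, R1, L = 9.0, 15.0, 0.1                     # EMF [V], resistance [Ohm], inductance [H]

tau = L / R1                                  # time constant: 6.67 ms
I_max = E / R1                                # steady-state current: 0.6 A
t60 = tau * math.log(1 / 0.4)                 # time to reach 60% of I_max: 6.11 ms
dI_dt = (I_max / tau) * math.exp(-t60 / tau)  # current slope at t60: 36 A/s

print(f"tau   = {tau * 1e3:.2f} ms")
print(f"I_max = {I_max:.2f} A")
print(f"t60   = {t60 * 1e3:.2f} ms")
print(f"dI/dt = {dI_dt:.0f} A/s")
```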
2018-04-25 12:16:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6765260696411133, "perplexity": 704.7010534781839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947803.66/warc/CC-MAIN-20180425115743-20180425135743-00592.warc.gz"}
http://warholprints.com/library/advanced-signal-processing-and-digital-noise-reduction
# Advanced Signal Processing and Digital Noise Reduction Format: Paperback Language: English Format: PDF / Kindle / ePub Size: 11.88 MB Remember that light travels at the speed of light! We call it the “Epistemic Separability Principle,” or ESP for short. The key effect of this constraint is that both masses rotate about the axle with the same angular frequency ω. The various parts of a cycle are described by the phase of the wave; all waves are referenced to an imaginary synchronous motion in a circle; thus the phase is measured in angular degrees, one complete cycle being 360°. Pages: 416 Publisher: Wiley; 1 edition (July 1996) ISBN: 0471958751 The 3D Electrodynamic Wave Simulator: Demonstration Disk Light: A Very Short Introduction (Very Short Introductions) Elements Of Wave Mechanics Bluff-Body Wakes, Dynamics and Instabilities: IUTAM Symposium, Göttingen, Germany September 7-11, 1992 (IUTAM Symposia) Nonlinear Waves, Solitons and Chaos So is the statement of this theorem clear? We're going to become good and figure out some nice way of choosing wave functions, but no , e.g. Selected Topics in Nonlinear Wave Mechanics http://warholprints.com/library/selected-topics-in-nonlinear-wave-mechanics. Like homonyms, words that depend on the context in which they are used, quantum reality shifts its nature according to its surroundings Wave Motion (Cambridge Texts in Applied Mathematics) fouleemarket.com. The important thing to know is that, under the right circumstances, you can wind up with an electric field that alternates as follows epub. The situation in the sciences is this: A concept or an idea which cannot be measured or cannot be referred directly to experiment may or may not be useful. In other words, suppose we compare the classical theory of the world with the quantum theory of the world, and suppose that it is true experimentally that we can measure position and momentum only imprecisely epub. In the case of a longitudinal wave, a wavelength measurement is made by measuring the distance from a compression to the next compression or from a rarefaction to the next rarefaction PARTON ET AL:APPL ELECTROMAG, read online PARTON ET AL:APPL ELECTROMAG, NETICS 2ND. It's just any function of space that is normalizable. Never the less, you are commanded to compute the following quantity. This quantity is also by definition what we call the expectation value of the Hamiltonian in the state psi. I love the fact, the remarkable fact that we're going to show now, is that this thing provides an upper bound for the ground state energy for all psi , source: Radiation and Scattering of Waves (IEEE/OUP Series on Electromagnetic Wave Theory) warholprints.com. So WHO in the heck are we going to believe??? We'll get to that answer in just a minute, but based on the findings thus far, at that point in time, it was at least agreed upon, that ALL things were comprised of and existed because of energy. What does all that have to do with you and your life epub? Adhesion: force of attraction between two unlike materials. Air resistance: force of air on objects moving through it By Robert G. Dean - Water Wave read here http://warholprints.com/library/by-robert-g-dean-water-wave-mechanics-for-engineers-and-scientists-1-st-first-edition. The authors imagine a type of quantum switch that controls whether a simple optical measurement tests for particlelike or wavelike behavior in a single photon pdf. Note also that the wave packets, i. 
e., the broad regions of large positive and negative amplitudes, move to the right with increasing time as well. Figure 1.18: As in the upper panel of figure 1.16 except a dispersive case with phase and group velocities in opposite directions. Since velocity is distance moved ∆x divided by elapsed time ∆t, the slope of a line in figure 1.16, ∆t/∆x, is one over the velocity of whatever that line represents , cited: Gauge Theory and Variational read epub http://www.morinofood.com/?library/gauge-theory-and-variational-principles-dover-books-on-physics. Field theory handbook: Including coordinate systems, differential equations, and their solutions An introduction to Relativistic Quantum Field Theory Shock Wave Engine Design Charge Multiplicity Asymmetry Correlation Study Searching for Local Parity Violation at Rhic for Star Collaboration (Springer Theses) But we still can’t agree on what it is they bought. D-Wave, the company that built the thing, calls it the world’s first quantum computer, a seminal creation that foretells the future of mathematical calculation. But many of the world’s experts see it quite differently, arguing the D-Wave machine is something other than the computing holy grail the scientific community has sought since the mid-1980s Handbook on Array Processing and Sensor Networks rjlexperts.com. Sometimes in physics, when a model makes multiple predictions based on a square root function, they can all be true. Dirac found this out when he took Schrodinger’s equation and applied Einstein’s special relativity to it , e.g. Embedded Signal Processing read for free warholprints.com. And you substitute, and you get that e ground state is less than or equal than m alpha squared over pi h squared. And to better write it as 2 over pi times minus m alpha squared over 2h-2nd which is the true ground state energy. So let's just make sure we understand what has happened online. Now, what I've drawn here isn't the only type of wave you can have. But the definition is more general than just what I've drawn here. For example, you could have a sound wave. If you just look at all of the molecules of the air, they have some density that looks something like that Advances in Wave Interaction and Turbulence: Proceedings of an Ams-Ims-Siam Joint Summer Research Conference on Dispersive Wave Turbulence, Mount ... MA, June 11-15, 20 (Contemporary Mathematics) Advances in Wave Interaction and. The Strange Theory of Light and Matter, Penguin, 1985 Haselhurst, Geoff The Metaphysics of Space and Motion and the Wave Structure of Matter, 2000 Serway, R. Physics for Scientists and Engineers Third Edition, Saunders College Publishing, 1992 Wolff, Milo Exploring the Physics of the Unknown Universe, Technotran Press, CA. 1990 Notice that the reflection case illustrates a point about Fermat’s principle: The minimum time may actually be a local rather than a global minimum — after all, in figure 3.10, the global minimum distance from A to B is still just a straight line between the two points Conformal Quantum Field Theory in D-dimensions (Mathematics and Its Applications) http://warholprints.com/library/conformal-quantum-field-theory-in-d-dimensions-mathematics-and-its-applications! 
At first, attempts to advance Bohr's quantum ideas—the so-called old quantum theory—suffered one defeat after another. Then a series of developments totally changed the course of thinking. In 1923 Louis de Broglie, in his Ph.D. thesis, proposed that the particle behavior of light should have its counterpart in the wave behavior of particles.

And he will have gotten here, at that point. The point on the line that was here -- on the purple period of time -- had some upward momentum.

This was called the equivalence principle by Einstein. Since the gravitational force on the Earth points downward, it follows that we must be constantly accelerating upward as we stand on the surface of the Earth. This leads to a way of defining a dot product of four-vectors.

At the age of 18 he graduated with an arts degree. He was then assigned a research topic in history of his choice, but he did not complete his research in history. Instead he decided to study theoretical physics, a subject he chose to devote his life to.

Transmutation: nuclear change from one element to another. Transparent: material transmitting light without distorting directions of waves. Transverse wave: wave in which the disturbance is perpendicular to the direction of travel of the wave.

Focal point: location at which rays parallel to the optical axis of an ideal mirror or lens converge to a point. Forbidden gap: energy values that electrons in a semiconductor or insulator may not have.

So it is a second-order differential equation in space. The H operator has partial derivatives, and you might as well write it as $\hat{H}\,\Psi = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + V(x)\,\Psi(x)$.

Then the kinetic energy is roughly $\tfrac{1}{2}mv^2=$ $p^2/2m=$ $\hbar^2/2ma^2$. (In a sense, this is a kind of dimensional analysis to find out in what way the kinetic energy depends upon the reduced Planck constant, upon $m$, and upon the size of the atom. We need not trust our answer to within factors like $2$, $\pi$, etc. We have not even defined $a$ very precisely.) Now the potential energy is minus $e^2$ over the distance from the center, say $-e^2/a$, where, as defined in Volume I, $e^2$ is the charge of an electron squared, divided by $4\pi\epsilon_0$.

It makes a great, light summer read (it's a comic book!). The material discussed in the first couple of weeks is presented, for instance, in Eisberg & Resnick; notice that Eisberg & Resnick contains much more material than we will need, both in depth and breadth. Physicists appeared to be divided into two groups.

After returning from military service in autumn 1911, he took up an appointment as an assistant in experimental physics at the University of Vienna.

The USC team has barely paused for breath in its race to study quantum computing. USC's Quantum Computing Center received an upgrade to a new 512-qubit "Vesuvius" chip two months ago—the next machine up for a test drive.
https://www.ringomok.com/statistics-in-the-new-hsc/
Statistics in the New HSC

One notable addition to the new HSC syllabus is the topic of Statistical Analysis, which has been missing from the Stage 6 Calculus courses since a time before I was born. This post is just a rambling of my thoughts on the topic: my initial reactions, a keen kindling of interest, and some thoughts on the teaching of this topic across NSW.

Initial Reactions

In my formative years of high school, the topic of Statistics often involved drawing up tables, tallying up personally irrelevant data, doing mundane calculations on a calculator, and getting out a protractor and ruler to do summary statistic diagrams (which spells TREK!). I could not help but notice an adverse reaction from everyone, from both teachers and students, towards the topic. People treated Statistics as this topic that you just had to do because it was related to Mathematics, but it really wasn't Mathematics. This attitude of course had a bleeding effect on my own attitude towards the topic. Even today in the staffroom that I work in, there are many teachers who still hold a major dislike of Statistics.

It didn't help that during my first year of University, I was completely turned off Statistics as my lecturer mumbled and fumbled his way through lectures with an accent and stutter. I was turned off learning anything new in what I perceived to be a completely useless and uninteresting subject. I also didn't do so well, but that was probably due to my lack of interest in it. I didn't do any more Statistics study in my degree after first year.

So when I heard that Statistics was being added into the HSC, I initially reacted with shock and disappointment. I could not believe it. I believe this reaction was due to the culmination of all the negative experiences I had with Statistics, compounded by the fact that I had not done my own research and learning in the subject – I was a poor student of Statistics and now I have to teach it? Perhaps this was an experience that other teachers can relate to.

A Keen Kindling of Interest

Such a negative and preconceived view would require some eye-opening insight and mathematical maturity to overcome – and so began my journey to gradually appreciate the subject and nurture my interest in it. In teaching the IB courses with Statistics, I started interacting more with senior-level Statistics just out of necessity. Unfortunately, my earlier classes had to suffer from the bleeding effect of a negative attitude towards the topic, emanating from their teacher. Through teaching, I found that I really didn't know much about the topic – I thought that what I learned in high school was sufficient, but there was a lot more that I didn't really know as I hadn't learned it for myself. I couldn't even explain what a p-value meant!

From 2017, my head of department modelled to his staff what using data to make informed decisions in teaching looked like. This started with collecting and tallying up multiple choice responses in a half-yearly or yearly assessment task. The insights drawn from this exercise were surprising. Anecdotal evidence and judgement about what students need is often not enough – cold hard facts from the data they produce are more important: you may have taught something, but have your students learned it? We also had staff meetings where we looked at RAP data from HSC performance. The trends we saw, such as how the state and our school did not really understand significant figures, were surprising.
We definitely taught it, we thought it was easy, but obviously the students didn't get it. And so we did something about it in our teaching of the next cohort. The power of Statistical Analysis was so obvious; I had merely been blinded from doing anything with it because of my stubborn preconceptions.

In doing more exercises like these, later building upon the analysis of cohort multiple choice, I also started adding more granularity to the data by examining class clusters and per-question responses. The performance insights as I looked from class to class were interesting, to say the least. For example, my class did not do as well in the topic of Bits and Bytes (when I thought we had nailed it). This informed my own teaching: I went back to revisit that topic. Such insight wouldn't have been possible if we had just looked at whole-cohort data.

Further conversations with my head of department revealed to me that he loves Statistics. In the school, he is also the Director of Statistics and helps heads of departments in other faculties to understand their data. I wanted to be like him, because what he was doing was so beneficial and it was kinda cool too – helping others find insight in something that they would otherwise be unable to comprehend fully, so they can do something about it. Before I knew it, data analysis became something I am also passionate about.

In wanting more, I started googling for courses and found that, at the University of Sydney, the Graduate Certificate of Data Science offered what looks to be the perfect package for up-skilling my data analysis abilities. And so I applied, and I am now studying part time there! In my lectures I have learned what I should have learned in first year Statistics. It was probably the combination of my increased awareness of the relevance of Statistics and my own mathematical maturity that allowed me to learn this material properly now. Some of the material I learned throughout my undergraduate degree helped also (like Measure Theory). It also helped that my lecturer was quite audible!

So Statistics is being added into the HSC courses? Bring it on!

Comments on the New Syllabus

When teaching the new syllabus, some of the material can be cross-referenced with textbooks, worksheets and resources from the old syllabus. These dot points don't really present a difficulty in the teaching. However, there aren't many resources readily available in the usual channels of published textbooks or worksheets for Statistics. These resources will need time to build up. Teachers will need time to up-skill their own understanding of the topic before they teach it – if they don't, it'll become a classic example of the blind leading the blind.

It also doesn't help that the syllabus doesn't define what a random variable is very clearly.

A random variable is a variable whose possible values are outcomes of a statistical experiment or a random phenomenon. (p. 73)

Know that a random variable describes some aspect in a population from which samples can be drawn. (p. 47)

HSC Mathematics Advanced Syllabus

There are a few problems with this. The first definition, presented in the glossary, is quite self-referential: a random variable is a variable. Great. The second definition, in the content outcomes, is merely a qualitative description – a vague understanding of what a random variable does rather than what it is. So what is a random variable?
A random variable, usually denoted with a capital X, is actually a function that maps each element of a sample space to a real number (there's a further generalisation in University mathematics, but for the sake of a high school understanding this is sufficient). Basically, the domain of a random variable is the sample space and the range is a subset of the real numbers. If the number of elements in the sample space is countable, then it's a discrete random variable; if it's uncountable, then it's continuous. (The definition of countable and uncountable is for another day.)

For example, suppose we roll a 4-sided die with faces coloured blue, green, red and yellow. An example random variable that allows us to model the scenario is as follows: $$X(\text{blue}) = 1, \quad X(\text{green}) = 2, \quad X(\text{red}) = 3, \quad X(\text{yellow}) = 4$$

There are of course an infinite number of ways you can define this random variable. There's nothing stopping you from defining it as $$X(\text{blue}) = 342, \quad X(\text{green}) = 111, \quad X(\text{red}) = -2313, \quad X(\text{yellow}) = 99999$$ but that just makes things hard on yourself and most likely will have no benefit in modelling the situation.

Now we can ask questions like "what is the probability of getting a green or a blue?", which translates to finding $$P(X < 3)$$. So $$P(X<3) = P(X=2) + P(X=1) = P(\{\text{green}\}) + P(\{\text{blue}\})$$

Now if the die is fair, the random variable follows a uniform distribution, which means each outcome has an equally likely chance of occurring. By mapping the sample space of blue, green, red and yellow to numbers (1, 2, 3, 4 in this example), we don't have to concern ourselves with those words but rather focus on the behaviour of the numbers under a certain distribution.

Most of the time, when it's numbers we are studying, we just let $$X(x) = x$$ for all $$x \in \Omega$$, where $$\Omega$$ is the sample space. This is why capital X and lower case x are sometimes interchangeable in statistics questions and we still get the correct answers. I guess I should write more (clearly) on these kinds of things in the future.

All in all, I welcome the addition of Statistics to the new syllabus and I am excited to teach it to my students!
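As a postscript, a quick simulation can make the die example above tangible. The Python sketch below is my own illustration (the colour-to-number mapping and the fairness assumption come straight from the example; everything else is mine): it implements the random variable as an actual function from the sample space to numbers and estimates P(X < 3) by sampling.

```python
import random

# The random variable X: a map from the sample space to real numbers,
# exactly as in the example: X(blue)=1, X(green)=2, X(red)=3, X(yellow)=4
X = {"blue": 1, "green": 2, "red": 3, "yellow": 4}

def roll():
    # A fair 4-sided die: each outcome of the sample space is equally
    # likely, so X follows a discrete uniform distribution.
    outcome = random.choice(["blue", "green", "red", "yellow"])
    return X[outcome]

trials = 100_000
hits = sum(1 for _ in range(trials) if roll() < 3)
print(hits / trials)  # should be close to P(X < 3) = 1/4 + 1/4 = 1/2
```

Running it prints a value near 0.5, matching the exact calculation above.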
https://zbmath.org/authors/?q=ai%3Aqin.qing-hua
# zbMATH — the first resource for mathematics ## Qin, Qing-Hua Compute Distance To: Author ID: qin.qing-hua Published as: Qin, Q-H.; Qin, Q. H.; Qin, Q.-H.; Qin, Qing H.; Qin, Qing Hua; Qin, Qing-Hua; Qin, Qing-hua Documents Indexed: 99 Publications since 1991, including 3 Books #### Co-Authors 1 single-authored 1 Ma, Huaifa #### Serials 1 International Journal of Solids and Structures 1 Computational Mechanics #### Fields 1 Numerical analysis (65-XX) 1 Mechanics of deformable solids (74-XX) 1 Fluid mechanics (76-XX) #### Citations contained in zbMATH 75 Publications have been cited 573 times in 305 Documents Cited by Year A meshless method for generalized linear or nonlinear Poisson-type problems. Zbl 1195.65180 Wang, Hui; Qin, Qing-Hua 2006 A meshless model for transient heat conduction in functionally graded materials. Zbl 1097.80001 Wang, H.; Qin, Q-H.; Kang, Y-L. 2006 The Trefftz finite and boundary element method. Zbl 0982.74003 Qin, Qing-Hua 2000 Meshless approach for thermo-mechanical analysis of functionally graded materials. Zbl 1244.74234 Wang, Hui; Qin, Qing-Hua 2008 Nonlinear analysis of Reissner plates on an elastic foundation by the BEM. Zbl 0790.73073 Qin, Q. H. 1993 Application of hybrid-Trefftz element approach to transient heat conduction analysis. Zbl 0900.73802 Jirousek, J.; Qin, Q. H. 1996 Hybrid Trefftz finite-element approach for plate bending on an elastic foundation. Zbl 0804.73070 Qin, Q. H. 1994 A new meshless method for steady-state heat conduction problems in anisotropic and inhomogeneous media. Zbl 1119.80385 Wang, H.; Qin, Q.-H.; Kang, Y. L. 2005 Variational formulations for TFEM of piezoelectricity. Zbl 1057.74043 Qin, Qing-Hua 2003 Hybrid-Trefftz finite element method for Reissner plates on an elastic foundation. Zbl 0849.73071 Qin, Q. H. 1995 2D Green’s functions of defective magnetoelectroelastic solids under thermal loading. Zbl 1182.74052 Qin, Qing-Hua 2005 A closed crack tip model for interface cracks in thermopiezoelectric materials. Zbl 0932.74061 Qin, Qing-Hua; Mai, Yiu-Wing 1999 Fundamental-solution-based hybrid FEM for plane elasticity with special elements. Zbl 1384.74047 Wang, Hui; Qin, Qing-Hua 2011 Solving anti-plane problems of piezoelectric materials by the trefftz finite element approach. Zbl 1038.74646 Qin, Q.-H. 2003 A family of quadrilateral hybrid-Trefftz $$p$$-elements for thick plate analysis. Zbl 0862.73063 Jirousek, J.; Wróblewski, A.; Qin, Q. H.; He, X. Q. 1995 Boundary integral based graded element for elastic analysis of 2D functionally graded plates. Zbl 1348.74336 Wang, Hui; Qin, Qing-Hua 2012 BEM for crack-inclusion problems of plane thermopiezoelectric solids. Zbl 0974.74076 Qin, Qing Hua; Lu, Meng 2000 Solving potential problems by a boundary-type meshless method-the boundary point method based on BIE. Zbl 1195.65186 Ma, Hang; Qin, Qing-Hua 2007 Thermoelectroelastic solutions for internal bone remodeling under axial and transverse loads. Zbl 1085.74032 Qin, Qing-Hua; Ye, Jian-Qiao 2004 Thermoelectroelastic Green’s function and its application for bimaterial of piezoelectric materials. Zbl 0920.73343 Qin, Qing-Hua; Mai, Yiu-Wing 1998 A new hybrid finite element approach for three-dimensional elastic problems. Zbl 1291.74175 Cao, C.; Qin, Q.-H.; Yu, A. 2012 Eigenstrain formulation of boundary integral equations for modeling particle-reinforced composites. Zbl 1244.74171 Ma, Hang; Yan, Cheng; Qin, Qing-Hua 2009 Matlab and C programming for Trefftz finite element methods. With CD-ROM. 
Zbl 1359.65005 Qin, Qing-Hua; Wang, Hui 2009 Application of hybrid Trefftz finite element method to nonlinear problems of minimal surface. Zbl 1194.65033 Wang, Hui; Qin, Qing-Hua; Arounsavat, Detdexa 2007 General solutions for thermopiezoelectrics with various holes under thermal loading. Zbl 0999.74052 Qin, Qing-Hua 2000 Postbuckling analysis of thin plates by a hybrid Trefftz finite element method. Zbl 0860.73071 Qin, Q. H. 1995 Thermoelectroelastic solution for elliptic inclusions and application to crack-inclusion problems. Zbl 1076.74512 Qin, Qing-Hua 2000 A new special element for stress concentration analysis of a plate with elliptical holes. Zbl 1401.74277 Wang, Hui; Qin, Qing-Hua 2012 Solving the nonlinear Poisson-type problems with F-Trefftz hybrid finite element model. Zbl 1259.65171 Wang, Hui; Qin, Qing-Hua; Liang, Xing-Pei 2012 FE approach with Green’s function as internal trial function for simulating bioheat transfer in the human eye. Zbl 1269.74156 Wang, H.; Qin, Q. H. 2010 Boundary point method for linear elasticity using constant and quadratic moving elements. Zbl 1303.74037 Ma, Hang; Zhou, Juan; Qin, Qing-Hua 2010 A direct constraint-Trefftz FEM for analysing elastic contact problems. Zbl 1131.74341 Wang, K. Y.; Qin, Q. H.; Kang, Y. L.; Wang, J. S.; Qu, C. Y. 2005 Formulation of hybrid Trefftz finite element method for elastoplasticity. Zbl 1081.74043 Qin, Qing-Hua 2005 Green function and its application for a piezoelectric plate with various openings. Zbl 0951.74018 Qin, Q.-H. 1999 Green’s function for thermopiezoelectric plates with holes of various shapes. Zbl 0933.74022 Qin, Q.-H. 1999 Method of fundamental solutions for 3D elasticity with body forces by coupling compactly supported radial basis functions. Zbl 1403.74306 Lee, Cheuk-Yu; Wang, Hui; Qin, Qing-Hua 2015 A new hybrid finite element approach for plane piezoelectricity with defects. Zbl 1401.74266 Cao, Changyong; Yu, Aibing; Qin, Qing-Hua 2013 Nonlinear analysis of thick plates on an elastic foundation by HT FE with $$p$$-extension capabilities. Zbl 0918.73273 Qin, Q. H.; Diao, S. 1996 A variational principle and hybrid Trefftz finite element for the analysis of Reissner plates. Zbl 0900.73769 Jin, F. S.; Qin, Q. H. 1995 Eigenstrain boundary integral equations with local eshelby matrix for stress analysis of ellipsoidal particles. Zbl 1407.74007 Ma, Hang; Yan, Cheng; Qin, Qing-hua 2014 Post-buckling solutions of hyper-elastic beam by canonical dual finite element method. Zbl 1298.74085 Cai, Kun; Gao, David Y.; Qin, Qing H. 2014 Hybrid fundamental-solution-based FEM for piezoelectric materials. Zbl 1300.74050 Cao, Changyong; Qin, Qing-Hua; Yu, Aibing 2012 Micro-mechanical analysis of composite materials by BEM. Zbl 1130.74475 Yang, Qing-Sheng; Qin, Qing-Hua 2004 Dual variational formulation for Trefftz finite element method of elastic materials. Zbl 1079.74648 Qin, Qing-Hua 2004 A new special coating/fiber element for analyzing effect of interface on thermal conductivity of composites. Zbl 1410.74075 Wang, Hui; Qin, Qing-Hua 2015 Special elements for composites containing hexagonal and circular fibers. Zbl 1359.74437 Qin, Qing H.; Wang, H. 2015 Hybrid fundamental solution based finite element method: theory and applications. Zbl 1380.65364 Cao, Changyong; Qin, Qing-Hua 2015 Special circular hole elements for thermal analysis in cellular solids with multiple circular holes. 
Zbl 1359.80013 Qin, Qing Hua; Wang, Hui 2013 Numerical implementation of local effects due to two-dimensional discontinuous loads using special elements based on boundary integrals. Zbl 1351.74144 Wang, Hui; Qin, Qing-Hua 2012 Determination of welding residual stresses by inverse approach with eigenstrain formulations of BIE. Zbl 1237.74037 Ma, Hang; Wang, Ying; Qin, Qing-Hua 2012 Two-dimensional polynomial eigenstrain formulation of boundary integral equation with numerical verification. Zbl 1237.74196 Ma, Hang; Guo, Zhao; Qin, Qing-hua 2011 A fundamental solution based FE model for thermal analysis of nanocomposites. Zbl 1275.82010 Wang, H.; Qin, Q. H. 2011 Computational model for short-fiber composites with eigenstrain formulation of boundary integral equations. Zbl 1231.74459 Ma, Hang; Xia, Li-Wei; Qin, Qing-Hua 2008 Performance and numerical behavior of the second-order scheme of precise time-step integration for transient dynamic analysis. Zbl 1177.65117 Ma, Hang; Yin, Feng; Qin, Qing-Hua 2007 Boundary integral equation supported differential quadrature method to solve problems over general irregular geometries. Zbl 1109.76350 Ma, H.; Qin, Q.-H. 2005 A second-order scheme for integration of one-dimensional dynamic analysis. Zbl 1093.37033 Ma, Hang; Qin, Qing-Hua 2005 Micromechanics-BE solution for properties of piezoelectric materials with defects. Zbl 1130.74470 Qin, Qing-Hua 2004 Self-consistent boundary element solution for predicting overall properties of cracked bimaterial solid. Zbl 1021.74046 Qin, Qing-Hua 2002 Fracture and damage analysis of a cracked body by a new boundary element model. Zbl 0879.73080 Qin, Q. H.; Yu, S. W. 1997 Nonlinear analysis of thick plates by HT FE approach. Zbl 0900.73822 Qin, Q. H. 1996 A new procedure for the nonlinear analysis of Reissner plate by boundary element method. Zbl 0872.73069 Sun, Y. B.; He, X. Q.; Qin, Q. H. 1994 Unconditionally stable FEM for transient linear heat conduction analysis. Zbl 0805.65094 Qin, Q. H. 1994 Analysis solution method for 3D planar crack problems of two-dimensional hexagonal quasicrystals with thermal effects. Zbl 07186546 Li, Yuan; Zhao, MingHao; Qin, Qing-Hua; Fan, CuiYing 2019 $$n$$-sided polygonal hybrid finite elements with unified fundamental solution kernels for topology optimization. Zbl 07183372 Wang, Hui; Qin, Qing-Hua; Lee, Cheuk-Yu 2019 Closed-form solutions of an elliptical crack subjected to coupled phonon-phason loadings in two-dimensional hexagonal quasicrystal media. Zbl 1425.74421 Li, Yuan; Fan, CuiYing; Qin, Qing-Hua; Zhao, MingHao 2019 Efficient hypersingular line and surface integrals direct evaluation by complex variable differentiation method. Zbl 1426.65034 Lee, Cheuk-Yu; Wang, Hui; Qin, Qing-Hua 2018 Analysing effective thermal conductivity of 2D closed-cell foam based on shrunk Voronoi tessellations. Zbl 1391.74063 Wang, Hui; Liu, B.; Kang, Y.-X.; Qin, Qing-Hua 2017 Postbuckling analysis of a nonlinear beam with axial functionally graded material. Zbl 1359.74395 Cai, Kun; Gao, David Y.; Qin, Qing H. 2014 A novel hybrid finite element model for modeling anisotropic composites. Zbl 1282.74021 Cao, Changyong; Yu, Aibing; Qin, Qing-Hua 2013 A mathematical model of cortical bone remodeling at cellular level under mechanical stimulus. Zbl 1345.74085 Qin, Qing-Hua; Wang, Ya-Nan 2012 Trefftz functions and application to 3D elasticity. Zbl 1421.74112 Lee, Cheuk-Yu; Qin, Qing-Hua; Wang, Hui 2008 Asymptotic fields for dynamic crack growth in non-associative pressure sensitive materials. 
Zbl 1087.74626 Zhang, Xi; Qin, Qing-Hua; Mai, Yiu-Wing 2003 Variational principles, FE and MPT for analysis of nonlinear impact-contact problems. Zbl 0852.73070 Qin, Q. H.; He, X. Q. 1995 Variational principles and hybrid element on a sandwich plate of Lrusakov-Du's type. Zbl 0850.73368 Qin, Qing-Hua; Jin, Fu-Sheng 1991 Variational principles and hybrid approach for finite deformation analysis of shells. Zbl 0738.73080 Qin, Q. H.; Jin, F. S. 1991
#### Cited by 505 Authors 38 Qin, Qinghua 23 Qin, Qing-Hua 14 Ma, Hang 12 Wang, Hui 10 Li, Zi-Cai 9 Chen, Wen 9 Fu, Zhuojia 8 Lee, Ming-Gong 6 Li, Junpu 6 Teixeira de Freitas, João António 6 Zhao, Minghao 5 Moldovan, Ionuţ Dragoş 5 Pasternak, Iaroslav 5 Potier-Ferry, Michel 5 Shanazari, Kamal 5 Sze, Kam-Yim 5 Tri, Abdeljalil 5 Zahrouni, Hamid 4 Cen, Song 4 Chen, Wen 4 Chen, Zengtao 4 Cheng, Alexander H.-D. 4 Dehghan Takht Fooladi, Mehdi 4 Fan, CuiYing 4 Gao, David Yang 4 Grabski, Jakub Krzysztof 4 Hou, Pengfei 4 Hu, Keqiang 4 Kolodziej, Jan Adam 4 Mackerle, Jaroslav 4 Wu, Xionghua 4 Yan, Cheng 4 Young, Lih-Jier 4 Zhang, Jianming 3 Cao, Changyong 3 Chen, Ching-Shyang 3 Chu, Po-Chun 3 Fallahi, Mahmood 3 Guo, Zhao 3 Hematiyan, Mohammad Rahim 3 Hidayat, Mas Irfan Purbawanto 3 Hosami, Mohammad 3 Kong, Weibin 3 Lee, Cheuk-Yu 3 Li, Peichao 3 Li, Yuan 3 Loboda, Volodymyr V. 3 Parman, Setyamartana 3 Pasternak, Roman 3 Reutskiy, Sergiy Yu 3 Sulym, Heorhiy 3 Wang, Hui 3 Wang, Keyong 3 Wei, Xing 2 Abdalla, Abdurahman Masoud 2 Abo-dahab, S. M. 2 Al-Gahtani, Husain Jubran 2 Ferri Aliabadi, Mohammad Hossien 2 Ariwahjoedi, Bambang 2 Bulko, Roman 2 Cai, Kun 2 Cao, Leilei 2 Chen, Weiqiu 2 Cheung, Yau Kai 2 Chiang, John Y. 2 Cismaşiu, Ildi 2 Desmet, Wim 2 Dhanasekar, Manicka 2 Ding, Haojiang 2 Dou, Fangfang 2 Feng, Wenjie 2 Fernández, Jose Ramon 2 Gao, Cun-Fa 2 García Aznar, José Manuel 2 Govorukha, Volodymyr B. 2 He, Donghong 2 He, P. Q. 2 He, Xiaoqiong 2 Herrmann, Klaus P. 2 Huang, Hung-Tsai 2 Jiang, Aimin 2 Jiang, Quan 2 Kamiya, Norio 2 Kamlah, Marc 2 Kang, Yunling 2 Kita, Eisuke 2 Kovářik, Karel 2 Li, Chenfeng 2 Li, Guangyao 2 Li, Shaojun 2 Li, Xiaolin 2 Liang, Kuankuan 2 Lin, Ji 2 Liu, Lin 2 Liu, Tong 2 Ma, Peng 2 Martínez, Rebeca 2 Mirzaei, Davoud 2 Mukherjee, Subrata 2 Mukhtar, Faisal M. ...and 405 more Authors #### Cited in 53 Serials 105 Engineering Analysis with Boundary Elements 14 Applied Mathematics and Computation 14 International Journal for Numerical Methods in Engineering 13 Applied Mathematical Modelling 12 Acta Mechanica 12 Computational Mechanics 10 Computers & Mathematics with Applications 10 European Journal of Mechanics. A. Solids 9 Computer Methods in Applied Mechanics and Engineering 6 Archive of Applied Mechanics 6 Mathematical Problems in Engineering 6 Engineering Computations 5 International Journal of Engineering Science 5 Numerical Methods for Partial Differential Equations 5 Communications in Numerical Methods in Engineering 5 Mathematics and Mechanics of Solids 5 Acta Mechanica Sinica 4 International Journal of Heat and Mass Transfer 4 Journal of Engineering Mathematics 4 Applied Mathematics and Mechanics. (English Edition) 4 International Journal of Computational Methods 3 Journal of Computational and Applied Mathematics 3 ZAMM.
Zeitschrift für Angewandte Mathematik und Mechanik 2 International Journal of Solids and Structures 2 Meccanica 2 Applied Numerical Mathematics 2 Applied Mathematics Letters 2 Numerical Algorithms 2 Computational and Applied Mathematics 2 Journal of Shanghai University 2 International Journal of Structural Stability and Dynamics 1 Applicable Analysis 1 Archives of Mechanics 1 Journal of Computational Physics 1 Mathematical Biosciences 1 Wave Motion 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Chaos, Solitons and Fractals 1 Computing 1 Finite Elements in Analysis and Design 1 European Journal of Applied Mathematics 1 Advances in Engineering Software 1 Continuum Mechanics and Thermodynamics 1 International Journal of Numerical Methods for Heat & Fluid Flow 1 Journal of Inverse and Ill-Posed Problems 1 Nonlinear Dynamics 1 Nonlinear Analysis. Real World Applications 1 Archives of Computational Methods in Engineering 1 Journal of Applied Mathematics 1 Inverse Problems in Science and Engineering 1 Advances in Mathematical Physics 1 Arabian Journal for Science and Engineering 1 Journal of Theoretical Biology #### Cited in 25 Fields 204 Mechanics of deformable solids (74-XX) 141 Numerical analysis (65-XX) 42 Partial differential equations (35-XX) 30 Classical thermodynamics, heat transfer (80-XX) 21 Fluid mechanics (76-XX) 7 Biology and other natural sciences (92-XX) 6 Statistical mechanics, structure of matter (82-XX) 5 Optics, electromagnetic theory (78-XX) 5 Operations research, mathematical programming (90-XX) 4 Ordinary differential equations (34-XX) 3 Integral transforms, operational calculus (44-XX) 2 Mechanics of particles and systems (70-XX) 2 Systems theory; control (93-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Combinatorics (05-XX) 1 Real functions (26-XX) 1 Potential theory (31-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Integral equations (45-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Differential geometry (53-XX) 1 Probability theory and stochastic processes (60-XX) 1 Quantum theory (81-XX)
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-12-data-analysis-and-probability-chapter-test-page-791/14
Chapter 12 - Data Analysis and Probability - Chapter Test - Page 791: 14

P(orange) = $\frac{1}{2}$

Work Step by Step

Orange sections = 2; Blue sections = 1; Green sections = 1; Total sections = 4

$P(\text{event}) = \frac{\text{number of favorable outcomes}}{\text{number of possible outcomes}}$

The number of possible outcomes is 4 because that is the total number of sections. 2 sections are orange, so that is the number of favorable outcomes.

$P(\text{orange}) = \frac{2}{4}$, which simplifies to $P(\text{orange}) = \frac{1}{2}$.
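For checking answers like this one, exact rational arithmetic avoids decimal slips. The short Python snippet below is my own illustration, not part of the textbook solution; it reproduces the simplification of 2/4 automatically.

```python
from fractions import Fraction

favorable = 2   # orange sections
possible = 4    # total sections

p_orange = Fraction(favorable, possible)
print(p_orange)  # prints 1/2 -- Fraction reduces 2/4 to lowest terms
```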
http://www.thetaris.com/wiki/Discounting
# Discounting

## Interest Rates and Discounting

This section briefly reviews various terms related to the concept of time value of money. Terms covered are 'Discounting', 'Stochastic discount factor', 'Discount bond', 'Bank account' and 'Numeraire'.

Discounting means obtaining the present value of future cash flows: it is the act of representing future cash flows as an equivalent immediate cash amount. Depending on the type of future cash flows, the discount rate can be a risk-free interest rate ($r(t)$) for risk-free investments, or a rate ($\mu (t)$) with some risk premium over and above the risk-free rate for risky investments.

A stochastic discount factor is a stochastic process that depends on the stochastic evolution of future interest rates: $D(t,T)=e^{-\int_t^T r(s)ds}, \qquad (1)$ where $D(t,T)$ is the stochastic discount factor at time $t$ with maturity $T$, and $r(s)$ is the future interest rate at time $s$.

A discount bond $P(t,T)$ (also known as a zero coupon bond) is a theoretical construct used to discount future cash flows. It has a price known at time $t$. The relation between the discount bond and the stochastic discount factor is: $P(t,T)=\mathrm{E} \left [ D(t,T) \mid \mathcal{F}_t \right ], \qquad (2)$ i.e. mathematically, the discount bond is the risk-neutral expectation of the stochastic discount factor.

A bank account is a process that starts with a unit cash amount and grows at the risk-free rate. Mathematically, it is $B(t) = B(0)e^{\int_0^t r(s)ds}, \qquad (3)$ where $B(t)$ is the bank account process. It is the solution to the following deterministic process $dB(t) = r(t)\,B(t)dt, \quad B(0) = 1. \qquad (4)$

A numeraire is any positive non-dividend-paying asset. We can choose a numeraire to normalize other assets for more convenient pricing. The choice of a numeraire induces a particular measure associated with this numeraire, and normalized assets are Martingales under this numeraire measure. A typical numeraire example in option pricing is the bank account. When asset prices are normalized by the bank account, they are Martingales under the bank account numeraire measure (also known as the risk-neutral measure). Under this bank account measure, the original assets grow at the risk-free rate. This is also why we have the risk-free rate as the drift term in the Black-Scholes PDE. An example of stock processes using the bank account as numeraire can be found at Geometric Brownian Motion.

## Discount numeraire in ThetaML

In the following, we illustrate by example the concept of discounting in ThetaML via the discount numeraire EUR. The following ThetaML model computes a discount factor EUR that is initially fixed at 1 EUR. The value of 1 EUR is defined in the currency unit Euro, i.e. 1 EUR = 1 Euro. The future values of the discount factor EUR decay exponentially at a constant interest rate r. By dividing security prices by the value of one EUR, we normalize the security prices in units of Euro coins.
model DiscountFactor
    %This model computes the discount factors under the assumption
    %of constant interest rates; the interest rate 'r' is for debt
    %securities denominated in the currency Euro;
    %the discount factor 'EUR' is a process variable; 'EUR' implicitly incorporates
    %scenario and time indexes
    import r   "Constant interest rate"
    export EUR "Discount factor process"

    %initialize the discount factor at 1 Euro
    EUR = 1
    %'loop ... inf' is an infinite loop; this infinite loop computes an interest
    %rate process of an arbitrary length; the lifetime of the infinite loop is
    %automatically extended to the desired length depending on a specific pricing
    %application
    loop inf
        %the ThetaML command 'theta' passes time by '@dt' units
        %the ThetaML parameter '@dt' denotes an arbitrary time unit; its specific
        %value depends on a specific pricing application
        theta @dt
        %the discount factor decays at a rate of 'r' for the time interval '@dt'
        EUR = EUR * exp(-r * @dt)
    end
end
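For readers without a ThetaML runtime, the same recursion is easy to reproduce in ordinary code. The following Python sketch is my own illustration (the rate r = 0.03, step dt = 1/12 and horizon T = 5 are assumed values, not taken from this wiki); it checks that the step-by-step decay agrees with the closed form e^{-rT} obtained from equation (1) when r is constant.

```python
import math

r = 0.03        # assumed constant interest rate
dt = 1.0 / 12   # assumed time step: one month
T = 5.0         # assumed horizon in years
steps = round(T / dt)

# Step-wise discount factor, mirroring the ThetaML loop:
# at every time step, EUR = EUR * exp(-r * dt)
eur = 1.0
for _ in range(steps):
    eur *= math.exp(-r * dt)

print(eur)               # step-by-step value
print(math.exp(-r * T))  # closed form e^{-rT}; identical for constant r
```

The two printed values coincide because the product of the per-step factors exp(-r dt) telescopes into exp(-rT); with a stochastic rate they would differ, which is exactly the distinction between equations (1) and (2).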
https://findcitytune.com/az-doe-or-die-ii-album-stream/
# AZ – Doe Or Die II [Album Stream]

It has been 26 years since AZ released his debut album Doe Or Die and 12 years since his last album. Almost three decades in the making, the legendary New York rapper is back with the sequel to his first effort. Taking it back to the essence, AZ enlists the likes of Rick Ross, Conway the Machine, Lil Wayne, Dave East and T-Pain for Doe Or Die II. It's only 13 tracks in length, so don't expect a long-winded project. Stream it all below and get your dose of AZ music for the new generation.
http://mathhelpforum.com/calculus/204429-how-do-cos-x-cos-x-cos-x-all-differ-derivative-wise.html
Thread: How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

1. How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

How do Cos²X and Cos X² and Cos (X)² all differ, derivative-wise?!

2. Re: How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

Originally Posted by skinsdomination09
How do Cos²X and Cos X² and Cos (X)² all differ, derivative-wise?!

Are you clear that $\cos^2(x)~\&~\cos(x^2)$ are two different functions? Now many computer algebra systems use the notation $\cos(x)^2$ for what traditionally has been written as $\cos^2(x)$.

3. Re: How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

Originally Posted by Plato
Are you clear that $\cos^2(x)~\&~\cos(x^2)$ are two different functions? Now many computer algebra systems use the notation $\cos(x)^2$ for what traditionally has been written as $\cos^2(x)$.

Yeah, but what confuses me is cos x² vs. cos (x)². Are they different? When parentheses aren't there I always get mixed up.

4. Re: How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

Originally Posted by skinsdomination09
Yeah, but what confuses me is cos x² vs. cos (x)². Are they different? When parentheses aren't there I always get mixed up.

Outside CAS no one uses $\cos(x)^2$, so there is no reason to be confused. The derivative of $\cos^2(x)$ is $-2\cos(x)\sin(x)~.$ The derivative of $\cos(x^2)$ is $-2x\sin(x^2)~.$ BTW, because these are functions, most of us insist that () be used. I will mark $\cos~x$ as wrong.

5. Re: How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

Originally Posted by Plato
Outside CAS no one uses $\cos(x)^2$, so there is no reason to be confused. The derivative of $\cos^2(x)$ is $-2\cos(x)\sin(x)~.$ The derivative of $\cos(x^2)$ is $-2x\sin(x^2)~.$

Thanks, but can you explain the cos²(x) one? It doesn't quite look like either the product or the chain rule, so I'm slightly confused. Sorry to be so difficult.

6. Re: How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

Originally Posted by Plato
Outside CAS no one uses $\cos(x)^2$, so there is no reason to be confused. The derivative of $\cos^2(x)$ is $-2\cos(x)\sin(x)~.$ The derivative of $\cos(x^2)$ is $-2x\sin(x^2)~.$ BTW, because these are functions, most of us insist that () be used. I will mark $\cos~x$ as wrong.

Sorry, but when I look at the top one I think chain rule. So I'd derive cos² (the outside) and get -2sin, and leave the inside the same, so I'd end up with -2sin(X). Why is this wrong?

7. Re: How do cos²X and cos X² and cos (X)² all differ, derivative-wise?

Originally Posted by skinsdomination09
Thanks, but can you explain the cos²(x) one? It doesn't quite look like either the product or the chain rule, so I'm slightly confused. Sorry to be so difficult.

If $f$ is a differentiable function then the derivative $D_x(f^2(x))=2f(x)f^{\prime}(x)$. It is really the product rule because $f^2(x)=f(x)\cdot f(x)~.$

Thanks!
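To spell out the chain-rule computation that post 7 summarizes (this worked derivation is my own addition, not part of the original thread): treat $\cos^2(x)$ as an outer squaring function applied to an inner cosine,

$$u = \cos(x), \qquad \cos^2(x) = u^2,$$

$$\frac{d}{dx}\,u^2 = 2u\cdot\frac{du}{dx} = 2\cos(x)\cdot\big(-\sin(x)\big) = -2\cos(x)\sin(x)~.$$

The error in post 6 was differentiating the outer square while forgetting to multiply by the derivative of the inner function $\cos(x)$; the chain rule always requires that second factor.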
https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/perception-stack.html
# Running the Autoware.Auto 3D perception stack

This section leverages the velodyne_node, which accepts UDP data as an input. Download the sample pcap file containing two LiDAR point clouds generated by the Velodyne VLP-16 Hi-Res, and place the pcap file within the adehome directory, for example ade-home/data/.

ADE Terminal 1 - start rviz2:

```
$ ade enter
ade$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/nvidia/lib64/  # see the note below
ade$ rviz2 -d /home/"${USER}"/AutowareAuto/install/autoware_auto_examples/share/autoware_auto_examples/rviz2/autoware.rviz
```

Note: Systems with an NVIDIA graphics card must set the LD_LIBRARY_PATH in order to load the correct driver; see issue #49 for more information.

ADE Terminal 2 - start udpreplay:

```
$ ade enter
ade$ udpreplay ~/data/route_small_loop_rw-127.0.0.1.pcap
```

ADE Terminal 3 - start the velodyne_node:

```
$ ade enter
ade$ cd AutowareAuto
ade$ source install/setup.bash
ade$ ros2 run velodyne_node velodyne_cloud_node_exe __params:=/home/"${USER}"/AutowareAuto/src/drivers/velodyne_node/param/vlp16_test.param.yaml
```

Note: The steps above leverage a pcap file; however, the velodyne_node can be connected directly to the sensor. Update the IP address and port arguments in the yaml file to connect to live hardware.

When the velodyne_node is running, the resulting LiDAR point cloud can be visualized within rviz2 as a sensor_msgs/PointCloud2 topic type. The data will look similar to the image shown below.

We will now start the ray ground filter node, for which we will need the Velodyne driver that we ran previously and a pcap capture file being streamed with udpreplay. For this step we will need a fourth ADE terminal, in addition to the previous three:

```
$ ade enter
ade$ cd AutowareAuto
ade$ source install/setup.bash
ade$ ros2 run ray_ground_classifier_nodes ray_ground_classifier_cloud_node_exe __params:=/home/"${USER}"/AutowareAuto/src/perception/filters/ray_ground_classifier_nodes/param/vlp16_lexus.param.yaml
```

This will create two new topics (/nonground_points and /points_ground) that output sensor_msgs/PointCloud2 messages, which we can use to segment the Velodyne point clouds. With rviz2 open, we can add visualizations for the two new topics; alternatively, an rviz2 configuration is provided in AutowareAuto/src/tools/autoware_auto_examples/rviz2/autoware_ray_ground.rviz that can be loaded to set up the visualizations automatically.

Autoware.Auto ray ground filter snapshot

Another component in the Autoware.Auto 3D perception stack is the downsampling filter, which is implemented in the voxel_grid_nodes package. We will run the voxel grid downsampling node in a new ADE terminal, using the same method as for the other nodes:

```
$ ade enter
ade$ cd AutowareAuto
ade$ source install/setup.bash
ade$ ros2 run voxel_grid_nodes voxel_grid_cloud_node_exe __params:=/home/"${USER}"/AutowareAuto/src/perception/filters/voxel_grid_nodes/param/vlp16_lexus_centroid.param.yaml
```

After this we will have a new topic named /points_downsampled, which we can visualize with the provided rviz2 configuration file in src/tools/autoware_auto_examples/rviz2/autoware_voxel.rviz.

Autoware.Auto voxel grid downsampling snapshot
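To sanity-check that the filtered topics are actually flowing, a small subscriber can be run in yet another ADE terminal. The following is an illustrative sketch, not part of the Autoware.Auto documentation; it assumes a sourced ROS 2 environment and uses only standard rclpy and sensor_msgs APIs:

```python
# check_topics.py -- minimal sketch: count the points arriving on a filtered topic.
# Assumes a ROS 2 environment (e.g. inside ade, with install/setup.bash sourced).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class CloudMonitor(Node):
    def __init__(self):
        super().__init__('cloud_monitor')
        # '/points_downsampled' is the topic created by the voxel grid node above;
        # swap in '/nonground_points' or '/points_ground' to watch the classifier.
        self.sub = self.create_subscription(
            PointCloud2, '/points_downsampled', self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2):
        # width * height gives the number of points in the cloud message.
        self.get_logger().info(f'received cloud with {msg.width * msg.height} points')


def main():
    rclpy.init()
    rclpy.spin(CloudMonitor())


if __name__ == '__main__':
    main()
```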
2020-01-17 20:13:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1724013239145279, "perplexity": 12026.967320329684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00488.warc.gz"}
http://shop.oreilly.com/product/9780937175842.do
Learning GNU Emacs

By Debra Cameron, Bill Rosenblatt
Publisher: O'Reilly Media
Final Release Date: October 1991
Pages: 442

GNU Emacs is the most popular and widespread of the Emacs family of editors. It is also the most powerful and flexible. (Unlike all other text editors, GNU Emacs is a complete working environment: you can stay within Emacs all day without leaving.) This book tells you how to get started with the GNU Emacs editor. It will also "grow" with you: as you become more proficient, this book will help you learn how to use Emacs more effectively. It will take you from basic Emacs usage (simple text editing) to moderately complicated customization and programming.

Topics covered include:

- Using Emacs to read and write electronic mail.
- Using Emacs as a "shell environment".
- How to take advantage of "built-in" formatting features.
- Customizing Emacs.
- The whys and hows of writing macros to circumvent repetitious tasks.
- Emacs as a programming environment.
- The basics of Emacs LISP.
- The Emacs interface to the X Window System.
- How to get Emacs.

The book is aimed at new Emacs users, whether or not they are programmers. It is also useful for readers switching from other Emacs implementations to GNU Emacs. Covers Version 18.57 of the GNU Emacs editor.
2017-01-19 10:21:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8899415135383606, "perplexity": 6717.550497967958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00238-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.nature.com/articles/s41467-018-03596-z?error=cookies_not_supported&code=60ebd9e1-505b-46f3-b3aa-796e81d9d28c
Article | Open

# Fluorescent label-free quantitative detection of nano-sized bioparticles using a pillar array

Nature Communications, volume 9, Article number: 1254 (2018). doi:10.1038/s41467-018-03596-z

## Abstract

Disease diagnostics requires detection and quantification of nano-sized bioparticles including DNA, proteins, viruses, and exosomes. Here, a fluorescent label-free method for sensitive detection of bioparticles is explored using a pillar array with micrometer-sized features in a deterministic lateral displacement (DLD) device. The method relies on measuring changes in the size and/or electrostatic charge of 1 µm polymer beads due to the capture of target bioparticles on their surface. These changes can be sensitively detected through the lateral displacement of the beads in the DLD array, wherein the lateral shift at the output translates into a quantitative measurement of the bioparticles bound to the bead. The detection of albumin protein and nano-sized polymer vesicles at concentrations as low as 10 ng mL−1 (150 pM) and 3.75 μg mL−1, respectively, is demonstrated. This label-free method holds potential for point-of-care diagnostics, as it is low-cost, fast, sensitive, and only requires a standard laboratory microscope for detection.

## Introduction

Disease diagnosis requires identification and quantification of various bioparticles such as DNA, RNA, proteins, viruses, exosomes, and bacteria. Current clinical laboratories use well-established sandwich assays, PCR, gel electrophoresis, and flow cytometry for detection of these bioparticles1,2. However, these methods use fluorescent labels, which increase detection cost and complexity through reliance on expensive optical systems and multiple sample-processing steps with minimum sample-volume requirements. Thus, fluorescent label-free bioparticle detection is gaining traction as an alternative means of disease diagnosis. Technological advancements in label-free methods using microcantilevers3, surface-enhanced Raman scattering (SERS)4, surface plasmon resonance (SPR)5,6,7, magnetic beads8, electrochemical detection9, and quartz crystal microbalances10 provide real-time information on bioparticle interactions, resulting in greater understanding of biochemical functions and drug interactions, and sensitive quantification of these bioparticles. Such a biosensor tracks changes in the biophysical interactions of binding events, mass, refractive index, or chemical reactions, and transduces the information into mechanical, electrical, or optical signals; these approaches have shown detection of proteins down to femtomolar levels. However, these techniques often require precision engineering of nano-features, complex optical setups, secondary antibodies in sandwich assays, novel nanoprobes (e.g., graphene oxide, carbon nanotubes, and gold nanorods), or additional amplification steps such as aggregation of nanoparticles to reduce the limit of detection (LOD)11.

Deterministic lateral displacement (DLD) pillar array platforms have been used for size-sensitive separation of circulating tumour cells and of bioparticles such as DNA and exosomes12,13,14,15. For a fixed critical DLD cut-off size (Dc), larger particles get displaced laterally relative to particles smaller than the Dc16,17. Separating nano-sized particles this way is challenging and costly to operate, owing to the need for nanofabrication, precision sample injection, and the low throughput imposed by the small gap size18.
So far, DLD research has mainly focused on bioparticle separation, and the potential use of this technique for detection has not been extensively explored19,20. Here, a fluorescent label-free method for sensitive detection of nano-sized proteins and polymer vesicles using a DLD pillar array with micrometer-sized features is demonstrated. Bioparticles of interest are captured or adsorbed onto polymer microbeads with specific ligands and detected quantitatively based on the lateral displacement of the microbeads in the pillar array. Two domains exist for this bioparticle detection phenomenon: for small bioparticles, electrostatic interactions dominate, whereas for large bioparticles, the physical increase in particle size plays the dominant role. The detection is performed through lateral displacement changes resulting from the modulation of microbead surface charge or size induced by the adsorption of bioparticles. The extent of the lateral displacement can be correlated with the amount of bioparticles in the sample. Using this bioparticle-on-bead method, changes in lateral shift correlating with the bioparticle concentrations can be sensitively discriminated. The detection of albumin proteins and nano-sized polymer vesicles at concentrations as low as 10 ng mL−1 (150 pM) and 3.75 μg mL−1, respectively, is demonstrated. This work sets a precedent for sensitive detection and quantification of biological particles via sensitive changes in the size or electrostatic interactions of the microbead carrier in DLD. We pushed the boundaries of DLD and applied our model to fluorescent label-free detection of nano-sized proteins and polymer vesicles, which opens opportunities for low-cost medical diagnosis, liquid biopsy, and detection of biologically relevant DNA, RNA, exosomes, viruses, and proteins.

## Results

### Electrostatic influence on particle–DLD interactions

Electrostatic forces in DLD are non-trivial and can significantly influence particle–DLD interactions21 (Fig. 1b). We investigated these effects using ×600 magnification and high-speed capture at 1000 fps; the different particle positions were experimentally tracked and superimposed (Fig. 2a). The DLD segment used for this capture had a gap of 4 µm and a gradient of 0.75°, resulting in a Dc of 700 nm, which ensures interaction between the 1 µm particles and the DLD pillars. The difference between the different ionic media is evident in the particle motion and the distance between particle positions. The electrostatic force repels the beads from the pillar at low ionic concentration, which displaces the beads into a streamline further away from the pillar. The simulations show that small shifts in particle streamlines can drastically change the curvature of the particle's motion. We also measured the mean flow velocity of these particles and found that increasing the ionic concentration reduces the average particle flow velocity (Supplementary Fig. 1). This confirms that at lower ionic concentrations, electrostatic interactions shift the beads into a streamline away from the pillar, resulting in an increase in velocity and a shift in lateral displacement. To evaluate the extent of the electrostatic lateral shift in Dapp, a baseline reference curve for the DLD device was developed by characterising Dapp using the DLD-S1 device under control conditions of a native poly-dimethylsiloxane (PDMS) surface with 1 µm polystyrene (PS) National Institute of Standards and Technology (NIST) beads and NaCl solutions of different concentrations (Fig. 2b).
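To get a feel for why ionic strength matters so much here, the Debye screening length can be evaluated at the NaCl concentrations discussed above. This is a minimal sketch using the standard formula for a symmetric 1:1 electrolyte (see Eq. 4 in the Methods), not the authors' analysis code:

```python
# debye_length.py -- minimal sketch: Debye length of a 1:1 electrolyte (e.g. NaCl)
# at the concentrations discussed in the text. Standard formula; see Eq. 4.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
N_A = 6.02214076e23          # Avogadro number, 1/mol
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
EPS_R = 78.5                 # relative permittivity of water near 25 C

def debye_length_nm(c_molar: float, temp_k: float = 298.15) -> float:
    """Debye length (nm) for a symmetric 1:1 electrolyte of molar concentration c."""
    rho = c_molar * 1000 * N_A          # ions of each species per m^3
    ionic = 2 * rho * E_CHARGE**2       # sum of z_i^2 * c_i * e^2 for z = +/-1
    return math.sqrt(EPS_R * EPS0 * K_B * temp_k / ionic) * 1e9

for c in (350e-6, 1e-3, 100e-3):
    print(f"{c*1e3:8.3f} mM NaCl -> lambda_D = {debye_length_nm(c):5.1f} nm")
# At ~350 uM the screening length is ~16 nm, so bead-pillar repulsion is
# long-ranged; at 100 mM it collapses to ~1 nm and the surface charges
# are effectively shielded, consistent with the Dapp trends reported here.
```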
The curve represents the mean Dapp size based on various conditions of the separation spectra (Supplementary Fig. 2). At ~350 µM NaCl, Dapp = Dp, where the stipulated particle size is comparable to the Dapp. NaCl concentrations lower than 350 µM yield a greater Dapp due to an increase in electrostatic repulsion, whereas higher NaCl concentrations mean an increase in surface charge shielding and a smaller Dapp. It is important to note that the size of the particle does not change; rather, the changing electrostatic interactions shift the particle into different streamlines in DLD, resulting in sensitive changes in apparent size. The cause of the smaller Dapp has not yet been investigated; a likely reason is the attraction of the particle towards the pillar via hydrophobic interaction when the electrostatic effect is shielded22. This is supported by the increase of bead adhesion on the pillar at ionic concentrations above 100 mM23. The Dapp curve serves as a baseline reference for comparison with the various parameters of surface charge, particle surface, and different buffers to be tested and optimized for bioparticle detection.

### Optimizing surface charge parameters for DLD separation

Three parameters are used to experimentally investigate surface charge effects on Dapp shifts, namely, the PDMS device surface charge, the bead surface charge, and the pH of the fluid medium. Figure 3 summarizes the effects of the various parameters on Dapp curve plots at different ionic concentrations. Despite large differences in Dapp shifts across the different parameters, there is minimal difference in bead sizes as observed with transmission electron microscopy (TEM) and dynamic light scattering (DLS) (Supplementary Fig. 3). Figure 3 shows the measured mean lateral shifts in Dapp between various DLD experimental parameters and the reference standard curves of native PDMS, 1 µm PS NIST beads, and NaCl solution. Using the control Dapp curve as the reference, the shift in Dapp curves in the DLD-S1 can be measured to characterize the influence of these parameters (Supplementary Fig. 4). The positive (+) or negative (−) shift of Dapp is made relative to the control Dapp. The plasma-treated surface increases the mean Dapp by +164 nm compared with native PDMS. Plasma activation increases the number of SiO surface groups and creates a highly negatively charged PDMS surface with an isoelectric point of pH 2 to pH 5 (refs. 24,25,26). By exposing the plasma-activated surface to different pH solutions, the Dapp values change dramatically, as the different pH influences the magnitude and ionization of the surface groups27. The use of NaOH ranging from pH 9.7 to 11.5 (50–3000 µM) alters the electrostatic interactions, increasing the mean Dapp to +342 nm compared with pure native PDMS in NaCl solution. The use of HCl, on the contrary, reduces the electrostatic influence of the highly charged plasma-activated PDMS surface25. This is expected, as the association of H+ with the surface of the DLD device results in a reduction of negatively charged groups, thus reducing the electrostatic interactions to a +135 nm mean Dapp. It is important to note that the surface charge of plasma-treated PDMS is still negative, as it is still beyond its isoelectric point. Different bead surfaces, namely PS, PS-carboxylated (COOH), and poly-allylamine hydrochloride (PAH)-coated beads, were also tested.
Using the same plasma-activated surface, COOH beads displayed a mean Dapp shift of +167 nm, which is similar to plain PS beads at +164 nm. Further measurements of colloidal zeta potential suggest similar colloidal stability, with potentials of −21.4 mV for PS and −29.5 mV for PS-COOH in deionised (DI) water. To produce a positively charged bead, the PS-COOH bead was coated with positively charged PAH, which physically adsorbs to the negatively charged bead. The zeta potential measurement shows that the PAH coating results in a +43 mV surface zeta potential in DI water. The beads were flowed into the DLD setup and, as expected, the positively charged beads were attracted to the negatively charged DLD device surface at the entrance of the reservoir and could not enter the DLD pillar region (Supplementary Fig. 5). Therefore, electrostatic interactions significantly influence particle separation in a DLD device, and ensuring charge repulsion between the particle and the DLD surface is necessary.

### Fluorescent label-free detection of protein-coated beads

A protein coat on a bead is predicted to change the surface properties of the bead substrate, which will result in a change in the lateral Dapp shift. The amount of Dapp shift is hypothesized to correlate with the amount of protein on the bead surface. Native albumin protein has a hydrodynamic radius of 3.7 nm, and a protein coat would increase the theoretical hydrodynamic size of the bead by at most 10 nm28. This size difference is lower than the resolution of our device (Supplementary Note 1 and Supplementary Fig. 6). However, using the particle–pillar electrostatic effects, these differences can be easily detected in the DLD setup by quantifying the changes in the Dapp of the beads. The DLD-S2 device has a higher theoretical resolution limit of ~10 nm and was used in this study to increase the sensitivity of the electrostatics-induced interaction. PS microbeads were suspended in different concentrations of albumin solution to form an albumin coat on the bead, with DI water as the adsorption medium. Beads suspended in protein concentrations ranging from 1.0 to 10.0 mg mL−1 showed decreasing Dapp (Fig. 4a). The zeta potential of the beads in NaCl solutions was measured, and it was found that the surface charges were shielded (Supplementary Fig. 7). This result corresponds to a decrease in mean Dapp for an albumin-coated bead, since the electrostatic charges are muted by the albumin coat in NaCl solution. The use of NaCl in the range of 2000 mM elicited the largest difference between the different protein concentrations (Fig. 4b and Supplementary Fig. 8a). We tested the effect of alkaline pH solutions on the increase in electrostatic interactions and detection sensitivity. At high pH, the albumin protein unfolds and its charge density increases significantly due to dissociation of H+ from charged groups29. The impact of the pH of the medium on the apparent diameter of protein-coated beads, compared with the Dapp in NaCl solution, showed an increase of approximately 400-fold in the sensitivity of protein coat detection (from 1 mg mL−1 to 2.5 µg mL−1). Figure 4c shows the separation of 2.5–10 µg mL−1 albumin-coated COOH beads in alkaline NaOH solutions of concentration 2.5–10.0 mM (pH 11.4–12). At 2.5 µg mL−1 of albumin, the protein coat formed on the COOH beads gives a mean Dapp difference of 18 nm. At 5.0 µg mL−1, however, the mean difference increases to 44 nm, which shows a difference in peaks across all alkaline pH values.
The most significant observable difference is at 10 µg mL−1 protein concentration, where there is a relatively large increase of 127 nm in the Dapp of coated beads compared with non-coated beads in alkaline pH 12 NaOH solution (Fig. 4d and Supplementary Fig. 8b). Conversely, low pH solutions were tested, and it was found that even at 1 mM HCl, the albumin-coated beads started to stick non-specifically to the device (Supplementary Fig. 9). This is because the charges on the beads changed from negative to positive at pH 4. This also indirectly confirms the presence of albumin on the beads, as the pI of albumin is ~4.7 (ref. 30). This charge inversion facilitates the electrostatic attraction between the positively charged beads and the surface of the negatively charged microchannels. An optimized albumin adsorption protocol was performed using pH 5.5 2-morpholinoethanesulfonic acid (MES) buffer and lower concentrations of the beads. This, combined with the use of pH 12 NaOH solution for DLD separation, results in a significant decrease in the limit of albumin detection (Fig. 5 and Supplementary Fig. 10). Four independent sets of samples were tested using four PDMS devices, and the mean Dapp values of the four samples were averaged and plotted as $\bar{\bar{D}}_{app}$ in Fig. 5 for various protein adsorption concentrations ranging from 100 to 1000 ng mL−1 (Supplementary Fig. 11). The results showed that we could detect as little as 100 ng mL−1 of albumin, which corresponds to approximately 1.5 nM, using this label-free approach. The $\bar{\bar{D}}_{app}$ values at 750 ng mL−1 and 1000 ng mL−1 were 822 and 832 nm, respectively; this ~10 nm Dapp change does not constitute a significant difference, being at the limit of resolution for the DLD-S2 device. To ensure specific binding to proteins and higher detection sensitivity, beads conjugated with human serum albumin (HSA) antibody were used to detect the presence of HSA (Fig. 6a). The binding of HSA to the antibody was confirmed using fluorescence-labelled secondary antibody binding (Supplementary Fig. 12). The limit of detection of HSA using antibody capture was found to be 10 times more sensitive, ranging from 10 to 75 ng mL−1, than physical adsorption of albumin on the 1 µm bead. Similarly, n = 3 independent sets of readings were performed using three PDMS devices, and the corresponding $\bar{\bar{D}}_{app}$ detection range for 10–75 ng mL−1 of HSA protein was 771–842 nm (Fig. 6b and Supplementary Fig. 13). The $\bar{\bar{D}}_{app}$ at 25 ng mL−1 of HSA is not statistically significant compared with 10 ng mL−1, as the $\bar{\bar{D}}_{app}$ difference is ~10 nm, which is close to the LOD of the DLD-S2 device. The detection was performed under NaOH pH 12 for comparison with the earlier studies of albumin adsorption on beads. The use of antibody-conjugated beads increases the specificity of protein binding and the sensitivity of detection to as low as 150 pM. This method of protein detection does not use fluorescence labels, secondary antibodies, or nanoparticle aggregation methods, and is comparable to existing label-free protein detection methods such as SERS, SPR, or microcantilevers31,32.
### Fluorescence label-free detection of vesicles on DLD

Extracellular vesicles (EVs) are important in intercellular communication, regeneration, and transport33,34,35,36,37. Furthermore, integral proteins present in the membrane of EVs have significant roles in mediating these communications and have emerged as biomarkers for exosomes in various pathological and normal conditions38,39,40,41,42,43. Thus, the detection of EVs and their membrane proteins is crucial for disease diagnosis. Owing to the advantages of mechanical stability and membrane tunability, polymer vesicles were chosen over lipid vesicles for incorporating the membrane proteins in the current study44,45,46. BD21 vesicles with a TEM size range of 132 ± 31 nm were prepared and characterised for size, shape, and surface charge (Methods and Supplementary Fig. 14). Similar to the protein detection, these nanovesicles were adsorbed onto the surface of the beads, and the detection range was found to be between 0.32 and 2.5 mg mL−1 in 0.1× phosphate buffered saline (PBS) medium (Supplementary Figs. 14 and 15, and Supplementary Note 2). To confirm that the lateral displacement is dominantly driven by the size increase rather than by charge, beads were coated with dissolved vesicles. The dissolved vesicles were prepared via detergent dissolution, which resulted in a size of ~8 nm and a surface charge similar to that of the undissolved vesicles. It was observed that the apparent diameter of the beads coated with dissolved vesicles is similar to that of the uncoated beads (Supplementary Fig. 16). To further enhance the specificity and sensitivity of nano-vesicle detection, primary antibody-conjugated beads were used to bind polymer nano-vesicles reconstituted with Aquaporin-1 (Aqp1) proteins. As most EVs contain surface markers and proteins, Aqp1 was reconstituted into polymer vesicles to demonstrate detection by antibody-coated beads in the DLD device based on the change in bead size. Aqp1 is a membrane pore protein that allows the permeation of water, and it was used as a model protein for the detection of nano-vesicles47,48. The incorporation of the Aqp1 membrane protein into vesicles was done by the widely used detergent-mediated reconstitution method, which involves vesicle dissolution using detergent and protein reconstitution upon removal of the detergent using biobeads (Methods). It is important to note that the reconstituted vesicles are smaller than the original vesicles, which could be due to a faster rate of detergent removal (Supplementary Fig. 17)49. The concentration of reconstituted Aqp1 vesicles was then assessed by bicinchoninic acid (BCA) assay (Supplementary Fig. 18), whereas their functionality was shown by the shrinking of Aqp1 vesicles upon gradual exposure to hyperosmotic sucrose solution, compared with vesicles without Aqp1 (Supplementary Table 1). The binding specificity of the BD21 nano-vesicles to the antibody-conjugated beads was confirmed using fluorescent probes (Fig. 7a–f). Interestingly, DLD vesicle detection through antibody-based capture of BD21 vesicles showed a 90-fold improvement in the limit of detection compared with DLD detection via physical adsorption on beads, from 0.33 mg mL−1 down to 3.75 µg mL−1 (Fig. 7 and Supplementary Fig. 14). Four sets of experiments were performed using four DLD-S2 devices to acquire the data and the corresponding analysis (Supplementary Fig. 19).
The detection range of these nano-vesicles spans two orders of magnitude, from 3.75 to 375 µg mL−1, with $\bar{\bar{D}}_{app}$ ranging from 807 to 925 nm in 0.1× PBS buffer (Fig. 7). At this ionic concentration, the electrostatic interactions are muted, and changes in the mean Dapp of the beads correlate with the increase in the amount of vesicles bound to the antibody-conjugated bead. At low vesicle concentrations, the sparsely bound vesicles hardly change the average diameter of the bead, whereas at 375 µg mL−1 the size of the bead increases by 140 nm compared with the control. This concentration is within the detection range of some exosomes isolated under physiological conditions, which can vary from several µg mL−1 (refs. 50,51) down to ng mL−1 levels (ref. 52), depending on the source of the exosome (plasma or serum), its cell type, pathological status (healthy or diseased), and purpose.

## Discussion

Using albumin and polymer vesicles as proof of concept, we demonstrated a fluorescent label-free method for the detection of nano-sized albumin proteins and vesicles using a PDMS DLD pillar array with micrometer-sized features. The advantages of this method are multi-fold. Fabrication of a pillar array with micrometer-sized features is much less challenging compared with devices that require nanofabrication. The fluid flow does not require high pressures, and the detection can be easily performed using standard bright-field bench-top microscopes18. The attachment of bioparticles on microbeads also significantly reduces the effect of diffusion. Specific ligands can be immobilized on different microbeads to capture different bioparticles of interest. Moreover, the electrostatic interactions between surface proteins and DLD pillars can be modulated in real time using different buffer ionic concentrations, and the resultant lateral shift of the microbeads can be used to detect different amounts of proteins21. Detection of HSA proteins on beads via the electrostatics-dominant change in DLD has an LOD of 10 ng mL−1, whereas detection of polymer vesicles based on particle size change has a detection limit of 3.75 µg mL−1. The vesicle membrane protein detection by antibody-coated beads in the DLD device can be further extended to specific detection of EVs such as exosomes based on their respective membrane proteins. We have demonstrated that this fluorescence label-free method for nano-sized bioparticle detection is inexpensive and sensitive, and only requires a standard laboratory microscope for the measurement of bead lateral position in the DLD PDMS device. Furthermore, with a 50 fps capture frame rate, it is possible to integrate the detection into portable imaging solutions, which holds great potential for use in point-of-care diagnostics.

## Methods

### Device design

Using Eq. 1, two DLD devices were designed with incremental step resolution (Supplementary Fig. 6). DLD System 1 (DLD-S1) was designed for a high dynamic range of Dapp, and DLD System 2 (DLD-S2) was designed for high-resolution displacement. DLD-S1 measures Dapp in 100 nm increments and DLD-S2 in 50 nm increments. The two systems also have different sensitivities for the separation of 1 µm bead substrates in ionic buffer: DLD-S1 has a dynamic range from 1 µM to 3 mM, whereas DLD-S2 has a greater range of 0.5–150 mM. These DLD devices have 14 DLD segments connected in series, with gap sizes fixed at 4 µm for DLD-S1 and 2 µm for DLD-S2. This results in a resolvable particle-size quasi-resolution of 34 nm for DLD-S1 and 17 nm for DLD-S2.
The input sample stream is sandwiched by two buffer streams to the width of a single input channel.

### Electrostatic- and size-dominant separation in DLD

DLD is a robust label-free microfluidic particle separation technique pioneered by Huang et al.13. This technique uses a pillar array with a fixed gap, tilted at an angle, which generates a set number of streamlines within each gap and laterally displaces particles above the critical diameter. The Dc of the separation is influenced by the gap between pillars and the row shift fraction. Davis et al.16,53 proposed an empirical formula for the DLD array54:

$$D_c = 1.4\,G\,\varepsilon^{0.48} \quad (1)$$

where $G$ is the gap or pore size between pillars and $\varepsilon$ is the row shift fraction ($\varepsilon = \tan\theta$), with $\theta$ the angle of the gradient. Particles larger than this Dc will be displaced laterally, whereas particles smaller than Dc flow through the array without any lateral displacement. Therefore, for lateral displacement to occur, the particle diameter ($D_p$) must be greater than Dc. Although Eq. 1 determines the physical particle size for separation, it does not account for the influence of electrostatic forces on the particle cut-off size in DLD. We previously determined that electrostatic force effects on DLD separation are non-trivial even for particles as large as 1 µm21. Using our DLD device, we can measure the effect of electrostatic forces through changes in the apparent size of the particles (Supplementary Fig. 20):

$$D_{app} = D_{F\text{-}EDL} + D_p \quad (2)$$

The $D_{F\text{-}EDL}$ term describes the additional displacement of the particle due to the summation of hydrodynamic and electrostatic forces acting between the DLD pillar and the particle. When $D_{F\text{-}EDL}$ is positive ($D_{app} > D_p$), the particle appears larger than it physically is in the DLD device and has a greater lateral displacement (Fig. 1). Interestingly, the converse is possible: when $D_{F\text{-}EDL}$ is negative ($D_{app} < D_p$), the particle's apparent size is reduced, resulting in reduced lateral displacement. This does not mean that the electrostatic force becomes attractive; rather, the surface charges on the pillar and particle surfaces are shielded such that the baseline repulsive force in a stable colloidal system (when $D_{app} = D_p$) is reduced. $D_{F\text{-}EDL}$ can be approximated from the following equation:

$$F_{F\text{-}EDL} = \frac{2\pi\lambda_D R}{\varepsilon_0\varepsilon}\left[\left(\sigma_p^2 + \sigma_s^2\right)e^{-2D_{F\text{-}EDL}/\lambda_D} + 2\sigma_p\sigma_s\,e^{-D_{F\text{-}EDL}/\lambda_D}\right] \quad (3)$$

in which $\sigma_p$ is the particle surface charge, $\sigma_s$ is the device surface charge, and $\lambda_D$ is the Debye length of the solution. The Debye length depends on the ion charge ($z_i$) and ionic concentration ($c_i^\infty$), the temperature ($T$), the Boltzmann constant ($k_b$), the electron charge ($e$), and the Avogadro number ($N_A$):

$$\lambda_D = \left(\frac{N_A e^2}{\varepsilon\varepsilon_0 k_b T}\sum_i z_i^2 c_i^\infty\right)^{-1/2} \quad (4)$$

It is predicted from these equations that as the ionic concentration of the solution decreases, the electrostatic double layer increases, implying greater electrostatic effects from the repulsive surfaces22. This increase in repulsive force virtually increases the diameter ($D_p$) of the particle by the electrostatic double-layer contribution ($D_{F\text{-}EDL}$), resulting in greater lateral displacement of the particle. Fluid flow velocities do affect the separation, but the effect is not very significant and would require an increase in excess of 100-fold before becoming observable. Thus, the electrostatic force interaction on particles in DLD is primarily influenced by three factors: the surface charge of the device, the surface charge of the particle, and the ionic concentration of the medium55.
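As a quick consistency check on Eq. 1, the critical diameter can be computed for the device geometries quoted above. This is a minimal sketch using only the values stated in the text, not the authors' design scripts:

```python
# dld_dc.py -- minimal sketch: critical diameter from the empirical formula of
# Davis et al., Dc = 1.4 * G * eps**0.48 (Eq. 1), with eps = tan(theta).
import math

def critical_diameter_nm(gap_um: float, theta_deg: float) -> float:
    eps = math.tan(math.radians(theta_deg))   # row shift fraction
    return 1.4 * gap_um * eps**0.48 * 1000    # convert um to nm

# Geometry quoted in the Results: 4 um gap with a 0.75 degree gradient.
print(critical_diameter_nm(4.0, 0.75))   # ~700 nm, matching the stated Dc
# The same relation applied to the 2 um DLD-S2 gap, assuming the same gradient
# (the real device chains 14 segments, so this is only indicative):
print(critical_diameter_nm(2.0, 0.75))   # ~350 nm
```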
Therefore, this electrostatics-based displacement can be used for the detection of nano-sized biomolecules such as proteins or DNA on microbeads, as the presence of a biomolecule coat changes the overall surface charge of the microbeads, and hence the electrostatic force and lateral displacement in a sensitive DLD device. In contrast, the size-based change is dominant for larger bioparticle coats such as vesicles. The adsorption of vesicles with a diameter (Dv) of 50–200 nm onto the microbead surface increases the overall size of the bead from a diameter (Dp) of 1 µm to a Dapp of ~1.05–1.4 µm. It is hypothesized that Dapp increases as the amount of adsorbed vesicles increases. At lower vesicle concentrations, the increase in Dapp is small due to the random adsorption of a small amount of vesicles on the beads, whereas at higher concentrations, when the bead surface is fully covered with vesicles, the Dapp reaches saturation and hence plateaus off. The use of an antibody specific to the membrane protein increases the sensitivity of vesicle binding and hence results in a lower LOD for the vesicles.

### Device fabrication

Briefly, SU-8 2005 (MicroChem, USA) was used to develop the DLD device negative mould on a 4-inch silicon wafer at a height of 3 µm. The SU-8 was patterned using a hard chrome glass mask (Infinite Graphics, Singapore) on a SUSS MA8 lithography mask aligner. The final device was fabricated from PDMS cured on the silicon SU-8 mould. The input and output holes were punched, and the PDMS device was bonded onto a glass slide using an oxygen plasma treatment for 2 min in a March PX-250 plasma machine. All devices used in this work were fabricated from the same mould to ensure consistency of results.

### Sample and buffer solutions

Three types of bead samples were used: 1 µm NIST PS beads (Bangslab NT15N, USA), 1 µm amine beads (Bangslab PA03N, USA), and 1 µm carboxylated beads (Polyscience 17458, USA). These beads were subsequently diluted to the required concentration of 0.1% (w/v) for all separation experiments. Sodium chloride (Sigma S5150, Singapore), sodium hydroxide (Sigma S2770), and hydrogen chloride (Sigma H9892) were prepared as stock solutions of 1 M. Non-ionic Pluronic F-127 (Sigma P2443) was prepared at 1% (w/v), whereas albumin solution (Sigma A9576) was purchased as a 30% (v/v) stock solution. A 50 mM MES buffer pH 5 solution was prepared for the optimized protein adsorption on beads. 1× PBS solution (Thermofisher 10010023) was used as the stock PBS solution for vesicle detection. All solutions were subsequently diluted in 18.2 MΩ cm Millipore ultra-pure DI water to the required concentrations.

### Antibody against HSA conjugation on beads

Polyclonal rabbit anti-HSA (Abcam, ab34856) was diluted 80× and conjugated onto the 1 µm PS-COOH bead surface via N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC, Sigma Singapore E7750) and N-hydroxysuccinimide (NHS, Sigma Singapore 130672) coupling. Briefly, EDC and NHS were added to the PS-COOH beads to activate the carboxyl groups and mixed under vortexing for 1 h at 1650 r.p.m. at 4 °C. After activation, the beads were washed three times by centrifugation at 6000 r.p.m. for 5 min. Before the antibody was added, the bead solution was subjected to probe sonication for 1 min to ensure uniform dispersion of the beads. The antibody–bead solution was incubated for 3 h at 1650 r.p.m. vortexing at 4 °C.
### Antibody-conjugated bead-based detection of HSA on DLD

The conjugated beads were mixed with different concentrations of HSA (Abcam): 0.075, 0.05, 0.025, and 0.01 µg mL−1, for 1 h at room temperature, and washed by centrifugation at 6000 r.p.m. for 3 min before being placed in the sample inlet for DLD separation with 10 mM NaOH buffer. The device used was plasma-treated PDMS that had been treated with Pluronic F-127 for 30 min to prevent particle adhesion. The number of particles in the sub-channels was counted and plotted as the particle output distribution. The mean of the distribution corresponds to the apparent particle size in the DLD device. The LOD was then determined from the apparent diameter difference relative to the uncoated beads.

### Polymer vesicle preparation

Poly(butadiene-b-ethylene oxide) (PBd(1200)-PEO(600)) di-block co-polymer (P9089-BdEO, Polymer Source) was obtained to prepare vesicles, referred to here as BD21, by the film hydration method. Briefly, 5 mg of polymer was dissolved in 200 µl of chloroform, which was evaporated slowly using a stream of nitrogen in a fume hood to make a thin film of polymer. To prepare dye-labelled vesicles, Rhodamine B octadecyl ester perchlorate (RBOE, Sigma) dye was added to the polymer solution in chloroform before making the thin film. The polymer thin films (with and without RBOE) were vacuum-dried for 4 h. To these, 1 mL of 1× PBS (pH 7.2) was added and stirred overnight on magnetic stirrers (IKA-Werke multi-position, RT 15 Power, Germany) at 400 r.p.m. to make vesicles, which were downsized by extrusion (6 times through a 0.45 µm filter and 6 times through a 0.22 µm filter) and dialysed to remove the free RBOE. Both BD21 and BD21-RBOE vesicles were characterized for size and surface charge by ZetaSizer (Malvern Instruments, UK) and transmission electron microscopy (JEOL 2010F transmission electron microscope, Jeol Ltd, Tokyo, Japan). To confirm the incorporation of RBOE dye in the vesicles, fluorescence intensity spectra were measured by fluorescence spectroscopy at an excitation of 533 nm.

### Reconstitution of membrane protein in polymer vesicles

The vesicles were solubilized using 50 µL of 10% Triton X-100. To this vesicle–detergent–micelle suspension, Aqp1 membrane protein (ab114210, Abcam) was added in a 1:1 weight ratio to the vesicles and incubated for 1 h at 4 °C. After this, 200 mg of Bio-Beads SM-2 was added for detergent removal, accompanied by incorporation of Aqp1 into the polymer vesicles. The Aqp1-reconstituted vesicles were characterized for vesicle size and fluorescence intensity by DLS and a microplate reader, respectively. Furthermore, the presence, functionality, and quantity of Aqp1 in the vesicles after reconstitution were confirmed by immunoassay (attaching the vesicles to PS micro-beads), an osmotic permeability assay, and a BCA assay, respectively. As-prepared RBOE-BD21 vesicles were mixed with 1 µm PS beads and incubated at 1650 r.p.m. for 1 h. After this, the beads were washed three times by centrifugation at 6000 r.p.m. for 5 min. The attachment of vesicles onto the beads was characterized by fluorescence intensity measurement at 533 nm excitation using an Infinite M200 PRO multimode microplate reader (Tecan) and by bright-field and fluorescence microscopy imaging.

### Antibody against Aqp1 conjugation on beads

Polyclonal rabbit anti-Aqp1 antibody against the Aqp1 membrane protein (AQP001, Alomone lab) was diluted 20× and conjugated onto the 1 µm PS-COOH bead surface via EDC/NHS coupling.
Briefly, EDC/NHS was added to the PS-COOH beads to activate the carboxyl groups and mixed under vortexing for 1 h at 1650 r.p.m. at 4 °C. After activation, the beads were washed three times by centrifugation at 6000 r.p.m. for 5 min. Before the antibody was added, the bead solution was subjected to probe sonication for 1 min to ensure uniform dispersion of the beads. The antibody–bead solution was incubated for 3 h at 1650 r.p.m. vortexing at 4 °C. The conjugation of antibody to beads was confirmed by the binding of a mouse anti-rabbit antibody against the rabbit anti-Aqp1 bound to the Aqp1 protein.

### Bead-based detection of Aqp1 vesicles in suspension

Before the Aqp1 vesicles were added for detection, the antibody-conjugated beads were incubated with blocker solution (1× PBS solution of 1% bovine serum albumin with 0.01% Pluronic) at 700 r.p.m. for 2 h at room temperature to reduce nonspecific binding. After blocking, the antibody beads were washed thrice at 6000 r.p.m. for 5 min at 4 °C and mixed with vesicles with and without Aqp1 for incubation at 700 r.p.m. for 1 h at room temperature. Next, the beads were washed thrice at 6000 r.p.m. for 5 min and mixed with primary antibody (mouse anti-Aqp1, Abcam ab117970, 20× dilution) at 700 r.p.m. for 2 h at room temperature, washed thrice (6000 r.p.m. for 5 min at 4 °C), and mixed with secondary antibody (Alexa Fluor 647-conjugated goat anti-mouse IgG, Abcam ab150113, 20× dilution). Finally, the beads were washed thrice at 6000 r.p.m. for 5 min at 4 °C before characterization using bright-field and fluorescence microscopy imaging.

### Bead-based detection of Aqp1 vesicles on DLD

The antibody-conjugated beads were mixed with different concentrations of vesicles bearing the Aqp1 membrane protein and placed in the sample inlet for separation with 0.1× PBS as the buffer. The DLD device used was a native PDMS device that had been treated with Pluronic F-127 for 30 min to prevent particle adhesion. The number of particles in the sub-channels was counted and plotted as the particle output distribution. The mean of the distribution corresponds to the apparent particle size in the DLD device. The LOD of the vesicles was then determined from the apparent diameter difference relative to the uncoated beads.

### Experimental setup

The fluid flow in the microfluidic device was driven by output fluid extraction using a Chemyx syringe pump and a Hamilton 100 µl glass syringe. The input sample and buffer reservoirs were exposed to atmospheric pressure. This method facilitates rapid washing and changing of buffer solutions. The experiment was visualised using an upright microscope, and the particle flows in the input and output regions of the device were captured using a high-speed Phantom M310 camera. The frame rates used were 50 fps for the detection zone at ×100 magnification and 1000 fps for high-speed imaging of individual bead motion within the DLD pillars at ×600 magnification. The number of particles flowing into the different outlet sub-channels was counted and plotted as the particle output distribution for data analysis (Supplementary Note 3).

### Particle trajectory modelling

The computational modelling was performed using COMSOL Multiphysics 5.0. The simulation geometry was set to match the actual DLD-S1 device, with 6 µm pillars, a 4 µm gap, and a 0.75° gradient, at a flow velocity of 250 µm s−1.
The Stokes flow module was used to obtain the velocity profile across the pillars, and particle tracing of 1 µm particles at different positions relative to the pillar, mimicking the position tracking from the experiment, was used to obtain the particle traces over time in a time-dependent study.

### Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Witwer, K. W. et al. Standardization of sample collection, isolation and analysis methods in extracellular vesicle research. J. Extracell. Vesicles 2, 20360 (2013).
2. Shen, J., Li, Y., Gu, H., Xia, F. & Zuo, X. Recent development of sandwich assay based on the nanobiotechnologies for proteins, nucleic acids, small molecules, and ions. Chem. Rev. 114, 7631–7677 (2014).
3. Kosaka, P. M. et al. Detection of cancer biomarkers in serum using a hybrid mechanical and optoplasmonic nanosensor. Nat. Nanotechnol. 9, 1047–1053 (2014).
4. Cheng, Z. et al. Simultaneous detection of dual prostate specific antigens using surface-enhanced Raman scattering-based immunoassay for accurate diagnosis of prostate cancer. ACS Nano 11, 4926–4933 (2017).
5. Yang, L., Li, P., Liu, H., Tang, X. & Liu, J. A dynamic surface enhanced Raman spectroscopy method for ultra-sensitive detection: from the wet state to the dry state. Chem. Soc. Rev. 44, 2837–2848 (2015).
6. Krishnan, S., Mani, V., Wasalathanthri, D., Kumar, C. V. & Rusling, J. F. Attomolar detection of a cancer biomarker protein in serum by surface plasmon resonance using superparamagnetic particle labels. Angew. Chem. Int. Ed. Engl. 50, 1175–1178 (2011).
7. Im, H. et al. Label-free detection and molecular profiling of exosomes with a nano-plasmonic sensor. Nat. Biotechnol. 32, 490–495 (2014).
8. Gaster, R. S. et al. Matrix-insensitive protein assays push the limits of biosensors in medicine. Nat. Med. 15, 1327–1332 (2009).
9. Fu, Y. et al. Highly sensitive detection of protein biomarkers with organic electrochemical transistors. Adv. Mater. 29, 1703787 (2017).
10. Deng, X. et al. A highly sensitive immunosorbent assay based on biotinylated graphene oxide and the quartz crystal microbalance. ACS Appl. Mater. Interfaces 8, 1893–1902 (2016).
11. Yang, S., Dai, X., Stogin, B. B. & Wong, T.-S. Ultrasensitive surface-enhanced Raman scattering detection in common fluids. Proc. Natl Acad. Sci. 113, 268–273 (2016).
12. McGrath, J., Jimenez, M. & Bridle, H. Deterministic lateral displacement for particle separation: a review. Lab. Chip. 14, 4139–4158 (2014).
13. Huang, L. R., Cox, E. C., Austin, R. H. & Sturm, J. C. Continuous particle separation through deterministic lateral displacement. Science 304, 987–990 (2004).
14. Nagrath, S. et al. Isolation of rare circulating tumour cells in cancer patients by microchip technology. Nature 450, 1235–1239 (2007).
15. Salafi, T., Zeming, K. K. & Zhang, Y. Advancements in microfluidics for nanoparticle separation. Lab. Chip. 17, 11–33 (2017).
16. Davis, J. A. et al. Deterministic hydrodynamics: taking blood apart. Proc. Natl Acad. Sci. USA 103, 14779–14784 (2006).
17. Kim, S.-C. et al. Broken flow symmetry explains the dynamics of small particles in deterministic lateral displacement arrays. Proc. Natl Acad. Sci. 114, E5034–E5041 (2017).
18. Wunsch, B. H. et al.
Nanoscale lateral displacement arrays for the separation of exosomes and colloids down to 20 nm. Nat. Nanotechnol. 11, 936–940 (2016).
19. Beech, J. P. et al. Separation of pathogenic bacteria by chain length. Anal. Chim. Acta 1000, 223–231 (2017).
20. Henry, E. et al. Sorting cells by their dynamical properties. Sci. Rep. 6, 34375 (2016).
21. Zeming, K. K., Thakor, N. V., Zhang, Y. & Chen, C.-H. Real-time modulated nanoparticle separation with an ultra-large dynamic range. Lab. Chip. 16, 75–85 (2016).
22. Donaldson, S. H. Jr et al. Asymmetric electrostatic and hydrophobic–hydrophilic interaction forces between mica surfaces and silicone polymer thin films. ACS Nano 7, 10094–10104 (2013).
23. Xu, L.-C. & Logan, B. E. Interaction forces between colloids and protein-coated surfaces measured using an atomic force microscope. Environ. Sci. Technol. 39, 3592–3600 (2005).
24. Beattie, J. K. The intrinsic charge on hydrophobic microfluidic substrates. Lab. Chip. 6, 1409–1411 (2006).
25. Barisik, M., Atalay, S., Beskok, A. & Qian, S. Size dependent surface charge properties of silica nanoparticles. J. Phys. Chem. C. 118, 1836–1842 (2014).
26. Kosmulski, M. Positive electrokinetic charge of silica in the presence of chlorides. J. Colloid Interface Sci. 208, 543–545 (1998).
27. Atalay, S., Barisik, M., Beskok, A. & Qian, S. Surface charge of a nanoparticle interacting with a flat substrate. J. Phys. Chem. C. 118, 10927–10935 (2014).
28. Piliarik, M. & Sandoghdar, V. Direct optical sensing of single unlabeled small proteins and super-resolution microscopy of their binding sites. Nat. Commun. 5, 4495 (2014).
29. Barbosa, L. R. et al. The importance of protein-protein interactions on the pH-induced conformational changes of bovine serum albumin: a small-angle X-ray scattering study. Biophys. J. 98, 147–157 (2010).
30. Yu, H., Qiu, X., Nunes, S. P. & Peinemann, K.-V. Biomimetic block copolymer particles with gated nanopores and ultrahigh protein sorption capacity. Nat. Commun. 5, 4110 (2014).
31. Arlett, J. L., Myers, E. B. & Roukes, M. L. Comparative advantages of mechanical biosensors. Nat. Nanotechnol. 6, 203 (2011).
32. Wu, B. et al. Detection of C-reactive protein using nanoparticle-enhanced surface plasmon resonance using an aptamer-antibody sandwich assay. Chem. Commun. 52, 3568–3571 (2016).
33. Tkach, M. & Théry, C. Communication by extracellular vesicles: where we are and where we need to go. Cell 164, 1226–1232 (2016).
34. Sung, B. H., Ketova, T., Hoshino, D., Zijlstra, A. & Weaver, A. M. Directional cell movement through tissues is controlled by exosome secretion. Nat. Commun. 6, 7164 (2015).
35. Hoshino, D. et al. Exosome secretion is enhanced by invadopodia and drives invasive behavior. Cell Rep. 5, 1159–1168 (2013).
36. Wiley, R. D. & Gummuluru, S. Immature dendritic cell-derived exosomes can mediate HIV-1 trans infection. Proc. Natl Acad. Sci. USA 103, 738–743 (2006).
37. Li, J. et al. Exosomes mediate the cell-to-cell transmission of IFN-α-induced antiviral activity. Nat. Immunol. 14, 793–803 (2013).
38. Almén, M. S., Nordström, K. J., Fredriksson, R. & Schiöth, H. B. Mapping the human membrane proteome: a majority of the human membrane proteins can be classified according to function and evolutionary origin. BMC Biol. 7, 50 (2009).
39. Sandfeld-Paulsen, B. et al. Exosomal proteins as prognostic biomarkers in non-small cell lung cancer. Mol. Oncol. 10, 1595–1602 (2016).
40. Kowal, J. et al.
Proteomic comparison defines novel markers to characterize heterogeneous populations of extracellular vesicle subtypes. Proc. Natl Acad. Sci. USA 113, E968–E977 (2016).
41. Van Der Vlist, E. J., Nolte, E. N., Stoorvogel, W., Arkesteijn, G. J. & Wauben, M. H. Fluorescent labeling of nano-sized vesicles released by cells and subsequent quantitative and qualitative analysis by high-resolution flow cytometry. Nat. Protoc. 7, 1311–1326 (2012).
42. Jeong, S. et al. Integrated magneto–electrochemical sensor for exosome analysis. ACS Nano 10, 1802–1809 (2016).
43. Shao, H. et al. Protein typing of circulating microvesicles allows real-time monitoring of glioblastoma therapy. Nat. Med. 18, 1835–1840 (2012).
44. Discher, B. M. et al. Polymersomes: tough vesicles made from diblock copolymers. Science 284, 1143–1146 (1999).
45. Christian, D. A. et al. Spotted vesicles, striped micelles and Janus assemblies induced by ligand binding. Nat. Mater. 8, 843–849 (2009).
46. Tanner, P. et al. Polymeric vesicles: from drug carriers to nanoreactors and artificial organelles. Acc. Chem. Res. 44, 1039–1049 (2011).
47. Blanc, L. et al. The water channel aquaporin-1 partitions into exosomes during reticulocyte maturation: implication for the regulation of cell volume. Blood 114, 3928–3934 (2009).
48. Papadopoulos, M. C. & Verkman, A. S. Aquaporin water channels in the nervous system. Nat. Rev. Neurosci. 14, 265–277 (2013).
49. Ollivon, M., Lesieur, S., Grabielle-Madelmont, C. & Paternostre, M. Vesicle reconstitution from lipid–detergent mixed micelles. Biochim. Biophys. Acta (BBA) Biomembr. 1508, 34–50 (2000).
50. Zech, D., Rana, S., Büchler, M. W. & Zöller, M. Tumor-exosomes and leukocyte activation: an ambivalent crosstalk. Cell Commun. Signal. 10, 37 (2012).
51. Beltrami, C. et al. Human pericardial fluid contains exosomes enriched with cardiovascular-expressed microRNAs and promotes therapeutic angiogenesis. Mol. Ther. 25, 679–693 (2017).
52. Song, X. et al. Cancer cell-derived exosomes induce mitogen-activated protein kinase-dependent monocyte survival by transport of functional receptor tyrosine kinases. J. Biol. Chem. 291, 8453–8464 (2016).
53. Inglis, D. W., Davis, J. A., Austin, R. H. & Sturm, J. C. Critical particle size for fractionation by deterministic lateral displacement. Lab. Chip. 6, 655–658 (2006).
54. Zeming, K. K., Salafi, T., Chen, C.-H. & Zhang, Y. Asymmetrical deterministic lateral displacement gaps for dual functions of enhanced separation and throughput of red blood cells. Sci. Rep. 6, 22934 (2016).
55. Butt, H.-J., Cappella, B. & Kappl, M. Force measurements with the atomic force microscope: technique, interpretation and applications. Surf. Sci. Rep. 59, 1–152 (2005).

## Acknowledgements

We acknowledge the funding support from Singapore Ministry of Education AcRF Tier 1 and Tier 3 funding (R397-000-270-114, MOE2016-T3-1-004). We also acknowledge a scholarship from the NUS Graduate School for Integrative Sciences and Engineering and infrastructural support from the National University of Singapore.

## Author information

### Affiliations

1. Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore, 117583, Singapore: Kerwin Kwek Zeming, Thoriq Salafi, Swati Shikha & Yong Zhang
2. NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore, 117456, Singapore: Thoriq Salafi & Yong Zhang

### Contributions

K.K.Z. and T.S.
contributed equally to the work. K.K.Z. contributed to the conception, device design, fabrication, and protein detection experiments. T.S. contributed to the Comsol simulations, antibody-based protein detection, vesicle detection, and post-review experiments. S.S. contributed to the vesicle fabrication, characterization, and detection. Y.Z. contributed to the overall planning, project conception, data analysis, and writing of the manuscript. ### Competing interests The authors declare no competing interests. ### Corresponding author Correspondence to Yong Zhang.
2018-04-22 17:53:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6522197127342224, "perplexity": 7371.807693412119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945637.51/warc/CC-MAIN-20180422174026-20180422194026-00080.warc.gz"}
https://yes-great.com/FirstOrder/sp/storycj-82120480-r.html
# First order

Saharon Shelah developed Morley's ideas with great resourcefulness and energy. His main aim was to stretch the 'Few is beautiful' idea by showing that there are clear dividing lines between kinds of theory T. On one side of a dividing line are theories with some good structural property that forces the number of nonisomorphic models of a given cardinality to be small. On the other side, every theory has (for example) two models of the same cardinality that are not isomorphic but are extremely hard to tell apart. Shelah coined the name classification theory for this research. The text of Lascar listed below is an elegant introduction to this whole programme, from Łoś to Shelah. Meanwhile Shelah himself has extended it far beyond first-order logic. Even in the first-order case, Shelah had to invent new set-theoretic techniques (such as proper forcing) to carry out his constructions.

The First Order stormtroopers have a lot of different ranks and variants, so I have made this post to show you what all the ranks and variants mean and what they do. Let's begin.

There are several proofs of this theorem, and not all of them are model-theoretic. Without the last sentence, the theorem is known as Craig's interpolation theorem, since William Craig proved this version a few years before Roger Lyndon found the full statement in 1959. As Craig noted at the time, his interpolation theorem gives a neat proof of Evert Beth's definability theorem, which runs as follows.

Abstract: We establish that first-order methods avoid saddle points for almost all initializations. Our results apply to a wide variety of first-order methods, including gradient descent and block coordinate descent.

If L is the first-order language of signature K, then Tarski's model-theoretic truth definition tells us when a sentence of L is true in A, and when an assignment of elements of A to variables satisfies a formula of L in A. Instead of talking of assignments satisfying a formula, model theorists often speak of the set of n-tuples of elements of A that is defined by a formula φ(v1,…,vn); the connection is that an n-tuple (a1,…,an) is in the defined set if and only if the assignment taking each vi to ai satisfies the formula.

The First Order, or simply the Order, was a galactic military junta ruled by Supreme Leader Váliant and allied with the Knights of Ren that came into existence as a result of the defeat of the Galactic Empire.

There is a proof of this theorem in the entry on classical logic. The theorem has several useful paraphrases. For example it is equivalent to the following statement:

The Rise of Skywalker takes place a year after The Last Jedi. The First Order is now led by Supreme Leader Kylo Ren after Snoke's death. Allegiant General Pryde, who served Palpatine in the Empire,[18] has now joined General Hux at the top of the military hierarchy. Kylo Ren discovers a physically impaired[19] Palpatine in exile on the Sith world Exegol. Palpatine reveals he created Snoke as a puppet to control the First Order and has built a secret armada of Star Destroyers called the Final Order.
In a bid to form a new Sith Empire, Palpatine promises Kylo control over the fleet on the condition that he find and kill Rey, who is revealed to be Palpatine's granddaughter.[20]

The order of a system is defined by its highest time derivative: a first order system involves a first derivative with respect to time, and a second order system a second derivative. In theory, a first order system is a system which has one integrator; as the order increases, so does the number of integrators.

### Differential equations

The first Star Wars film made by Disney after it purchased the franchise is set 30 years after Return of the Jedi. The film sees the return of Emperor Palpatine as the First Order and the rebels led by Rey prepare for a final confrontation.

In the film, the First Order is led by a mysterious figure named Snoke, who has assumed the title of Supreme Leader.[5] Like the Empire before them, the Order commands a vast force of stormtroopers.[9][10] The First Order uses regular and Special Forces versions of the Empire's venerable TIE fighter.[11] Its primary base of operations is Starkiller Base,[12] a mobile ice planet converted into a superweapon capable of destroying entire star systems across the galaxy by firing through hyperspace.[9] The base commander of Starkiller is General Hux, a ruthless young officer dedicated to the Order.[13]

Example of using the integrated rate law to solve for time and concentration, and calculating the half-life for a first-order reaction.

Anatolii Mal'tsev first gave the compactness theorem in 1938 (for first-order logic of any signature), and used it in 1940/1 to prove several theorems about groups; this seems to have been the first application of model theory within classical mathematics. Leon Henkin and Abraham Robinson independently rediscovered the theorem a few years later and gave some further applications. The theorem fails badly for nearly all infinitary languages.

### The First Order - Index - Wuxiaworld

1. The elementary amalgamation theorem is a consequence of the compactness theorem in the next section.
2. For space to surface delivery, the First Order is also seen deploying several standard troop transports. Elite units and high value command personnel such as Kylo Ren use the Upsilon-class command shuttle, a stylistic evolution of the old Imperial Lambda-class T-4a shuttle (but without the third fin on top, and now sporting large wings that retract upon themselves on landing).
3. Fate/Grand Order: First Order, 31 December 2016.
4. First Things First: Trump's first order of business. Still, Trump's first order of withdrawing from the TPP is admirable, in my opinion.
5. Translation of the word order, with American and British pronunciation and transcription: "was out of order", as in "he arranged the names of all the states alphabetically, and only California was out of place".

For example the substructure of the field R generated by the number 1 consists of 1, 0 (since it is named by the constant 0), 1+1, 1+1+1 etc., −1, −2 etc., in other words the ring of integers. (There is no need to close off under multiplication too, since the set of integers is already closed under multiplication.) If we had included a symbol for 1/x too, the substructure generated by 1 would have been the field of rational numbers. So the notion of substructure is sensitive to the choice of signature.

Describe first order elimination kinetics. How is this different from zero order kinetics? First order kinetics occur when a constant proportion of the drug is eliminated per unit time.

Using first order problem solving, and that's similar to reductionistic thinking.
What we do is say, "Okay, I don't have the medicine that I need right now…"

If A is an L-structure, then we form the diagram of A as follows. First add to L a supply of new individual constants to serve as names for all the elements of A. (This illustrates how in first-order model theory we easily find ourselves using uncountable signatures. The 'symbols' in these signatures are abstract set-theoretic objects, not marks on a page.) Then using L and these new constants, the diagram of A is the set of all the atomic sentences and negations of atomic sentences that are true in A.

The elementary amalgamation theorem: Suppose L is a first-order language, A is an L-structure and B, C are two elementary extensions of A. Then there are an elementary extension D of B and an elementary embedding e of C into D such that (i) for each element a of A, e(a) = a, and (ii) if c is an element of C but not of A, then e(c) is not in B.

Namely, if an element of A is named by a new constant c, then map that element to the element of B′ named c. A variant of this lemma is used in the proof of the elementary amalgamation theorem.

The First Order, EPUB and PDF download, latest chapter. This is a brand new story. Survive the darkness, see the light. There is no right…

### The First Order StarWars

1. Understanding the first derivative as an instantaneous rate of change or as the slope of the tangent line. The first derivative primarily tells us about the direction the function is going.
2. where the formulas φ1, …, φn, ψ are all atomic. A universal Horn sentence (also known to the computer scientists as a Horn clause) is a sentence that consists of universal quantifiers followed by a quantifier-free Horn formula; it is said to be strict if no negation sign occurs in it (i.e. if it doesn't come from a quantifier-free Horn formula of the third kind).
3. From these responses, we can conclude that the first order control systems are not stable with the ramp and parabolic inputs because these responses go on increasing even as time goes to infinity. The first order control systems are stable with impulse and step inputs because these responses have bounded output. But the impulse response doesn't have a steady state term. So, the step signal is widely used in the time domain for analyzing control systems from their responses.
4. Browse online math notes in Applications of First-Order ODE that will be helpful in learning math or refreshing your knowledge.

All three of these programmes generated new techniques for proof, constructions and classifications. As we should expect, researchers have explored the range of application of each technique. One result of this has been the emergence of several useful classes of first-order theories which relate to more than one of the three programmes. For example a central tool of Shelah's classification theory was his notion of forking, a far-reaching generalisation of earlier algebraic notions of dependence relation. The class of simple theories is defined by the fact that forking has certain nice properties while the class of rosy theories is characterised by the existence of a good notion of independence coming from a further generalisation of forking called þ-forking; several natural examples of simple theories came to light in geometric model theory, and the complete theories of o-minimal structures are examples of rosy theories.
In parallel with these technical advances, first-order model theory continues to grow more closely involved with problems in number theory, functional analysis and other branches of pure, and even applied, mathematics.

The First Order is a fictional autocratic military dictatorship in the Star Wars franchise, introduced in the 2015 film Star Wars: The Force Awakens. It formed following the fall of the Galactic Empire after the events of Return of the Jedi (1983).

Two L-structures that are models of exactly the same sentences of L are said to be elementarily equivalent. Elementary equivalence is an equivalence relation on the class of all L-structures. The set of all the sentences of L that are true in the L-structure A is called the complete theory of A, in symbols Th(A). A theory that is Th(A) for some structure A is said to be complete. (By the completeness theorem for first-order logic, for which see the entry on classical logic, a theory is complete if and only if it is maximal syntactically consistent.) The two structures A and B are elementarily equivalent if and only if Th(A) = Th(B).

Kylo fails to retrieve the map fragment that would lead him to Luke, and the Resistance manages to destroy Starkiller Base moments before it is able to fire on the Resistance base on D'Qar, though Kylo and General Hux are able to escape the explosion, as well as Captain Phasma offscreen.

First Order Motion Model for Image Animation. The first row on the right for each dataset shows the source videos. The bottom row contains the animated sequences with motion transferred from the…

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science.

In the previous chapter, we have seen the standard test signals like impulse, step, ramp and parabolic. Let us now find out the responses of the first order system for each input, one by one. The name of the response is given as per the name of the input signal. For example, the response of the system for an impulse input is called the impulse response.

The First Order tracks the small Resistance fleet via a hyperspace jump using new "hyperspace tracking" technology. Running low on fuel, the remaining Resistance fleet is pursued by the First Order. This devolves into a siege-like battle of attrition, as one by one the smaller Resistance ships run out of fuel and are destroyed by the pursuing First Order fleet. Finn and a Resistance mechanic, Rose, embark on a mission to disable the First Order's tracking device.
The Transfer Fcn First Order block implements a discrete-time first order transfer function of the input. The transfer function has a unity DC gain.

## First-order - Wikipedia

The structures A and B are called the factors of A × B. In the same way we can form products of any number of structures. If all the factors of a product are the same structure A, the product is called a power of A. A theorem called the Feferman-Vaught theorem tells us how to work out the complete theory of the product from the complete theories of its factors.

Snoke is a powerful figure in the dark side of the Force and has corrupted Ben, the son of Han Solo and Leia Organa who had been an apprentice to his uncle, the Jedi Master Luke Skywalker. Masked and using the name Kylo Ren, he is one of Snoke's enforcers, much like his grandfather Darth Vader had been the enforcer of Emperor Palpatine during the days of the Empire decades earlier. Kylo is the master of the Knights of Ren, a mysterious group of elite warriors who work with the First Order.[14][15] Kylo and Hux are rivals for Snoke's approval,[16] and the third member of the "commanding triumvirate" of the First Order is the formidable Captain Phasma, the commander of the stormtroopers.[17]

Mathematical model theory carries a heavy load of notation, and HTML is not the best container for it. In what follows, syntactic objects (languages, theories, sentences) are generally written in roman or greek letters (for example L, T, φ), and set-theoretic objects such as structures and their elements are written in italic (A, a). Two exceptions are that variables are italic (x, y) and that sequences of elements are written with lower case roman letters (a, b).

### 3.1 The compactness theorem

If we want to approximate this to first order, it just means that you use terms up to the linear term and scrap the rest, meaning that $f(x) \approx f(a) + f'(a)(x-a)$, which is a first-order Taylor series approximation of $f$ about $x = a$.

In 2006, Jonathan Pila and Alex Wilkie showed that provided one removes the subsets defined using only polynomial inequalities, subsets of Rn definable in an o-minimal expansion of the real field have few rational points. Subsequently, following a strategy first employed by Pila and Umberto Zannier to reprove the Manin-Mumford conjecture, various authors have used this o-minimal counting theorem to solve some important open problems in diophantine geometry.

first order of business meaning: a situation or subject that must be dealt with before anything else. (Meaning of "first order of business" in English.)

The unit ramp response c(t) follows the unit ramp input signal for all positive values of t, but there is a deviation of T units from the input signal.

Kobi Peterzil and Sergei Starchenko have developed a theory of o-minimal complex analysis. Just as with the classical approach to complex analysis, one may interpret the complex numbers as the set of ordered pairs of real numbers with addition and multiplication defined by the usual rules involving their real and imaginary parts. Of their results in this area, their algebraicity theorem, which asserts that if a subset of Cn is complex analytic (meaning that it is closed and is locally defined by the vanishing of finitely many complex analytic functions) and definable in some o-minimal expansion of the real field, then it must be algebraic, that is, defined by the vanishing of polynomial equations, is the most striking result and has had strong consequences in the study of functional transcendence and homogeneous dynamics.
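To make the first-order system responses above concrete, here is a quick numerical check (a sketch in Python; the time constant value is arbitrary). It evaluates the standard closed-form responses of a system with transfer function 1/(sT+1): the unit step response $c(t) = 1 - e^{-t/T}$ and the unit ramp response $c(t) = t - T + Te^{-t/T}$, and confirms that the ramp response ends up lagging the input by T units.

```python
import numpy as np

T = 2.0                        # example time constant (arbitrary value)
t = np.linspace(0.0, 20.0, 5)  # a few sample times

step_response = 1 - np.exp(-t / T)           # c(t) for a unit step input
ramp_response = t - T + T * np.exp(-t / T)   # c(t) for a unit ramp input

print(step_response[-1])          # -> ~1: bounded, so stable for a step input
print(t[-1] - ramp_response[-1])  # -> ~T: the steady-state deviation of T units
```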
### First Order - YouTube

• Suppose that L is a first-order language and M is the first-order language got by adding to L a new predicate symbol R. Suppose also that T is a theory in M. We say that T implicitly defines R if it is false that there are two M-structures which are models of T, have the same elements and interpret all the symbols of L in the same way but interpret the symbol R differently. We say that T defines R explicitly if there is a formula φ(x1,…,xn) of L such that in every model of T, the formulas φ and R(x1,…,xn) are satisfied by exactly the same n-tuples (a1,…,an) of elements. It is easy to see that if T defines R explicitly then it defines R implicitly. (This fact is known as Padoa's method; Padoa used the failure of implicit definability as a way of proving the failure of explicit definability.) Beth's theorem is the converse:
• If A and B are L-structures, we form their product C = A × B as follows. The elements of C are the ordered pairs (a,b) where a is an element of A and b is an element of B. The predicate symbols are interpreted 'pointwise', i.e. so that for example
• …miniaturized Death Star technology. Leia sends out transmissions to allies "in the Outer Rim" begging for aid, but they inexplicably do not appear. Just as the First Order breaches the base, Luke Skywalker appears to challenge them. A full barrage by their artillery has no effect on Luke, so Kylo Ren descends to duel him in person. Ren realizes that Luke is a Force projection; while Ren is distracted, the surviving Resistance escape the planet.
• Zerochan has 32 Fate/Grand Order: First Order anime images, Android/iPhone wallpapers, fanart, and many more in its gallery.
• Its name: the Holy Grail quest, "Grand Order".

### First Order (Star Wars) - Wikipedia

1. If the ultrafilter U is nonprincipal, i.e. contains no finite sets, then the diagonal map is not onto the domain of U-prod A, and in fact U-prod A is generally much larger than A. So we have a way of constructing large elementary extensions. The axiom of choice guarantees that every infinite set has many nonprincipal ultrafilters over it. Ultrapowers are an essential tool for handling large cardinals in set theory (see the entry on set theory).
2. Only after reaching First Order can a warrior project battle qi outside the body; someone who has just successfully condensed battle qi is unable to project battle qi outwards.
3. A construction is a procedure for building a structure. We have already seen several constructions in the theorems above: for example the omitting types construction and the initial model construction. Here are three more.

### First-order Model Theory (Stanford Encyclopedia of Philosophy)

• With her political standing severely weakened, and the New Republic Senate gridlocked and unwilling to recognize the First Order's military buildup, Leia Organa decides to withdraw and form her own small private army, known as the Resistance, to fight the First Order within its own borders. She is joined by other members of the former Rebel Alliance such as Admiral Ackbar. Publicly the New Republic continues to disavow direct association with the Resistance to maintain plausible deniability, and though the majority of the Senate does not want to intervene against the First Order, several Senators privately channel funds and resources to the Resistance.
This state of affairs continued for the next six years until the events of The Force Awakens.[3][4] Comic book writer Charles Soule, creator of the 2015 Marvel Comics series Star Wars: Poe Dameron, explained that immediately prior to the events of The Force Awakens, "The New Republic and the First Order are in a position of detente, and while there have been a few small skirmishes between the Resistance and the First Order, it's very much a sort of cold war."[7]

A first-order circuit can only contain one energy storage element (a capacitor or an inductor). The circuit will also contain resistance. So there are two types of first-order circuits: RC and RL.

We recall and refine some definitions from the entries on classical logic and model theory. A signature is a set of individual constants, predicate symbols and function symbols; each of the predicate symbols and function symbols has an arity (for example it is binary if its arity is 2). Each signature K gives rise to a first-order language, by building up formulas from the symbols in the signature together with logical symbols (including =) and punctuation.

## Video: Response of the First Order System - Tutorialspoint

### First-order - definition of first-order by The Free Dictionary

• The truth is, getting your first order on Fiverr is just to get the ball rolling and after that, things become an awful lot easier. Firstly, let's take a look at some key issues that sellers need to consider.
• Webnovel - novel - The First Order - The Speaking Pork Trotter - Sci-fi - This is a brand new story. Survive the darkness, see the light. There is no right…
• The solution of this separable first-order equation is $x = x_0 e^{-kt}$, where $x_0$ denotes the amount of substance present at time $t = 0$. The graph of this equation (Figure 4) is known as the exponential decay curve.

### 3.2 The diagram lemma

On First-Order Meta-Learning Algorithms. 8 Mar 2018 • Alex Nichol • Joshua Achiam • John Schulman. This family includes and generalizes first-order MAML, an approximation to MAML obtained by ignoring second-derivative terms.

When mathematicians introduce a class of structures, they like to define what they count as the basic maps between these structures. The basic maps between structures of the same signature K are called homomorphisms, defined as follows. A homomorphism from structure A to structure B is a function f from dom(A) to dom(B) with the property that for every atomic formula φ(v1,…,vn) and any n-tuple a = (a1,…,an) of elements of A,

A ⊨ φ(a) ⇒ B ⊨ φ(b),

This construction has some variants. We can define an equivalence relation on the domain of a product C, and then take a structure D whose elements are the equivalence classes; the predicate symbols are interpreted in D so as to make the natural map from dom(C) to dom(D) a homomorphism. In this case the structure D is called a reduced product of the factors of C. It is a reduced power of A if all the factors are equal to A; in this case the diagonal map from A to D is the one got by taking each element a to the equivalence class of the element (a,a,…).

The elements of A are the elements of dom(A). Likewise the cardinality or power of A is the cardinality of its domain.
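The exponential decay solution above is the same law behind the first-order kinetics fragments earlier on this page: the integrated rate law is $x(t) = x_0 e^{-kt}$ and the half-life is $t_{1/2} = \ln 2 / k$. A minimal sketch in Python (the rate constant and initial amount are made-up example values):

```python
import numpy as np

def first_order_decay(x0, k, t):
    """Solution of dx/dt = -k*x: x(t) = x0 * exp(-k*t)."""
    return x0 * np.exp(-k * t)

def half_life(k):
    """Time for half the substance to disappear: t_half = ln(2) / k."""
    return np.log(2) / k

k, x0 = 0.05, 1.0                          # example values: k in 1/s, x0 in mol
t_half = half_life(k)
print(t_half)                              # ~13.9 s
print(first_order_decay(x0, k, t_half))    # ~0.5, i.e. half of x0
```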
Since we can recover the signature K from the first-order language L that it generates, we can and will refer to structures of signature K as L-structures. We think of c as a name for the element cA in the structure A, and likewise with the other symbols.

### 3.3 The Lyndon interpolation theorem

On both sides, the denominator term is the same, so it cancels. Hence, equate the numerator terms.

This rather heavy definition gives little clue how useful saturated structures are. If every structure had a saturated elementary extension, many of the results of model theory would be much easier to prove. Unfortunately the existence of saturated elementary extensions depends on features of the surrounding universe of sets. There are technical ways around this obstacle, for example using weakenings of the notion of saturation. We have two main ways of constructing elementary extensions with some degree of saturation. One is by ultrapowers, using cleverly constructed ultrafilters. The other is by taking unions of elementary chains, generalising the proof we gave for the upward Löwenheim-Skolem theorem.

First-order calculations are an excellent way to complement the computational design techniques that now dominate the industry, he says, so he begins by providing the tools for…

1. The first order sensory neurons are in the dorsal root ganglia or the sensory ganglia of cranial nerves. Typically, the perception of pain travels through three orders of neurons.

### 3.4 The omitting types theorem

$$C(s)=\frac{1}{T\left(s+\frac{1}{T}\right)} \Rightarrow C(s)=\frac{1}{T}\left(\frac{1}{s+\frac{1}{T}}\right)$$

In The Force Awakens, the First Order is commanded by Supreme Leader Snoke and seeks to destroy the New Republic, the Resistance, and Luke Skywalker. Snoke's apprentice, Kylo Ren, is the master of the Knights of Ren, a mysterious group of elite warriors who work with the First Order. In the 2017 sequel The Last Jedi, Ren kills Snoke and becomes the new Supreme Leader. In the 2019 film The Rise of Skywalker, the First Order allies with the Final Order, an armada of Star Destroyers built by Palpatine, who is revealed to have been secretly controlling the First Order via his puppet ruler, Snoke, prior to the latter being usurped by Ren.[1][2]

There is another proof using the elementary amalgamation theorem and the elementary chain theorem. One can show that the structure A has a proper elementary extension A′. (There is a proof of this using the compactness theorem and the diagram lemma — see 3.1 and 3.2 below; another proof is by ultrapowers — see 4.1 below.) Now use A′ and again A′ for the structures B and C in the elementary amalgamation theorem. Then D as in the theorem is an elementary extension of A, and by (ii) in the theorem, it must contain elements that are not in A′, so that it is a proper elementary extension. Repeat to get a proper elementary extension of D, and so on until you have an infinite elementary chain. Use the elementary chain theorem to find an elementary extension of A that sits on top of this chain.
Keep repeating these moves until you have an elementary extension of A that has cardinality at least λ. Then if necessary use the downward Löwenheim-Skolem theorem to pull the cardinality down to exactly λ. This kind of argument is very common in first-order model theory. By careful choice of the amalgams at the steps in the construction, we can often ensure that the top structure has further properties that we might want (such as saturation, see 4.2 below).

First-order model theory, also known as classical model theory, is a branch of mathematics that deals with the relationships between descriptions in first-order languages and the structures that satisfy them.

Here at First Watch, we begin each morning at the crack of dawn, slicing fresh fruits and vegetables, baking muffins and whipping up our French toast batter from scratch. Everything is made to order and…

The First Order is one of many romantic novels, some original, some translated from Chinese, with themes of heroism, of valor, of ascending to Immortality, of combat, of magic, of Eastern mythology and legends.

First order change deals with the existing structure, doing more or less of something. Second order change is creating a new way of seeing things completely. It requires new learning and…

We write A ⊨ φ to mean that φ is true in A, or in other words, A is a model of φ. If φ(v1,…,vn) is a formula with free variables as shown, we write A ⊨ φ(a)

The long-awaited anime version of Fate/Grand Order, the new Fate RPG presented by Type-Moon, which has been downloaded more than 7 million times, is here! Fate/Grand Order - First Order.

According to Star Wars: The Force Awakens: The Visual Dictionary (2015) and the novel Star Wars: Aftermath (2015) by Chuck Wendig, after the Galactic Empire was defeated in Return of the Jedi at the climactic Battle of Endor in 4 ABY, thousands of worlds rose up to join the Rebel Alliance and destroy the disorganized Imperials, who fell victim to warlordism. The Alliance formally reorganized itself as the New Republic, and retook the Core Worlds, including the galactic capital Coruscant. One year after Endor, the remaining Imperial Fleet made a final, massive attempt at a counter-offensive which came to a climax at the planet Jakku, the biggest battle in the war since Endor. The Imperial counter-offensive was decisively defeated. The remaining Imperial forces were pushed back to a handful of sectors on the fringe of the Outer Rim, containing only a small fraction of the galaxy's population and industrial base. These sectors were a heavily fortified final redoubt, and the New Republic deemed that they posed too small a threat to justify the high cost in life that liberating them would require. The New Republic forced the Empire to settle for the Galactic Concordance, a humiliating armistice agreement which imposed strict disarmament plans and punishing reparations on the remaining Imperials.[3][4][5]

$$\Rightarrow \frac{1}{s\left(sT+1\right)}=\frac{A\left(sT+1\right)+Bs}{s\left(sT+1\right)}$$

### first-order - Wiktionary

• to mean that the n-tuple a is in the set defined by φ. (The entry on classical logic uses the notation 'A,s ⊨ φ', where s is any assignment to all the variables of L that assigns to each variable vi free in φ the i-th element in the n-tuple a.)
• 3. First Order Ordinary Linear Differential Equations: ordinary differential equations do not… 4. Some useful terms: a first order differential equation has y as the dependent variable and x as the independent variable.
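The partial-fraction step displayed above can be checked symbolically. A small sketch with sympy (T is kept symbolic; declaring t and T positive lets sympy drop the Heaviside factor that would otherwise appear in the inverse transform):

```python
from sympy import symbols, apart, inverse_laplace_transform

s, t, T = symbols('s t T', positive=True)

C = 1 / (s * (s * T + 1))   # s-domain step response of the system 1/(sT+1)

print(apart(C, s))                         # -> 1/s - T/(T*s + 1)
print(inverse_laplace_transform(C, s, t))  # -> 1 - exp(-t/T)
```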
• Examples of Pseudo First Order Reaction: consider a reaction in which one reactant is in excess, say hydrolysis of ethyl acetate. During the hydrolysis of 0.01 mol of ethyl acetate with 10 mol of water, the water is in such large excess that its concentration stays effectively constant, so the reaction behaves as first order in the ester.
• Want to discover art related to first_order? Check out inspiring examples of first_order artwork on DeviantArt, and get inspired by our community of talented artists.
• If A and B are structures of signature K with dom(A) a subset of dom(B), and the interpretations in A of the symbols in K are just the restrictions of their interpretations in B, then we say that A is a substructure of B and conversely B is an extension of A. If moreover B has some elements that are not in A, we say that A is a proper substructure of B and B is a proper extension of A. If B is a structure and X is a nonempty subset of dom(B), then there is a unique smallest substructure of B whose domain contains all of X. It is known as the substructure of B generated by X, and we find it by first adding to X all the elements cB where c are individual constants of K, and then closing off under the functions FB where F are function symbols of K.

Partly because of the difficulty of communications between Siberia and the West, these results of Zilber took some time to digest, and in part they had to be rediscovered in the West. But when the message did finally get through, the result was a new branch of model theory which has come to be known as geometric model theory. The programme is broadly to classify structures according to (a) what groups or fields are interpretable in them (in the sense sketched in the entry on model theory) and (b) whether or not the structures have 'modular geometries'; and then to use this classification to solve problems in model theory and geometry. From the mid 1980s the leader of this research was Ehud Hrushovski. In the early 1990s, using joint work with Zilber, Hrushovski gave a model-theoretic proof (the first complete proof to be found) of the geometric Mordell-Lang conjecture in all characteristics; this was a conjecture in classical diophantine geometry. Bouscaren (ed.) 1998 is devoted to Hrushovski's proof and the necessary background in model theory. Both (a) and (b) are fundamental to Hrushovski's argument.

Definitions of first-order correlation.

where b is (f(a1),…,f(an)). If we have '⇔' in place of '⇒' in the quoted condition, we say that f is an embedding of A into B. Since the language includes =, an embedding of A into B is always one-to-one, though it need not be onto the domain of B. If it is onto, then the inverse map from dom(B) to dom(A) is also a homomorphism, and both the embedding and its inverse are said to be isomorphisms. We say that two structures are isomorphic if there is an isomorphism from one to the other. Isomorphism is an equivalence relation on the class of all structures of a fixed signature K. If two structures are isomorphic then they share all model-theoretic properties; in particular they are elementarily equivalent.

## Differential Equations - First Order DE's

Meanwhile, The First Order member General Hux (Domhnall Gleeson) seems a likely candidate to serve a role similar to the one Grand Moff Tarkin occupied in the original Galactic Empire.

Meanwhile, Kylo Ren kills Snoke, replacing him as Supreme Leader of the First Order. Poe Dameron stages a mutiny against Holdo, believing her inept and without a plan.
Holdo reveals, however, that she didn't trust Poe with her plan due to his reckless assault on the dreadnought. The plan is for the Resistance to flee in cloaked shuttles to an old Rebel Alliance base on the planet Crait, while Holdo remains on the Resistance command ship. The First Order discover the ruse, however, destroying most of the shuttlecraft. Finn and Rose locate the tracking device but are captured by Captain Phasma. Holdo sacrifices herself by directing the Resistance command ship to lightspeed jump directly into Snoke's flagship, destroying much of the First Order fleet in the process. Finn manages to kill Captain Phasma and escape with Rose to Crait.

Which of the following first order logic statements represents the following: "Each finite state automaton has an equivalent pushdown automaton"?

## First-order reaction example (video) - Khan Academy

We say that A is an elementary substructure of B, and B is an elementary extension of A, if A is a substructure of B and the inclusion map is an elementary embedding. It's immediate from the definitions that an elementary extension of an elementary extension of A is again an elementary extension of A.

This Craft Essence features the following character(s): Mash Kyrielight and Ritsuka Fujimaru.

The Last Jedi also introduces a new starfighter element to the First Order fleet, the TIE Silencer superiority fighter. Much as TIE Interceptors were the next generation fighter starting to phase out the original Imperial TIE fighters, TIE Silencers are a next-generation fighter given only to the most elite units. Visually they somewhat resemble a cross between a TIE Interceptor and Darth Vader's TIE Advanced x1 prototype, being wider and more elongated, while boasting heavier weapons and shields to be able to face X-wings head-on. Their technical designation is "TIE/vn" (because in earlier drafts, the ship was called "TIE vendetta"). Kylo Ren pilots his own personal TIE Silencer in The Last Jedi, which he uses to assault the Resistance ship Raddus.

## GitHub - AliaksandrSiarohin/first-order-model: This repository contains…

Suppose A is an L-structure, X is a set of elements of A, B is an elementary extension of A and b, c are two elements of B. Then b and c are said to have the same type over X if for every formula φ(v1,…,vn+1) of L and every n-tuple d of elements of X,

B ⊨ φ(d, b) if and only if B ⊨ φ(d, c).

After encouragement from the spirit of Luke Skywalker, Rey uses his old T-65B X-wing to travel to Exegol, and leads the Resistance there too. Finn and Poe engage the Final Order fleet while Rey confronts Palpatine herself. Lando Calrissian and Chewbacca arrive with reinforcements from across the galaxy, and they manage to defeat the Final Order. With help from Ben and the spirits of past Jedi, Rey finally destroys Palpatine for good.[20] The galaxy rises up against the First Order, ultimately defeating it.

Solved exercises of First order differential equations. Get detailed solutions to your math problems with our First order differential equations step-by-step calculator.

The First Order's handful of sectors simply do not possess the galaxy-wide resources the old Empire used to be able to draw upon, and in addition the armistice treaties with the New Republic put strict limitations on how many ships it could physically build. Therefore, unlike the old Galactic Empire's swarm tactics, the First Order's military has had to adapt to a more "quality over quantity" philosophy, making efficient use of what few resources it has.
Culturally, the Galactic Empire's Sith-influenced philosophies have been incorporated and streamlined. Its military is built upon "survival of the fittest"; if one soldier cannot fulfill their duty and dies serving the First Order, then so be it. The Order can only become stronger by culling the weak from their ranks.[8]

First Order Retrievability Toolbag: You're doing a job. It'll only take five minutes; all I need is that tool. Hmm, where is it? I've only just seen it, it was in my toolbox right here, for goodness sake…

(See the entry on model theory for the notion ⊨ of model-theoretic consequence. To derive the second statement from the first, note that 'T ⊨ φ' is true if and only if there is no model of the theory T ∪ {¬ φ}.)

Fate/Grand Order - First Order - Blu-ray/DVD released Wednesday, March 28, 2017.

Let L be a first-order language and let A and B be L-structures. Suppose e is a function which takes some elements of A to elements of B. We say that e is an elementary map if whenever a sequence of elements a1, …, an in the domain of e satisfy a formula φ(x1,…,xn) of L in A, their images under e satisfy the same formula in B; in symbols

A ⊨ φ(a1,…,an) ⇒ B ⊨ φ(e(a1),…,e(an)).

We say that e is an elementary embedding of A into B if e is an elementary map and its domain is the whole domain of A. As the name implies, elementary embeddings are always embeddings.

### First order - Etsy

• Suppose we use a set I to index the factors in a product C. An ultrafilter over I is a set U of subsets of I with the properties that I itself is in U but the empty set is not, that U is closed under finite intersections and under supersets, and that for every subset X of I, either X or its complement I∖X is in U.
• The existence of partially saturated elementary extensions of the field R of real numbers is the main technical fact behind Abraham Robinson's nonstandard analysis. See Section 4 of the entry on model theory for more information on this. Though model theory provided the first steps in nonstandard analysis, this branch of analysis rapidly became a subject in its own right, and its links with first-order model theory today are rather slim.
• Every healthy branch of mathematics needs a set of problems that form a serious challenge for its researchers. We close with a brief introduction to some of the research programmes that drove first-order model theory forwards in the second half of the twentieth century. The book of Marcja and Toffalori in the bibliography gives further information about these programmes. There are other current programmes besides these; see for example the handbook edited by Yuri Ershov, which is about model theory when the structures are built recursively.
• After simplifying, you will get the values of A, B, C and D as $1$, $-T$, $T^2$ and $-T^3$ respectively. Substitute these values in the above partial fraction expansion of C(s).
• first-order (not comparable). (mathematics, logic) Of one of a series of models, languages, relationships, forms of logical discourse, etc., being the simplest one or the first in a sequence: first-order approximation, first-order control, first-order difference, first-order election, first-order fluid…
• The First Order also fields its own evolution of the old AT-series of armored transports, the AT-M6, used as a heavy siege weapon. Dwarfing the older AT-AT, the AT-M6 has numerous design improvements including heavy serrated cable-cutters mounted on its legs—to avoid being tripped up again like AT-ATs were at the Battle of Hoth.
These cutters are positioned in such a way that the AT-M6 walks on its "knuckles" instead of the pads of its feet, which—combined with a heavy siege cannon that gives it a hunched-over appearance—gives the AT-M6 an almost gorilla-like profile compared to the more elephant-like AT-AT.

## First Order Font - dafont

The First Order Handbook Table of Contents: Introduction, Restricted Areas, Rules & Regulations, Name and Color Configuration, Hotkeys, Rank Structure, Divisions, Knights of Ren, Praetorian Guard, Shout.

In 1904 Oswald Veblen described a theory as categorical if it has just one model up to isomorphism, i.e. it has a model and all its models are isomorphic to each other. (The name was suggested to him by John Dewey, who also suggested the name 'disjunctive' for other theories. This pair of terms comes from traditional logic as names of types of sentence.) The depressing news is that there are no categorical first-order theories with infinite models; we can see this at once from the upward Löwenheim-Skolem theorem. In fact if T is a first-order theory with infinite models, then the strongest kind of categoricity we can hope for in T is that for certain infinite cardinals κ, T has exactly one model of cardinality κ, up to isomorphism. This property of T is called κ-categoricity.

## Anime: Fate/Grand Order: First Order

A remarkable (but in practice not terribly useful) theorem of Saharon Shelah tells us that a pair of structures A and B are elementarily equivalent if and only if they have ultrapowers that are isomorphic to each other.

The new "First Order" came to be ruled by the mysterious Force-wielder known as Supreme Leader Snoke, who was secretly created by the resurrected Emperor Palpatine to control the First Order.[2] Through Snoke, Palpatine seduced Leia's own son Ben Solo to the dark side of the Force, who renamed himself "Kylo Ren".[2] On his turn to the dark side, Ben/Kylo slaughtered most of his uncle Luke Skywalker's other Jedi apprentices (with the rest joining him) and destroyed his new academy. Blaming himself, Luke fled into self-imposed exile to search for the ancient first Jedi Temple. Kylo Ren, meanwhile, took on a position as Snoke's right hand within the First Order's military.

These are the instructions for building the LEGO Star Wars First Order Battle Pack that was released in 2016.
2020-10-27 21:30:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5612016320228577, "perplexity": 1545.0118949289563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894759.37/warc/CC-MAIN-20201027195832-20201027225832-00086.warc.gz"}
https://docs.injective.network/develop/modules/Injective/exchange/
# Exchange

## Abstract

The exchange module is the heart of the Injective Chain which enables fully decentralized spot and derivative exchange. It is the sine qua non module of the chain and integrates tightly with the auction, insurance, oracle, and peggy modules.

The exchange protocol enables traders to create and trade on arbitrary spot and derivative markets. The entire process of orderbook management, trade execution, order matching and settlement occurs on chain through the logic codified by the exchange module.

The exchange module enables the exchange of tokens on two types of markets:

1. Derivative Market: either a Perpetual Swap Market or a Futures Market.
2. Spot Market
2022-12-02 13:40:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3266350328922272, "perplexity": 6405.763688096362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710902.80/warc/CC-MAIN-20221202114800-20221202144800-00047.warc.gz"}
https://cs.stackexchange.com/questions/94117/using-a-2nd-neural-network-to-predict-1st-neural-network-prediction-error
# Using a 2nd neural network to predict 1st neural network prediction error

So for example, we are trying to predict the amount of rainfall in the afternoon based on continuous features such as humidity and temperature in the morning.

1st neural network: Regression neural network on features to give one output label, which is a continuous value for predicted rainfall in the afternoon.

2nd neural network: Features will be the same as the 1st neural network. But now, the label will be the absolute difference between the 1st neural network's predicted rainfall and the actual rainfall.

From this, we can train the 2nd neural network to recognise what particular set of features will result in the 1st neural network giving a 'bad' prediction, and be more wary of that 'bad' prediction. In a way, this is like using another neural network to give the confidence level of the first neural network, solely based on the same features (the humidity and temperature in the morning).

I could not find much literature on this subject and am wondering if this idea makes sense in the first place. Perhaps stacking neural networks over each other is a bad idea because it compounds the error from one network to another?

I tried this with some data, except my 2nd neural network is a classifier which classifies if the error is above a certain threshold (bad prediction) or below a certain threshold (good prediction). However, from a few different model runs, it seems that my 2nd neural network usually gives a Matthews correlation coefficient of about 0. This means my 2nd neural network is as good as guessing whether the 1st neural network prediction is good or bad. So I am not sure if the problem is the idea itself or that my model hyperparameters are bad.

More details: I used 10-fold cross-validation for the 1st model to get a predicted rainfall for all the data. Then I used another separate 10-fold cross-validation and a siamese neural network for the 2nd model to predict whether the 1st neural network prediction is good or bad.

• If a neural network can identify the error of another neural network with the same structure and features, couldn't the original neural network have learned the error and corrected for it? Jul 10, 2018 at 16:12
• That is true... I was thinking the 2nd network can be used to learn what type of features would make the 1st network perform badly... Jul 11, 2018 at 4:19

It's not an unreasonable approach, but I suspect you'd need to define a custom loss function to make it work well. I can also suggest two different, more sophisticated approaches: the bootstrap, or a variational neural network. In the bootstrap, you train many classifiers (say, 100 of them); each is trained on a different random sample of the training set, and then you look at the distribution of outputs from these classifiers when you feed in the input $x$ to each of them. In a variational network, instead of outputting a single number $y$ for the prediction in response to the input $x$, the network outputs two parameters $(\mu,\sigma)$, with the idea that the network is predicting a Gaussian distribution $\mathcal{N}(\mu,\sigma^2)$ as an approximation for $p(y|x)$. Then you can use this to get a sort of confidence interval for the prediction, e.g., $[\mu-2\sigma,\mu+2\sigma]$. I'm not an expert on this, but I think a variational network is actually very close to what you suggested; we can think of it as two networks, one that outputs $\mu$ (your first network) and one that outputs $\sigma$ (your second network).
However, variational networks are trained with a special loss function. In particular, if we have an instance $(x_i,y_i)$ in the training set, the loss for the network is the negative log likelihood $-\log p(y_i)$, where $p(y_i)$ represents the probability of getting the output $y_i$ from a Gaussian distribution with parameters $\mathcal{N}(\mu,\sigma^2)$, where $\mu,\sigma$ are the two outputs from the network. The Gaussian distribution has probability density function $$p(y_i) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(y_i-\mu)^2/2\sigma^2}.$$ Therefore, the loss function is $$-\log p(y_i) = \frac{(y_i-\mu)^2}{2\sigma^2} + \log \sigma + c'$$ where $c' = \log\sqrt{2\pi}$ is a constant that can be ignored. Thus, I think you can think of the variational approach as being equivalent to your approach, but with a custom loss function chosen to be appropriate for what you're trying to achieve.
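To make this concrete, here is a minimal sketch of the variational approach (PyTorch; the architecture and the synthetic data are made up for illustration). A single network outputs $\mu$ and $\log\sigma$, and is trained with the Gaussian negative log likelihood derived above; predicting $\log\sigma$ rather than $\sigma$ keeps the standard deviation positive without constraints.

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """One network, two outputs per input: the mean mu and log(sigma)."""
    def __init__(self, n_features):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                  nn.Linear(32, 2))

    def forward(self, x):
        out = self.body(x)
        return out[:, 0], out[:, 1]          # mu, log_sigma

def gaussian_nll(mu, log_sigma, y):
    # (y - mu)^2 / (2 sigma^2) + log(sigma), i.e. -log N(y; mu, sigma^2)
    # up to the ignorable constant c'.
    return ((y - mu) ** 2 * torch.exp(-2 * log_sigma) / 2 + log_sigma).mean()

# Made-up data with feature-dependent noise, so sigma has something to learn.
x = torch.rand(512, 2)
y = x[:, 0] + 0.3 * x[:, 1] * torch.randn(512)

model = GaussianHead(2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    mu, log_sigma = model(x)
    gaussian_nll(mu, log_sigma, y).backward()
    opt.step()
# torch.exp(log_sigma) now estimates the input-dependent noise level,
# which plays the role of the questioner's second network.
```

Recent PyTorch releases also ship torch.nn.GaussianNLLLoss, which packages essentially this loss.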
2023-03-27 06:17:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8491384983062744, "perplexity": 295.80931128429603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00117.warc.gz"}
https://sciencebehindthesport.wvu.edu/science-behind-cycling/brakes
# Brakes

## The Science of Stopping: It's All About Friction - Bicycle Brakes Convert Kinetic Energy (Motion) Into Thermal Energy (Heat)

### Braking Distance

The approximate braking distance can be found by determining the work required to dissipate the bike's kinetic energy. Through the Work-Energy Principle it can then be said that:

$$(\mu \times \text{mass} \times \text{gravity}) \times \text{distance} = \frac{1}{2} \times \text{mass} \times \text{velocity}^2$$

Finally, by rearranging the equation and cancelling like terms we can form an equation for braking distance:

$$\text{distance} = \frac{\text{velocity}^2}{2 \times \mu \times \text{gravity}}$$

where μ is the coefficient of friction.

### Rim Brake

• How's it work? Rubber pads are pressed against the rim of the wheel.
• Advantages: inexpensive, lightweight, easy to maintain, mechanically simple.
• Disadvantages: easily contaminated, less braking power.

### Disc Brake

• How's it work? Metallic or ceramic pads are pressed against a metal rotor that's attached to the wheel.
• Advantages: powerful, protected from contaminants, better heat dissipation.
• Disadvantages: expensive, heavy, difficult to maintain.
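As a worked example of the braking-distance formula (a sketch in Python; the speed and friction coefficients are illustrative values, not measured data):

```python
def braking_distance(velocity, mu, gravity=9.81):
    """d = v^2 / (2 * mu * g), with velocity in m/s and distance in metres."""
    return velocity ** 2 / (2 * mu * gravity)

# 10 m/s (36 km/h) with a dry-pad friction coefficient of 0.6
# versus a contaminated-rim value of 0.3: halving mu doubles the distance.
print(braking_distance(10, 0.6))  # ~8.5 m
print(braking_distance(10, 0.3))  # ~17.0 m
```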
2020-06-04 04:23:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4596339166164398, "perplexity": 7612.0580869491405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00472.warc.gz"}
https://www.semanticscholar.org/paper/The-%24K%24-theory-of-twisted-multipullback-quantum-odd-Hajac-Nest/624b6c5f238158ea805beb47523d45de9c1f8c49
# The $K$-theory of twisted multipullback quantum odd spheres and complex projective spaces

@article{Hajac2015TheO,
  title={The $K$-theory of twisted multipullback quantum odd spheres and complex projective spaces},
  author={Piotr M. Hajac and Ryszard Nest and David Pask and Aidan Sims and Bartosz Zieliński},
  journal={Journal of Noncommutative Geometry},
  year={2015}
}

• Published 30 December 2015 • Mathematics • Journal of Noncommutative Geometry

We find multipullback quantum odd-dimensional spheres equipped with natural $U(1)$-actions that yield the multipullback quantum complex projective spaces constructed from Toeplitz cubes as noncommutative quotients. We prove that the noncommutative line bundles associated to multipullback quantum odd spheres are pairwise stably non-isomorphic, and that the $K$-groups of multipullback quantum complex projective spaces and odd spheres coincide with their classical counterparts. We show that these…
Using Birkhoff’s Representation • Mathematics • 2004 We use a Heegaard splitting of the topological 3-sphere as a guiding principle to construct a family of its noncommutative deformations. The main technical point is an identification of the universal Associated to the standard SUq(n) R-matrices, we introduce quantum spheres S q , projective quantum spaces CP n−1 q , and quantum Grassmann manifolds Gk(C n q ). These algebras are shown to be • Mathematics • 2001 Abstract The irreducible *-representations of the polynomial algebra $\mathcal{O}(S^{3}_{pq})$ of the quantum3-sphere introduced by Calow and Matthes are classified. The K-groups of its universal • Mathematics • 2010 We construct explicit generators of the K-theory and K-homology of the coordinate algebras of functions on the quantum projective spaces. We also sketch a construction of unbounded Fredholm modules, • Mathematics • 2001 The irreducible ∗ -representations of the polynomial algebra O ( S 3 pq ) of the quantum 3-sphere introduced by Calow and Matthes are classified. The K -groups of its universal C ∗ -algebra are shown The Noncommutative Index Theorem is used to prove that the Chern numbers of quantum Hopf line bundles over the standard Podles quantum sphere equal the winding numbers of the repres- entations We study certain principal actions on noncommutative C*-algebras. Our main examples are the Z_p- and T-actions on the odd-dimensional quantum spheres, yielding as fixed-point algebras quantum lens • Mathematics Documenta Mathematica • 2014 We introduce twisted relative Cuntz-Krieger algebras associated to finitely aligned higher-rank graphs and give a comprehensive treatment of their fundamental structural properties. We establish
2023-01-27 11:28:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788243532180786, "perplexity": 1937.917977883457}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494976.72/warc/CC-MAIN-20230127101040-20230127131040-00412.warc.gz"}
https://physics.stackexchange.com/questions/27388/tree-level-qft-and-classical-fields-particles/27389
# Tree level QFT and classical fields/particles

It is well known that scattering cross-sections computed at tree level correspond to cross-sections in the classical theory. For example, the tree-level cross-section for electron-electron scattering in QED corresponds to scattering of classical point charges. The naive explanation for this is that the power of $\hbar$ in a term of the perturbative expansion is the number of loops in the diagram. However, it is not clear to me how to state this correspondence in general. In the above example the classical theory regards electrons as particles and photons as a field. This seems arbitrary. Moreover, if we consider for example $\phi^4$ theory, then the interaction of the $\phi$-quanta is mediated by nothing except the $\phi$-field itself. What is the corresponding classical theory? Does it contain both $\phi$-particles and a $\phi$-field? Also, does this correspondence extend to anything besides scattering theory? Summing up, my question is: What is the precise statement of the correspondence between tree-level QFT and the behavior of classical fields and particles?

• The classical limit of a quantum field theory is classical field theory. The limit you discuss (photons are encoded in the EM field, electrons are particles) seems to be the non-relativistic limit. Generally particles are only well-defined objects in such a limit -- contrary to common belief, there is really no such thing as a particle, but some excitations of massive free fields can look like particles in the appropriate limit. – user566 Nov 26 '11 at 21:11
• The statement "the tree level cross-section for electron-electron scattering in QED corresponds to scattering of classical point charges" is somewhat wrong. In CED any scattering is accompanied with EMW radiation, which is totally absent on the tree level of QED. This is a severe drawback of QED that should be "repaired" with summation of all soft diagrams. It means the initial approximation in QED is too far from a good one. – Vladimir Kalitvianski Nov 26 '11 at 22:24
• Diagrams contributing to the amplitude, including soft emission of any number of photons in initial/final states, are still tree level; when squaring the amplitude to get a cross section one has to involve some limit of one-loop diagrams as well. This way one can reproduce the CED scattering cross section as one should (see chapter 6 of Peskin and Schroeder for details). This resummation of soft photons is essentially the reason the radiation has to be treated as a field always, and does not have a particle limit. – user566 Nov 26 '11 at 23:57
• @Moshe: You are right, the contribution of soft photons emitted from internal lines is much smaller. In my response I meant the first non-vanishing approximation of a tree level, to be exact. – Vladimir Kalitvianski Nov 27 '11 at 9:47
• @Qmechanic You also need to argue that all Feynman diagrams have Euler characteristic 2, even though they aren't all planar diagrams. This is very nontrivial: physics.stackexchange.com/a/176463/92058 – tparker Jun 21 '17 at 22:48

This was something that confused me for a while as well until I found this great set of notes: homepages.physik.uni-muenchen.de/~helling/classical_fields.pdf Let me just briefly summarize what's in there.
The free Klein-Gordon field satisfies the field equation $$(\partial_{\mu} \partial^{\mu} +m^2) \phi(x) = 0$$ The most general solution to this equation is $$\phi(t, \vec{x}) = \int_{-\infty}^{\infty} \frac{d^3k}{(2\pi)^3} \; \frac{1}{2E_{\vec{k}}} \left( a(\vec{k}) e^{- i( E_{\vec{k}} t -\vec{k} \cdot \vec{x})} + a^{*}(\vec{k}) e^{ i (E_{\vec{k}} t- \vec{k} \cdot \vec{x})} \right)$$ where $$\frac{a(\vec{k}) + a^{*}(-\vec{k})}{2E_{\vec{k}}} = \int_{-\infty}^{\infty} d^3x \; \phi(0,\vec{x}) e^{-i \vec{k} \cdot \vec{x}}$$ and $$\frac{a(\vec{k}) - a^{*}(-\vec{k})}{2i} = \int_{-\infty}^{\infty} d^3x \; \dot{\phi}(0,\vec{x}) e^{-i \vec{k} \cdot \vec{x}}$$ Introducing an interaction potential into the Lagrangian results in the field equation $$(\partial^{\mu} \partial_{\mu} + m^2) \phi = -V'(\phi)$$ Choosing the $\phi^4$ potential $V(\phi) = \frac{g}{4} \phi^4$, this results in $$(\partial^{\mu} \partial_{\mu} + m^2) \phi = -g \phi^3$$ Introduce a Green's function for the operator, $$(\partial^{\mu} \partial_{\mu} + m^2) G(x) = -\delta(x)$$ which is given by $$G(x) = \int \frac{d^4k}{(2\pi)^4} \; \frac{-e^{-i k \cdot x}}{-k^2 + m^2}$$ Now solve the full theory perturbatively by substituting $$\phi(x) = \sum_{n} g^n \phi_{n}(x)$$ into the differential equation and identifying powers of $g$ to get the following equations: $$(\partial^{\mu} \partial_{\mu} + m^2) \phi_0 (x) = 0$$ $$(\partial^{\mu} \partial_{\mu} + m^2) \phi_1(x) = -\phi_0(x)^3$$ $$(\partial^{\mu} \partial_{\mu} + m^2) \phi_2 (x) = -3 \phi_0(x)^2 \phi_1(x)$$ The first equation is just the free field equation, which has the general solution above. The rest are then solved recursively using $\phi_0(x)$. So the solution for $\phi_1$ is $$\phi_1(x) = \int d^4y\; \phi_0(y)^3 \, G(x-y)$$ and so on. As is shown in the notes, this perturbative expansion generates all no-loop Feynman diagrams, and this is the origin of the claim that the tree-level diagrams are the classical contributions...

• OK, so classical perturbation theory can be identified in some sense with quantum tree-level perturbation theory. This is nice (I'd upvote but I reached the 30 votes limit today). However, I still don't understand how to generalize the statement about scattering amplitudes. – Squark Dec 2 '11 at 17:47

There is a very easy way to see this and it is through an $\hbar$ series. This claim can be traced back to Sidney Coleman and states that in the ultraviolet one is doing an expansion with $\hbar$ going to zero. A previous answer cited these lectures on classical fields, but I would like to start from the generating functional of the scalar field theory and try to understand the classical limit: $$Z[j]=\int[d\phi]e^{\frac{i}{\hbar}\int d^4x\left[\frac{1}{2}(\partial\phi)^2-\frac{1}{2\hbar^2}m^2\phi^2-\frac{\lambda}{4\hbar}\phi^4+j\phi\right]}.$$ Our aim is to recover perturbation theory for the classical fields at tree level, as this will prove Coleman's claim. Indeed, the above generating functional can be rewritten in a different form as $$Z[j]=e^{-i\hbar^2\frac{\lambda}{4}\int d^4x\frac{\delta^4}{\delta j(x)^4}}e^{\frac{i}{2\hbar}\int d^4xd^4yj(x)\Delta(x-y)j(y)}.$$ Now, let us focus on the two-point function, the argument being the same for the other correlation functions.
We will get $$\left.(-i\hbar)^2\frac{1}{Z}\frac{\delta^2Z}{\delta j(x)\delta j(y)}\right|_{j=0}=i\hbar\Delta(x-y).$$ From these equations it is not difficult to recover the first quantum correction at one loop, which is given by $$-i\hbar^4\frac{\lambda}{4}\int d^4\tilde x \frac{\delta^4}{\delta j^4(\tilde x)}\frac{\delta^2}{\delta j(x)\delta j(y)}\left(-\frac{1}{3!8\hbar^3}\int d^4x_1d^4y_1d^4x_2d^4y_2d^4x_3d^4y_3\right.$$ $$\left.j(x_1)\Delta(x_1-y_1)j(y_1)j(x_2)\Delta(x_2-y_2)j(y_2)j(x_3)\Delta(x_3-y_3)j(y_3)\right)$$ and this will be proportional to $\hbar$. This is the conclusion we aimed at, and it gives evidence for Coleman's claim. A similar analysis can be carried out using the effective potential. This proof complements the previous answer, but starts from quantum field theory.

• I know loop contributions are proportional to hbar. The question is not about proving it. It's about the physical interpretation of this arithmetical fact. – Squark Dec 2 '11 at 17:44
• @Squark: Writing down something in places like this implies some kind of effort, and the hope is that the OP will be able to properly understand the content of what one is writing. As this is not the case, and having seen this repeated again and again, this is my last experience with stackexchange et similia. Good luck and goodbye! – Jon Dec 2 '11 at 21:06

The classical analogue of quantum $\Phi^4$ theory is classical $\Phi^4$ theory, with the same action. There are no particles, but there is still scattering of waves! The correspondence between tree-level QFT and classical fields is on the level of fields only. (Particles make their appearance in classical field theory only in the limit where geometric optics is valid. Even in quantum field theory, the particle picture is not really appropriate except in the geometric optics regime.)

Feynman diagrams arise in any perturbative treatment of correlations of fields, even classically. Indeed, Feynman diagrams are just a graphical notation for writing products of tensors with many indices summed via the Einstein summation convention. The indices of the results are the external lines, while the indices summed over are the internal lines. As such sums of products occur in any multipoint expansion of expectations, irrespective of the classical or quantum nature of the system, no connection with particles is implied, unless one imposes it.

What is the precise statement of the correspondence between tree-level QFT and behavior of classical fields and particles?

What follows are four discussions about the connection between quantum and classical fields, viewed from various angles. This will interest people to varying degrees (I hope). If you care only about the loop expansion, skip down to C. [An initial point: Many people, myself included, would like to see a (relativistic) interacting theory of quantum fields approximated by a (most likely nonrelativistic) theory of quantum particles. The question above may have been posed with this approximation in mind. But I've never seen this approximation.]

A. The one framework that I know of that includes both classical and quantum physics is to view the theory as a mapping from observables into what is known as a C*-algebra. A state maps elements (of the algebra) to expectation values. Given a state, a representation of the algebra elements as operators on a Hilbert space can be obtained. (I'm speaking of the GNS reconstruction.) Now let's consider a free scalar field theory.
In the quantum case, there will be a vacuum state, and the GNS reconstruction from this state will yield the usual field theory. (There will also be states with nonzero temperature and nonzero particle density. I mention this simply as one advertisement for the algebraic approach.) In the classical case, there will also be a vacuum state. But the reconstruction from this state will yield a trivial, one-dimensional Hilbert space. And the scalar field will be uniformly zero. [I'm suppressing irrelevant technical details.] Fortunately, in the classical case, there will also be states for every classical solution. For these, the GNS representations will be one-dimensional, with every operator having the same value as the classical solution. So, in the formal $\hbar\to0$ limit, the algebra becomes commutative, it has states that correspond to classical solutions, and its observables take on their classical values in these states.

In the case of an interacting theory, the formal $\hbar\to0$ limit isn't so clear because of renormalization. However, if, as I vaguely recall, the various renormalization counterterms are of order $\hbar^n$ for $n > 0$, they don't matter in the formal $\hbar\to0$ limit. In that case, the formal $\hbar\to0$ limit yields the classical theory (as in the free field case). Another interesting example is QED. With $\hbar=0$, the fermionic fields anticommute, which makes them zero in the context of a C*-algebra. So all of the fermionic fields vanish as $\hbar\to0$, and we're left with free classical electrodynamics. You may or may not derive any satisfaction from these formal limits of C*-algebras. In either case, we're done with them. Below, we talk about ordinary QFT.

B. Let's now consider a free Klein-Gordon QFT. We'll choose a "coherent" state and obtain an ħ → 0 limit. Actually, this will be a sketch without proofs. The Lagrangian is $\frac12(\partial\phi)^2-\frac12\nu^2\phi^2$. Note $\nu$ instead of $m$. $m$ has the wrong units, so you see a frequency instead. ($c = 1$.) We have the usual free field expansion in terms of creation and annihilation operators. These satisfy: $$[a(k),a^\dagger(l)] = \hbar (2\pi)^3(2k^0) \delta^3(k - l)$$ $k$ and $l$ are not momenta. $\hbar k$ and $\hbar l$ are momenta. And the mass of a single particle is $\hbar\nu$. The particle number operator $N$ is (with $\not \!dk = d^3k (2\pi)^{-3}(2k^0)^{-1}$): $$N = \hbar^{-1}\int\not \!dk\, a^\dagger(k)a(k)$$ And for some nice function $f(k)$, we define the coherent state $|f\rangle$ by: $$a(k)|f\rangle = f(k)|f\rangle$$ [I omit the expression for $|f\rangle$.] Note that: $$\langle f| N |f\rangle =\hbar^{-1} \int\not \!dk\, |f(k)|^2$$ As $\hbar\to0$, $|f\rangle$ is composed of a huge number of very light particles. $|f\rangle$ corresponds to the classical solution: $$\Phi(x) = \int\not \!dk\, [f(k)\exp(ik\cdot x) + \text{complex conjugate}]$$ Indeed, for normal-ordered products of fields, we have results like the following: $$\langle f|:φ(x)φ(y):|f\rangle = \Phi(x)\Phi(y)$$ Since the difference between $:φ(x)φ(y):$ and $φ(x)φ(y)$ vanishes as $\hbar\to0$, we have in that limit: $$\langle f| φ(x)φ(y) |f\rangle\to\Phi(x)\Phi(y)$$ If we reconstruct the theory from these expectation values, we obtain a one-dimensional Hilbert space on which $φ(x) = \Phi(x)$. So, with coherent states, we can obtain all of the classical states in the $\hbar\to0$ limit.

C. Consider an x-space Feynman diagram in some conventional QFT perturbation theory. Let: $n =$ the number of fields being multiplied.
$P =$ the number of arcs (i.e., propagators). $V =$ the number of vertices. $L =$ the number of independent loops. $C =$ the number of connected components. Finally, let $H$ be the number of factors of $\hbar$ in the diagram. Then, using standard results, we have: $$H = P - V = n + L - C > 0$$ So, if you set $\hbar\to0$, all Feynman diagrams vanish. All fields are identically zero. This is reasonable. The Feynman diagrams contribute to vacuum expectation values. And the classical vacuum corresponds to fields vanishing everywhere.

D. Suppose that we don't want to take $\hbar\to0$, but we do want to consider the theory up to, say, $O(\hbar^2)$. But what is "the theory"? Let the answer be: the Green functions. But all of the connected Feynman diagrams with $n > 3$ have $H > 2$. In order to retain these diagrams and their associated Green functions, we need to ignore the factor $\hbar^n$ that is part of every n-point function. And that is what people do. When people define, say, the generating functional for connected Green functions, they insert a factor of $1/\hbar^{n-1}$ multiplying the n-point functions. With these insertions, the above equation sort-of-becomes: $$H'' = L$$ In particular, all of the (connected) tree diagrams appear at $O(1)$ in the generating functional. But recall that all of these diagrams vanish as $\hbar\to0$. I don't see any way to interpret them as classical.

• Thank you for your answer Greg but note that tree-level scattering does correspond to classical scattering – Squark Apr 28 '12 at 12:28
• According to whom and by what argument? If you really mean "classical", where do the particles come from, given that a classical field theory has no particles? If you mean the particles of nonrelativistic QM circa 1926, do you know how to obtain that theory from QFT? (I don't, and I've been looking for it for a long time now.) – Greg Weeks Apr 29 '12 at 15:46
• @GregWeeks: Feynman diagrams arise in any perturbative treatment of correlations of fields, even classically. Indeed, Feynman diagrams are just a graphical notation for writing products of tensors with many indices summed via the Einstein summation convention. The indices of the results are the external lines, while the indices summed over are the internal lines. As such sums of products occur in any multipoint expansion of expectations, irrespective of the classical or quantum nature of the system, no connection with particles is implied, unless one imposes it. – Arnold Neumaier Apr 29 '12 at 19:50
• @ArnoldNeumaier: I don't follow your tensors and indices description. But I agree that Euclidean Feynman diagrams give you the expectation values of products of suitably randomized fields (in 4-d). But even those become trivial in the ħ → 0 limit, as do their Minkowski space counterparts. The calculation was given above. – Greg Weeks May 2 '12 at 2:00
• @GregWeeks: Think of momentum space as being discrete, and the momenta as indices. Then the prescription for evaluating Feynman diagrams in momentum space is just a big sum that condenses to a product of tensors in Einstein notation. You'd get a perturbation expansion in terms of such tensors also from any finite-dimensional state space. The lines have nothing to do with particles; associating them with ''virtual particles'' is traditional but without any but visual support. – Arnold Neumaier May 2 '12 at 11:39

This is an excellent and very deep question.
Consider QED as an example: a classical electromagnetic plane wave has an energy density of $\frac{1}{2} |{\bf E}|^2$, while a gas of photons with frequency $\omega$ and number density $n$ has an energy density of $n \hbar \omega$. (Strictly speaking, the photon number density isn't well-defined because photon number isn't conserved, as photons are constantly splitting into virtual electron-positron pairs and recombining. But for large numbers of photons, these density fluctuations become tiny and the density becomes effectively constant.) Equating the two quantities, we find that a collection of a large number of collinear photons at the same momentum corresponds to an EM wave with amplitude $|{\bf E}| = \sqrt{2 n \hbar \omega}$. So if we hold the number density constant and take $\hbar \to 0$, the corresponding classical wave vanishes entirely. The classical limit of any finite number of quantum particles vanishes; in order to get a well-defined classical limit, you need to take $n \to \infty$ and $\hbar \to 0$ in such a way that their product stays constant. This corresponds to including Feynman diagrams with more and more external legs. This agrees with the notes in Kyle's answer: the solutions to the Lagrangian's classical equation of motion are a sum of tree-level Feynman diagrams with all possible numbers of external legs, because a completely classical wave packet corresponds to an infinite number of quantum particles.

In a QFT Feynman expansion, each vertex contributes a factor of the coupling constant $g$ and each loop contributes a factor of $\hbar$. The number of loops must clearly be less than the number of vertices, so a weak-coupling expansion where we only consider diagrams with a small number of vertices also turns out to be a semiclassical expansion where we only consider diagrams with a small number of loops (although the order of the two expansions doesn't always match up exactly). The converse is not true, because you can have diagrams with only one loop but many vertices and external legs. Such diagrams correspond to scattering processes which are "fairly classical" and are therefore important in a semiclassical expansion, but extremely weakly coupled and therefore unimportant in a weak-coupling expansion.

But QFT is typically useful in contexts where we are concerned with scattering processes for small numbers of particles, so it's natural to keep the number of external legs fixed. In this case, although a Feynman QFT expansion is explicitly only a weak-coupling expansion, in practice it ends up being a simultaneous weak-coupling and semiclassical expansion.

In classical field theory we don't need to worry about loops, which makes things easier. But on the other hand, in the classical context it isn't natural to hold the number of external legs fixed, for the reason described above (any wave scattering process gets contributions from Feynman diagrams with all numbers of external legs), which makes things harder. Of course, in practice, in a perturbative expansion you eventually stop after adding up all Feynman diagrams with some maximum number of vertices, which necessarily also have some maximum number of external legs. In a semiclassical context where $\hbar$ is small but positive, this corresponds to waves with small amplitude.
So unlike in the QFT context, where a weak-coupling expansion automatically ends up being a semiclassical expansion as well, in the classical context a weak-coupling expansion automatically ends up being a small-wave-amplitude expansion as well. Scattering between large waves would receive contributions from Feynman diagrams with many external legs and therefore a huge number of vertices, which would be impractical to calculate in a weak-coupling expansion. Here's another way to think about that last point. In a linear theory, the amplitude of the outgoing waves is proportional to the amplitude of the incoming waves. So if you send small waves in, small waves come out. But in an interacting theory the classical equations of motion are nonlinear, and you can get feedback loops. It's therefore possible that you can send small waves in, but they combine nonlinearly and large waves come out. A weakly coupled theory should be "fairly linear," so this should be unlikely. So the Feynman diagrams with small numbers of both ingoing and outgoing external legs should be the most important. But in the strongly nonlinear regime, the fact that small incoming waves can produce large outgoing waves means that Feynman diagrams with few incoming but many outgoing external legs can be important - limiting the usefulness of the expansion. TLDR: the Feynman expansion of a classical field theory is only useful when the field coupling is weak and the scattering waves have small amplitudes.
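For readers who want to see the recursion from the first answer in action, here is a small editorial sketch (not from the thread): it reduces the classical $\phi^4$ equation to 0+1 dimensions, i.e., an anharmonic oscillator $\ddot\phi + m^2\phi = -g\phi^3$, builds the first-order ("tree-level") correction with the retarded Green's function $G(\tau)=\sin(m\tau)/m$, and compares against a direct numerical integration. It assumes numpy and scipy are available, and the variable names are mine:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g, A = 1.0, 0.05, 1.0            # mass, small coupling, initial amplitude
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]

# Zeroth order: free solution of (d^2/dt^2 + m^2) phi_0 = 0
phi0 = A * np.cos(m * t)

def trapezoid(y, dx):
    """Plain trapezoidal rule on a uniform grid."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]) if len(y) > 1 else 0.0

# First order: phi_1(t) = -int_0^t G(t-s) phi_0(s)^3 ds, where
# G(tau) = sin(m*tau)/m is the retarded Green's function of d^2/dt^2 + m^2
phi1 = np.empty_like(t)
for i, ti in enumerate(t):
    integrand = np.sin(m * (ti - t[: i + 1])) / m * phi0[: i + 1] ** 3
    phi1[i] = -trapezoid(integrand, dt)

phi_pert = phi0 + g * phi1          # the tree-level (no-loop) approximation

# Direct numerical solution of (d^2/dt^2 + m^2) phi = -g phi^3
sol = solve_ivp(lambda _, y: [y[1], -m**2 * y[0] - g * y[0] ** 3],
                (t[0], t[-1]), [A, 0.0], t_eval=t, rtol=1e-10, atol=1e-10)

print("max error, free solution :", np.max(np.abs(phi0 - sol.y[0])))
print("max error, first order   :", np.max(np.abs(phi_pert - sol.y[0])))
```

For small $g$ the first-order curve should track the exact solution markedly better than the free one, which is the content of the claim being discussed; the improvement degrades at late times because the perturbation series develops secular terms.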
2019-06-26 18:48:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210229873657227, "perplexity": 362.5777044439947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00010.warc.gz"}
http://www.math.psu.edu/calendars/meeting.php?id=3773
# Meeting Details

Title: Floer homology of cotangent bundles
Seminar: Symplectic Topology Seminar
Speaker: David Hurtubise (PSU)
Talk: 1:25PM in McAllister 315

Abstract: Let M be a closed smooth manifold. The cotangent bundle T*M has a natural symplectic structure, and given a time-dependent Hamiltonian on T*M satisfying certain conditions one can define the symplectic Floer homology of T*M. Unlike the compact case, the Floer homology of T*M is not the singular homology of the underlying manifold. Instead, the Floer homology of T*M turns out to be isomorphic to the singular homology of the free loop space of M. In this talk I will outline three different approaches to establishing this isomorphism. The three approaches are due to 1) Viterbo, 2) Abbondandolo and Schwarz, and 3) Salamon and Weber.

### Room Reservation Information

Room Number: MB315
Date: 10/09/2008
Time: 01:25pm - 02:15pm
2014-09-02 09:51:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120270013809204, "perplexity": 1073.6097957639854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921872.11/warc/CC-MAIN-20140909042205-00475-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Acheng.ye
# zbMATH — the first resource for mathematics ## Cheng, Ye Compute Distance To: Author ID: cheng.ye Published as: Cheng, Y.; Cheng, Ye Documents Indexed: 16 Publications since 1988 all top 5 #### Co-Authors 0 single-authored 1 Chen, Ying 1 Ding, Chao 1 He, Miao 1 Hu, Jianhao 1 Qu, Fengzhong 1 Shi, Bao 1 Wang, Xiao 1 Wu, Zhihui 1 Yang, LiuQing 1 Ye, Ying 1 Zhang, Bin 1 Zhang, Zhujun #### Serials 2 Mathematical Problems in Engineering 1 Tsinghua Science and Technology 1 International Journal of Biomathematics #### Fields 2 Information and communication theory, circuits (94-XX) 1 Ordinary differential equations (34-XX) 1 Operations research, mathematical programming (90-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) #### Citations contained in zbMATH 11 Publications have been cited 120 times in 118 Documents Cited by Year Rational and semirational solutions of the nonlocal Davey-Stewartson equations. Zbl 1382.35250 Rao, J.; Cheng, Y.; He, J. 2017 Yang-Baxterization of braid group representations. Zbl 0716.57007 Cheng, Y.; Ge, M. L.; Xue, K. 1991 Numerical manifold method based on the method of weighted residuals. Zbl 1109.74373 Li, S.; Cheng, Y.; Wu, Y.-F. 2005 Stock liquidation via stochastic approximation using Nasdaq daily and intra-day data. Zbl 1128.91031 Yin, G.; Zhang, Q.; Liu, F.; Liu, R. H.; Cheng, Y. 2006 Dynamical model for turbulence. III: Numerical results. Zbl 1025.76534 Canuto, V. M.; Dubovikov, M. S.; Cheng, Y.; Dienstfrey, A. 1996 Integrable systems associated with the Schrödinger spectral problem in the plane. Zbl 0737.35114 Cheng, Y. 1991 Two-level genetic algorithm for clustered traveling salesman problem with application in large-scale TSPs. Zbl 1174.90863 Ding, Chao; Cheng, Ye; He, Miao 2007 Symmetries and hierarchies of equations for the $$(2+1)$$-dimensional N- wave interaction. Zbl 0688.35079 Cheng, Y. 1989 The Virasoro and Kac-Moody symmetries for the principal chiral model. Zbl 0695.35167 Cheng, Y. 1988 Multistage bandit problems. Zbl 0854.62078 Cheng, Y. 1996 New solutions of the Yang-Baxter equation and their Yang-Baxterization. Zbl 0736.17011 Couture, M.; Cheng, Y.; Ge, M. L.; Xue, K. 1991 Rational and semirational solutions of the nonlocal Davey-Stewartson equations. Zbl 1382.35250 Rao, J.; Cheng, Y.; He, J. 2017 Two-level genetic algorithm for clustered traveling salesman problem with application in large-scale TSPs. Zbl 1174.90863 Ding, Chao; Cheng, Ye; He, Miao 2007 Stock liquidation via stochastic approximation using Nasdaq daily and intra-day data. Zbl 1128.91031 Yin, G.; Zhang, Q.; Liu, F.; Liu, R. H.; Cheng, Y. 2006 Numerical manifold method based on the method of weighted residuals. Zbl 1109.74373 Li, S.; Cheng, Y.; Wu, Y.-F. 2005 Dynamical model for turbulence. III: Numerical results. Zbl 1025.76534 Canuto, V. M.; Dubovikov, M. S.; Cheng, Y.; Dienstfrey, A. 1996 Multistage bandit problems. Zbl 0854.62078 Cheng, Y. 1996 Yang-Baxterization of braid group representations. Zbl 0716.57007 Cheng, Y.; Ge, M. L.; Xue, K. 1991 Integrable systems associated with the Schrödinger spectral problem in the plane. Zbl 0737.35114 Cheng, Y. 1991 New solutions of the Yang-Baxter equation and their Yang-Baxterization. Zbl 0736.17011 Couture, M.; Cheng, Y.; Ge, M. L.; Xue, K. 1991 Symmetries and hierarchies of equations for the $$(2+1)$$-dimensional N- wave interaction. Zbl 0688.35079 Cheng, Y. 1989 The Virasoro and Kac-Moody symmetries for the principal chiral model. Zbl 0695.35167 Cheng, Y. 
1988 all top 5 #### Cited by 231 Authors 9 He, Jingsong 9 Xue, Kang 6 Mihalache, Dumitru 5 Canuto, Vittorio M. 5 Dubovikov, Mikhail S. 5 Ge, Molin 5 Rao, Jiguang 4 Hu, Taotao 3 An, Xinmei 3 Cheng, Yi 3 Lou, Senyue 3 Ma, Guowei 3 Ma, Jingtang 3 Ragoucy, Eric 3 Ren, Hang 3 Sun, Chunfang 3 Wang, Gangcheng 3 Yin, Gang George 3 Zhang, Dajun 3 Zhang, Qing 3 Zhang, Yong 3 Zhang, Yongshuai 2 Ablowitz, Mark Jay 2 Cambon, Claude 2 Chen, Yong 2 Chen, Yong 2 Crampé, Nicolas 2 Frappat, Luc 2 Gou, Lidan 2 Huang, Anthony X. 2 Huang, Lili 2 Kauffman, Louis Hirsch 2 Lakshmanan, Muthusamy 2 Li, Biao 2 Li, Xiliang 2 Liu, Yaobin 2 Luo, Xudan 2 Musslimani, Ziad H. 2 Porsezian, Kuppusamy 2 Qian, Chao 2 Senthilvelan, Murugaian 2 Shi, Xujie 2 Shi, Ying 2 Stalin, S. 2 Sun, Baonan 2 Sun, Fengxin 2 Vanicat, Matthieu 2 Yang, Bo 2 Yu, Guanglei 2 Yue, Yunfei 2 Zhao, Qing 2 Zhu, Xiaoying 1 Ahmed, Zakir Hussain 1 Al-Douri, Yamur K. 1 An, Xuemei 1 Babichenko, Andrei 1 Baniasadi, Pouya 1 Bodaghpour, S. 1 Cai, Yongchang 1 Cao, Yulei 1 Chassang, Sylvain 1 Chen, Dengyuan 1 Chen, Xiaotong 1 Chen, Yi 1 Chen, Yong 1 Cheng, Yangyang 1 Cheng, Yu-Ming 1 Cheng, Yueming 1 Cheng, Yumei 1 Cheng, Yung-Ming 1 Chow, Kwok Wing 1 Dancer, Karen A. 1 de Lima Martins, Simone 1 Defryn, Christof 1 Dienstfrey, Andrew M. 1 Du, Guijiao 1 Du, Xia-Xia 1 Dubrulle, Bérengère 1 Ezhov, Vladimir Vladimirovich 1 Feng, Baofeng 1 Feng, Hsuan-Ming 1 Fonseca, Tiago J. 1 Foumani, Mehdi 1 Gao, Xuemei 1 Ghasemzadeh, H. 1 Godeferd, Fabien S. 1 Hamodi, Hussan 1 Hao, Si-Yang 1 Hardwick, Janis P. 1 He, Lei 1 He, Lijian 1 Horvat Marc, Andrei 1 Isac, P. S. 1 Isaev, Alexei P. 1 Jian, Kailin 1 Jiang, Haihui 1 Jiao, Yuyong 1 Kang, Fei 1 Kanna, Thambithurai 1 Kara, Imdat ...and 131 more Authors all top 5 #### Cited in 55 Serials 15 Nonlinear Dynamics 12 Journal of Mathematical Physics 8 Quantum Information Processing 7 Physics of Fluids 5 Applied Mathematics Letters 3 Engineering Analysis with Boundary Elements 3 Mathematical Problems in Engineering 3 Communications in Nonlinear Science and Numerical Simulation 2 Computers & Mathematics with Applications 2 Communications in Mathematical Physics 2 Journal of Fluid Mechanics 2 Nonlinearity 2 Nuclear Physics. B 2 Theoretical and Mathematical Physics 2 Wave Motion 2 International Journal for Numerical Methods in Engineering 2 Computers & Operations Research 2 European Journal of Operational Research 2 Complexity 2 Chaos 2 International Journal of Quantum Information 2 Communications in Theoretical Physics 2 Proceedings of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences 1 International Journal of Modern Physics B 1 Computer Methods in Applied Mechanics and Engineering 1 International Journal of Theoretical Physics 1 Letters in Mathematical Physics 1 Rocky Mountain Journal of Mathematics 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Chaos, Solitons and Fractals 1 The Annals of Statistics 1 Applied Mathematics and Computation 1 Applied Mathematics and Optimization 1 Information Sciences 1 Journal of Computational and Applied Mathematics 1 Journal of Optimization Theory and Applications 1 Journal of Statistical Planning and Inference 1 Opsearch 1 Applied Mathematics and Mechanics. 
(English Edition) 1 Stochastic Analysis and Applications 1 Applied Numerical Mathematics 1 Applied Mathematical Modelling 1 International Journal of Computer Mathematics 1 Journal of Knot Theory and its Ramifications 1 Journal of Algebraic Combinatorics 1 Mathematical Finance 1 Discrete Dynamics in Nature and Society 1 European Journal of Mechanics. B. Fluids 1 Nonlinear Analysis. Real World Applications 1 International Journal of Computational Methods 1 Stochastics 1 Acta Mechanica Sinica 1 Frontiers of Mathematics in China 1 Journal of Physics A: Mathematical and Theoretical 1 Algorithms all top 5 #### Cited in 27 Fields 49 Partial differential equations (35-XX) 28 Dynamical systems and ergodic theory (37-XX) 21 Quantum theory (81-XX) 15 Fluid mechanics (76-XX) 11 Numerical analysis (65-XX) 11 Mechanics of deformable solids (74-XX) 10 Associative rings and algebras (16-XX) 9 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 8 Statistical mechanics, structure of matter (82-XX) 8 Operations research, mathematical programming (90-XX) 7 Nonassociative rings and algebras (17-XX) 6 Group theory and generalizations (20-XX) 5 Statistics (62-XX) 4 Global analysis, analysis on manifolds (58-XX) 3 Systems theory; control (93-XX) 2 Difference and functional equations (39-XX) 2 Probability theory and stochastic processes (60-XX) 1 Combinatorics (05-XX) 1 Algebraic geometry (14-XX) 1 Topological groups, Lie groups (22-XX) 1 Ordinary differential equations (34-XX) 1 Operator theory (47-XX) 1 Manifolds and cell complexes (57-XX) 1 Computer science (68-XX) 1 Mechanics of particles and systems (70-XX) 1 Relativity and gravitational theory (83-XX) 1 Geophysics (86-XX)
2021-01-16 07:27:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5563846826553345, "perplexity": 13170.245189230793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703500028.5/warc/CC-MAIN-20210116044418-20210116074418-00074.warc.gz"}
https://math.stackexchange.com/questions/2410346/eigenvalues-of-this-type-of-matrix
# Eigenvalues of this type of matrix

Is there a general formula for the eigenvalues of this type of (symmetric) matrix? $A = \left[ \begin{array}{ccccc} 0 & 0 & \cdots & 0 & a_1 \\ 0 & 0 & \cdots & 0 & a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & a_{n-1}\\ a_1 & a_2 & \cdots & a_{n-1} & a_n \end{array} \right]$ Here, $a_i \in \mathbb{R}$.

• Try calculating the characteristic polynomial – user275377 Aug 29, 2017 at 20:03

Observation. The first $n-1$ rows are multiples of $(0,0,\ldots,0,1)$, hence the rank of $A$ is at most $2$, and hence the eigenvalue $\lambda=0$ has multiplicity at least $n-2$. Next, observe that if $\lambda\ne 0$ is an eigenvalue, then there exists an eigenvector $(c_1,\ldots,c_n)$ satisfying $$\left[ \begin{array}{ccccc} 0 & 0 & \cdots & 0 & a_1 \\ 0 & 0 & \cdots & 0 & a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & a_{n-1}\\ a_1 & a_2 & \cdots & a_{n-1} & a_n \end{array} \right]\left[\begin{array}{c}c_1 \\ c_2 \\ \vdots \\ c_{n-1} \\ c_n\end{array}\right]=\left[\begin{array}{c}a_1c_n \\ a_2c_n \\ \vdots \\ a_{n-1}c_n \\ \sum a_jc_j\end{array}\right]=\lambda\left[\begin{array}{c}c_1 \\ c_2 \\ \vdots \\ c_{n-1} \\ c_n\end{array}\right]$$ Clearly, $c_n\ne 0$, otherwise $c_1=\cdots=c_n=0$, and hence $$c_j=\frac{c_n}{\lambda}a_j,\quad j=1,\ldots,n-1 \quad\text{and}\quad \lambda c_n=\sum a_jc_j$$ and thus $$\lambda c_n=a_1c_1+\cdots+a_{n-1}c_{n-1}+a_nc_n=\frac{c_n}{\lambda}(a_1^2+\cdots+a_{n-1}^2)+a_nc_n$$ or $$\lambda^2-a_n\lambda-(a_1^2+\cdots+a_{n-1}^2)=0$$ and thus the two remaining eigenvalues are obtained from the quadratic equation above.

As a generic matrix, $A$ is a rank-2 matrix of the form $uv^T+vu^T$, where $u^T=(0,\ldots,0,1)$ and $v^T=(a_1,\ldots,a_{n-1},\frac{a_n}2)$. The eigenvalues of this kind of matrix are known to be $\lambda=u^Tv\pm\sqrt{(u^Tu)(v^Tv)}$. In your case, we get $\lambda=\frac{a_n}2\pm\sqrt{a_1^2+\cdots+a_{n-1}^2+\frac{a_n^2}4}$. Since eigenvalues vary continuously with matrix entries, the same answer also agrees with the true answer in the degenerate case where $A$ is a diagonal matrix of rank 1.

Experimenting with low values of $n$, I obtained the following ansatz (which has to be proved by induction) for the characteristic polynomial of $A$: $p(\lambda) = (-1)^{n+1} \lambda^{n-2} (-\lambda^2 + a_n \lambda + s)$, with $s = a_1^2 + \cdots + a_{n-1}^2$. Now just apply Bhaskara.

• Thus you have answered your own question. Btw, what is Bhaskara? Aug 29, 2017 at 20:50
• @JeanMarie it is the quadratic formula, discovered by that guy. Aug 29, 2017 at 21:03

Here's a push in the right direction. Noting that each symmetric pair of off-diagonal terms can be written as $a_k e_k e_n^T+a_k e_n e_k^T$, we may write the matrix as \begin{align} A &=\sum_{k=1}^{n-1} (a_k e_k e_n^T+a_k e_n e_k^T)+a_n e_n e_n^T \\ &=\left(\sum_{k=1}^n a_k e_k\right)e_n^T+e_n\left(\sum_{k=1}^n a_k e_k\right)^T - a_n e_n e_n^T \\ &=ae_n^T+e_n a^T-a_n e_n e_n^T \end{align} where $a=(a_1,a_2,\cdots,a_n)^T$. To see the use of this expression, consider how $A$ acts on any vector orthogonal to $a$ and $e_n$, and what the dimension of this subspace will be. For the remaining eigenvalues, consider vectors in the span of $a,e_n$.
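None of the answers includes a numerical check, so here is a small editorial sketch (assuming numpy; the variable names are mine) that verifies the closed form derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
a = rng.normal(size=n)                 # a_1, ..., a_n

# Build the matrix: only the last row and last column are nonzero
A = np.zeros((n, n))
A[-1, :] = a
A[:, -1] = a                           # the (n, n) entry a_n is set consistently

# Predicted spectrum: 0 with multiplicity n-2, plus
# lambda = a_n/2 +- sqrt(a_1^2 + ... + a_{n-1}^2 + a_n^2/4)
s = np.sum(a[:-1] ** 2)
r = np.sqrt(s + a[-1] ** 2 / 4)
predicted = np.sort(np.r_[np.zeros(n - 2), a[-1] / 2 - r, a[-1] / 2 + r])

computed = np.sort(np.linalg.eigvalsh(A))
print(np.allclose(predicted, computed))    # True
```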
2022-05-21 19:27:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9961954951286316, "perplexity": 201.8410015035937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00384.warc.gz"}
http://mathmisery.com/wp/
# Learning Math as an Adult (over the age of 30) Well, here I am on this idyllic autumn Saturday debating if I should write a blog post, play some piano, or do a little bit of work for work. According to the actuarial tables, more than half my life is over. And my elbow hurts. So for now, the blog post wins. Piano will be next once the ice treatment takes effect. And because I’m a maniac, I need to do a little bit of work everyday. Don’t worry (and thank you for worrying), but I take my work breaks. A few years ago, I started taking piano lessons. But then had to stop. Then I started again. But then had to stop. And then once more, but then had to stop. And now, the fourth time will be the charm. One of my before-I-die goals — actually before-2030-goals — is to get to a level of piano play ability that would be sufficient for me to be hired as “this night’s entertainment” at a piano bar or some swanky establishment. And so, piano lessons. Hopefully, 2030 comes before I die. Maybe this morbidity is Halloween induced. Do you know what I am learning in my piano lessons? I’m learning how to bounce off the keys and play with some fluidity. I’m learning O Tannenbaum. I’m practicing Czerny 1 & 2. I have a Christmas piano fantasy where I am playing Christmas songs on the piano and people are singing. I had never taken piano lessons as a kid nor did I play it in any capacity other than the occasional “hey there’s a piano over there, let’s bang on some keys!”. I played the flute. But as a boy, did that really count? I just tried to stay bad at playing the flute since it was worse to be good at playing the flute. Ahhh. Playground shenanigans (bullying). But as a kid, and even now as an adult, I maintain a generally high aptitude for language acquisition. At least, that’s what I think. So, somehow, ten or so years ago, when I looked at some flute sheet music from when I was a kid (yes, I still have that stuff), I knew what the notes were. I had forgotten how to play it on the flute, since, well, I actively stayed bad at playing the flute. Piano sheet music was a bit more complex and a little more foreign. But it didn’t take more than a half hour of research to understand how to read it for the majority of use cases. There’s a scene in Groundhog Day, where Phil Connors hears a rendition of Mozart’s Piano Concerto in C, K545, 1st movement on the radio. That’s mostly how I felt one day. But piano lessons were out of budget. But a basic keyboard and a few books were well within budget. So I bought those. And I read. And I played. I watched YouTube videos. I tried to coordinate both hands. And after a few months of this, I came to the conclusion, that I needed piano lessons. So, suddenly, piano lessons were in the budget and a few things left the budget. But does it make sense to start piano lessons so late in one’s life? That is, what can I possibly hope to accomplish by trying to learn something conventionally considered completely out of my life’s study (mathematics). I mean, it would be one thing if I had had started piano lessons as a kid and I had some baseline muscle memory to work off of. But, starting piano when close to half my life was over? This seems like a folly. I should do something better with what’s left of my life. I was literally saying to myself “ok, left thumb down, right index finger down” to train myself to coordinate both my hands. I mean hell, I can do it as I type, why would it be so complicated on piano? 
But then there's the blasted reading component and one set of notes being held for longer than another set of notes. I don't do that when typing. Maybe except with the shift key. Playing piano is hard. And as my current piano teacher puts it, "it's not natural". But here I am working towards my piano fantasies. And I'm not too old.

So here you are, perhaps on an idyllic autumn Saturday afternoon, wondering if it's possible to learn math as an adult. Maybe you're 35. Maybe you're 55. Or 75! The short answer is, "Yes, mathematics is not out of your reach". You're not too old. The likely difference between your math endeavor and my piano endeavor is your past. This is the "Math Misery?" blog. Tell me your math-fail story. Or tell me which of these it was, if it were any of these.

- "I was always terrible at math."
- "I hated math as a kid. I hated the times tables. I still count on my fingers. Besides I have a calculator now anyway."
- "I was good at math up until the 5th grade."
- "Ugh, I hated Geometry, but I was really good at Algebra."
- "I sucked at Algebra."
- "I got to Calculus, but then it was too confusing."
- "I got the concepts, but I couldn't do the problems."
- "Fractions. I hate fractions! I mean who uses four-sevenths, anyway?"
- "Exponents killed me."
- "Stats was the worst class."
- "Combinations and permutations. I didn't understand all the exclamation points."

Why does this past matter? For many of us, our math paralysis as adults is a remnant of our math paralysis as a child. Mathematics was hard. And it was torture. I could actively fail at flute and the only real door that was closed was being a music major in college. If you actively failed at mathematics, remediation, among other edu-punitive measures, was not too far behind.

But here you are. An adult. You've had that Phil Connors moment. The first thing to do is to shed the fear, trauma, and paralysis that your mind and body have remembered. Don't worry about not being fast. Don't worry about the jargon. Nor the notation. Nor the symbolism. Don't let ageism do you in. Your brain isn't that old. Don't worry about this idea that somehow you can't accomplish greatness in mathematics once you're past 30. Don't worry about all these things. Start fresh.

Recognize that you are an adult and you now have a far deeper capacity for processing abstract concepts. This is where math likely failed you as a child. And me. There was absolutely NO WAY I was going to comprehend, as a teenager, some of the concepts that I later found to be obvious in my late 20s. Like I said earlier, my life advantage / superpower has been language acquisition. But as we get older, we all acquire this power. Some of us maintain the advantage we had as kids; others plateau if they never honed their natural ability into a craft. Odds are, your ability to acquire a new discipline of study (broadly language) is no different than mine if we're both adults over 30.

Why do I use this phrase, "language acquisition"? I'm not using it in a proper linguistic context, but rather as a catch-all for "knowledge acquisition in a particular field of study". This knowledge acquisition is many things: jargon, symbolism, notation, mechanics (moving symbols around). But it's also what we may typically think of when acquiring a new language: stringing together grammatically and syntactically correct and conceptually coherent sentences, paragraphs, and essays. Though, again, not in a formal sense, there is a grammar and syntax to mathematics.
This means using the symbols as their uses are designated, both in terms of the mechanics of where and how to place them and in terms of what their context is in a broader mathematical statement. For example, $$x + 3 = 7 = x = 7 - 3 = 4$$ is something we might expect an early math learner to do, since they are overusing the $$=$$ symbol in its conventional written use, but not in its conceptual use.

Why all of this matters with math acquisition is that as an adult, you have now encountered many different "languages" and "language frameworks". Even this article may have been out of your depth as a 14-year-old. There were things you simply weren't capable of understanding when you were a kid, for whatever reason. And it's different for different people. Renting a car. Going on an interview. Negotiating with your landscaper (or landlord). Finding deals for Christmas. Going on a vacation. Taking a cab. And in all these experiences you have, either consciously or otherwise, experienced mathematics in some form. Which means that experiencing mathematics in an academic setting now is a completely different ball game from when you were a kid. There are numerous life experiences and contexts that you can draw on to rationalize a concept. We're not limited to grocery store examples (I mean we were technically never limited to that even when you were a kid, but the irony is that sometimes adults are incapable of understanding mathematics relevant to a kid's life). In any case, the context is there. And there is an abundance of it.

Why did I excel in mathematics as a kid? I didn't. I excelled at math mechanics and figured out a few concepts. But generally I muddled through a lot. It wasn't until I was 23, several years out of college, that the "obviousness" of mathematics dawned on me. Then in my 30s, when I reflected on myself as a mathematician in my 20s, I realized I understood nothing. Now when I look back on my 30s, I realize that I didn't understand as much as I thought I understood. And I will say the same thing about myself when I'm in my 50s looking back at my 40s. And so forth. The point? I'm STILL learning.

You can start learning now if you want. The first thing to do is to let go of the feelings of inadequacy or the view that mathematics is this untameable beast beyond your depth. Algebra isn't hard. Nor are Calculus, Differential Equations, Linear Algebra, etc. If I can learn those, then so can you. This stuff does take time. But not as much as when you were a kid. I tell myself these things as I continue on my piano journey. My ability to process music theory is immeasurably better than when I was a kid. Your ability to process the Rational Root Theorem will be immeasurably better now as an adult than when you were a kid. And you will be surprised when you consume, in a few months, a typical Algebra I & II sequence that would have been a two-year ordeal in high school. The advantage that you will have over me and my piano journey is that you will have at least seen Algebra before. This foreknowledge or fore-experience will allow you to anticipate next steps in a way that you were never able to as a kid. And that's why this go-around will be much better, so long as you are not anticipating your fears. You'll get stuck, you'll get frustrated, but you'll also understand. Should you go on this journey, it can be nice to have a guide. I'm always happy to help.

A few people got this. Hats off to @icecolbeveridge, @Thomas_W_Hunter! Here's the spoiler!
Each of those words becomes another word if you prepend an O. oPEN, oRALLY, oRANGE.
2019-11-17 21:23:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49019086360931396, "perplexity": 1370.6546687541986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669276.41/warc/CC-MAIN-20191117192728-20191117220728-00120.warc.gz"}
http://theoryofcomputing.org/articles/v010a006/
Volume 10 (2014) Article 6 pp. 133-166

# The Need for Structure in Quantum Speedups

Revised: July 10, 2014
Published: August 12, 2014

Keywords: decision trees, adversary method, collision problem, Fourier analysis, influences, quantum computing, query complexity
ACM Classification: F.1.2, F.1.3
AMS Classification: 81P68, 68Q12, 68Q17

Abstract:

Is there a general theorem that tells us when we can hope for exponential speedups from quantum algorithms, and when we cannot? In this paper, we make two advances toward such a theorem, in the black-box model where most quantum algorithms operate.

First, we show that for any problem that is invariant under permuting inputs and outputs and that has sufficiently many outputs (like the collision and element distinctness problems), the quantum query complexity is at least the $7^{\text{th}}$ root of the classical randomized query complexity. (An earlier version of this paper gave the $9^{\text{th}}$ root.) This resolves a conjecture of Watrous from 2002.

Second, inspired by work of O'Donnell et al. (2005) and Dinur et al. (2006), we conjecture that every bounded low-degree polynomial has a “highly influential” variable. (A multivariate polynomial $p$ is said to be bounded if $0\le p(x)\le 1$ for all $x$ in the Boolean cube.) Assuming this conjecture, we show that every $T$-query quantum algorithm can be simulated on most inputs by a $T^{O(1)}$-query classical algorithm, and that one essentially cannot hope to prove $\mathsf{P}\neq\mathsf{BQP}$ relative to a random oracle.

A preliminary version of this paper appeared in the Proc. 2nd “Innovations in Computer Science” Conference (ICS 2011). See Sec. 1.3 for a comparison with the present paper.
2017-04-28 19:48:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7606765627861023, "perplexity": 1322.574753516179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123048.37/warc/CC-MAIN-20170423031203-00307-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/should-i-use-time-dilation-or-length-contraction.957452/
# Should I use time dilation or length contraction?

## Homework Statement

This is a problem that was in my Physics HW. Two powerless rockets are on a collision course. The rockets are moving with speeds of 0.800c and 0.600c and are initially ## 2.52 × 10^{12} ## m apart as measured by Liz, an Earth observer, as shown in Figure P1.59. Both rockets are 50.0 m in length as measured by Liz. (a) What are their respective proper lengths? (b) What is the length of each rocket as measured by an observer in the other rocket? (c) According to Liz, how long before the rockets collide? (d) According to rocket 1, how long before they collide? (e) According to rocket 2, how long before they collide? (f) If both rocket crews are capable of total evacuation within 90 minutes (their own time), will there be any casualties?

My doubt is about parts (d) and (e). I don't know if I am supposed to apply the Lorentz time transformation using the value obtained in (c) or if I should calculate this time based on the speed each rocket sees the other approaching and the distance using length contraction. I found two answers on the internet.

## Homework Equations

## L = L_{0}\sqrt {1 - \frac {v^2} {c^2}} ##

## \Delta t' = \frac 1 {\sqrt {1 - \frac {v^2} {c^2}}} \Delta t ##

## V' = \frac {u - V_x} {1 - \frac {uV_x} {c^2}} ##

## The Attempt at a Solution

By using the mentioned equations, I obtained:

(a) ## L_1 = 83.3 m ## and ## L_2 = 62.5 m ##.

(b) ## L_1 = 27.0 m ## in the frame of rocket 2 and ## L_2 = 20.3 m ## in the frame of rocket 1.

(c) ## \frac {\Delta S} {v_1 + v_2} = 6000 sec = 100 min ##.

It is at part (d) that something goes wrong. My first approach was to use the length contraction observed by 1 and divide it by the speed at which 1 sees 2 approaching: ## L = L_{0}\sqrt {1 - \frac {v^2} {c^2}} = 2.52 \times 10^{12} \times 0.6 = 1.512 \times 10^{12} ## and ## V' = \frac {u - V_x} {1 - \frac {uV_x} {c^2}} = \frac { 0.8c - ( - 0.6c)} {1 - \frac { (- 0.48c^2)} { c^2 }} = 0.945c ##. Dividing these results we have ## \frac {L} {V'} = 5,333 sec = 88.9 min ##.

However, using ## \Delta t' = \frac 1 {\sqrt {1 - \frac {v^2} {c^2}}} \Delta t ##, where t' is Liz's time of 100 min, we obtain ## 100 min = 1.6666 \times \Delta t ## and ## \Delta t = 60 min ##. This same problem happens when I try to solve (e), and I've looked at several solutions on the internet, half of them solved the first way and half the second. Shouldn't these results agree? If not, why?

jbriggs444, Homework Helper:
As with most relativity problems, the difficulty is with the relativity of simultaneity. According to Liz on Earth, the start event for both Liz and L1 is simultaneous. The end event is a collision and is naturally simultaneous for all parties involved. According to L1, the start event for L1 is not simultaneous with the start event for Liz.

Joao Victor Dantas:
This makes sense. So 88.9 min would be the time it takes for the observer in rocket 1 to get to the point of collision from the start of HIS measurement, and 60 min would be the time it takes for this observer from the start of Liz's measurement, correct?

robphy, Homework Helper, Gold Member:
Can you draw a position-vs-time graph of the problem?

jbriggs444, Homework Helper:
This makes sense.
> So 88.9 min would be the time it takes for the observer in rocket 1 to get to the point of collision from the start of HIS measurement and 60 min would be the time it takes for this observer from the start of Liz's measurement, correct?

I am not sure that I understand your phrasing here. Consider that L1 has a stopwatch. He starts it at some point. And we are asked for its reading at the event of the collision. At what event does L1 start his stopwatch? I would suggest having him start it at the event that Liz considers to be simultaneous with the scenario start.

Maybe it would help to resolve the issue if you calculated the "contracted" distance that each ship travels and then the time to traverse this distance. Use the 100 min that you calculated as measured by Liz.
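A quick numeric cross-check of the two candidate answers for (d), as a short Python sketch (my own, not part of the thread; c is taken as 3.0e8 m/s and the variable names are arbitrary). It reproduces both 88.9 min and 60 min and shows they answer different questions, as jbriggs444 explains above:

```python
c = 3.0e8                        # speed of light, m/s (assumed value)
v1, v2 = 0.8 * c, 0.6 * c        # rocket speeds in Liz's frame
D = 2.52e12                      # initial separation in Liz's frame, m

# (c) In Liz's frame the coordinate closing speed is simply v1 + v2.
t_liz = D / (v1 + v2)
print(t_liz / 60)                # 100.0 min

# (d) In rocket 1's frame: contract Liz's distance, then divide by the
# relativistic relative speed of rocket 2.
gamma1 = 1 / (1 - (v1 / c) ** 2) ** 0.5        # 5/3
L = D / gamma1                                 # 1.512e12 m
v_rel = (v1 + v2) / (1 + v1 * v2 / c ** 2)     # ~0.9459c
print(L / v_rel / 60)                          # ~88.8 min (88.9 with rounding)

# Naively dilating Liz's 100 min gives a different number, because the
# start events are not simultaneous in rocket 1's frame.
print(t_liz / gamma1 / 60)                     # 60.0 min
```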
2022-05-23 02:06:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932077169418335, "perplexity": 1403.4258066400305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00433.warc.gz"}
https://holooly.com/solutions-v20/battle-of-the-revolving-door-goal-apply-the-basic-definition-of-torque-problem-two-disgruntled-businesspeople-are-trying-to-use-a-revolving-door-which-is-initially-at-rest-see-fig-8-3/
## Q. 8.1 BATTLE OF THE REVOLVING DOOR

GOAL Apply the basic definition of torque.

PROBLEM Two disgruntled businesspeople are trying to use a revolving door, which is initially at rest (see Fig. 8.3). The woman on the left exerts a force of $625 \mathrm{~N}$ perpendicular to the door and $1.20 \mathrm{~m}$ from the hub's center, while the man on the right exerts a force of $8.50 \times 10^{2} \mathrm{~N}$ perpendicular to the door and $0.800 \mathrm{~m}$ from the hub's center. Find the net torque on the revolving door.

STRATEGY Calculate the individual torques on the door using the definition of torque, Equation 8.1, $\tau = rF$ [8.1], and then sum to find the net torque on the door. The woman exerts a negative torque, the man a positive torque. Their positions of application also differ.

## Verified Solution

Calculate the torque exerted by the woman. A negative sign must be supplied because $\overrightarrow{\mathbf{F}}_{1}$, if unopposed, would cause a clockwise rotation:

$\tau_{1}=-r_{1} F_{1}=-(1.20 \mathrm{~m})(625 \mathrm{~N})=-7.50 \times 10^{2} \mathrm{~N} \cdot \mathrm{m}$

Calculate the torque exerted by the man. The torque is positive because $\overrightarrow{\mathbf{F}}_{2}$, if unopposed, would cause a counterclockwise rotation:

$\tau_{2}=r_{2} F_{2}=(0.800 \mathrm{~m})\left(8.50 \times 10^{2} \mathrm{~N}\right)=6.80 \times 10^{2} \mathrm{~N} \cdot \mathrm{m}$

Sum the torques to find the net torque on the door:

$\tau_{\text{net}}=\tau_{1}+\tau_{2}=-7.0 \times 10^{1} \mathrm{~N} \cdot \mathrm{m}$

REMARKS The negative result here means that the net torque will produce a clockwise rotation.
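The arithmetic is easy to confirm in a couple of lines; this is my own sketch, not part of the textbook solution:

```python
# Net torque, tau = r * F, with counterclockwise taken as positive.
tau_woman = -(1.20 * 625)        # -750.0 N·m, clockwise
tau_man = 0.800 * 8.50e2         # +680.0 N·m, counterclockwise
print(tau_woman + tau_man)       # -70.0 N·m, net clockwise
```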
2023-01-31 17:01:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 10, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35829856991767883, "perplexity": 2791.991393847741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499888.62/warc/CC-MAIN-20230131154832-20230131184832-00106.warc.gz"}
https://zbmath.org/authors/?q=ai%3Asmith.james-e
# zbMATH — the first resource for mathematics ## Smith, James E. Compute Distance To: Author ID: smith.james-e Published as: Smith, J.; Smith, J. E.; Smith, James; Smith, James E. External Links: MGP · Wikidata Documents Indexed: 58 Publications since 1970, including 6 Books all top 5 #### Co-Authors 9 single-authored 3 Brown, David B. 3 McCardle, Kevin F. 3 Ulu, Canan 2 Eiben, Ágoston Endre 2 Holt, Craig S. 1 Caleb-Solly, Praminda 1 Craven, Robert P. M. 1 Krasnogor, Natalio 1 Lam, Paklin 1 Metze, Gernot 1 Nair, Ravi 1 Nau, Robert F. 1 Pauplin, Olivier 1 Preece, William K. 1 Sun, Peng 1 Tahir, Muhammad Atif 1 Weiss, Shlomo all top 5 #### Serials 10 Operations Research 9 IEEE Transactions on Computers 3 Management Science 2 Natural Computing Series 1 ACM Transactions on Mathematical Software 1 Pattern Recognition 1 Mathematical Modelling and Scientific Computing 1 JMMA. Journal of Mathematical Modelling and Algorithms #### Fields 13 Operations research, mathematical programming (90-XX) 9 Computer science (68-XX) 9 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 8 Information and communication theory, circuits (94-XX) 1 Statistics (62-XX) #### Citations contained in zbMATH 37 Publications have been cited 477 times in 429 Documents Cited by Year Introduction to evolutionary computing. Zbl 1028.68022 Eiben, A. E.; Smith, J. E. 2003 Information relaxations and duality in stochastic dynamic programs. Zbl 1228.90062 Brown, David B.; Smith, James E.; Sun, Peng 2010 Valuing risky projects: Option pricing theory and decision analysis. Zbl 0843.90015 Smith, James E.; Nau, Robert F. 1995 Asymptotic dimension of discrete groups. Zbl 1100.20034 Dranishnikov, A.; Smith, J. 2006 $$L^p$$-$$L^q$$ estimate for wave equation with bounded time dependent coefficient. Zbl 1090.35046 Reissig, Michael; Smith, James 2005 Generalized Chebychev inequalities: theory and applications in decision analysis. Zbl 0842.90002 Smith, James E. 1995 On asymptotic dimension of countable Abelian groups. Zbl 1144.20024 Smith, J. 2006 Structural properties of stochastic dynamic programs. Zbl 1163.90685 Smith, James E.; Mccardle, Kevin F. 2002 Moment methods for decision analysis. Zbl 0825.90622 Smith, James E. 1993 On asymptotic Assouad-Nagata dimension. Zbl 1116.54020 Dranishnikov, A. N.; Smith, J. 2007 Dispersive and Strichartz estimates for hyperbolic equations with constant coefficients. Zbl 1191.35006 Ruzhansky, Michael; Smith, James 2010 Evaluating income streams: A decision analysis approach. Zbl 0989.90529 Smith, James E. 1998 Valuing oil properties: Integrating option pricing and decision analysis approaches. Zbl 1032.91631 Smith, James E.; McCardle, Kevin F. 1998 Uncertainty, information acquisition, and technology adoption. Zbl 1226.90127 Ulu, Canan; Smith, James E. 2009 Options in the real world: lessons learned in evaluating oil and gas investments. Zbl 1035.91503 Smith, James E.; McCardle, Kevin F. 1999 Introduction to evolutionary computing. 2nd edition. Zbl 1327.68003 Eiben, A. E.; Smith, James E. 2015 Coherent imaging spectroscopy of a quantum many-body spin system. Zbl 1355.81191 Senko, C.; Smith, J.; Richerme, P.; Lee, A.; Campbell, W. C.; Monroe, C. 2014 Technology adoption with uncertain future costs and quality. Zbl 1248.91031 Smith, James E.; Ulu, Canan 2012 Information relaxations, duality, and convex stochastic dynamic programs. Zbl 1327.90149 Brown, David B.; Smith, James E. 2014 Risk aversion, information acquisition, and technology adoption. 
Zbl 1405.91128 Smith, James E.; Ulu, Canan 2017 Two-loop corrections to Higgs boson production. Zbl 1119.81403 Ravindran, V.; Smith, J.; Van Neerven, W. L. 2005 Measures of the effectiveness of fault signature analysis. Zbl 0436.94036 Smith, James E. 1980 Alfred Tarski. Early work in Poland – geometry and teaching. With a bibliographic supplement. With a foreword by Ivor Grattan-Guinness. Zbl 1310.01002 McFarland, Andrew (ed.); McFarland, Joanna (ed.); Smith, James (ed.) 2014 Optimal sequential exploration: bandits, clairvoyants, and wildcats. Zbl 1273.90255 Brown, David B.; Smith, James E. 2013 A new family of l-group varieties. Zbl 0503.06017 Smith, J. E. 1981 The lattice of l-group varieties. Zbl 0459.06007 Smith, J. E. 1980 Discontinuity in decision-making when objectives conflict: a military command decision case study. Zbl 1121.90349 Dodd, L.; Moffat, J.; Smith, J. 2006 Solvable and $$\ell$$-solvable $$\ell$$-groups. Zbl 0543.06006 Smith, J. E. 1984 Diagnosis of systems with asymmetric invalidation. Zbl 0463.94020 Holt, Craig S.; Smith, James E. 1981 Detection of faults in programmable logic arrays. Zbl 0422.94057 Smith, James E. 1979 Optimal rocket trajectories in a general force-field. Zbl 0263.70033 Brookes, C. J.; Smith, J. 1970 On the chromatic number of subsets of the Euclidean plane. Zbl 1292.05096 Axenovich, M.; Choi, J.; Lastrina, M.; McKay, T.; Smith, J.; Stanton, B. 2014 Memetic algorithms: The polynomial local search complexity theory perspective. Zbl 1135.68630 Krasnogor, Natalio; Smith, James E. 2008 Global time estimates for solutions to equations of dissipative type. Zbl 1172.35011 Ruzhansky, Michael; Smith, James 2005 New methods for tunable, random landscapes. Zbl 0987.68026 Smith, R. E.; Smith, J. E. 2002 Strongly fault secure logic networks. Zbl 0388.94026 Smith, James E.; Metze, Gernot 1978 A computer simulation of a neuron net model as a self-organizing system. Zbl 0256.92006 Kuijpers, K.; Smith, J. 1973 Risk aversion, information acquisition, and technology adoption. Zbl 1405.91128 Smith, James E.; Ulu, Canan 2017 Introduction to evolutionary computing. 2nd edition. Zbl 1327.68003 Eiben, A. E.; Smith, James E. 2015 Coherent imaging spectroscopy of a quantum many-body spin system. Zbl 1355.81191 Senko, C.; Smith, J.; Richerme, P.; Lee, A.; Campbell, W. C.; Monroe, C. 2014 Information relaxations, duality, and convex stochastic dynamic programs. Zbl 1327.90149 Brown, David B.; Smith, James E. 2014 Alfred Tarski. Early work in Poland – geometry and teaching. With a bibliographic supplement. With a foreword by Ivor Grattan-Guinness. Zbl 1310.01002 McFarland, Andrew (ed.); McFarland, Joanna (ed.); Smith, James (ed.) 2014 On the chromatic number of subsets of the Euclidean plane. Zbl 1292.05096 Axenovich, M.; Choi, J.; Lastrina, M.; McKay, T.; Smith, J.; Stanton, B. 2014 Optimal sequential exploration: bandits, clairvoyants, and wildcats. Zbl 1273.90255 Brown, David B.; Smith, James E. 2013 Technology adoption with uncertain future costs and quality. Zbl 1248.91031 Smith, James E.; Ulu, Canan 2012 Information relaxations and duality in stochastic dynamic programs. Zbl 1228.90062 Brown, David B.; Smith, James E.; Sun, Peng 2010 Dispersive and Strichartz estimates for hyperbolic equations with constant coefficients. Zbl 1191.35006 Ruzhansky, Michael; Smith, James 2010 Uncertainty, information acquisition, and technology adoption. Zbl 1226.90127 Ulu, Canan; Smith, James E. 2009 Memetic algorithms: The polynomial local search complexity theory perspective. 
Zbl 1135.68630 Krasnogor, Natalio; Smith, James E. 2008 On asymptotic Assouad-Nagata dimension. Zbl 1116.54020 Dranishnikov, A. N.; Smith, J. 2007 Asymptotic dimension of discrete groups. Zbl 1100.20034 Dranishnikov, A.; Smith, J. 2006 On asymptotic dimension of countable Abelian groups. Zbl 1144.20024 Smith, J. 2006 Discontinuity in decision-making when objectives conflict: a military command decision case study. Zbl 1121.90349 Dodd, L.; Moffat, J.; Smith, J. 2006 $$L^p$$-$$L^q$$ estimate for wave equation with bounded time dependent coefficient. Zbl 1090.35046 Reissig, Michael; Smith, James 2005 Two-loop corrections to Higgs boson production. Zbl 1119.81403 Ravindran, V.; Smith, J.; Van Neerven, W. L. 2005 Global time estimates for solutions to equations of dissipative type. Zbl 1172.35011 Ruzhansky, Michael; Smith, James 2005 Introduction to evolutionary computing. Zbl 1028.68022 Eiben, A. E.; Smith, J. E. 2003 Structural properties of stochastic dynamic programs. Zbl 1163.90685 Smith, James E.; Mccardle, Kevin F. 2002 New methods for tunable, random landscapes. Zbl 0987.68026 Smith, R. E.; Smith, J. E. 2002 Options in the real world: lessons learned in evaluating oil and gas investments. Zbl 1035.91503 Smith, James E.; McCardle, Kevin F. 1999 Evaluating income streams: A decision analysis approach. Zbl 0989.90529 Smith, James E. 1998 Valuing oil properties: Integrating option pricing and decision analysis approaches. Zbl 1032.91631 Smith, James E.; McCardle, Kevin F. 1998 Valuing risky projects: Option pricing theory and decision analysis. Zbl 0843.90015 Smith, James E.; Nau, Robert F. 1995 Generalized Chebychev inequalities: theory and applications in decision analysis. Zbl 0842.90002 Smith, James E. 1995 Moment methods for decision analysis. Zbl 0825.90622 Smith, James E. 1993 Solvable and $$\ell$$-solvable $$\ell$$-groups. Zbl 0543.06006 Smith, J. E. 1984 A new family of l-group varieties. Zbl 0503.06017 Smith, J. E. 1981 Diagnosis of systems with asymmetric invalidation. Zbl 0463.94020 Holt, Craig S.; Smith, James E. 1981 Measures of the effectiveness of fault signature analysis. Zbl 0436.94036 Smith, James E. 1980 The lattice of l-group varieties. Zbl 0459.06007 Smith, J. E. 1980 Detection of faults in programmable logic arrays. Zbl 0422.94057 Smith, James E. 1979 Strongly fault secure logic networks. Zbl 0388.94026 Smith, James E.; Metze, Gernot 1978 A computer simulation of a neuron net model as a self-organizing system. Zbl 0256.92006 Kuijpers, K.; Smith, J. 1973 Optimal rocket trajectories in a general force-field. Zbl 0263.70033 Brookes, C. J.; Smith, J. 1970 all top 5 #### Cited by 901 Authors 8 Hirosawa, Fumihiko 7 Reissig, Michael 7 Ruzhansky, Michael V. 6 Bender, Christian 6 Dydak, Jerzy 6 Higes, J. 5 D’Abbicco, Marcello 5 Ebert, Marcelo Rempel 5 Wirth, Jens 4 Dranishnikov, Alexander Nikolaevich 4 Fister, Iztok 4 Haugh, Martin B. 4 Neumann, Frank 4 Segura, Carlos 4 Sudholt, Dirk 3 Al-Betar, Mohammed Azmi 3 Bell, Gregory C. 3 Bickel, J. Eric 3 Branke, Jürgen 3 Coello Coello, Carlos A. 3 Darnel, Michael R. 3 Di Caprio, Debora 3 Dikranjan, Dikran N. 3 Doerr, Benjamin 3 Dokuchaev, Nikolai G. 3 Dyer, James S. 3 Eidsvik, Jo 3 Khader, Ahamad Tajudin 3 Lam, Henry 3 Lu, Xiaojun 3 Luenberger, David G. 3 Matsuyama, Tokio 3 Mernik, Marjan 3 Powell, Warren Buckler 3 Santos-Arteaga, Francisco J. 3 Schweizer, Nikolaus 3 Tavana, Madjid 3 Tsetlin, Ilia 3 Zava, Nicolò 2 Akhavan-Tabatabaei, Raha 2 Arlotto, Alessandro 2 Awadallah, Mohammed A. 2 Balseiro, Santiago R. 
2 Banakh, Taras Onufrievich 2 Barashko, A. S. 2 Berrones, Arturo 2 Betrò, Bruno 2 Birge, John R. 2 Blanchet, Jose H. 2 Brown, David B. 2 Chandramouli, Shyam Sundar 2 Clark, David Michael 2 Črepinšek, Matej 2 Date, Prasanna 2 Doerr, Carola 2 Doush, Iyad Abu 2 Fister, Iztok jun. 2 Garcia, Salvador G. 2 Garetto, Claudia 2 Gärtner, Christian 2 Gehrmann, Thomas 2 Graf, Peter A. 2 Guentner, Erik Paul 2 Hahn, Warren J. 2 Hammond, Robert K. 2 Hauge, Ragnar 2 Herrera, Francisco 2 Hu, Xiaobing 2 Jannelli, Enrico 2 Ji, Mingjun 2 Jones, Wesley B. 2 Joshi, Mark S. 2 Kalantari, Sh. 2 Karaboga, Dervis 2 Karaesmen, Fikri 2 Karlaftis, Matthew G. 2 Kepaptsoglou, Konstantinos 2 Kim, Kwiseon 2 Kozine, Igor O. 2 Krymsky, Victor G. 2 Kucab, Jacek 2 Kuhn, Daniel 2 Leeson, Mark S. 2 Leon, Coromoto 2 Li, Michael Z. F. 2 Lilleborge, Marie 2 Mezura-Montes, Efrén 2 Min, Yinghua 2 Musiela, Marek 2 Nagórko, Andrzej 2 Nau, Robert F. 2 Owhadi, Houman 2 Pajoohesh, Homeira 2 Ravindran, Varadarajan 2 Sakawa, Masatoshi 2 Sapir, Mark Valentinovich 2 Schosser, Josef 2 Secomandi, Nicola 2 Steele, J. Michael 2 Tessera, Romain ...and 801 more Authors all top 5 #### Cited in 164 Serials 40 European Journal of Operational Research 22 Operations Research 20 Topology and its Applications 14 Computers & Operations Research 13 Annals of Operations Research 11 Applied Mathematics and Computation 10 Decision Analysis 9 Information Sciences 7 Journal of Mathematical Analysis and Applications 7 Operations Research Letters 6 Journal of Optimization Theory and Applications 6 Theoretical Computer Science 6 Quantitative Finance 5 Algebra Universalis 5 Journal of Differential Equations 5 Algorithmica 5 Mathematical Problems in Engineering 5 Mathematical Finance 5 Journal of High Energy Physics 4 Proceedings of the American Mathematical Society 4 Journal of Computer Science and Technology 4 Journal of Economic Dynamics & Control 4 Pattern Recognition 4 Journal of Heuristics 4 International Journal of Applied Mathematics and Computer Science 4 Algebraic & Geometric Topology 4 Natural Computing 3 Computer Methods in Applied Mechanics and Engineering 3 International Journal of Theoretical Physics 3 Information Processing Letters 3 Journal of Computational Physics 3 Mathematical Methods in the Applied Sciences 3 Mathematical Notes 3 Annali di Matematica Pura ed Applicata. Serie Quarta 3 Osaka Journal of Mathematics 3 Order 3 International Journal of Approximate Reasoning 3 Journal of Global Optimization 3 Annals of Mathematics and Artificial Intelligence 3 Soft Computing 3 Probability in the Engineering and Informational Sciences 3 Algorithms 2 Artificial Intelligence 2 Biological Cybernetics 2 The Annals of Statistics 2 Automatica 2 Geometriae Dedicata 2 Journal of Economic Theory 2 Mathematische Annalen 2 Mathematics of Operations Research 2 International Journal of Production Research 2 International Journal of Algebra and Computation 2 Applied Mathematical Modelling 2 Mathematical Programming. Series A. Series B 2 SIAM Journal on Optimization 2 Computational Optimization and Applications 2 Journal of Applied Mathematics and Decision Sciences 2 OR Spectrum 2 Journal of Hyperbolic Differential Equations 2 Annali dell’Università di Ferrara. Sezione VII. 
Scienze Matematiche 2 Optimization Letters 2 Statistics and Computing 1 Acta Informatica 1 Communications in Mathematical Physics 1 Discrete Mathematics 1 International Journal of General Systems 1 International Journal of Systems Science 1 Journal of Mathematical Biology 1 Physica A 1 Problems of Information Transmission 1 Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica 1 Annals of the Institute of Statistical Mathematics 1 Applied Mathematics and Optimization 1 Archiv der Mathematik 1 BIT 1 Computing 1 Functional Analysis and its Applications 1 Funkcialaj Ekvacioj. Serio Internacia 1 Fuzzy Sets and Systems 1 Inventiones Mathematicae 1 Journal of Algebra 1 Journal of Applied Probability 1 Journal of Functional Analysis 1 Journal of the London Mathematical Society. Second Series 1 Journal für die Reine und Angewandte Mathematik 1 Journal of Statistical Planning and Inference 1 Mathematische Nachrichten 1 Mathematische Zeitschrift 1 Mathematika 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Numerical Functional Analysis and Optimization 1 SIAM Journal on Control and Optimization 1 SIAM Journal on Numerical Analysis 1 Theory and Decision 1 Transactions of the American Mathematical Society 1 Cybernetics 1 Mathematical Social Sciences 1 Insurance Mathematics & Economics 1 Statistics & Probability Letters 1 Optimization ...and 64 more Serials all top 5 #### Cited in 46 Fields 180 Operations research, mathematical programming (90-XX) 98 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 60 Computer science (68-XX) 36 Group theory and generalizations (20-XX) 36 General topology (54-XX) 35 Statistics (62-XX) 34 Partial differential equations (35-XX) 32 Numerical analysis (65-XX) 28 Probability theory and stochastic processes (60-XX) 23 Systems theory; control (93-XX) 14 Quantum theory (81-XX) 14 Information and communication theory, circuits (94-XX) 12 Biology and other natural sciences (92-XX) 11 Algebraic topology (55-XX) 10 Category theory; homological algebra (18-XX) 9 Order, lattices, ordered algebraic structures (06-XX) 8 General algebraic systems (08-XX) 8 Calculus of variations and optimal control; optimization (49-XX) 8 Manifolds and cell complexes (57-XX) 7 Topological groups, Lie groups (22-XX) 7 Functional analysis (46-XX) 5 Combinatorics (05-XX) 5 Geometry (51-XX) 5 Differential geometry (53-XX) 4 Mechanics of particles and systems (70-XX) 4 Statistical mechanics, structure of matter (82-XX) 3 Harmonic analysis on Euclidean spaces (42-XX) 3 Mechanics of deformable solids (74-XX) 2 History and biography (01-XX) 2 Mathematical logic and foundations (03-XX) 2 Real functions (26-XX) 2 Abstract harmonic analysis (43-XX) 2 Convex and discrete geometry (52-XX) 2 Fluid mechanics (76-XX) 2 Optics, electromagnetic theory (78-XX) 1 General and overarching topics; collections (00-XX) 1 $$K$$-theory (19-XX) 1 Functions of a complex variable (30-XX) 1 Ordinary differential equations (34-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Approximations and expansions (41-XX) 1 Operator theory (47-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Relativity and gravitational theory (83-XX) 1 Geophysics (86-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
2021-01-20 17:56:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5966466069221497, "perplexity": 10228.531158841939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00305.warc.gz"}
https://www.physicsforums.com/threads/superheterodyne-receiver-with-phase-sensitive-detection-question.986843/
# Superheterodyne Receiver with Phase Sensitive Detection Question

## Homework Statement:

Someone decides to replace the simple detector with a design based on a phase-sensitive detector, a block diagram of which is shown in Figure 2. The signal after the intermediate frequency section can be written as:

$$v(t) = A(1 + m \cos(2 \pi f_m t)) \cos(2 \pi f_i t) + x(t) \cos(2 \pi f_i t) + y(t) \sin(2 \pi f_i t)$$

where $f_m$ is the signal frequency, $f_i$ is the intermediate frequency, and $x(t)$ and $y(t)$ are the narrow-band noise. The frequency $f_i \gg f_m$, and the phase-sensitive detection includes a filter which cuts off all frequencies greater than $2 f_m$. Derive an expression for the steady-state output from the signal detector.

## Relevant Equations:

Low Pass Filter Transfer Function, Trigonometric Identities

Hi, here is the Figure 2 that the question referred to. How do I go about a question like this? I have made an attempt, but am not very confident with the method. Note, I have learned about/am aware of Fourier transforms.

My attempt: First I thought that the square wave will be at the intermediate frequency, as we are trying to demodulate the signal and superheterodyne receivers have components which work at the intermediate frequency. Thus, I found the Fourier series of the square wave to be: $r(t) = \frac{4}{\pi} \left( \cos(\omega_i t) - \frac{1}{3} \cos(3 \omega_i t) + \frac{1}{5} \cos(5 \omega_i t) - \cdots \right)$.

The design will basically just multiply $r(t)$ and $v(t)$ together. Because of all the sin and cos terms, when we multiply them we will get lots of sum and difference frequencies. I considered each of the terms in $v(t)$ one by one for the multiplication. The first term $v_1(t)$ (note I have changed from f to $\omega$) gives

$$v_1(t) r(t) = A(1 + m \cos(\omega_m t)) \cos(\omega_i t) \times \frac{4}{\pi} \cos(\omega_i t) = \frac{2 A}{\pi}(1 + m \cos(\omega_m t))(1 + \cos(2 \omega_i t))$$

I have only considered the first term of the Fourier series of the square wave, as all the other terms just produce sums and differences which will be filtered off. After filtering (we can ignore the $\cos(2 \omega_i t)$ term), we are left with $\frac{2 A}{\pi}(1 + m \cos(\omega_m t))$ times the effect of the transfer function.

The second term becomes $x(t) \cos(\omega_i t) \cos(\omega_i t)$, and after filtering $\frac{x(t)}{2}$ times the effect of the transfer function. The third term becomes $y(t) \sin(\omega_i t) \cos(\omega_i t) = \frac{y(t)}{2} (2 \sin(\omega_i t))$, and this will all get filtered off.

The transfer function is given by $H(j \omega) = \frac{1}{1 + j \omega RC}$. Taking the transfer function into account, term two (with the $x(t)$) remains the same, as it is DC. For term 1, we let $\omega = \frac{1}{2RC}$, and thus we get $|H| = \frac{2}{\sqrt 5}$ and a phase distortion of $-\arctan(\frac{0.5}{1})$.

Combining these two terms, we get that the output signal is:

$$v_{out}(t) = \frac{x(t)}{2} + \frac{2A}{\pi} + \frac{2}{\sqrt 5} \cdot \frac{2Am}{\pi} \cos(\omega_m t - \arctan(0.5)) = \frac{2A}{\pi} + \frac{x(t)}{2} + \frac{4Am}{\pi \sqrt 5} \cos(\omega_m t - \arctan(0.5))$$

I feel that I may have done something wrong by treating $x(t)$ like a DC component, when perhaps the IF filter in the superheterodyne receiver would have centred it at $\omega_i$. Thank you very much in advance for the help.

Joshy (Gold Member): I'm curious myself and was having a hard time following.
I'm not strong in this area and have been aiming to learn more - I don't know the answer or if your approach is wrong or right.

I was looking at ##\sin(\omega_i t)\cos(\omega_i t)##; I think it should be ##{ {1} \over {2} } \sin(2\omega_i t)##? I suppose either way it'll still get filtered out anyways.

That DC component sounds right. You already noticed ##\cos(\omega t)\cos(\omega t)## is going to be ##{ {1} \over {2} }(1+\cos(2\omega t))##, so that first component is DC without any sinusoid. I would imagine that's the point of the exercise: to show that you can't get rid of all the noise? I'll post anyways rather than watching silently... I'm ready to learn more :)

Choosing ##\omega = { {1} \over {2RC} }## did not make sense to me. Why did you do that? Wouldn't it be whatever the frequency ##f_i## is? Forgive me if I'm being silly.

Thanks for responding.

> I'm curious myself and was having a hard time following. I'm not strong in this area and have been aiming to learn more - I don't know the answer or if your approach is wrong or right.

I apologise about this, I was trying to strike a balance between explanation of steps and length of post - it seems that I did not get this quite right. I will try to add some explanations here (on that note, do you know whether I am able to edit the original post - it doesn't seem to let me?). The overall method was just to multiply $v(t)$ by $r(t)$, then use identities to get sum and difference terms, and then remove all the ones that would get filtered out while accounting for the effect of the LPF on the remaining terms.

> I was looking at ##\sin(\omega_i t)\cos(\omega_i t)##; I think it should be ##{ {1} \over {2} } \sin(2\omega_i t)##?

Yes, you are correct. I did try to give it a read-through for errors but must have missed that.

> Choosing ##\omega = { {1} \over {2RC} }## did not make sense to me. Why did you do that? Wouldn't it be whatever the frequency ##f_i## is? Forgive me if I'm being silly.

I thought that we choose $\omega_m$ because it is the frequency of the cos term and that is how the filter will affect it. Thus, as the break-point is at $\frac{1}{RC}$, we are at half of the break-point frequency, so $\omega = \frac{1}{2RC}$. I might very well be wrong myself (I feel that this is part of the problem with me working in the time domain rather than putting it all into the frequency domain), but I feel that this problem was written to be solved in the time domain (the lecture series doesn't ever really use Fourier transforms or make much mention of them, despite us learning about them in other parts of our course).

Joshy (Gold Member): Fair enough. That sounds reasonable. I don't know if it's the right answer, but it's not throwing up immediate red flags for me anymore. It took me a moment to see that ##\omega_{cutoff}=2\pi (2f_m)##; when you take ##\omega=2\pi f_m##, that is half the cutoff. You then switched to phasor form to multiply the amplitudes and add the phase.

I don't know if this would be easier to solve in the frequency domain? If you could do both then why not? Whenever I see multiplication I'm happy where I'm at, because I know the other domain is going to be a convolution problem - I'm not a big fan, although sanity checks are always pleasant.

I think people who donate to the community can edit their post, or they have more time to edit it (gold members). I donated. I forgot how I stumbled upon this site, but I knew right away that I really liked it and didn't mind supporting it.
The price seems like a lot at first glance, but it's perpetual, not a subscription or monthly fee, so I could take the hit once and look away :) Those little features really make a big difference.

If I were in your shoes I would ask a moderator if they could change the title of your thread (maybe send them a PM), because when I saw the words Superheterodyne Receiver I was thinking it was an architecture question; I would have guessed it's a question about images, interferers, linearity, harmonics and spurious components. Quite honestly the math is the same, but it sounds terrifying, and I'm wondering if this is scaring away some other posters who could be more helpful than myself. Maybe something simplified like "Fourier Transform on LPF"? Just a suggestion.
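One way to settle the doubts in this thread numerically (my own sketch, not posted in the thread): simulate the phase-sensitive detector directly with NumPy/SciPy. All parameter values are arbitrary choices of mine, the noise terms x(t) and y(t) are omitted for clarity, and the square wave is phase-aligned with the cosine carrier:

```python
import numpy as np
from scipy.signal import square, butter, sosfiltfilt

# Arbitrary test parameters (assumptions, not from the problem statement)
A, m = 1.0, 0.5
fm, fi = 1e3, 50e3               # f_i >> f_m
fs = 1e6                         # sampling rate
t = np.arange(0, 20e-3, 1 / fs)

# IF signal without the noise terms x(t), y(t)
v = A * (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fi * t)

# +/-1 square wave at f_i, shifted so it behaves like sign(cos(2*pi*fi*t))
r = square(2 * np.pi * fi * t + np.pi / 2)

# Low-pass filter cutting off above 2*f_m, as the problem specifies
sos = butter(4, 2 * fm, fs=fs, output='sos')
out = sosfiltfilt(sos, v * r)

# The output should track (2A/pi)(1 + m cos(2*pi*fm*t)) closely
expected = (2 * A / np.pi) * (1 + m * np.cos(2 * np.pi * fm * t))
print(np.max(np.abs(out[2000:-2000] - expected[2000:-2000])))  # small
```

Note that sosfiltfilt is zero-phase, so this checks the amplitude of the recovered envelope; a single-pass RC filter would additionally show the arctan(0.5) phase lag computed in the first post.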
2020-09-30 03:46:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7463572025299072, "perplexity": 500.19398501168513}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402101163.62/warc/CC-MAIN-20200930013009-20200930043009-00204.warc.gz"}
https://elmcip.net/creative-work/nome
# Nome

Creative Work

Year: 1993

Description (in English): A multimedia project produced in book, video, and CD form. "The poems of Nome pointed to the necessity of thinking not only about the transformations that the exchange of material artifacts implies in the way we interact with the words, but also in the way they modify the meanings of the words in this mediatic ecology system in which contents are made available to reading in different situations (at the museum, at home or in the street), affecting the poetic perception in a network of meanings that connects and individualizes them." (Quote from Giselle Beiguelman, "The Reader, the Player and the Executable Poetics: Towards a Literature Beyond the Book")
2021-09-22 18:30:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349726796150208, "perplexity": 3281.469124525198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00053.warc.gz"}
http://www.sawaal.com/probability-questions-and-answers/a-bag-contains-4-red-and-3-black-balls-a-second-bag-contains-2-red-and-4-black-balls-one-bag-is-sele_9593
Q: # A bag contains 4 red and 3 black balls. A second bag contains 2 red and 4 black balls. One bag is selected at random. From the selected bag, one ball is drawn. Find the probability that the ball drawn is red.

A) 23/42 B) 19/42 C) 7/32 D) 16/39

Explanation: A red ball can be drawn in two mutually exclusive ways: (i) selecting bag I and then drawing a red ball from it; (ii) selecting bag II and then drawing a red ball from it. Let $E_1$, $E_2$ and $A$ denote the events defined as follows: $E_1$ = selecting bag I, $E_2$ = selecting bag II, $A$ = drawing a red ball. Since one of the two bags is selected randomly, $P(E_1) = 1/2$ and $P(E_2) = 1/2$. Now, $P(A \mid E_1)$ = probability of drawing a red ball when the first bag has been selected = 4/7, and $P(A \mid E_2)$ = probability of drawing a red ball when the second bag has been selected = 2/6. Using the law of total probability, we have P(red ball) = $P(A) = P(E_1)P(A \mid E_1) + P(E_2)P(A \mid E_2) = \frac{1}{2}\times \frac{4}{7}+\frac{1}{2}\times \frac{2}{6}=\frac{19}{42}$.

Q: The manager of a company accepts only one employee's leave request for a particular day. If five employees, namely Roshan, Mahesh, Sripad, Laxmipriya and Shreyan, applied for leave on the occasion of Diwali, what is the probability that Laxmipriya's leave request will be approved?

A) 1 B) 1/5 C) 5 D) 4/5

Explanation: Number of applicants = 5. On a given day, only 1 leave is approved. The favourable event is that Laxmipriya is the 1 of the 5 applicants approved, so the probability that Laxmipriya's leave is granted = 1/5.

Q: Tickets numbered 1 to 20 are mixed up and then a ticket is drawn at random. What is the probability that the ticket drawn has a number which is a multiple of 4 or 15?

A) 6/19 B) 3/10 C) 7/10 D) 6/17

Explanation: Here, S = {1, 2, 3, 4, ..., 19, 20}, so n(S) = 20. Let E = event of getting a multiple of 4 or 15. The multiples of 4 are {4, 8, 12, 16, 20}; a multiple of 15 is a number divisible by both 3 and 5, and the only one in S is 15. So n(E) = 6. Required probability = P(E) = n(E)/n(S) = 6/20 = 3/10.

Q: Out of sixty students, there are 14 who are taking Economics and 29 who are taking Calculus. What is the probability that a randomly chosen student from this group is taking only the Calculus class?

A) 8/15 B) 7/15 C) 1/15 D) 4/15

Explanation: Given total students in the class = 60, students who are taking Economics = 24 and students who are taking Calculus = 32. Students who are taking both subjects = 60 - (24 + 32) = 60 - 56 = 4. Students who are taking Calculus only = 32 - 4 = 28. Probability that a randomly chosen student from this group is taking only the Calculus class = 28/60 = 7/15.

Q: Three unbiased coins are tossed. What is the probability of getting at most two heads?

A) 4/3 B) 2/3 C) 3/2 D) 3/4

Explanation: Let S be the sample space. Here n(S) = $2^{3}$ = 8. Let E be the event of getting at most two heads. Then n(E) = {(H,T,T), (T,H,T), (T,T,H), (H,H,T), (T,H,H), (H,T,H)}. Required probability = n(E)/n(S) = 6/8 = 3/4.

Q: Tickets numbered 1 to 20 are mixed up and then a ticket is drawn at random. What is the probability that the ticket drawn has a number which is a multiple of 3 or 5?

A) 2/3 B) 1/2 C) 7/8 D) 4/5

$\therefore$ Required probability = 10/20 = 1/2.
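These answers are easy to check by brute force; here is a small Python sketch of mine for the first and third questions:

```python
from fractions import Fraction as F

# Bag problem: law of total probability.
print(F(1, 2) * F(4, 7) + F(1, 2) * F(2, 6))          # 19/42

# Tickets 1..20 that are multiples of 4 or of 15.
hits = [n for n in range(1, 21) if n % 4 == 0 or n % 15 == 0]
print(hits, F(len(hits), 20))                         # 6 tickets -> 3/10
```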
2017-02-26 07:38:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6759763360023499, "perplexity": 988.3561179342487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00284-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mathhelpforum.com/math-topics/164424-classical-mechanics-problem.html
1. ## classical mechanics problem

A ball is thrown with initial speed $\displaystyle v_0$ up an inclined plane. The plane is at an angle $\displaystyle \phi$ and the ball's initial velocity is at angle $\displaystyle \theta$ measured from the slope. Choose axes with x measured up the slope, y normal to it, and z across it. Write down Newton's second law using these axes and find the ball's position as a function of time. Show that the ball lands a distance $\displaystyle R=\dfrac{2v_0^2\sin{\theta}\cos(\theta+\phi)}{\cos^2(\phi)}$ from its launch point. Show that for a given $\displaystyle v_0$ and $\displaystyle \phi$ the max range is $\displaystyle R_{max}=\dfrac{v_0^2}{g(1+\sin\theta)}$

2. Um... have you tried anything?

3. I've tried many things, but none of them get me even close to that result. I know that the force in the x direction is $\displaystyle mg\sin\theta$, and so the displacement formula has this substituted for g for the displacement in the x direction. The displacement in the x direction is $\displaystyle v_0\cos\theta t$, and the y is normal, so there's no displacement in the y.

4. Ok, I'm coming back to this now. I took a look at the derivation for this problem in two dimensions for a projectile and tried to model my approach after it. This is how far I got: $\displaystyle 0=x(t)= v_0 \sin(\phi)t-\frac{1}{2}g \cos(\theta)t^2$, and from this we can get $\displaystyle t=\dfrac{2v_0\sin(\phi)}{g\cos(\theta)}$. Then, as in the 2d simple case, I multiplied $\displaystyle t v_0$ to obtain $\displaystyle R$, but this doesn't give the result I was looking for. Can anyone help me with this?

5. Originally Posted by magus: A ball is thrown with initial speed $\displaystyle v_0$ up an inclined plane. The plane is at an angle $\displaystyle \phi$ and the ball's initial velocity is at angle $\displaystyle \theta$. Choose axes with x measured up the slope, y normal to it, and z across it. Write down Newton's second law using these axes and find the ball's position as a function of time. Show that the ball lands a distance $\displaystyle R=\dfrac{2v_0^2\sin{\theta}\cos(\theta+\phi)}{g\cos^2(\phi)}$

note the correction in your formula for R from your original post.

$\displaystyle \Delta x = v_0\cos{\theta} \cdot t - \frac{1}{2}g\sin{\phi} \cdot t^2$

$\displaystyle \Delta y = v_0\sin{\theta} \cdot t - \frac{1}{2}g\cos{\phi} \cdot t^2$

since $\displaystyle \Delta y = 0$ ...

$\displaystyle t = \frac{2v_0\sin{\theta}}{g\cos{\phi}}$

substituting for $\displaystyle t$ in the $\displaystyle \Delta x$ equation ...

$\displaystyle \Delta x = \frac{2v_0^2 \sin{\theta} \cos{\theta}}{g\cos{\phi}} - \frac{2v_0^2 \sin^2{\theta} \sin{\phi}}{g\cos^2{\phi}}$

common denominator ...

$\displaystyle \Delta x = \frac{2v_0^2 \sin{\theta} \cos{\theta}\cos{\phi}}{g\cos^2{\phi}} - \frac{2v_0^2 \sin^2{\theta} \sin{\phi}}{g\cos^2{\phi}}$

combine and factor ...

$\displaystyle \Delta x = \frac{2v_0^2 \sin{\theta}(\cos{\theta}\cos{\phi} - \sin{\theta}\sin{\phi})}{g\cos^2{\phi}}$

finally, note that ...

$\displaystyle \cos{\theta}\cos{\phi} - \sin{\theta}\sin{\phi} = \cos(\theta+\phi)$

... and you're there.

6. I think there is a 'g' missing in your first part. Make a sketch.
(I'll be using v instead of v0 to make it easier to type.)

The velocity of the ball up the plane is $\displaystyle v \cos\theta$, and that perpendicular to the plane is $\displaystyle v \sin\theta$. The acceleration is down the plane and is given by $\displaystyle g\sin\phi$, and that perpendicular to the plane is given by $\displaystyle g \cos\phi$.

From this, the distance perpendicular to the plane where the particle lands is 0 and is given by $\displaystyle s = ut + \dfrac12 at^2$, that is:

$\displaystyle 0 = v\sin\theta t - \dfrac12 g\cos\phi t^2$

For the distance along the plane, we have:

$\displaystyle s = v\cos\theta t - \dfrac12 g\sin\phi t^2$

Can you complete the first part now?

EDIT: Didn't see you replied, Skeeter.

7. For the second part now. (There is also a mistake: see below.)

$\displaystyle R = \dfrac{2v^2\sin\theta\cos(\theta+\phi)}{g\cos^2\phi}$

Use the identity: $\displaystyle 2\sin A\cos B = \sin(A+B) + \sin(A-B)$

This gives: $\displaystyle R = \dfrac{v^2(\sin(2\theta + \phi) + \sin(-\phi))}{g\cos^2(\phi)}$

Simplify: $\displaystyle R = \dfrac{v^2(\sin(2\theta + \phi) - \sin(\phi))}{g(1+\sin\phi)(1 - \sin\phi)}$

At the maximum range, $\displaystyle \sin(2\theta + \phi) = 1$, since only theta can vary. Hence we get: $\displaystyle R = \dfrac{v^2(1 - \sin(\phi))}{g(1+\sin\phi)(1 - \sin\phi)}$

Something cancels out, giving: $\displaystyle R_{max} = \dfrac{v^2}{g(1+\sin\phi)}$

From this, you can even find the relation between theta and phi for this value of range.

8. Thanks. The $\displaystyle \Delta y$ is the other equation I really needed, I guess.
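A quick numerical check of skeeter's closed-form range, and of the $R_{max}$ in post 7 (my own sketch, not from the thread; the parameter values are arbitrary):

```python
import numpy as np

g = 9.81
v0, theta, phi = 20.0, np.radians(35.0), np.radians(10.0)  # test values

# Time aloft from delta-y = 0, then range along the slope from delta-x.
t_land = 2 * v0 * np.sin(theta) / (g * np.cos(phi))
R_kin = v0 * np.cos(theta) * t_land - 0.5 * g * np.sin(phi) * t_land ** 2

# Closed-form result derived in the thread.
R_formula = 2 * v0**2 * np.sin(theta) * np.cos(theta + phi) / (g * np.cos(phi)**2)
print(np.isclose(R_kin, R_formula))                          # True

# Maximizing over theta reproduces v0^2 / (g * (1 + sin(phi))).
th = np.linspace(0.0, np.pi / 2, 200001)
R = 2 * v0**2 * np.sin(th) * np.cos(th + phi) / (g * np.cos(phi)**2)
print(np.isclose(R.max(), v0**2 / (g * (1 + np.sin(phi)))))  # True
```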
2018-05-26 12:34:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8020178079605103, "perplexity": 351.9737821654615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.71/warc/CC-MAIN-20180526112331-20180526132331-00019.warc.gz"}
https://chemistry.stackexchange.com/questions/136809/steric-effect-of-t-butyl-group-on-sn1-on-a-cyclohexane-ring
# Steric effect of t-butyl group on SN1 on a cyclohexane ring

This is a question from GRB Kota Question Bank Organic Chemistry, Chapter 3, Reasoning type, Q. 4:

Assertion: trans-1-t-butyl-4-chlorocyclohexane is less reactive than cis-1-t-butyl-4-chlorocyclohexane towards an SN1 reaction.

Reason: The greater the steric factor near the leaving group, the higher the leaving-group tendency.

The answer given is: both the assertion and the reason are correct, and the reason is the correct explanation for the assertion.

The answer given feels wrong to me: the rate-determining step in an SN1 reaction is the formation of a carbocation, which makes the steric factor near the leaving group seem irrelevant. However, when the intermediate is the same for both compounds undergoing substitution, I assume that the more stable compound would be less reactive, since the threshold energy required would be greater. On that basis, trans-1-t-butyl-4-chlorocyclohexane would be less reactive, since both groups are in equatorial positions, making it more stable, whereas in the cis-isomer the t-butyl group would be equatorial while the chloride would be axial. So I presume the assertion is correct. The reason, however, seems vague, as it does not say whether the reaction taking place is SN1 or SN2, and the role of sterics could differ between the two. Is there anything wrong with my reasoning? Can the validity of the reason be proved or disproved beyond doubt?

• Two cyclohexane rings with no leaving groups are not reactive towards substitution. – Zhe Jul 20 '20 at 13:31
• @Zhe, I have rectified the issue. It was chloride and not methyl. – Safdar Faisal Jul 20 '20 at 13:36
• An SN1 reaction will occur faster when the solvent is able to solvate the cation formed, thus compensating for the energy required in breaking the carbon-leaving group bond. Steric hindrance will inhibit this solvation. Does this hint help in answering your question? – Yusuf Hasan Jul 20 '20 at 13:45
• @YusufHasan So the reason is correct; thanks for that. However, how does this make it the correct explanation? – Safdar Faisal Jul 20 '20 at 13:49
• @RahulVerma I am talking about the solvation which would be present even before the C-Cl bond is broken. The breaking of the C-Cl bond doesn't happen on its own; the Cl of the C-Cl bond is solvated by the positive end of the polar protic solvent, and these weak bonds formed between the solvent and the leaving group provide the energy to ultimately break the C-Cl bond itself. This solvation will definitely be affected by whether the butyl is on the same or the opposite side as the C-Cl. Anyway, the steric-factor argument presented by the book doesn't make sense on its own. – Yusuf Hasan Jul 20 '20 at 16:46

The cis isomer ($C$) and the trans isomer ($T$) will both react via the same carbocation intermediate. This means the energies of the intermediate are the same. Chloride has an A-value of 0.43, so $C$ is higher in energy than $T$. Since the formation of the cation is endothermic, we may invoke the Hammond postulate: the first step has a late transition state that structurally looks like the intermediate and is similar to it in energy. Finally, we conclude that the reaction starting from $C$ has a lower barrier and is thus faster.
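For a rough sense of scale (my own back-of-the-envelope estimate, not from the question or the answer): if, following the Hammond argument, essentially the whole 0.43 kcal/mol ground-state difference shows up as a difference in activation energy, the predicted rate ratio at room temperature is only about a factor of two:

```python
import math

# Hypothetical upper-bound estimate: treat the chloride A-value as the
# full difference in activation free energy between the two ionizations.
delta_dG = 0.43            # kcal/mol (A-value of Cl)
R = 1.987e-3               # kcal/(mol*K)
T = 298.0                  # K
print(math.exp(delta_dG / (R * T)))   # ~2.1, cis faster by about 2x
```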
2021-05-05 22:54:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6092559099197388, "perplexity": 1644.7743264490614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00015.warc.gz"}
http://mathoverflow.net/feeds/question/56388
# delooping under Dold-Kan and simplicial delooping

Asked by Urs Schreiber (2011-02-23):

What maps of simplicial sets exist between

- the image under the Dold-Kan correspondence of a chain complex shifted up in degree
- and the image under the right adjoint to simplicial looping of the DK-image of the unshifted complex?

Here is the same question in detail:

Write

$$(G \dashv \bar W) : sGrp \stackrel{\leftarrow}{\underset{\bar W}{\to}} sSet_0 \hookrightarrow sSet$$

for the adjunction between simplicial groups and reduced simplicial sets whose left adjoint is the simplicial loop group functor (as for instance in Goerss-Jardine, chapter V); and write

$$Ch_\bullet^+ \overset{\Xi}{\to} sAbGrp \hookrightarrow sGrp \overset{U}{\to} sSet$$

for the Dold-Kan correspondence, where in both cases I care about the images as simplicial sets.

Then for $V \in Ch_\bullet^+$ a chain complex and $V[1]$ (or $V[-1]$ if you prefer) its shift up in degree (its delooping as a chain complex), the two simplicial sets

$$U \Xi (V[1])$$

and

$$\bar W (\Xi V)$$

should have the same homotopy type. What nice natural maps of simplicial sets do we have between them?

Answer by Jesse Wolfson (2011-06-23):

There's an explicit natural isomorphism between the two functors.

Rick Jardine says as much, but for the image of the functors in the category of chain complexes (i.e. after applying the normalization). You can find this in Goerss-Jardine, Remark III.5.6, or in greater depth in section 4.6 of Jardine's book on Generalized Etale Cohomology.

The combinatorics for the isomorphism in simplicial abelian groups means that the isomorphism takes a little longer to state, but I could send you a pdf with everything written out if this would be useful.
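To make the answer's conclusion explicit in symbols (my paraphrase; the precise statement should be checked against Goerss-Jardine, Remark III.5.6): under the normalization functor $N$ there is a natural isomorphism of chain complexes

$$N(\bar W (\Xi V)) \;\cong\; V[1],$$

and since $N$ is an equivalence, this transports to a natural isomorphism $\bar W(\Xi V) \cong \Xi(V[1])$ in $sAbGrp$. Applying the forgetful functor then gives a natural isomorphism of simplicial sets $U \Xi (V[1]) \cong \bar W (\Xi V)$, not merely a weak equivalence.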
2013-05-18 19:32:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8893623948097229, "perplexity": 1899.591065039187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382764/warc/CC-MAIN-20130516092622-00044-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-geometry/149237-examples-non-riemann-integrability.html
# Examples of Non-Riemann Integrability

1. ## Examples of Non-Riemann Integrability

I was wondering if people can give me "nice" examples of non-Riemann-integrable functions. I know the one about the rationals and irrationals, the so-called indicator function (called something else by a lot of other people), but I was hoping for something a little more natural. I'm trying to see whether $f^{2} \in \mathscr{R}$ entails that $f \in \mathscr{R}$, and I thought that if I had a few examples of non-integrable functions it might help, but there are precious few to be found by a basic Google search.

2. Well, we know that a function is Riemann integrable if and only if its set of discontinuities has measure zero. So, you're going to have to have a function that has loads of discontinuities in a very small space. I don't know of any other examples other than the characteristic function on the rationals (or irrationals, it doesn't matter which). I'm sure you could construct many such functions, but I think they would all have to be similarly pathological to the characteristic function on the rationals.

$f^{2}$ being Riemann integrable does not imply that $f$ is Riemann integrable. Counterexample: let $f:[0,1]\to[-1,1]$ be defined by $f(x)=\begin{cases}1\quad x\in\mathbb{Q}\cap[0,1]\\ -1\quad x\in\mathbb{Q}^{c}\cap[0,1]\end{cases}.$ Then, clearly, $f$ suffers from the same problem that the characteristic function does: its set of discontinuities has measure 1. However, $f^{2}(x)=1$ for all $x\in[0,1]$. That function is obviously Riemann integrable.

3. Try an unbounded function: take $f\colon [0,1] \to \mathbb{R}$ given by $f(x) = \begin{cases} 1/\sqrt{x}, & x > 0\\ 0, & x = 0,\end{cases}$ which is Riemann integrable on all proper closed subintervals of [0,1] but is not Riemann integrable on [0,1] itself. I chose $1/\sqrt{x}$ because the area under this graph from 0 to 1 is finite and can still be computed using elementary calculus techniques, yet it's not Riemann integrable by the definition.

4. Originally Posted by gosualite: Try an unbounded function: take $f\colon [0,1] \to \mathbb{R}$ given by $f(x) = \begin{cases} 1/\sqrt{x}, & x > 0\\ 0, & x = 0,\end{cases}$ which is Riemann integrable on all proper closed subintervals of [0,1] but is not Riemann integrable on [0,1] itself. I chose $1/\sqrt{x}$ because the area under this graph from 0 to 1 is finite and can still be computed using elementary calculus techniques, yet it's not Riemann integrable by the definition.

Why exactly is $f(x)$ not Riemann integrable here?

5. You might check and see if your library has this book (or you can just buy it - it's not expensive). You might find something in there.

6. Originally Posted by chiph588@: Why exactly is $f(x)$ not Riemann integrable here?

Because, by definition, a Riemann integrable function must be bounded. Integrals of the type gosualite gave are called improper Riemann integrals.

7. I agree with chiph588: the exhibited function is actually Riemann integrable, and even finite. To find a function along those lines that has infinite area, you have to go to the other side of 1 in the exponent: $f(x)=1/x^{2}$ has infinite integral on that interval. But that's not the same thing as being Riemann integrable or not. The set of discontinuities of $1/x^{2}$ still has measure zero, thus implying that the function is Riemann integrable. The integral just has an infinite value.
The characteristic function on the rationals is a bona fide non-Riemann-integrable function, because it's everywhere discontinuous. The problem in finding non-Riemann-integrable functions is that you have to pick up enough points of discontinuity on the real axis to get positive measure. That means irrationals, and that means you're going to have to have a function that oscillates so wildly that it looks something like the characteristic function of the rationals.

8. Reply to Jose27: Perhaps you're right. I think some authors just lump all the Riemann integrable functions together. In my analysis course, it was Riemann integrable if and only if the set of discontinuities has measure zero, regardless of boundedness.

9. Originally Posted by Ackbeet
Reply to Jose27: Perhaps you're right. I think some authors just lump all the Riemann integrable functions together. In my analysis course, it was Riemann integrable if and only if the set of discontinuities has measure zero, regardless of boundedness.
The main problem I have with that definition is that it doesn't coincide with the partition or step-function-approximation definition of the Riemann integral (the problem being taking a supremum or infimum), and although one can arrive at Lebesgue's criterion and from there drop the boundedness condition, it is really an abuse of notation to call them Riemann integrable, since by definition they're not. Though it's not like mathematics isn't full of notation abuse.

10. Perhaps you're right. I, for one, though, would like the integral exhibited in post #3 to be Riemann integrable, since it does have a finite area on that interval. In one sense, your way is tidier, but in another, mine is. I'm curious how the improper Riemann integral is defined - I'm not sure I've run across it in many analysis books. Could you point me to a reference, please?

11. Wait, I see it. They're defined as limits of Riemann integrals. Got it.

12. Originally Posted by Ackbeet
Perhaps you're right. I, for one, though, would like the integral exhibited in post #3 to be Riemann integrable, since it does have a finite area on that interval. In one sense, your way is tidier, but in another, mine is. I'm curious how the improper Riemann integral is defined - I'm not sure I've run across it in many analysis books. Could you point me to a reference, please?
They're treated as exercises in Spivak's Calculus (chapter 14). They're essentially of two types. Another thing: your definition, as given, still requires some method of approximation (which is it?), since otherwise $\frac{1}{x}$ would be integrable on any closed interval containing 0.

13. No, I don't think $1/x$ would be integrable on any interval containing 0, because the limits which you use to define the integral would not exist. The limits have to exist in order for the improper Riemann integral to be well-defined.

14. Originally Posted by Ackbeet
No, I don't think $1/x$ would be integrable on any interval containing 0, because the limits which you use to define the integral would not exist. The limits have to exist in order for the improper Riemann integral to be well-defined.
Sure, it is not improper Riemann integrable, but what I'm asking is how you arrive at the non-integrability of such a function with your definition, since its set of discontinuities clearly has measure 0 (and it's not improper Riemann or Lebesgue integrable).

15. Well, with my definition, I would say that a function can be Riemann integrable, and yet still have an infinite value.
You know, it just occurred to me that I might be digging myself into a hole here with wrong definitions. I could easily be wrong about this whole darn thing. Let me check my analysis book tomorrow (it's at work), and I'll see whether "bounded" is in the Lebesgue criterion or not. Then I'll get back to you.
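To make the definition arrived at in post 11 concrete (this worked example is an editorial addition, not from the thread): the improper Riemann integral is defined as a limit of proper Riemann integrals over subintervals that avoid the bad point. For the function from post 3 this limit is finite,

$$\int_0^1 \frac{dx}{\sqrt{x}} = \lim_{\varepsilon \to 0^+} \int_{\varepsilon}^1 \frac{dx}{\sqrt{x}} = \lim_{\varepsilon \to 0^+} \left( 2 - 2\sqrt{\varepsilon} \right) = 2,$$

whereas for $1/x$ on $[0,1]$ the corresponding limit $\lim_{\varepsilon \to 0^+} (-\ln \varepsilon)$ diverges, so $1/x$ is not even improperly Riemann integrable there.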
2014-12-28 20:53:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414150714874268, "perplexity": 264.85581356591547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447559592.38/warc/CC-MAIN-20141224185919-00094-ip-10-231-17-201.ec2.internal.warc.gz"}
https://docs.classiq.io/latest/user-guide/combinatorial-optimization/problem-solving/
Solve Optimization Problem¶

So far, we've gone through the optimization model formulation. We now present the core Classiq capabilities: generating a designated quantum solution and executing the generated algorithm on a quantum backend. We present our method with a common example problem, the Max Independent Set (MIS) on networkx.star_graph(4), solved by the QAOAMixer algorithm with a single QAOA layer (see the problem library).

import networkx as nx
import pyomo.core as pyo


def mis(graph: nx.Graph) -> pyo.ConcreteModel:
    model = pyo.ConcreteModel()
    model.x = pyo.Var(graph.nodes, domain=pyo.Binary)

    @model.Constraint(graph.edges)
    def independent_rule(model, node1, node2):
        return model.x[node1] + model.x[node2] <= 1

    model.cost = pyo.Objective(expr=sum(model.x.values()), sense=pyo.maximize)
    return model

The method consists of building a PYOMO model, indicating the QAOA and optimizer preferences, and then using one of the following commands:

• generate - generates the quantum circuit.
• solve - solves the optimization problem.
• get_operator - returns the Ising Hamiltonian representing the problem's objective.
• get_objective - returns the PYOMO object representing the problem's objective.
• get_initial_point - returns the initial parameters for a parametric ansatz.
• classical_solution.solve - solves the optimization problem classically.

{
  "name": "mis",
  "graph": [
    [0.0, 1.0, 1.0, 1.0, 1.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0]
  ],
  "qaoa_preferences": {
    "qsolver": "QAOAMixer",
    "qaoa_reps": 1
  }
}

The file should include the name of the problem and the required arguments for the model-building function. In the case of the MIS problem, the only argument is the underlying graph in the form of an adjacency matrix. Please make sure the given name is equal to the file name of the model definition.

Two extension commands are available: generate circuit and solve problem. This is done by opening the Command Palette (Ctrl+Shift+P / Command+Shift+P) and choosing either the "Generate circuit for combinatorial optimization" command or the "Solve combinatorial optimization" command, respectively.

First, we create the desired optimization problem as a PYOMO model. The following code snippet is a concise example of the application of the optimization engine:

import networkx as nx

graph = nx.star_graph(4)
mis_model = mis(graph)

We are now ready to send the model to the Classiq backend. This is done using the CombinatorialOptimization package and its "synthesize" and "solve" commands.

from classiq.applications.combinatorial_optimization import (
    CombinatorialOptimization,
    QSolver,
    QAOAPreferences,
)

qaoa_preferences = QAOAPreferences(qsolver=QSolver.QAOAMixer, qaoa_reps=3)
mis_problem = CombinatorialOptimization(
    model=mis_model, qsolver_preferences=qaoa_preferences
)

Results¶

"Model Designer" command¶

The model property exposes the functional-level model of the resulting ansatz. Afterwards, the user can synthesize the explicit circuit from it.

model = mis_problem.get_model()

or,

model = mis_problem.model

"Synthesize" command¶

The quantum circuit is returned as a GeneratedCircuit class, which contains both textual and visual representations. The textual data is available in multiple formats. The visual data consists of a static image and an interactive image.
qc = mis_problem.synthesize()

or,

qc = mis_problem.ansatz

"Solve problem" command¶

result = mis_problem.solve()

The results are organized in the VQESolverResult class and may be observed in several formats.

Serialized output¶

print(result)

=== OPTIMAL SOLUTION ===
solution         cost
---------------  ------
(0, 1, 1, 1, 1)  4

=== SOLUTION DISTRIBUTION ===
solution         cost    probability
---------------  ------  -------------
(0, 1, 1, 1, 1)  4       0.71
(0, 1, 1, 1, 0)  3       0.02
(0, 1, 1, 0, 1)  3       0.04
(0, 1, 0, 1, 1)  3       0.03
........         ..      ....

=== OPTIMAL_PARAMETERS ===
[3.064572795460487, 0.5370940203297239, 2.869550922366777, 0.526740029882901, 3.0904332803941017, 1.417496935210913]

=== TIME ===
00:00:02.258639

Histogram¶

print(result.histogram())

Convergence Graph¶

print(result.convergence_graph)

Optimal Parameters¶

result.optimal_parameters_graph()

Operator command¶

operator = mis_problem.get_operator()
print(operator.show())

-2.500 * IIIII
+0.500 * IIIIZ
+0.500 * IIIZI
+0.500 * IIZII
+0.500 * IZIII
+0.500 * ZIIII

Objective command¶

print(mis_problem.get_objective())

x[0] + x[1] + x[2] + x[3] + x[4]

Initial parameters command¶

initial_parameters = mis_problem.get_initial_point()

[0.22.., 3.09..., 0.95..., 2.83...]

Solve classically command¶

result = mis_problem.solve_classically()

best_cost=4.0 time=None solution=(0, 1, 1, 1, 1)
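As a quick sanity check (an illustrative addition, not part of the Classiq API), one can verify with plain networkx that the reported optimal solution is indeed an independent set of the star graph:

import networkx as nx

# Rebuild the example graph and take the solver's reported optimum.
graph = nx.star_graph(4)
solution = (0, 1, 1, 1, 1)  # bit i selects node i; the hub (node 0) is excluded

# Collect the selected nodes and check that no two of them share an edge.
chosen = [node for node, bit in zip(graph.nodes, solution) if bit]
assert all(not graph.has_edge(u, v) for u in chosen for v in chosen if u != v)
print(f"{chosen} is an independent set of size {len(chosen)}")  # size 4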
2023-03-27 04:11:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3712567985057831, "perplexity": 6930.893256748932}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00137.warc.gz"}
http://www.physicsforums.com/showthread.php?p=1327473
# Solving differential equations through matrix

by devoured_elysium

Hello. I'd like to know how to solve the following equation with matrices, if possible at all:

d ( x^2 ) / dt^2 + w^2 x = 0

I know how to solve it without having to use a matrix, but I heard it is possible to do it with matrices. How would one do that? What is this method called? Thanks

Is that d ( x^2 ) / dt^2 + w^2 x = 0 or d^2 x/dt^2 + w^2 x = 0? I'm going to assume it is the latter. Define y = dx/dt, so dy/dt = d^2x/dt^2 and the equation becomes dy/dt = -w^2 x. You now have the two equations dx/dt = y and dy/dt = -w^2 x. If you write

$$X= \left(\begin{array}{c} x \\ y\end{array}\right)$$

then the two equations become the single matrix equation

$$\frac{dX}{dt}= \left(\begin{array}{cc} 0 & 1 \\ -w^2 & 0\end{array}\right)X$$

To solve that, find the eigenvalues of the coefficient matrix (they are $\pm w i$). The general solution can then be written as exponentials of those eigenvalues times $t$ or, since they are imaginary, as sines and cosines.

It's the latter one, as you thought. Thanks for the response!

Hi again. Sorry, but I could not follow the step $X = (x, y)$ and why it then becomes the next equation.

$$\left(\begin{array}{c}\frac{dx}{dt} \\ \frac{dy}{dt}\end{array}\right)= \left(\begin{array}{cc} 0 & 1 \\ -w^2 & 0\end{array}\right)\left(\begin{array}{c}x \\ y\end{array}\right)$$

Do you see how the matrix multiplication on the right works out?

Quote by HallsofIvy
$$\left(\begin{array}{c}\frac{dx}{dt} \\ \frac{dy}{dt}\end{array}\right)= \left(\begin{array}{cc} 0 & 1 \\ -w^2 & 0\end{array}\right)\left(\begin{array}{c}x \\ y\end{array}\right)$$
Taking this one step further, define $\mathbf X$ and $\mathbf A$ as

$$\mathbf X \equiv \left(\begin{array}{c} x \\ y\end{array}\right), \qquad \mathbf A \equiv \left(\begin{array}{cc} 0 & 1 \\ -w^2 & 0\end{array}\right)$$

then

$$\frac{d\mathbf X}{dt}= \mathbf A\mathbf X$$

If $\mathbf X$ and $\mathbf A$ were scalars, the solution to the above would be the exponential

$$\mathbf X = e^{\mathbf A t}\,\mathbf X|_{t=0}$$

The series expansion of the exponential function works for matrices as well as scalars (for example, see http://mathworld.wolfram.com/MatrixExponential.html or http://www.sosmath.com/matrix/expo/expo.html). In this case,

$$\mathbf A^2 = -w^2 \mathbf I$$

where $\mathbf I$ is the identity matrix. Thus

$$(\mathbf A t)^{2n} = (-1)^n (wt)^{2n}\,\mathbf I, \qquad (\mathbf A t)^{2n+1} = (-1)^n \frac{1}{w}(wt)^{2n+1}\,\mathbf A$$

The matrix exponential is thus

$$e^{\mathbf A t} = \sum_{n=0}^{\infty} \frac{(\mathbf A t)^n}{n!} = \sum_{n=0}^{\infty} \frac{(\mathbf A t)^{2n}}{(2n)!} + \sum_{n=0}^{\infty} \frac{(\mathbf A t)^{2n+1}}{(2n+1)!} = \sum_{n=0}^{\infty} (-1)^n \frac{(wt)^{2n}}{(2n)!}\,\mathbf I + \frac{1}{w}\sum_{n=0}^{\infty} (-1)^n \frac{(wt)^{2n+1}}{(2n+1)!}\,\mathbf A = \cos(wt)\,\mathbf I + \frac{1}{w}\sin(wt)\,\mathbf A$$
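A quick numerical cross-check of the final identity (an addition for this writeup, not from the thread), comparing SciPy's matrix exponential against the closed form derived above:

import numpy as np
from scipy.linalg import expm

w, t = 2.0, 0.7  # arbitrary test values
A = np.array([[0.0, 1.0], [-w**2, 0.0]])

# Closed form from the thread: e^{At} = cos(wt) I + (1/w) sin(wt) A
closed_form = np.cos(w * t) * np.eye(2) + (np.sin(w * t) / w) * A

assert np.allclose(expm(A * t), closed_form)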
2014-07-26 03:08:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8526173830032349, "perplexity": 1059.2333924445838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894976.0/warc/CC-MAIN-20140722025814-00145-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.globinch.com/how-to-copy-text-as-html-link-from-web-pages/?rel=author
There are no default features available in any browser to copy text as an HTML link from a web page. But if you are a blogger or webmaster, you certainly need to copy and paste links and text from around the web to include in your posts or pages. If you are doing it manually, you know how tiring it is: you need to copy the text first and then add the web page link manually using the anchor (<A> </A>) tag. But there is help available in the form of browser add-ons if you use a browser like Mozilla Firefox. Copy As HTML Link is one of the most useful add-ons available for Firefox for copying text as an HTML link from any web page, and it does exactly what its name suggests. The add-on has two useful features.

### 1. Copy Text as HTML Link

Creates an HTML link to the current page using the selected text and copies it into the clipboard. You can paste this into the post or page where you want to add the text along with the link. See the screenshot. The copied text will be converted to an HTML link. See the HTML source below.

<a href="https://www.globinch.com/how-to-gmails-undo-send-mail-up-to-30-seconds/"> You can recall the email </a>

### 2. Copy any link with text

This is another interesting feature. Right-click on any link (without having to select it, which can be very difficult), select Copy as HTML Link, and you'll copy the link text together with the link destination as an HTML link. See the screenshot.

Copy As HTML Link is a real time saver: it lets you copy HTML links on the fly and spares you the manual creation of HTML links for text.
2021-12-02 14:56:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30712249875068665, "perplexity": 2174.531597497417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00009.warc.gz"}
https://math.stackexchange.com/questions/1867754/the-difference-between-taylor-and-laurent-expansions-for-holomorphic-functions
# The difference between Taylor and Laurent expansions for Holomorphic functions

I have encountered 2 similar but different theorems on expansions of holomorphic functions into power series, but am not sure how exactly they differ. Is it correct that for any $f: U \rightarrow \mathbb{C}$ holomorphic on $U$ and any closed ball $B(z_0, r)$ completely contained in $U$, $f$ is identified with a unique Taylor series on the ball? And is it also correct that in the specific case that $U$ is a "ring" $\{z\space|\space r<|z|<R\}$, $f$ is representable on the entire ring by a unique Laurent series, but may not be representable on it by a single Taylor series?

• Yes and yes. – zhw. Jul 22 '16 at 17:24
• (For a domain $r < |z| < R$ we say "annulus": every Laurent series has an annulus as its domain of convergence.) – reuns Jul 22 '16 at 17:25

I'm not quite sure what's going on in your first statement, with a set $U$ and closed balls inside $U$. All that matters is that $f$ be holomorphic on an open disk centered at the point in question - this is the disk on which the power series representation of $f$ will converge. Maybe you are worried about the kind of convergence? For both Taylor and Laurent series of holomorphic functions, the convergence is uniform on compact subsets of the disk/annulus. Laurent series exist when $f$ is holomorphic on the annulus. It does not have to be holomorphic on the disk removed from the interior, and if I remember correctly, the removed disk could even be a point, i.e. $r=0$ is okay, and $R = \infty$ is okay as well.
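For concreteness (this worked example is an addition, not from the original post): on the annulus $0 < |z| < 1$ the function $\frac{1}{z(1-z)}$ is holomorphic, and partial fractions plus the geometric series give its Laurent expansion

$$\frac{1}{z(1-z)} = \frac{1}{z} + \frac{1}{1-z} = \frac{1}{z} + \sum_{n=0}^{\infty} z^{n}, \qquad 0 < |z| < 1.$$

The single negative-power term $1/z$ is exactly what a Taylor series cannot produce, which is why no Taylor series represents this function on the whole punctured disk.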
2019-10-14 01:47:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865329623222351, "perplexity": 112.96615477078514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00014.warc.gz"}
https://answers.ros.org/question/341670/subscribing-to-tfs-transformation-change/
# subscribing to tf's transformation change

Is there a way I could somehow subscribe to a transformation update? I.e., I want to write a callback function that runs each time there is an update of the transformation from source_frame to target_frame. The only way I could think of is polling, or subscribing to /tf and filtering myself - both sound bad to me.

To avoid an xy-problem, could you perhaps add a little info on why you want to do this? There may be easier (and supported) ways to do what you actually want to do. (2020-01-14 05:30:13 -0600)

With as generic a statement of your problem as this, you are right that there are limited solutions. However, as @gvdhoorn suggests, it's likely that if you define your problem more clearly there might be a better solution. For example, a common tool for receiving data with low latency is a tf2_ros::MessageFilter; if you're processing data and need to wait for current information, it will do all of that for you.

"The only way I could think of is polling, or subscribing to /tf and filtering myself - both sound bad to me."

This depends on what metric you use to define "bad": are you looking for minimum latency, minimum computational resources, or minimal network resources? What are the rates of all components in your system? In most scenarios low-rate polling will have by far the lowest CPU overhead, but will have higher latency. Similarly, do you really want an update every time a transform updates? /tf topics may come in at 1000 Hz. Is there not a minimum threshold for the update, etc.? An example of doing something like this is already implemented in the tf2_web_republisher.

In TF, "changes" are distributed by broadcasting frames, which are essentially publications on the /tf topic. There is no update of the TF tree in between transforms (which I get the feeling you are somewhat expecting). Lookups for frames at timepoints which fall between two updates are interpolated (but of course only when actually requested). So a naive approach would indeed be to subscribe to /tf yourself and then pick out the frames you're interested in. There is no infrastructure available that could do the filtering for you afaik, but tf/tfMessage is relatively trivial: it consists of a list of geometry_msgs/TransformStamped, which have child_frame_id and frame_id (in the header).

Thanks. Just making sure - tf is a pretty high-rate topic, so subscribing to it directly will be quite wasteful, right? (2020-01-14 09:58:05 -0600)

It'll be no different from instantiating a tf2_ros::TransformListener. (2020-01-14 10:09:49 -0600)

I think subscribing to the TF topic is the right answer. The Listeners are also doing that and then dumping the results into a buffer used to get transforms and interpolate. If you want to trigger something based on an update of a specific frame, this seems to be the clearest way to handle it. (2020-01-14 15:41:48 -0600)
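A minimal sketch of the "subscribe to /tf and filter yourself" approach discussed above (an illustrative addition; the frame names are hypothetical, and the message type assumes ROS 1 with rospy):

import rospy
from tf2_msgs.msg import TFMessage  # the message type published on /tf

SOURCE_FRAME = "base_link"  # hypothetical child frame, adjust to your setup
TARGET_FRAME = "odom"       # hypothetical parent frame

def tf_callback(msg):
    # /tf batches several transforms per message; pick out the pair we care about.
    for transform in msg.transforms:
        if (transform.header.frame_id == TARGET_FRAME
                and transform.child_frame_id == SOURCE_FRAME):
            rospy.loginfo("transform updated: %s", transform.transform)

rospy.init_node("tf_watcher")
rospy.Subscriber("/tf", TFMessage, tf_callback)
rospy.spin()

Note that this only reacts to a directly broadcast parent-child edge in the tree; a transform composed across several intermediate frames would require watching each edge on the chain, which is essentially what the Listener's buffer does for you.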
2020-01-21 20:54:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22256307303905487, "perplexity": 1470.549738688166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00096.warc.gz"}