url (stringlengths 14–2.42k) | text (stringlengths 100–1.02M) | date (stringlengths 19–19) | metadata (stringlengths 1.06k–1.1k)
---|---|---|---
https://physics.stackexchange.com/questions/536643/quantum-gate-fswap-acting-on-two-fermion-states/536648 | # Quantum gate fSWAP acting on two-fermion states
When we have identical particles which are fermions, any exchange (swap) of 2 states introduces a minus sign. In the paper https://arxiv.org/abs/1807.07112, Eq. (7) aims to represent that operation, fSWAP, via the matrix
$$\mathrm{fSWAP}= \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 0& 0 & 0& -1 \end{pmatrix}$$
Now, if I have the state $$(a\ b\ c\ d)^T = a|00\rangle + b|01\rangle + c|10\rangle + d|11\rangle$$ representing 2 identical fermions, then a swap among the two of them would produce the state $$(-1)·[a|00\rangle + c|01\rangle + b|10\rangle + d|11\rangle] = -(a\ c\ b\ d)^T$$. Nevertheless, that matrix doesn't give you this but $$(a\ c\ b\ -d)^T$$. So, what am I not getting?
• Fig 7 in the paper is too complicated; it should work with 2 CNOT gates. – Norbert Schuch Mar 16 at 21:02
• @NorbertSchuch Do you mean that instead of a SWAP and a controlled-Z I could simply use 2 CNOTs? How? – Vicky Mar 17 at 1:27
• With 2 CNOTs (+single-qubit gates), you can build the iSWAP gate (see e.g. arxiv.org/abs/quant-ph/0209035), and the iSWAP equals the fSWAP up to Z rotations (by $\pi/2$) on both qubits. – Norbert Schuch Mar 17 at 10:30
The notation $$\vert0\rangle$$ and $$\vert1\rangle$$ denotes a mode with zero or one fermions, respectively.
Then, $$\vert ij\rangle$$ denotes two modes, where $$i$$ denotes the number of fermions in the first mode and $$j$$ the number of fermions in the second mode.
So if you apply the fSWAP, what you swap is whatever is in the first and the second mode. But this means you only exchanged two fermions if there are two fermions, i.e., if you were in the state $$\vert11\rangle$$ initially. This is the only case in which you should get a minus sign, and this is what the fSWAP gate does. | 2020-06-04 06:00:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9468389749526978, "perplexity": 865.3864343856683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00250.warc.gz"} |
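A quick numerical check of this answer, as a minimal NumPy sketch: applying the fSWAP matrix to each occupation-number basis state shows that only $$\vert11\rangle$$, the one case in which two fermions are actually exchanged, picks up the minus sign.

```python
import numpy as np

# fSWAP in the occupation-number basis {|00>, |01>, |10>, |11>}
fswap = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, -1]])

basis = {"|00>": [1, 0, 0, 0], "|01>": [0, 1, 0, 0],
         "|10>": [0, 0, 1, 0], "|11>": [0, 0, 0, 1]}
for label, vec in basis.items():
    print(label, "->", fswap @ np.array(vec))
# Only |11>, i.e. one fermion in each mode, acquires the factor -1
```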
http://openstudy.com/updates/55f9ba6de4b0b96d8936c3d1 | ## anonymous one year ago Y=6/(3x-2) Find the gradient of the curve at the point where x = 2.
1. IrishBoy123
for this? $y = \frac {6}{3x-2}$ how did you try to do this?!
2. anonymous
actually i have a diagram wait .
3. anonymous
4. IrishBoy123
so what are you actually learning? some context would be great :p | 2016-10-23 06:29:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.569854736328125, "perplexity": 1670.766079253182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719155.26/warc/CC-MAIN-20161020183839-00290-ip-10-171-6-4.ec2.internal.warc.gz"} |
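For reference, a minimal SymPy sketch of the gradient being asked for above: differentiate $y = \frac{6}{3x-2}$ and evaluate the derivative at $x = 2$.

```python
import sympy as sp

x = sp.symbols('x')
y = 6 / (3*x - 2)

dy_dx = sp.diff(y, x)     # -18/(3*x - 2)**2
print(dy_dx)
print(dy_dx.subs(x, 2))   # -9/8
```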
https://jeremykun.com/tag/experiments/ | # A Motivation for Quantum Computing
Quantum mechanics is one of the leading scientific theories describing the rules that govern the universe. Its discovery and formulation was one of the most important revolutions in the history of mankind, contributing in no small part to the invention of the transistor and the laser.
Here at Math ∩ Programming we don’t put too much emphasis on physics or engineering, so it might seem curious to study quantum physics. But as the reader is likely aware, quantum mechanics forms the basis of one of the most interesting models of computing since the Turing machine: the quantum circuit. My goal with this series is to elucidate the algorithmic insights in quantum algorithms, and explain the mathematical formalisms while minimizing the amount of “interpreting” and “debating” and “experimenting” that dominates so much of the discourse by physicists.
Indeed, the more I learn about quantum computing the more it’s become clear that the shroud of mystery surrounding quantum topics has a lot to do with their presentation. The people teaching quantum (writing the textbooks, giving the lectures, writing the Wikipedia pages) are almost all purely physicists, and they almost unanimously follow the same path of teaching it.
Scott Aaronson (one of the few people who explains quantum in a way I understand) describes the situation superbly.
There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis that these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.
The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core – namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.
Indeed, the sequence of experiments and debate has historical value. But the mathematics needed to have a basic understanding of quantum mechanics is quite simple, and it is often blurred by physicists in favor of discussing interpretations. To start thinking about quantum mechanics you only need a healthy dose of linear algebra, and most of it we’ve covered in the three linear algebra primers on this blog. More importantly for computing-minded folks, one only needs a basic understanding of quantum mechanics to understand quantum computing.
The position I want to assume on this blog is that we don’t care about whether quantum mechanics is an accurate description of the real world. The real world gave an invaluable inspiration, but at the end of the day the mathematics stands on its own merits. The really interesting question to me is how the quantum computing model compares to classical computing. Most people believe it is strictly stronger in terms of efficiency. And so the murky depths of the quantum swamp must be hiding some fascinating algorithmic ideas. I want to understand those ideas, and explain them up to my own standards of mathematical rigor and lucidity.
So let’s begin this process with a discussion of an experiment that motivates most of the ideas we’ll need for quantum computing. Hopefully this will be the last experiment we discuss.
## Shooting Photons and The Question of Randomness
Does the world around us have inherent randomness in it? This is a deep question open to a lot of philosophical debate, but what evidence do we have that there is randomness?
Here’s the experiment. You set up a contraption that shoots photons in a straight line, aimed at what’s called a “beam splitter.” A beam splitter seems to have the property that when photons are shot at it, they will either be reflected at a 90 degree angle or stay in a straight line, each with probability 1/2. Indeed, if you put little photon receptors at the end of each possible route (straight or up, as below) to measure the number of photons that end at each receptor, you’ll find that on average half of the photons went up and half went straight.
The triangle is the photon shooter, and the camera-looking things are receptors.
If you accept that the photon shooter is sufficiently good and the beam splitter is not tricking us somehow, then this is evidence that the universe has some inherent randomness in it! Moreover, the probability that a photon goes up or straight seems to be independent of what other photons do, so this is evidence that whatever randomness we’re seeing follows the classical laws of probability. Now let’s augment the experiment as follows. First, put two beam splitters on the corners of a square, and mirrors at the other two corners, as below.
The thicker black lines are mirrors which always reflect the photons.
This is where things get really weird. If you assume that the beam splitter splits photons randomly (as in, according to an independent coin flip), then after the first beam splitter half go up and half go straight, and the same thing would happen after the second beam splitter. So the two receptors should measure half the total number of photons on average.
But that’s not what happens. Rather, all the photons go to the top receptor! Somehow the “probability” that the photon goes straight or up in the first beam splitter is connected to the probability that it goes straight or up in the second. This seems to be a counterexample to the claim that the universe behaves on the principles of independent probability. Obviously there is some deeper mystery at work.
## Complex Probabilities
One interesting explanation is that the beam splitter modifies something intrinsic to the photon, something it carries with it until the next beam splitter. You can imagine the photon is carrying information as it shambles along, but regardless of the interpretation it can’t follow the laws of classical probability.
The simplest classical probability explanation would go something like this:
There are two states, RIGHT and UP, and we model the state of a photon by a probability distribution $(p, q)$ such that the photon has a probability $p$ of being in state RIGHT and a probability $q$ of being in state UP, and like any probability distribution $p + q = 1$. A photon hence starts in state $(1,0)$, and the process of traveling through the beam splitter is the random choice to switch states. This is modeled by multiplication by a particular so-called stochastic matrix (which just means the rows sum to 1)
$\displaystyle A = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$
Of course, we chose this matrix because when we apply it to $(1,0)$ and $(0,1)$ we get $(1/2, 1/2)$ for both outcomes. By doing the algebra, applying it twice to $(1,0)$ will give the state $(1/2, 1/2)$, and so the chance of ending up in the top receptor is the same as for the right receptor.
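As an illustrative sketch, the same classical calculation in NumPy: applying the stochastic matrix once or twice to the initial state $(1,0)$ gives a 50/50 split at the receptors.

```python
import numpy as np

# Stochastic "beam splitter" acting on the distribution (p_right, p_up)
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])

x = np.array([1.0, 0.0])   # the photon starts heading RIGHT with certainty
print(A @ x)               # after one splitter:  [0.5 0.5]
print(A @ A @ x)           # after two splitters: [0.5 0.5]
```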
But as we already know this isn’t what happens in real life, so something is amiss. Here is an alternative explanation that gives a nice preview of quantum mechanics.
The idea is that, rather than have the state of the traveling photon be a probability distribution over RIGHT and UP, we have it be a unit vector in a vector space (over $\mathbb{C}$). That is, now RIGHT and UP are the (basis) unit vectors $e_1 = (1,0), e_2 = (0,1)$, respectively, and a state $x$ is a linear combination $c_1 e_1 + c_2 e_2$, where we require $\left \| x \right \|^2 = |c_1|^2 + |c_2|^2 = 1$. And now the “probability” that the photon is in the RIGHT state is the square of the coefficient for that basis vector $p_{\text{right}} = |c_1|^2$. Likewise, the probability of being in the UP state is $p_{\text{up}} = |c_2|^2$.
This might seem like an innocuous modification — even a pointless one! — but changing the sum (or 1-norm) to the Euclidean sum-of-squares (or the 2-norm) is at the heart of why quantum mechanics is so different. Now rather than have stochastic matrices for state transitions, which are defined the way they are because they preserve probability distributions, we use unitary matrices, which are those complex-valued matrices that preserve the 2-norm. In both cases, we want “valid states” to be transformed into “valid states,” but we just change precisely what we mean by a state, and pick the transformations that preserve that.
In fact, as we’ll see later in this series using complex numbers is totally unnecessary. Everything that can be done with complex numbers can be done without them (up to a good enough approximation for computing), but using complex numbers just happens to make things more elegant mathematically. It’s the kind of situation where there are more and better theorems in linear algebra about complex-valued matrices than real valued matrices.
But back to our experiment. Now we can hypothesize that the beam splitter corresponds to the following transformation of states:
$\displaystyle A = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$
We’ll talk a lot more about unitary matrices later, so for now the reader can rest assured that this is one. And then how does it transform the initial state $x =(1,0)$?
$\displaystyle y = Ax = \frac{1}{\sqrt{2}}(1, i)$
So at this stage the probability of being in the RIGHT state is $1/2 = (1/\sqrt{2})^2$ and the probability of being in state UP is also $1/2 = |i/\sqrt{2}|^2$. So far it matches the first experiment. Applying $A$ again,
$\displaystyle Ay = A^2x = \frac{1}{2}(0, 2i) = (0, i)$
And the photon is in state UP with probability 1. Stunning. This time Science is impressed by mathematics.
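The same two-splitter calculation in code (an illustrative sketch mirroring the matrices above): the complex amplitudes interfere, and after the second splitter the UP probability is 1.

```python
import numpy as np

# Unitary beam splitter acting on complex amplitudes (c_right, c_up)
A = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

x = np.array([1, 0], dtype=complex)   # start in the RIGHT state
y = A @ x
print(np.abs(y) ** 2)                 # [0.5 0.5] after one splitter
print(np.abs(A @ y) ** 2)             # [0. 1.]   every photon reaches the UP receptor
```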
Next time we’ll continue this train of thought by generalizing the situation to the appropriate mathematical setting. Then we’ll dive into the quantum circuit model, and start churning out some algorithms.
Until then!
[Edit: Actually, if you make the model complicated enough, then you can achieve the result using classical probability. The experiment I described above, while it does give evidence that something more complicated is going on, it does not fully rule out classical probability. Mathematically, you can lay out the axioms of quantum mechanics (as we will from the perspective of computing), and mathematically this forces non-classical probability. But to the best of my knowledge there is no experiment or set of experiments that gives decisive proof that all of the axioms are necessary. In my search for such an experiment I asked this question on stackexchange and didn’t understand any of the answers well enough to paraphrase them here. Moreover, if you leave out the axiom that quantum circuit operations are reversible, you can do everything with classical probability. I read this somewhere but now I can’t find the source 😦
One consequence is that I am more firmly entrenched in my view that I only care about quantum mechanics in how it produced quantum computing as a new paradigm in computer science. This paradigm doesn’t need physics at all, and apparently the motivations for the models are still unclear, so we just won’t discuss them any more. Sorry, physics lovers.] | 2020-11-30 23:23:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7873216867446899, "perplexity": 392.25985151703134}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141515751.74/warc/CC-MAIN-20201130222609-20201201012609-00624.warc.gz"} |
https://tex.stackexchange.com/questions/142632/multiple-bibliographies-with-local-links-global-labels-also-global-bibliograph | # Multiple bibliographies with local links, global labels. Also global bibliography
How can I have a bibliography per section which shows only the references cited within the section? At the end, I also want a global bibliography which includes all cited references in any section.
As far as labels go, I want them global. So if labels are numeric, then [1] can appear in multiple sections but it always corresponds to the [1] that is in the global bibliography. If the labels are [Author00] then that same label would appear in multiple locations, consistently referring to the same bibitem.
Finally, there is the question of the hyperlink attached to the label. Should it go to the local bibliography or to the global bibliography? For now, I prefer local, but it would be nice to have this configurable.
If it matters, I want to do this with a single bibtex file. I've seen piecemeal answers here and there, but not much on what happens when you click the hyperlink.
With each section corresponding to a new reference section, you get "local" labels pointing to the local bibliographies.
With each section corresponding to a new reference segment, you get "global" labels pointing to the first instance the corresponding bibliography entry is printed.
Neither case gives you the desired result. The document below demonstrates how you can modify some internals to obtain the desired links with reference segments. The new boolean flag anchorsegments targets the local bibliographies when true and the global bibliography otherwise.
\documentclass{article}
\usepackage[refsegment=section]{biblatex}
\usepackage{hyperref}
\newbool{anchorsegments}
\booltrue{anchorsegments}
\makeatletter
\AtBeginDocument{%
\ifbool{anchorsegments}
{\long\def\blx@bibhyperref[#1]#2{%
\blx@sfrest
#2%
\blx@sfrest
#2%
\protected\long\def\blx@imc@bibhypertarget#1#2{%
\blx@sfsave\hyper@natanchorstart{\the\c@refsection:\the\c@refsegment:#1}%
\blx@sfrest
#2%
\blx@sfsave\hyper@natanchorend\blx@sfrest}%
\protected\def\blx@anchor{%
\xifinlist
{\the\c@refsection @\the\c@refsegment @\abx@field@entrykey}
{\blx@anchors}
{}
{\blx@anchors}
{\the\c@refsection @\the\c@refsegment @\abx@field@entrykey}%
\hyper@natanchorstart{%
\the\c@refsection @\the\c@refsegment @\abx@field@entrykey}%
\hyper@natanchorend}}%
\AtNextBibliography{\let\blx@anchor\relax}%
\subsection*{Local references}}}}
\makeatother
\begin{document}
\section{Title}
Filler \parencite{companion,markey,knuth:ct}.
\newpage
\section{Title}
Filler \parencite{markey,bertram,companion}.
Here local anchors are obtained by inserting \the\c@refsegment into link identifiers so that they are specific to both the reference section and segment. Global anchors are achieved by avoiding anchor definitions in each of the local bibliographies via \AtNextBibliography{\let\blx@anchor\relax}. | 2019-08-20 08:23:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7210584282875061, "perplexity": 1659.5182364595123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315258.34/warc/CC-MAIN-20190820070415-20190820092415-00161.warc.gz"} |
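For comparison, here is a minimal refsegment-only setup: a sketch that leaves out the link-target patching above and simply prints one bibliography per section plus a global one. It assumes the biblatex-examples.bib database that the cited keys (companion, markey, knuth:ct, bertram) come from, and the exact option list is an assumption.

```latex
\documentclass{article}
% defernumbers keeps numeric labels globally consistent across the local lists
\usepackage[refsegment=section,defernumbers=true]{biblatex}
\usepackage{hyperref}
\addbibresource{biblatex-examples.bib}

\begin{document}

\section{Title}
Filler \parencite{companion,markey,knuth:ct}.
% local list: only entries cited in the current reference segment
\printbibliography[segment=\therefsegment,heading=subbibliography]

\section{Title}
Filler \parencite{markey,bertram,companion}.
\printbibliography[segment=\therefsegment,heading=subbibliography]

% global bibliography containing every cited entry
\printbibliography

\end{document}
```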
https://www.tutorialspoint.com/delete-function-in-php | # delete() function in PHP
PHP does not actually provide a working delete() function; the manual entry with that name is only a placeholder that points to unlink(), which is the function that deletes a file. The path of the file to be deleted is specified as a parameter.
## Syntax
unlink(file_path)
## Parameters
• file_path − Specify the path of the file to be deleted.
## Return
The unlink() function returns:
• True, on success
• False, on failure
## Example
The following is an example. This deletes the file “amit.txt” specified as a parameter.
<?php
var_dump(unlink("E:/list/amit.txt")); // prints bool(true) if the file was deleted
?>
## Output
bool(true)
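A slightly more defensive variant (same hypothetical path as above) checks that the file exists before attempting to remove it:

```php
<?php
$path = "E:/list/amit.txt";

if (file_exists($path) && unlink($path)) {
    echo "Deleted $path";
} else {
    echo "Could not delete $path";
}
?>
```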
karthikeya Boyini
I love programming (: That's all I know | 2023-03-22 06:05:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3495207726955414, "perplexity": 14064.499144365931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00477.warc.gz"} |
https://escholarship.org/uc/item/12z3z7vg | Nearly Linear-Work Algorithms for Mixed Packing/Covering and Facility-Location Linear Programs
## Nearly Linear-Work Algorithms for Mixed Packing/Covering and Facility-Location Linear Programs
• Author(s): Young, Neal E
## Published Web Location
https://arxiv.org/pdf/1407.3015.pdf
No data is associated with this publication.
Abstract
We describe the first nearly linear-time approximation algorithms for explicitly given mixed packing/covering linear programs, and for (non-metric) fractional facility location. We also describe the first parallel algorithms requiring only near-linear total work and finishing in polylog time. The algorithms compute $(1+\epsilon)$-approximate solutions in time (and work) $O^*(N/\epsilon^2)$, where $N$ is the number of non-zeros in the constraint matrix. For facility location, $N$ is the number of eligible client/facility pairs.
Report a problem accessing this item | 2021-05-15 11:02:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7636702656745911, "perplexity": 5069.721831490336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00303.warc.gz"} |
https://www.studyadda.com/ncert-solution/11th-chemistry-the-p-block-elements_q21/500/32422 | # Question 21) Rationalise the given statements and give chemical reactions. (i) Lead (II) chloride reacts with $Cl_2$ to give $PbCl_4$. (ii) Lead (IV) chloride is highly unstable towards heat. (iii) Lead is known not to form an iodide, $PbI_4$.
(i) On account of the inert pair effect, $PbCl_2$ is more stable than $PbCl_4$. Thus, $PbCl_2$ does not react with chlorine to form $PbCl_4$. (ii) On account of the greater stability of the +2 state over the +4 state, $PbCl_4$ decomposes on heating into $PbCl_2$: $PbCl_4 \xrightarrow{\text{Heat}} PbCl_2 + Cl_2$ (iii) As $Pb^{4+}$ is an oxidising agent while the $I^-$ ion is a reducing agent, the formation of $PbI_4$ is not possible: $Pb^{4+} + 4I^- \to PbI_2 + I_2$ Thus, $PbI_4$ does not exist. | 2020-09-21 03:20:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5762506723403931, "perplexity": 2796.813386938983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198887.3/warc/CC-MAIN-20200921014923-20200921044923-00270.warc.gz"}
http://mathhelpforum.com/number-theory/59701-congruence-print.html | # congruence
• November 15th 2008, 11:04 AM
bill77
congruence
Is there anyone who can help me where to start. I'm having a hard time to figure this problem out. Thanks
Show that if a, b, and m are integers such that m ≥ 2 and a ≡ b( mod m), then gcd(a,m) = gcd(b,m).
• November 15th 2008, 11:30 AM
o_O
$a \equiv b \ (\text{mod } m) \ \Leftrightarrow a = b + km$ for some integer k.
Let $d = (a,m)$. Since $d \mid a$ and $d \mid m$, it follows from $a = b+km$ that $d \mid b$ and is thus a common divisor of $b$ and $m$.
Let $c$ be any common divisor of $b$ and $m$. With a similar argument, we have that $c \mid a$. By definition, since $d$ is the greatest common divisor of $a$ and $m$, we have that $c \leq d$.
This means that any common divisor of $b$ and $m$ is at most $d$. Can you conclude?
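A minimal numerical sanity check of the statement, with arbitrary values $m = 12$, $b = 7$ (a Python sketch, not part of the thread):

```python
from math import gcd

m, b = 12, 7
for k in range(5):
    a = b + k * m                      # a is congruent to b modulo m
    assert gcd(a, m) == gcd(b, m)
print("gcd(a, m) equals gcd(b, m) for every tested a congruent to b mod m")
```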
• November 15th 2008, 12:14 PM
bill77
thanks for the help, i now know what to conclude:) | 2015-04-25 06:54:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414550065994263, "perplexity": 135.40336901314816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246647589.15/warc/CC-MAIN-20150417045727-00135-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/geometry/CLONE-df935a18-ac27-40be-bc9b-9bee017916c2/chapter-4-section-4-1-properties-of-a-parallelogram-exercises-page-190/44 | ## Elementary Geometry for College Students (7th Edition)
We consider a rectangle where the sides are a, b, c, and d and where the diagonals are A and B. By the Pythagorean theorem, each diagonal is the hypotenuse of a right triangle formed with two adjacent sides, so we obtain: $A^2 = a^2 + b^2$ And: $B^2 = c^2 + d^2$ Adding these gives: $A^2 + B^2 = a^2 + b^2 + c^2 + d^2$ Thus, the proof is complete. | 2021-01-25 08:03:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.563818633556366, "perplexity": 338.19665236608745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565376.63/warc/CC-MAIN-20210125061144-20210125091144-00452.warc.gz"}
https://hackage.haskell.org/package/hex-text-0.1.0.4/docs/Text-Hex.html | hex-text-0.1.0.4: ByteString-Text hexidecimal conversions
Text.Hex
Synopsis
# Encoding and decoding
Encodes a byte string as a hexadecimal number represented in text. Each byte of the input is converted into two characters in the resulting text.
>>> (encodeHex . ByteString.singleton) 192
"c0"
>>> (encodeHex . ByteString.singleton) 168
"a8"
>>> (encodeHex . ByteString.pack) [192, 168, 1, 2]
"c0a80102"
Text produced by encodeHex can be converted back to a ByteString using decodeHex.
The lazy variant of encodeHex is lazilyEncodeHex.
Decodes hexadecimal text as a byte string. If the text contains an even number of characters and consists only of the digits 0 through 9 and letters a through f, then the result is a Just value.
Unpacking the ByteString in the following examples allows for prettier printing in the REPL.
>>> (fmap ByteString.unpack . decodeHex . Text.pack) "c0a80102"
Just [192,168,1,2]
If the text contains an odd number of characters, decoding fails and produces Nothing.
>>> (fmap ByteString.unpack . decodeHex . Text.pack) "c0a8010"
Nothing
If the text contains non-hexadecimal characters, decoding fails and produces Nothing.
>>> (fmap ByteString.unpack . decodeHex . Text.pack) "x0a80102"
Nothing
The letters may be in either upper or lower case. This next example therefore gives the same result as the first one above:
>>> (fmap ByteString.unpack . decodeHex . Text.pack) "C0A80102"
Just [192,168,1,2]
lazilyEncodeHex is the lazy variant of encodeHex.
With laziness, it is possible to encode byte strings of infinite length:
>>> (LazyText.take 8 . lazilyEncodeHex . LazyByteString.pack . cycle) [1, 2, 3]
"01020301"
# Types
type Text = Text
Strict text
type LazyText = Text
Lazy text
Strict byte string
Lazy byte string | 2022-01-22 22:00:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.368380069732666, "perplexity": 13997.033478197418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303884.44/warc/CC-MAIN-20220122194730-20220122224730-00254.warc.gz"} |
https://www.physicsforums.com/threads/physical-meaning-of-kdv-equation.809586/ | # Physical meaning of KdV equation
1. Apr 20, 2015
### fian
Here is one of the KdV form
$u_t + u_x + u u_x + u_{xxx} = 0$
Where u is elevation, x is spatial variable, and t is time variable. The first two terms describe the linear water wave, the third term represent the nonlinear effect, and the last term is the dispersion.
From what I understand, the nonlinear term explains the energy focusing that keeps the shape of the wave packet. But how does u multiplied by u_x represent the energy focusing? For example, in the predator-prey model the nonlinear term xy explains the interaction between the two species, where x and y are the numbers of predators and prey respectively.
Also, how does the last term, the third derivative of u with respect to x, explain the dispersion which is the deformation of the waves?
2. Apr 20, 2015
### fian
Sorry, It seems like i accidentally posted it twice, it is because of the low connection.
3. Apr 20, 2015
### fian
Can anybody please help me to understand the physical interpretation of kdv eq.?
4. Apr 23, 2015
### bigfooted
$uu_x = (\frac{1}{2}u^2)_x$
so it represents convection of kinetic energy. There is a link with the (inviscid) Burgers equation.
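A tiny symbolic check of that identity (a SymPy sketch):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

# The nonlinear term u*u_x equals d/dx (u**2 / 2),
# which is why it reads as convection of kinetic energy.
lhs = u * sp.diff(u, x)
rhs = sp.diff(u**2 / 2, x)
print(sp.simplify(lhs - rhs))   # 0
```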
5. Apr 23, 2015
### fian
Thank you for replying.
It gives me some hints to study more.
This is how i understand it. Let u be the elevetion of wave. u^2 represents the interaction of waves which causes energy transfer among the waves. Am I correct? | 2017-12-17 14:04:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4791027903556824, "perplexity": 911.0461711299955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596051.82/warc/CC-MAIN-20171217132751-20171217154751-00139.warc.gz"} |
http://mathoverflow.net/questions/90490/blow-up-along-a-subscheme-and-along-its-associated-reduced-closed-subscheme | # Blow-up along a subscheme and along its associated reduced closed subscheme
Let $X$ be a noetherian scheme and let $Y$ be a closed subscheme of $X$. What relation is there between $\mathrm{Bl} _ {Y}(X)$ and $\mathrm{Bl} _{ Y _{\mathrm{red}}}(X)$ ?
Thanks.
-
There is no map from one blow up to the other, and definitely not an isomorphism. Please see my comments to J.C. Ottems answer.
However, if you replace radical by integral closure, then everything is fine.
Here's what I mean, if $I$ is an ideal and $J$ is its integral closure, then you always have an everywhere defined map
$$Bl_J X \to Bl_I X.$$
This need not be an isomorphism, indeed the integral closure of $(x^2, y^2)$ is $(x^2, xy, y^2)$. The blow up of the latter ideal is the normalization of the blow up of the former.
The other way you can get a map is if $J = \sqrt I$, and also if we can write $I = J \cdot \mathfrak{a}$ for some other ideal $\mathfrak{a}$. Then the blow up of $I$ is always the blow up of $\mathfrak{a}$ pulled back to $Bl_J X$.
In general, you should expect no relation between the blow up of two ideals with the same radical unless there is some integral closure relation between them and/or one ideal is the product of the other (and something else).
-
In fact, this characterises all cases, according to projecteuclid.org/euclid.ijm/1258138260. – Norbert Pintye Nov 26 '14 at 21:44
Thanks, I didn't know about that paper. – Karl Schwede Nov 27 '14 at 15:11
In general they can be very different. For example take the subscheme $Y$ of $\mathbb{A}^2$ given by the ideal $(x^2,y)$. Here the blow up is covered by the two open subsets
$$U = \mbox{Spec} k[x, y][t]/(y − x^2t),\qquad V = \mbox{Spec} k[x, y][s]/(ys − x^2)$$
In particular the blow up of $Y$ is singular, whereas the blow-up of $\mathbb{A}^2$ at a point is not.
In general, even if you assume that both blow-ups are smooth, all sorts of things can happen depending on how complicated the ideal sheaf is. For example the blow-ups can have a different number of exceptional divisors and not even be related by a finite map. Even worse, every birational morphism $X'\to X$ is the blow-up of $X$ along some ideal sheaf.
-
Thanks for the example. But at least, there is a natural map from one to another? – gio Mar 7 '12 at 21:25
I believe there is no map from one to the other. In the example J.C. Ottem gives, the blow up of $(x^2,y)$ can be obtained as follows. Blow up $(x,y)$, then blow up another point on that first blowup (the origin on one of the usual charts), and then contract the first exceptional curve. There's no map between $Bl_{(x^2,y)}X$ and $Bl_{(x,y)} X$, at least no map over $X$.$$\text{ }$$ Just because you have an inclusion of Rees algebras, does not mean that there is an everywhere defined map of the blow-ups. In the given example, one of the points of the overring contracts to an irrelevant ideal. – Karl Schwede Mar 8 '12 at 4:39
You are right, Karl. Thanks. – J.C. Ottem Mar 8 '12 at 8:06 | 2015-07-07 13:21:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9445434212684631, "perplexity": 180.06218053607466}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099361.57/warc/CC-MAIN-20150627031819-00250-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://www.motls.blogspot.com/2006/06/superman-explains-double-slit.html | ## Tuesday, June 06, 2006 ... /////
### Superman explains double slit experiment
If you're a reader who does not know how quantum interference works, an old supersymmetric man, also known as superman with a supercharge "Q" on his shirt, explains it in this
If you have two minutes, you can also learn what is string theory from a
Sorry, Steve, I just copied the description at Google's website and of course disagree with it! :-) For those who found the previous videos too complicated or separated from reality, here's a
but it is more physical than this blog, especially in the context of mechanics. Finally, you should certainly avoid searching for
because otherwise you may find a calculus tutorial by an MIT alumnus which could be a problem. ;-)
#### snail feedback (4) :
During a single run of a double-slit experiment (leading to one more detection event), do you think that
(1) the particle actually goes through both slits
(2) the particle actually goes through only one slit
(3) the question is not allowed
(4) something else?
By the way, I do know the rules for making a quantum calculation, e.g. an irreversible interaction at the slit would destroy the quantum interference. This is a conceptual question about how you think of the physical reality between observations.
Dear mitchell,
sorry, these are somewhat verbal games. But the point of 1) and 2) is clearly designed for the reader to imagine a classical "reality" which either has a real, objective object that goes through one slit, or both slits.
But that's not how this world - a quantum world - operates. So answers 1,2) are gone.
Because I support the freedom of speech, I must also reject the answer 3).
Thankfully, you have given us another choice, option 4).
The correct answer is that a single particle must be treated as going through one slit only, but all histories contribute to the probability amplitudes via Feynman's path-integral prescription, and the resulting amplitude that has contributions from all the histories must be squared (in absolute value) and only be interpreted as the probability, and this probability is the only thing that QM (the most complete possible theory) can predict.
So physical laws can only predict statistical properties of many similar experiments, not the outcome of one particular experiment.
I can also describe the situation without Feynman's approach. The most complete knowledge about the particle is described by a wave function that is nonzero in both slits. But the wave function "is not" the particle itself and it is not a real wave. It is a tool to calculate probabilities, and the rest goes just like in the paragraphs above.
Best wishes
Lubos
Lubos:
I didn't quite understand that explanation; it sounds an awful lot like Many Worlds, but at the same time it didn't.
How would that interpretation differ from the many-histories/decoherent-histories interpretation that Hartle and Gell-Mann are supporters of, where the other worlds are actual?
Dear I Do Not Give a F***,
what I write is true and completely independent of someone's preferred interpretation as long as the interpretation is consistent with the known observations.
The wave functions interfere (i.e. add from both slits); they only determine probabilities that can only be checked when the same experiment is repeated many times; the particle is always seen at one place.
In many worlds, one imagines that all the histories with the final outcome "exist" somewhere in "parallel universes". I personally prefer consistent histories as the most comprehensive interpretation.
But once again, phenomena such as decoherence are real phenomena that exist regardless of any interpretation as long as the interpretation takes experimentally verified 25-year-old realizations into account. They can be observed, they can be calculated and predicted, and they describe many things such as the boundary between the classical and quantum intuition.
Physics is not about vacuous philosophical flapdoodle. Physics is about understanding and predicting phenomena. I told you how this should be done properly, what can be done, and what can't be done. Everything you try to add is pure rubbish and you're clearly dissatisfied only because I don't want to add any rubbish of this kind - which is too bad.
Best wishes
Lubos | 2014-12-22 23:22:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6712509393692017, "perplexity": 889.9786299382087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777295.134/warc/CC-MAIN-20141217075257-00101-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.esaral.com/q/a-show-that-the-normal-component-of-electrostatic-field-has-a-discontinuity-from-one-side-of-a-charged-surface-to-another-given-by-10903 | # (a) Show that the normal component of electrostatic field has a discontinuity from one side of a charged surface to another given by
Question:
(a) Show that the normal component of electrostatic field has a discontinuity from one side of a charged surface to another given by $\left(\overrightarrow{E_{2}}-\overrightarrow{E_{1}}\right) \cdot \hat{n}=\frac{\sigma}{\epsilon_{0}}$, where $\hat{n}$ is a unit vector normal to the surface at a point and $\sigma$ is the surface charge density at that point. (The direction of $\hat{n}$ is from side 1 to side 2.) Hence show that just outside a conductor, the electric field is $\sigma \hat{n} / \epsilon_{0}$
(b) Show that the tangential component of electrostatic field is continuous from one side of a charged surface to another. [Hint: For (a), use Gauss’s law. For, (b) use the fact that work done by electrostatic field on a closed loop is zero.]
Solution:
(a) Electric field on one side of a charged body is E1 and electric field on the other side of the same body is E2. If infinite plane charged body has a uniform thickness, then electric field due to one surface of the charged body is given by,
$\vec{E}_{1}=-\frac{\sigma}{2 \epsilon_{0}} \hat{n}$ ..(i)
Where,
$\hat{n}=$ Unit vector normal to the surface at a point
σ = Surface charge density at that point
Electric field due to the other surface of the charged body,
$\overrightarrow{E_{2}}=\frac{\sigma}{2 \epsilon_{0}} \hat{n}$ ...(ii)
Electric field at any point due to the two surfaces,
$\overrightarrow{E_{2}}-\overrightarrow{E_{1}}=\frac{\sigma}{2 \epsilon_{0}} \hat{n}+\frac{\sigma}{2 \epsilon_{0}} \hat{n}=\frac{\sigma}{\epsilon_{0}} \hat{n}$
$\left(\overrightarrow{E_{2}}-\overrightarrow{E_{1}}\right) \cdot \hat{n}=\frac{\sigma}{\epsilon_{0}}$ ..(iii)
Since inside a closed conductor, $\overrightarrow{E_{1}}=0$
$\therefore \vec{E}=\overrightarrow{E_{2}}=\frac{\sigma}{\epsilon_{0}} \hat{n}$
Therefore, the electric field just outside the conductor is $\frac{\sigma}{\epsilon_{0}} \hat{n}$.
(b) When a charged particle is moved from one point to the other on a closed loop, the work done by the electrostatic field is zero. Hence, the tangential component of electrostatic field is continuous from one side of a charged surface to the other. | 2023-03-24 06:23:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7889379262924194, "perplexity": 168.22573533885972}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00315.warc.gz"} |
http://crypto.stackexchange.com/tags/2nd-preimage-resistance/hot?filter=year | # Tag Info
## Hot answers tagged 2nd-preimage-resistance
8
With the definitions that a function $F$ is collision-resistant when a [computationally bounded] adversary can't [with sizable odds] exhibit any $(a,b)$ with $a\ne b$ and $F(a)=F(b)$; first-preimage-resistant when, given $f$ determined as $F(a)$ for an unknown random $a$, a [computationally bounded] adversary can't [with sizable odds] exhibit any $b$ with ...
8
It is neither pre-image resistant, second pre-image resistant nor collision resistant. It is easy to compute square-roots modulo a prime (assuming, of course, that a square root exists, which it will half the time). If $p = 3 \bmod 4$, then the simple formula $x^{(p+1)/4} \bmod p$ will work; for $p = 1 \bmod 4$, it's a tad more complicated but still sufficiently ...
5
Pre-image resistant but not 2nd pre-image resistant? describes the relationship between the three basic hash function security notions: Collision Resistance, Second Preimage Resistance and Preimage Resistance. In short, Collision Resistance implies Second Preimage Resistance (but not vice-versa) - there is a good diagram on page 4 of RogawayShrimpton04 that ...
5
Let me try to elaborate on their proof. Suppose you had a hash function $H$ that was second-preimage resistant but not first-preimage resistant. By showing that this leads to a contradiction, we will be showing that with second-preimage resistance, you must have first-preimage resistance. Namely, we will show that the lack of first-preimage resistance is ...
4
Preliminary: Almost the same article is available for free without breaking any law, nor downloading 5GB (formatting is shifted by at most one third of a page). It is also (as well as all other articles of IACR crypto conferences from 2000-2011) in the IACR Online Proceedings, specifically in the FSE 2008 section, but then you need to subtract about 223 from ...
4
Yes, it has happened. If you look at the SHA3 hash zoo, there are a number of hashes who has the best attack listed as "2nd preimage". One general place this can occur is if you have a hash function with a weak message compression step, but a fairly strong finalization step. Here, we might not be able to generate first preimages (because we don't know what ...
3
Given message $A$, you have to find message $B$, such that the first 64 bits (say, MSB) of their hashes collide: $$MSB_{64}(H(A)) = MSB_{64}(H(B))$$ This problem is called Second Preimage Search for the function $MSB_{64}(H)$, or Partial Second Preimage Search for the hash function $H$ alone. When $H$ is the full round SHA-1, there is no result, ...
3
While collision resistance can be defined for normal hash functions like SHA1, for target collision resistance you need a so called keyed hash function, that is a hash function that additionally to a message $m$ also takes a key $k$. The simplest way to construct a keyed hash function out of a regular one is to prepend the key in front of the message: ...
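As a rough illustration of that prepend-the-key construction (the answer is truncated above, and the helper name below is made up): $H_k(m) = H(k \| m)$. Note that with Merkle–Damgård hashes such as SHA-256 this naive construction is vulnerable to length extension, which is one reason HMAC is preferred in practice.

```python
import hashlib

def keyed_hash(key: bytes, message: bytes) -> bytes:
    # H_k(m) = H(k || m): prepend the key to the message, then hash
    return hashlib.sha256(key + message).digest()

print(keyed_hash(b"secret key", b"some message").hex())
```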
2
Take a function $H:\mathbb S\to\{0,1\}^k$ where $\mathbb S$ is a large finite subset of $\{0,1\}^*$, such that $H$ "compress data" [however this is defined], and $H$ is [conjectured] collision-resistant [thus second-preimage-resistant] and first-preimage-resistant; e.g., SHA-512, for $k=512$. Let $«0»$ and $«1»$ be two public distinct elements of \$\mathbb ...
1
The current status as of the time I write this is: There are no known attacks on second pre-images for truncated SHA-256 that are faster than brute force.
Only top voted, non community-wiki answers of a minimum length are eligible | 2014-08-30 18:15:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5508497357368469, "perplexity": 1317.443784136333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835670.21/warc/CC-MAIN-20140820021355-00429-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://xzonn.top/posts/Fundamentals-of-Air-Pollution-Control-Homework.html | • 2021-10-31 15:19
• 2022-01-10 16:13
# Machine translation of homework for *Principles of Air Pollution Prevention and Control*
## Assignment 1
### Fundamentals of Air Pollution Engineering
##### 2.2
A high-volatile bituminous coal has the following characteristics:
| Proximate analysis | | Ultimate analysis | |
|---|---|---|---|
| Fixed carbon | 54.3% | C | 74.4% |
| Volatile matter | 32.6% | H | 5.1% |
| Moisture | 1.4% | N | 1.4% |
| Ash | 11.7% | O | 6.7% |
| | | S | 0.7% |
| | | Heating value | 30.7 × 10⁶ J kg⁻¹ |
It is burned in air at an equivalence ratio of 0.85. 500 × 10⁶ W of electric power is produced with an overall process efficiency (based on the input heating value of the fuel) of 37 %.
(a) Determine the fuel and air feed rates in kg s⁻¹.
(b) Determine the product gas composition.
(c) Sulfur dioxide is removed from the flue gases with a mean efficiency of 80% and the average output of the plant is 75% of its rated capacity. What is the SO2 emission rate in metric tonnes (10³ kg) per year?
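For part (a) of Problem 2.2, a rough order-of-magnitude sketch (an assumed approach: dry air with 21 mol% O2, ash and moisture neglected in the oxygen balance):

```python
# Fuel and air feed rates for Problem 2.2(a), rough sketch
M_air = 28.97                                         # kg/kmol, dry air
M = {"C": 12.011, "H": 1.008, "O": 15.999, "S": 32.06}
w = {"C": 0.744, "H": 0.051, "O": 0.067, "S": 0.007}  # ultimate analysis, mass fractions

# Fuel feed from electrical output, overall efficiency, and heating value
P_el, eta, HV = 500e6, 0.37, 30.7e6                   # W, -, J/kg
fuel_rate = P_el / (eta * HV)                         # ~44 kg fuel per second

# Stoichiometric O2 demand (kmol O2 per kg fuel): C -> CO2, H -> H2O, S -> SO2,
# minus the oxygen already bound in the fuel
o2 = w["C"]/M["C"] + w["H"]/(4*M["H"]) + w["S"]/M["S"] - w["O"]/(2*M["O"])
stoich_air = o2 / 0.21 * M_air                        # ~10 kg air per kg fuel

# An equivalence ratio of 0.85 is fuel-lean, so the air feed exceeds stoichiometric
air_rate = fuel_rate * stoich_air / 0.85              # ~520 kg air per second
print(round(fuel_rate, 1), round(air_rate, 1))
```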
##### 2.5
Methanol shows promise as an alternate fuel that could reduce nitrogen oxide emissions. The reduction is attributed to lower flame temperatures. Compare the adiabatic flame temperature for combustion of pure methanol at φ = 1 with that of methane (Problem 2.4). Initial fuel and air temperatures are 298 K. The enthalpy of formation of liquid methanol is $\Delta h_f^\circ$ (298 K) = -239,000 J mol⁻¹.
##### 2.6
The bituminous coal of Problem 2.2 is burned in air that has been heated to 590 K. To estimate the maximum temperature in combustion, compute the adiabatic flame temperature for stoichiometric combustion assuming complete combustion. The specific heats of the coal carbon and ash may be taken as $\bar c_{pc}$ = 1810 and $\bar c_{pa}$ = 1100 J kg⁻¹ K⁻¹, respectively. The ash melts at 1500 K with a latent heat of melting of $\Delta \bar h_m$ = 140 J kg⁻¹.
##### 2.9
A fuel oil containing 87% C and 13% H has a specific gravity of 0.825 and a higher heating value of 3.82 × 10¹⁰ J m⁻³. It is injected into a combustor at 298 K and burned at atmospheric pressure in stoichiometric air at 298 K. Determine the adiabatic flame temperature and the equilibrium mole fractions of CO, CO2, H2, H2O, O2, and N2.
### Air Pollution Control Engineering
##### P66, 1
The range of droplet sizes in a cloud was determined to be as follows:
| Range of drop diameter (microns) | Number of drops |
| --- | --- |
| 5-8 | 4 |
| 8-11 | 6 |
| 11-14 | 15 |
| 14-17 | 24 |
| 17-20 | 24 |
| 20-23 | 12 |
| 23-26 | 4 |
| 26-29 | 4 |
| 29-32 | 4 |
| 32-38 | 3 |
a. Determine the number median diameter.
b. Determine the mass median diameter.
c. Determine the Sauter mean diameter.
d. What weight fraction of the sample is represented by drops greater than 20 μm in diameter?
e. What is the population density of the 20-23 μm grade?
f. Can the distribution be reasonably well described as log-normal? (Hint: try plotting Δn/Δln dp vs ln dp; also try the upper-limit function). If so, find the two constants for the distribution.
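Parts (a) through (d) can be approximated directly from the binned counts. The sketch below is hedged: it represents every drop in a bin by the bin midpoint, whereas the textbook procedure interpolates within bins, so the numbers are only indicative.

```python
# Hedged sketch for parts (a)-(d) of the drop-size-distribution problem.
bins = [(5, 8, 4), (8, 11, 6), (11, 14, 15), (14, 17, 24), (17, 20, 24),
        (20, 23, 12), (23, 26, 4), (26, 29, 4), (29, 32, 4), (32, 38, 3)]
mids = [(lo + hi) / 2 for lo, hi, _ in bins]
counts = [c for _, _, c in bins]          # 100 drops in total

def weighted_median(values, weights):
    """Value at which the cumulative weight first reaches half the total."""
    target, cum = sum(weights) / 2, 0.0
    for v, w in sorted(zip(values, weights)):
        cum += w
        if cum >= target:
            return v
    return values[-1]

nmd = weighted_median(mids, counts)                          # (a) number median diameter
mass_w = [c * d**3 for c, d in zip(counts, mids)]
mmd = weighted_median(mids, mass_w)                          # (b) mass median diameter
surf_w = [c * d**2 for c, d in zip(counts, mids)]
sauter = sum(mass_w) / sum(surf_w)                           # (c) Sauter mean diameter D32
frac_gt20 = sum(w for (lo, _, _), w in zip(bins, mass_w) if lo >= 20) / sum(mass_w)  # (d)
print(nmd, mmd, round(sauter, 1), round(frac_gt20, 3))
```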
##### P66, 5
The spray from a certain nozzle gave a drop-size distribution which was log-normal, with an AMD of 240 microns and a standard geometric dispersion of 2.00. For this spray:
a. What fraction of the total surface would be on drops between 100 and 200 microns in diameter?
b. What is the value of the “surface to diameter” mean $\bar D_{1,1}$?
c. What is the value of the maximum population density?
d. At what size does this value occur?
##### P133, 1
The particle size distribution of a certain dust, as obtained by an analysis conducted partly by a Coulter Counter and partly by an Anderson Impactor, may be represented by two straight lines on a log-probability plot. These lines intersect at 5.5 μm and 19.5% finer-than, with the Coulter portion having a σg = 0.205, and the Anderson portion a σg = 11.1, with the Anderson covering the finer range. This dust is to be collected by a device which has a grade efficiency performance given by the following equation:
$n_{M} = 1 - \exp(-2(1.28 \times 10^{-2} d_p^2)^{0.315})$
where dp is in microns. For this operation, find:
a. the “cut diameter”;
b. the overall efficiency;
c. the particle size distribution of the dust emitted;
d. the rate of emission, per 100 kg of dust fed.
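The generic procedure behind parts (a), (b) and (d) is: discretize the feed size distribution, apply the grade-efficiency equation to each size, and sum. The sketch below shows that procedure only; the bin masses in `feed` are placeholders, not the actual distribution read from the log-probability plot described in the problem.

```python
import math

def grade_eff(dp_um: float) -> float:
    """Grade efficiency from the problem statement; dp in microns."""
    return 1.0 - math.exp(-2.0 * (1.28e-2 * dp_um**2) ** 0.315)

# Placeholder feed distribution: (size in microns, mass fraction).
feed = [(1, 0.10), (3, 0.15), (5.5, 0.15), (10, 0.25), (20, 0.20), (40, 0.15)]

overall = sum(f * grade_eff(d) for d, f in feed)           # (b) overall efficiency
emitted = [(d, f * (1 - grade_eff(d))) for d, f in feed]   # (c) un-normalised emitted distribution
rate_per_100kg = 100 * (1 - overall)                       # (d) kg emitted per 100 kg fed

# (a) the "cut diameter" is the size collected with 50 % efficiency; simple bisection
lo, hi = 0.1, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if grade_eff(mid) < 0.5 else (lo, mid)
print(round(0.5 * (lo + hi), 2), round(overall, 3), round(rate_per_100kg, 2))
```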
##### P133, 4
The grade efficiency of a certain gravity (settling chamber) collector was found to be 20% on particles of a certain size and 81% on particles twice as large. Assuming the particles obey Stokes Law, what type of model might represent this collector performance? Would your answer be the same if the grade efficiencies were 25% and 50% respectively?
##### P133, 5
The power consumption of a certain collector was measured as 15 kW. It was processing a stream of gas having an average molecular weight of 32, at 300 ℉ and 15.2 psia, through a duct 18’’ by 36’’ in cross-section at an inlet velocity of 50 ft/sec.
Find: (a) the pressure drop across the collector;
(b) the number of inlet velocity heads of frictional energy loss.
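Both parts are unit-conversion arithmetic. A minimal sketch, assuming the measured 15 kW is spent entirely as flow work (ΔP = P/Q) and the gas behaves ideally:

```python
# Hedged sketch for P133, 5: pressure drop and inlet velocity heads from fan power.
R = 8.314                              # J/(mol K)
MW = 32e-3                             # kg/mol, given average molecular weight
T = (300 - 32) * 5 / 9 + 273.15        # 300 F in kelvin
p = 15.2 * 6894.76                     # 15.2 psia in Pa
duct_area = (18 * 0.0254) * (36 * 0.0254)   # 18'' x 36'' duct, m^2
v = 50 * 0.3048                        # inlet velocity, m/s

Q = duct_area * v                      # volumetric flow, m^3/s
dP = 15_000 / Q                        # Pa, part (a): assumes power = Q * dP

rho = p * MW / (R * T)                 # ideal-gas density, kg/m^3
velocity_head = 0.5 * rho * v**2       # Pa
print(round(dP), round(dP / velocity_head, 1))   # part (b): number of inlet velocity heads
```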
##### P133, 7
It has become necessary to control the emission of cement dust from the kiln of a Portland Cement plant in which the operating conditions are as follows: temperature = 250 ℉; pressure = 1 atm; feed rate to kiln = 5 tons/hr; emission rate of dust (uncontrolled) = 230 lb/ton of feed; air flow = 159,600 acf/ton of feed. The dust may be regarded as equivalent of Stairmand Fine. The emission regulations are given in Chapter 1.
a. Select some possible kinds of collection equipment which might be considered in order to meet this requirement. Indicate their relative costs and power consumption.
b. What will be the grain-loading in the feed to the collection system?
c. Could a cyclone collector be used in any way? If so, or if not, assuming the inlet duct to be 2.28 ft by 1.09 ft, and the value of N = 9 inlet velocity heads, estimate the pressure drop across the cyclone, and the power consumption for the operation.
## Assignment 2
### Cyclone Separators
##### 1
(a) A certain cyclone installation is collecting particles of sp.gr. = 2.5 using an inlet velocity of 50 ft/s. What inlet velocity would be required to collect particles of sp.gr. = 1.5 with the same grade-efficiency? How will the pressure drop compare with the original value?
(b) The cut-diameter for a Swift high-efficiency design cyclone operating under a certain set of conditions is 2.0 μm and the pressure drop is 3.0'' H2O. What would be the cut-diameter and the pressure drop for a Stairmand design of the same diameter D, operating at the same flow rate, temperature, grain-loading, etc.?
##### 2
A cyclone designed to operate at 20 ℃ with a flow rate of 10,800 std. cu.ft./min of air, collecting solid particles of 1.5 gm/cm3 density, has a cut-diameter of 1.96 μm. Estimate the collection grade-efficiency for particles of 1.96 μm if this same cyclone were operated at 200 ℃ at a flow rate of 5000 scfm, collecting the same material. The cyclone is of high-efficiency Stairmand configuration and is 5 ft in body diameter.
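One common way to attack this is similarity scaling of the cut diameter, d50 proportional to sqrt(μ/(ρp·v_in)) with the inlet velocity proportional to the actual volumetric flow. The sketch below uses that scaling plus a Lapple-type generalized grade-efficiency curve; the air viscosities and the curve form are assumptions, not data from the problem.

```python
import math

# Hedged scaling sketch: d50 ~ sqrt(mu / v_in) for a fixed cyclone geometry.
mu1, mu2 = 1.81e-5, 2.58e-5            # Pa*s, rough air viscosities at 20 C and 200 C
Q1, Q2 = 10_800, 5_000 * (473.15 / 293.15)   # actual volumetric flows (scfm -> acfm at 200 C)

d50_design = 1.96                      # um, cut diameter at design conditions
d50_new = d50_design * math.sqrt((mu2 / mu1) * (Q1 / Q2))

# Assumed generalized curve (Lapple form): eta = 1 / (1 + (d50/dp)^2)
dp = 1.96
eta = 1.0 / (1.0 + (d50_new / dp) ** 2)
print(round(d50_new, 2), round(eta, 2))
```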
### Electrostatic Precipitators
##### 1
Using the conditions and values specified for Eqn. (7.9), together with appropriate values for $C_i$, calculate values of the size-dependent term $C_i q_i^*/d_{p_i}$ in Eqn. (7.3) after 1 sec for particle sizes in the neighborhood of 0.2 μm. Show that this term goes through a minimum value. Compare the particle size at which this minimum occurs with that obtained from Eqn. (7.13).
##### 4
Refer to Example 1 in this chapter:
(a) What is the value of the effective migration velocity (or precipitation rate parameter)?
(b) What is the “cut” diameter?
(c) What value of a “mean” particle size could be used to represent the overall performance? Does this correspond to any of the “means” defined in Chapter 2? How does it compare with Cooperman's tmean given by (7.22) and (7.23)?
(d) Estimate the value of the overall collection efficiency if the rate of gas flow were to double during operation.
##### 7
The fly-ash from a pulverized coal fired furnace has a particle-size-distribution such as given in Feldman's table just below Eqn. (7.19), and a density of 2.5 gm/cm3, It is emitted at the rate of 170 lb/ton of coal fired in a flue gas stream (Mol. Wt. = 28.1) of 14.7 × 106 cu.ft./hr at 300 ℉ and 1 atm. A collection system is to be designed to meet the emission regulation of 0.10 lb/million BTU. The coal used has a heating value of 12,800 BTU/lbm and is fired at the rate of 35 tons/hr. Consider the use of an electrostatic precipitator (either with or without a primary collector ahead of it) for this purpose. Estimate the collecting surface required and propose an arrangement for the plates: number in parallel, spacing, height, length and number of compartments.
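A standard first pass at sizing the precipitator is the Deutsch-Anderson relation, η = 1 − exp(−wA/Q). The sketch below computes the required efficiency from the emission limit and then the plate area; the effective migration velocity w is a typical fly-ash value assumed for illustration, not a number given in the problem.

```python
import math

# Hedged ESP sizing sketch using the Deutsch-Anderson equation.
coal_rate = 35.0                 # tons/hr
heating_value = 12_800           # Btu/lb
ash_uncontrolled = 170.0         # lb emitted per ton of coal fired
limit = 0.10                     # lb per million Btu allowed

mmbtu_per_ton = heating_value * 2000 / 1e6        # million Btu per ton of coal
allowed = limit * mmbtu_per_ton                   # lb allowed per ton of coal
eta_required = 1 - allowed / ash_uncontrolled     # required collection efficiency

Q = 14.7e6 / 3600 * 0.3048**3                     # gas flow, m^3/s (from acf/hr)
w = 0.10                                          # m/s, assumed effective migration velocity
A = -Q / w * math.log(1 - eta_required)           # required collecting plate area, m^2
print(round(eta_required, 4), round(Q, 1), round(A))
```

The area can then be split among parallel gas passages to propose plate spacing, height, length, and number of compartments.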
### Filters
##### 1
(a) Repeat the calculations for the conditions of the example of fiber-bed filtration given in the text, except use velocities of 60 fpm, and of 80 fpm. Note the interplay between the face velocity values and L, A, and ΔP for each filter.
(b) For the conditions of this same example, assume that there is also present an image force brought about by the presence of 90 electronic charges per particle, and that the dielectric constant of the fibers is rather large. Estimate what effect this would have upon the filter dimensions for the case of 40 fpm face velocity.
(c) Again with reference to the same example, how would the results of the original case for 100 fpm be affected if the required efficiency were to be 95%?
##### 2
A dust-laden air stream of 10,000 acfm at 70 ℉ and dust concentration of 2.3 gm/m3 is passed through a fabric filter consisting of 49 bags in parallel, each bag 20 ft long and 1 ft in diameter. Cleaning is by mechanical shaking of all the bags at the same time. Tests indicate that the pressure drop is 3.28'' H2O twenty minutes after shaking, and 3.53'' H2O forty minutes after shaking. Determine:
(a) air/cloth ratio in use during filtration;
(b) the values of SE and K2;
(c) time required to reach a ΔP = 4.0'' H2O;
(d) amount of dust collected when ΔP reaches 4.0'';
(e) the time required to reach ΔP of 4.0'', if an identical arrangement of 49 bags is added in parallel with the present arrangement.
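The two pressure-drop readings fix the two constants of a linear drag model. The sketch below assumes the usual form ΔP = S_E·V + K2·c·V²·t (areal dust loading W = c·V·t) and works in the mixed units set up in the comments; treat it as a sketch of the method, not the book's exact unit system.

```python
import math

# Hedged sketch for the 49-bag baghouse problem.
Q = 10_000 * 0.3048**3            # m^3/min
c = 2.3                           # g/m^3 inlet dust loading
n_bags, d_ft, L_ft = 49, 1.0, 20.0
A = n_bags * math.pi * d_ft * L_ft * 0.3048**2    # total filtering area, m^2
V = Q / A                                         # (a) air-to-cloth ratio, m/min

t1, dp1 = 20.0, 3.28              # minutes after shaking, inches H2O
t2, dp2 = 40.0, 3.53
slope = (dp2 - dp1) / (t2 - t1)   # = K2 * c * V^2, inches H2O per minute
intercept = dp1 - slope * t1      # = S_E * V, inches H2O

S_E = intercept / V                               # (b) effective residual drag
K2 = slope / (c * V**2)                           # (b) specific cake resistance (mixed units)
t_to_4 = (4.0 - intercept) / slope                # (c) minutes to reach 4.0'' H2O
dust_kg = c * Q * t_to_4 / 1000                   # (d) dust collected by that time, kg
t_doubled = (4.0 - intercept / 2) / (slope / 4)   # (e) V halves when the area doubles
print(round(V, 2), round(t_to_4, 1), round(dust_kg, 1), round(t_doubled))
```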
##### 4
Refer to Problem 7 of Chapter 7. For the conditions stated there, consider the design of a baghouse filter system to collect the fly-ash. Assume that the fabric will be fiberglass. Make a preliminary design and compare it with that for the electrostatic precipitator design done for that problem.
### Wet Scrubbers
##### 2
A gravity spray tower 3 m high is operating at a liquid-to-gas ratio of 1 l/m3 with a drop diameter of 400 μm. The gas velocity is 0.1 of the drop terminal velocity which is 157 cm/s. The operation is at 20 ℃. What is the grade efficiency for particles of 1 μm diameter, having a density of 2.0 gm/cm3?
##### 4
A Venturi scrubber is to be designed to collect dust from an asphalt stone drier. The dust has a mass median diameter of 1.8 μm and a density of 2.6 gm/cm3. The uncontrolled emission rate is 2310 kg/hr, but state regulations require that this be reduced to a maximum of 25 kg/hr. The air flow is 20,000 acfm at 250 ℉. No additional data are available. Assuming a throat velocity of 150 ft/s, make a preliminary determination of the necessary L/G value, and of the maximum pressure loss in the throat. Discuss the additional data and calculations which would be required to make a final design of the Venturi. | 2022-12-05 03:43:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5488108396530151, "perplexity": 2380.2223527369606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00453.warc.gz"} |
https://ask.libreoffice.org/en/question/263229/find-text-in-cell-from-a-filter-list-countif/
# find text in cell from a filter list; COUNTIF
Hello guys!
I try to find any of the text in the filter list (E2:E4) within cell A2. The formula I found is returning "1" only for the first word in the filter list, i.e. kni.
It does not work for "pan" or "chocolate". Regular expressions are enabled.
I appreciate the help as always! Sophia
FOLLOW UP QUESTION: This is what I have in mind: an accounting sheet with categories assigned to transactions from my bank account.
Categories are Food, Rent, Car, Phone, Shopping, with filter words in each of the categories.
I want the formula to search column C, starting at C9, and assign a category in column D of the same row (pink arrow) based on which filter list the match was found in. E.g. if the description contains "cafe", category "Food" is assigned. If the description contains "maverik", category "Car" is assigned. Etc. Please see img and ODS file below:
Here is the actual ODS file: C:\fakepath\example.ods
EXAMPLE FILE 2: the array formula does not work after adding two new inputs (cafe, elevate) in column C.
C:\fakepath\exampleSophia1.ods
It's simple. The fact is that COUNTIF() usually has a different set of parameters - first the search range, and then the condition (the desired value). In other words, your formula looks for each value from E2:E4 in the text of cell A2.
To see the real result, enter your formula as an array formula (complete the entry Ctrl+Shift+Enter). And you will see something like this
This means that your formula is working correctly, but you always see only the first cell of the entire array of results.
To make the formula count correctly, it is enough to wrap it in another function, in SUMPRODUCT():
=SUMPRODUCT(COUNTIF(A2;"*" & $E$2:$E$4 & "*"))
By the way, I see that you are using an asterisk (wildcards) rather than regular expressions?
Please, make sure that this parameter is set correctly for you - this discrepancy can lead to many unpleasant disappointments.
Thank you! :D
@JohnSUN, aw I am still struggling. The example you gave does work great; however, I can't make it work for multiple filter lists. This is what I have in mind: an accounting sheet with categories assigned to transactions from my bank account. I can make it work with a combination of ISNUMBER and SEARCH functions, but it's clunky and the formula is massive in size! I added a description of the problem, ODF file and img to the original post under paragraph "FOLLOWUP QUESTION", hoping you could have a peek and give a hint what I should change to make it work with the COUNTIF and SUMPRODUCT formula you suggested, which seems so much more elegant.
First of all I would change the list of categories and keywords (see 'dictionary' sheet). This makes the search formula much easier.
Just in case, I would not return the first encountered value, but all found categories - this will allow you not to accidentally miss any of the options. For example, the keyword cafe can be in two categories at once (food and shopping - why not? You drank coffee in a shopping center). So I used the TEXTJOIN() function.
Unfortunately, the SUMPRODUCT() trick will not work in combination with TEXTJOIN(), the formula must be entered as an array formula in one cell, and then copied to the entire column.
Okay, check out this sample and ask more if you don't understand something - C:\fakepath\example_Sophia.ods
Thanks for the example @JohnSUN. I added 2 lines (cafe, elevate) to the bottom of column C. I did not alter the array formula. However, the array formula returns the result of C2 for all other rows as well. I added image and example file to the original post. Question 1: what went wrong? Question 2: How can I add an image and file path to a comment here in the forum?
Perhaps I did not explain well - the first cell with the formula should not be stretched to the entire column, but copied and pasted into all cells of the column. When we stretch a cell, we simply enlarge the area to output one formula. When we copy and paste, we paste in multiple copies of the first formula. C:\fakepath\exampleSophia1.ods
Answer 2 :-) I go into editing my answer (or question), use the usual tools for inserting files or images, cut the resulting strings, click Cancel so as not to spoil the existing answer, and paste the copied rows into the comment. (for example, here I paste text [C:\fakepath\exampleSophia1.ods](/upfiles/1598973093643539.ods))
Aaahh I see. It works great now! Thank you @JohnSUN. Yeah, that's clever (-> answer 2)
https://homework.zookal.com/questions-and-answers/use-n-to-write-an-expression-which-computes-the-excess-430344920
# Question: use n to write an expression which computes the excess...
###### Question details
Use n to write an expression which computes the excess amount for the n-bit excess notation used by the IEEE Standard 754. Hint: When n = 8 (single precision), the excess amount is 127; when n = 11 (double precision), the excess amount is 1023.
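The hint values follow one pattern: 127 = 2^7 - 1 and 1023 = 2^10 - 1, so the excess (bias) for an n-bit exponent field is 2^(n-1) - 1. A two-line check:

```python
def excess(n: int) -> int:
    """Bias (excess amount) used by IEEE 754 for an n-bit exponent field."""
    return 2 ** (n - 1) - 1

assert excess(8) == 127 and excess(11) == 1023   # single and double precision
```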
https://lu.kattis.com/problems/lu.gorilla
# Gorilla
In biology, a recurring problem is to find optimal alignments between two strings of DNA, RNA, or proteins. One example of this is measuring the similarity of different protein sequences found in different animals to understand how related they are.
Given a set of amino acid sequences belonging to different animals, we want to produce an optimal alignment. An alignment between two strings $X$, $Y$ is a pair of alignment strings $X_ a, Y_ a$ of the same length, where each alignment string consists of the original string, but with zero or more ‘-’ characters (called gaps) inserted. An alignment is considered optimal if it maximises the score. The score is calculated by the sum of each aligned character pair in the two strings. A gap compared to anything always gives the score $-4$. The score of two characters from the set ARNDCQEGHILKMFPSTWYVBZX is determined by a specific BLOSUM-matrix. The matrix and code snippets for generating the matrix are attached.
For example, the strings $\texttt{KATTIS}$ and $\texttt{KATIS}$ can be aligned as:
$\texttt{KATTIS}$
$\texttt{KAT-IS}$
This is an optimal alignment since it produces the maximal score $5 + 4 + 5 - 4 + 4 + 4 = 18$.
## Input
The first line contains integers $N$ and $Q$, such that $1 \leq N \leq 20$ and $1 \leq Q \leq 100$. Then follow $2N$ lines of organism names and their amino acid sequences. For each organism, the first line is their name, and the second line is a string representing the amino acid sequence. Then follow $Q$ lines of queries, where each query is a pair of names separated by a space.
An organism name consists of a unique word of at most 20 characters, using only characters in the range {a—z,A—Z}.
Each amino acid sequence consists of a string of length at most $200$ where each character represents an amino acid (from the set ARNDCQEGHILKMFPSTWYVBZX).
## Output
For each of the $Q$ queries, output two lines containing an optimal alignment for the two amino acid sequences. The first line should contain the alignment string corresponding to the first organism, and the second should contain the alignment string corresponding to the second organism. If there exists multiple optimal alignments output any of them.
Sample Input 1:
2 1
katis
KATIS
kattis
KATTIS
kattis katis

Sample Output 1:
KATTIS
KAT-IS
Sample Input 2:
1 1
a
A
a a

Sample Output 2:
A
A
Sample Input 3:
3 3
Sphinx
KQRK
Bandersnatch
KAK
Snark
KQRIKAAKABK
Sphinx Snark
Sphinx Bandersnatch
Snark Bandersnatch

Sample Output 3:
KQR-------K
KQRIKAAKABK
KQRK
K-AK
KQRIKAAKABK
-------KA-K
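The task is classic global alignment, solvable with Needleman-Wunsch dynamic programming. Below is a minimal sketch for a single query pair: the gap score of -4 comes from the statement, but `blosum(a, b)` is a placeholder (match 5, mismatch -1) standing in for the BLOSUM matrix attached to the problem, and input parsing and per-query caching are omitted.

```python
# Minimal Needleman-Wunsch sketch; blosum() is a placeholder for the provided matrix.
GAP = -4

def blosum(a: str, b: str) -> int:
    # Placeholder scoring; a real solution uses the attached BLOSUM table.
    return 5 if a == b else -1

def align(x: str, y: str) -> tuple[str, str]:
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * GAP
    for j in range(1, m + 1):
        dp[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i-1][j-1] + blosum(x[i-1], y[j-1]),
                           dp[i-1][j] + GAP,
                           dp[i][j-1] + GAP)
    # traceback to recover one optimal pair of alignment strings
    xa, ya, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i-1][j-1] + blosum(x[i-1], y[j-1]):
            xa.append(x[i-1]); ya.append(y[j-1]); i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i-1][j] + GAP:
            xa.append(x[i-1]); ya.append("-"); i -= 1
        else:
            xa.append("-"); ya.append(y[j-1]); j -= 1
    return "".join(reversed(xa)), "".join(reversed(ya))

print(*align("KATTIS", "KATIS"), sep="\n")
```

Since sequences are at most 200 characters and there are at most 100 queries, the O(|X||Y|) table per query is comfortably fast.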
https://www.law.cornell.edu/cfr/text/26/1.163-10T | # 26 CFR § 1.163-10T - Qualified residence interest (temporary).
§ 1.163-10T Qualified residence interest (temporary).
(a) Table of contents. This paragraph (a) lists the major paragraphs that appear in this § 1.163-10T.
(b) Treatment of qualified residence interest.
(c) Determination of qualified residence interest when secured debt does not exceed the adjusted purchase price.
(1) In general.
(2) Examples.
(d) Determination of qualified residence interest when secured debt exceeds adjusted purchase price - Simplified method.
(1) In general.
(2) Treatment of interest paid or accrued on secured debt that is not qualified residence interest.
(3) Example.
(e) Determination of qualified residence interest when secured debt exceeds adjusted purchase price - Exact method.
(1) In general.
(2) Determination of applicable debt limit.
(3) Example.
(4) Treatment of interest paid or accrued with respect to secured debt that is not qualified residence interest.
(i) In general.
(ii) Example.
(iii) Special rule if debt is allocated to more than one expenditure.
(iv) Example.
(f) Special rules.
(1) Special rules for personal property.
(i) In general.
(ii) Example.
(2) Special rule for real property.
(i) In general.
(ii) Example.
(g) Selection of method.
(h) Average balance.
(1) Average balance defined.
(2) Average balance reported by lender.
(3) Average balance computed on a daily basis.
(i) In general.
(ii) Example.
(4) Average balance computed using the interest rate.
(i) In general.
(ii) Points and prepaid interest.
(iii) Examples.
(5) Average balance computed using average of beginning and ending balance.
(i) In general.
(ii) Example.
(6) Highest principal balance.
(7) Other methods provided by the Commissioner.
(8) Anti-abuse rule.
(i) [Reserved]
(j) Determination of interest paid or accrued during the taxable year.
(1) In general.
(2) Special rules for cash-basis taxpayers.
(i) Points deductible in year paid under section 461(g)(2).
(ii) Points and other prepaid interest described in section 461(g)(1).
(3) Examples.
(k) Determination of adjusted purchase price and fair market value.
(i) In general.
(ii) Adjusted purchase price of a qualified residence acquired incident to divorce.
(iii) Examples.
(i) In general.
(ii) Examples.
(3) Allocation of adjusted purchase price and fair market value.
(l) [Reserved]
(m) Grandfathered amount.
(1) Substitution for adjusted purchase price.
(2) Determination of grandfathered amount.
(i) In general.
(ii) Special rule for lines of credit and certain other debt.
(iv) Examples.
(3) Refinancing of grandfathered debt.
(i) In general.
(ii) Determination of grandfathered amount.
(4) Limitation on terms of grandfathered debt.
(i) In general.
(ii) Special rule for nonamortizing debt.
(iii) Example.
(n) Qualified indebtedness (secured debt used for medical and educational purposes).
(1) In general.
(iii) Determination of amount of qualified indebtedness for mixed-use debt.
(iv) Example.
(v) Prevention of double counting in year of refinancing.
(vi) Special rule for principal payments in excess of qualified expenses.
(2) Debt used to pay for qualified medical or educational expenses.
(i) In general.
(ii) Special rule for refinancing.
(iii) Other special rules.
(iv) Examples.
(3) Qualified medical expenses.
(4) Qualified educational expenses.
(o) Secured debt.
(1) In general.
(2) Special rule for debt in certain States.
(3) Time at which debt is treated as secured.
(4) Partially secured debt.
(i) In general.
(ii) Example.
(5) Election to treat debt as not secured by a qualified residence.
(i) In general.
(ii) Example.
(iii) Allocation of debt secured by two qualified residences.
(p) Definition of qualified residence.
(1) In general.
(2) Principal residence.
(3) Second residence.
(i) In general.
(ii) Definition of residence.
(iii) Use as a residence.
(iv) Election of second residence.
(4) Allocations between residence and other property.
(i) In general.
(ii) Special rule for rental of residence.
(iii) Examples.
(5) Residence under construction.
(i) In general.
(ii) Example.
(6) Special rule for the time-sharing arrangements.
(q) Special rules for tenant-stockholders in cooperative housing corporations.
(1) In general.
(2) Special rule where stock may not be used to secure debt.
(3) Treatment of interest expense of the cooperative described in section 216(a)(2).
(4) Special rule to prevent tax avoidance.
(5) Other definitions.
(r) Effective date.
Treatment of qualified residence interest. Except as provided below, qualified residence interest is deductible under section 163(a). Qualified residence interest is not subject to limitation or otherwise taken into account under section 163(d) (limitation on investment interest), section 163(h)(1) (disallowance of deduction for personal interest), section 263A (capitalization and inclusion in inventory costs of certain expenses) or section 469 (limitations on losses from passive activities). Qualified residence interest is subject to the limitation imposed by section 263(g) (certain interest in the case of straddles), section 264(a) (2) and (4) (interest paid in connection with certain insurance), section 265(a)(2) (interest relating to tax-exempt income), section 266 (carrying charges), section 267(a)(2) (interest with respect to transactions between related taxpayers) section 465 (deductions limited to amount at risk), section 1277 (deferral of interest deduction allocable to accrued market discount), and section 1282 (deferral of interest deduction allocable to accrued discount).
Determination of qualified residence interest when secured debt does not exceed adjusted purchase price -
In general. If the sum of the average balances for the taxable year of all secured debts on a qualified residence does not exceed the adjusted purchase price (determined as of the end of the taxable year) of the qualified residence, all of the interest paid or accrued during the taxable year with respect to the secured debts is qualified residence interest. If the sum of the average balances for the taxable year of all secured debts exceeds the adjusted purchase price of the qualified residences (determined as of the end of the taxable year), the taxpayer must use either the simplified method (see paragraph (d) of this section) or the exact method (see paragraph (e) of this section) to determine the amount of interest that is qualified residence interest.
Examples.
Example 1.
T purchases a qualified residence in 1987 for $65,000. T pays $6,500 in cash and finances the remainder of the purchase with a mortgage of $58,500. In 1988, the average balance of the mortgage is $58,000. Because the average balance of the mortgage is less than the adjusted purchase price of the residence ($65,000), all of the interest paid or accrued during 1988 on the mortgage is qualified residence interest.
Example 2.
The facts are the same as in example (1), except that T incurs a second mortgage on January 1, 1988, with an initial principal balance of $2,000. The average balance of the second mortgage in 1988 is $1,900. Because the sum of the average balances of the first and second mortgages ($59,900) is less than the adjusted purchase price of the residence ($65,000), all of the interest paid or accrued during 1988 on both the first and second mortgages is qualified residence interest.
Example 3.
P borrows $50,000 on January 1, 1988 and secures the debt by a qualified residence. P pays the interest on the debt monthly, but makes no principal payments in 1988. There are no other debts secured by the residence during 1988. On December 31, 1988, the adjusted purchase price of the residence is $40,000. The average balance of the debt in 1988 is $50,000. Because the average balance of the debt exceeds the adjusted purchase price by $10,000, some of the interest on the debt is not qualified residence interest. The portion of the total interest that is qualified residence interest must be determined in accordance with the rules of paragraph (d) or paragraph (e) of this section.
Determination of qualified residence interest when secured debt exceeds adjusted purchase price - Simplified method -
In general. Under the simplified method, the amount of qualified residence interest for the taxable year is equal to the total interest paid or accrued during the taxable year with respect to all secured debts multiplied by a fraction (not in excess of one), the numerator of which is the adjusted purchase price (determined as of the end of the taxable year) of the qualified residence and the denominator of which is the sum of the average balances of all secured debts.
Treatment of interest paid or accrued on secured debt that is not qualified residence interest. Under the simplified method, the excess of the total interest paid or accrued during the taxable year with respect to all secured debts over the amount of qualified residence interest is personal interest.
Example. R's principal residence has an adjusted purchase price on December 31, 1988, of $105,000. R has two debts secured by the residence, with the following average balances and interest payments:
Debt     Date secured   Average balance   Interest
Debt 1   June 1983      $80,000           $8,000
Debt 2   May 1987       $40,000           $4,800
Total                   $120,000          $12,800
The amount of qualified residence interest is determined under the simplified method by multiplying the total interest ($12,800) by a fraction (expressed as a decimal amount) equal to the adjusted purchase price ($105,000) of the residence divided by the combined average balances ($120,000). For 1988, this fraction is equal to 0.875 ($105,000/$120,000). Therefore, $11,200 ($12,800 × 0.875) of the total interest is qualified residence interest. The remaining $1,600 in interest ($12,800 − $11,200) is personal interest, even if (under the rules of § 1.163-8T) such remaining interest would be allocated to some other category of interest.
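The simplified method of this paragraph reduces to a single ratio, which the following sketch expresses in code. It is only an illustration of the computation shown in the example above (same numbers), not tax software; function and variable names are invented for the sketch.

```python
# Sketch of the simplified method: qualified residence interest equals total interest
# times min(1, adjusted purchase price / sum of average balances).
def simplified_method(adjusted_purchase_price, debts):
    """debts: list of (average_balance, interest_paid) pairs for all secured debts."""
    total_interest = sum(i for _, i in debts)
    total_balance = sum(b for b, _ in debts)
    fraction = min(1.0, adjusted_purchase_price / total_balance)
    qualified = total_interest * fraction
    return qualified, total_interest - qualified   # (qualified, personal interest)

# The example above: $105,000 adjusted purchase price, two secured debts.
print(simplified_method(105_000, [(80_000, 8_000), (40_000, 4_800)]))
# -> (11200.0, 1600.0)
```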
Determination of qualified residence interest when secured debt exceeds adjusted purchase price - Exact method -
In general. Under the exact method, the amount of qualified residence interest for the taxable year is determined on a debt-by-debt basis by computing the applicable debt limit for each secured debt and comparing each such applicable debt limit to the average balance of the corresponding debt. If, for the taxable year, the average balance of a secured debt does not exceed the applicable debt limit for that debt, all of the interest paid or accrued during the taxable year with respect to the debt is qualified residence interest. If the average balance of the secured debt exceeds the applicable debt limit for that debt, the amount of qualified residence interest with respect to the debt is determined by multiplying the interest paid or accrued with respect to the debt by a fraction, the numerator of which is the applicable debt limit for that debt and the denominator of which is the average balance of the debt.
Determination of applicable debt limit. For each secured debt, the applicable debt limit for the taxable year is equal to
(i) The lesser of -
(A) The fair market value of the qualified residence as of the date the debt is first secured, and
(B) The adjusted purchase price of the qualified residence as of the end of the taxable year,
(ii) Reduced by the average balance of each debt previously secured by the qualified residence.
For purposes of paragraph (e)(2)(ii) of this section, the average balance of a debt shall be treated as not exceeding the applicable debt limit of such debt. See paragraph (n)(1)(i) of this section for the rule that increases the adjusted purchase price in paragraph (e)(2)(i)(B) of this section by the amount of any qualified indebtedness (certain medical and educational debt). See paragraph (f) of this section for special rules relating to the determination of the fair market value of the qualified residence.
Example.
(i) R's principal residence has an adjusted purchase price on December 31, 1988, of $105,000. R has two debts secured by the residence. The average balances and interest payments on each debt during 1988 and the fair market value of the residence on the date each debt was secured are as follows:

Debt     Date secured   Fair market value   Average balance   Interest
Debt 1   June 1983      $100,000            $80,000           $8,000
Debt 2   May 1987       $140,000            $40,000           $4,800
Total                                       $120,000          $12,800
(ii) The amount of qualified residence interest for 1988 under the exact method is determined as follows. Because there are no debts previously secured by the residence, the applicable debt limit for Debt 1 is $100,000 (the lesser of the adjusted purchase price as of the end of the taxable year and the fair market value of the residence at the time the debt was secured). Because the average balance of Debt 1 ($80,000) does not exceed its applicable debt limit ($100,000), all of the interest paid on the debt during 1988 ($8,000) is qualified residence interest.
(iii) The applicable debt limit for Debt 2 is $25,000 ($105,000 (the lesser of $140,000 fair market value and $105,000 adjusted purchase price) reduced by $80,000 (the average balance of Debt 1)). Because the average balance of Debt 2 ($40,000) exceeds its applicable debt limit, the amount of qualified residence interest on Debt 2 is determined by multiplying the amount of interest paid on the debt during the year ($4,800) by a fraction equal to its applicable debt limit divided by its average balance ($25,000/$40,000 = 0.625). Accordingly, $3,000 ($4,800 × 0.625) of the interest paid in 1988 on Debt 2 is qualified residence interest. The character of the remaining $1,800 of interest paid on Debt 2 is determined under the rules of paragraph (e)(4) of this section.
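The exact method works debt by debt in the order the debts were secured, capping each debt's contribution at its applicable debt limit. The sketch below mirrors the example just given; it is an illustration only, with invented function names, and it encodes the rule (stated after (e)(2)(ii)) that a prior debt's average balance counts against later debts only up to its own limit.

```python
# Sketch of the exact method: compute each debt's applicable debt limit, then
# prorate its interest if the average balance exceeds that limit.
def exact_method(adjusted_purchase_price, debts):
    """debts: list of (fair_market_value_when_secured, average_balance, interest),
    ordered from earliest-secured to latest-secured."""
    qualified, prior_balances = 0.0, 0.0
    for fmv, balance, interest in debts:
        limit = max(0.0, min(fmv, adjusted_purchase_price) - prior_balances)
        if balance <= limit:
            qualified += interest
        else:
            qualified += interest * limit / balance
        prior_balances += min(balance, limit)   # prior balances capped at the debt limit
    return qualified

# The example above: Debt 1 and Debt 2 on a $105,000 adjusted-purchase-price residence.
print(exact_method(105_000, [(100_000, 80_000, 8_000), (140_000, 40_000, 4_800)]))
# -> 11000.0  ($8,000 + $3,000)
```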
Treatment of interest paid or accrued with respect to secured debt that is not qualified residence interest -
In general. Under the exact method, the excess of the interest paid or accrued during the taxable year with respect to a secured debt over the amount of qualified residence interest with respect to the debt is allocated under the rules of § 1.163-8T.
Example. T borrows $20,000 and the entire proceeds of the debt are disbursed by the lender to T's broker to purchase securities held for investment. T secures the debt with T's principal residence. In 1990, T pays$2,000 of interest on the debt. Assume that under the rules of paragraph (e) of this section, $1,500 of the interest is qualified residence interest. The remaining$500 in interest expense would be allocated under the rules of § 1.163-8T. Section 1.163-8T generally allocates debt (and the associated interest expense) by tracing disbursements of the debt proceeds to specific expenditures. Accordingly, the $500 interest expense on the debt that is not qualified residence interest is investment interest subject to section 163(d). Special rule if debt is allocated to more than one expenditure. If - (A) The average balance of a secured debt exceeds the applicable debt limit for that debt, and (B) Under the rules of § 1.163-8T, interest paid or accrued with respect to such debt is allocated to more than one expenditure, the interest expense that is not qualified residence interest may be allocated among such expenditures, to the extent of such expenditures, in any manner selected by the taxpayer. Example. (i) C borrows$60,000 secured by a qualified residence. C uses (within the meaning of § 1.163-8T) $20,000 of the proceeds in C's trade or business,$20,000 to purchase stock held for investment and $20,000 for personal purposes. In 1990, C pays$6,000 in interest on the debt and, under the rules of § 1.163-8T, $2,000 in interest is allocable to trade or business expenses,$2,000 to investment expenses and $2,000 to personal expenses. Assume that under paragraph (e) of this section,$2,500 of the interest is qualified residence interest and $3,500 of the interest is not qualified residence interest. (ii) Under paragraph (e)(4)(iii) of this section, C may allocate up to$2,000 of the interest that is not qualified residence interest to any of the three categories of expenditures up to a total of $3,500 for all three categories. Therefore, for example, C may allocate$2,000 of such interest to C's trade or business and $1,500 of such interest to the purchase of stock. Special rules - Special rules for personal property - In general. If a qualified residence is personal property under State law (e.g., a boat or motorized vehicle) - (A) For purposes of paragraphs (c)(1) and (d)(1) of this section, if the fair market value of the residence as of the date that any secured debt (outstanding during the taxable year) is first secured by the residence is less than the adjusted purchase price as of the end of the taxable year, the lowest such fair market value shall be substituted for the adjusted purchase price. (B) For purposes of paragraphs (e)(2)(i)(A) and (f)(1)(i)(A) of this section, the fair market value of the residence as of the date the debt is first secured by the residence shall not exceed the fair market value as of any date on which the taxpayer borrows any additional amount with respect to the debt. Example. D owns a recreational vehicle that is a qualified residence under paragraph (p)(4) of this section. The adjusted purchase price and fair market value of the recreational vehicle is$20,000 in 1989. In 1989, D establishes a line of credit secured by the recreational vehicle. As of June 1, 1992, the fair market value of the vehicle has decreased to $10,000. On that day, D borrows an additional amount on the debt by using the line of credit. 
Although under paragraphs (e)(2)(i) and (f)(1)(i)(A) of this section, fair market value is determined at the time the debt is first secured, under paragraph (f)(1)(i)(B) of this section, the fair market value is the lesser of that amount or the fair market value on the most recent date that D borrows any additional amount with respect to the line of credit. Therefore, the fair market value with respect to the debt is $10,000.
Special rule for real property -
In general. For purposes of paragraph (e)(2)(i)(A) of this section, the fair market value of a qualified residence that is real property under State law is presumed irrebuttably to be not less than the adjusted purchase price of the residence as of the last day of the taxable year.
Example.
(i) C purchases a residence on August 11, 1987, for $50,000, incurring a first mortgage. The residence is real property under State law. During 1987, C makes $10,000 in home improvements. Accordingly, the adjusted purchase price of the residence as of December 31, 1988, is $60,000. C incurs a second mortgage on May 19, 1988, as of which time the fair market value of the residence is $55,000.
(ii) For purposes of determining the applicable debt limit for each debt, the fair market value of the residence is generally determined as of the time the debt is first secured. Accordingly, the fair market value would be $50,000 and$55,000 with respect to the first and second mortgage, respectively. Under the special rule of paragraph (f)(2)(i) of this section, however, the fair market value with respect to both debts in 1988 is $60,000, the adjusted purchase price on December 31, 1988. Selection of method. For any taxable year, a taxpayer may use the simplified method (described in paragraph (d) of this section) or the exact method (described in paragraph (e) of this section) by completing the appropriate portion of Form 8598. A taxpayer with two qualified residences may use the simplified method for one residence and the exact method for the other residence. Average balance - Average balance defined. For purposes of this section, the term “average balance” means the amount determined under this paragraph (h). A taxpayer is not required to use the same method to determine the average balance of all secured debts during a taxable year or of any particular secured debt from one year to the next. Average balance reported by lender. If a lender that is subject to section 6050H (returns relating to mortgage interest received in trade or business from individuals) reports the average balance of a secured debt on Form 1098, the taxpayer may use the average balance so reported. Average balance computed on a daily basis - In general. The average balance may be determined by - (A) Adding the outstanding balance of a debt on each day during the taxable year that the debt is secured by a qualified residence, and (B) Dividing the sum by the number of days during the taxable year that the residence is a qualified residence. (ii) Example. Taxpayer A incurs a debt of$10,000 on September 1, 1989, securing the debt with A's principal residence. The residence is A's principal residence during the entire taxable year. A pays current interest on the debt monthly, but makes no principal payments. The debt is, therefore, outstanding for 122 days with a balance each day of $10,000. The residence is a qualified residence for 365 days. The average balance of the debt for 1989 is$3,342 (122 × $10,000/365). (4) Average balance computed using the interest rate - (i) In general. If all accrued interest on a secured debt is paid at least monthly, the average balance of the secured debt may be determined by dividing the interest paid or accrued during the taxable year while the debt is secured by a qualified residence by the annual interest rate on the debt. If the interest rate on a debt varies during the taxable year, the lowest annual interest rate that applies to the debt during the taxable year must be used for purposes of this paragraph (h)(4). If the residence securing the debt is a qualified residence for less than the entire taxable year, the average balance of any secured debt may be determined by dividing the average balance determined under the preceding sentence by the percentage of the taxable year that the debt is secured by a qualified residence. (ii) Points and prepaid interest. For purposes of paragraph (h)(4)(i) of this section, the amount of interest paid during the taxable year does not include any amount paid as points and includes prepaid interest only in the year accrued. (iii) Examples. Example 1. B has a line of credit secured by a qualified residence for the entire taxable year. 
The interest rate on the debt is 10 percent throughout the taxable year. The principal balance on the debt changes throughout the year. B pays the accrued interest on the debt monthly. B pays$2,500 in interest on the debt during the taxable year. The average balance of the debt ($25,000) may be computed by dividing the total interest paid by the interest rate ($25,000 = $2,500/0.10). Example 2. Assume the same facts as in example 1, except that the residence is a qualified residence, and the debt is outstanding, for only one-half of the taxable year and B pays only$1,250 in interest on the debt during the taxable year. The average balance of the debt may be computed by first dividing the total interest paid by the interest rate ($12,500 =$1,250/0.10). Second, because the residence is not a qualified residence for the entire taxable year, the average balance must be determined by dividing this amount ($12,500) by the portion of the year that the residence is qualified (0.50). The average balance is therefore$25,000 ($12,500/0.50). (5) Average balance computed using average of beginning and ending balances - (i) In general. If - (A) A debt requires level payments at fixed equal intervals (e.g., monthly, quarterly) no less often than semi-annually during the taxable year, (B) The taxpayer prepays no more than one month's principal on the debt during the taxable year, and (C) No new amounts are borrowed on the debt during the taxable year, the average balance of the debt may be determined by adding the principal balance as of the first day of the taxable year that the debt is secured by the qualified residence and the principal balance as of the last day of the taxable year that the debt is secured by the qualified residence and dividing the sum by 2. If the debt is secured by a qualified residence for less than the entire period during the taxable year that the residence is a qualified residence, the average balance may be determined by multiplying the average balance determined under the preceding sentence by a fraction, the numerator of which is the number of days during the taxable year that the debt is secured by the qualified residence and the denominator of which is the number of days during the taxable year that the residence is a qualified residence. For purposes of this paragraph (h)(5)(i), the determination of whether payments are level shall disregard the fact that the amount of the payments may be adjusted from time to time to take into account changes in the applicable interest rate. (ii) Example. C borrows$10,000 in 1988, securing the debt with a second mortgage on a principal residence. The terms of the loan require C to make equal monthly payments of principal and interest so as to amortize the entire loan balance over 20 years. The balance of the debt is $9,652 on January 1, 1990, and is$9,450 on December 31, 1990. The average balance of the debt during 1990 may be computed as follows:
Balance on first day of the year: $9,652
Balance on last day of the year: $9,450
$\text{Average balance: } \frac{9{,}652 + 9{,}450}{2} = 9{,}551$
(6) Highest principal balance. The average balance of a debt may be determined by taking the highest principal balance of the debt during the taxable year.
(7) Other methods provided by the Commissioner. The average balance may be determined using any other method provided by the Commissioner by form, publication, revenue ruling, or revenue procedure. Such methods may include methods similar to (but with restrictions different from) those provided in paragraph (h) of this section.
(8) Anti-abuse rule. If, as a result of the determination of the average balance of a debt using any of the methods specified in paragraphs (h) (4), (5), or (6) of this section, there is a significant overstatement of the amount of qualified residence interest and a principal purpose of the pattern of payments and borrowing on the debt is to cause the amount of such qualified residence interest to be overstated, the district director may redetermine the average balance using the method specified under paragraph (h)(3) of this section.
(i) [Reserved]
Determination of interest paid or accrued during the taxable year -
In general. For purposes of determining the amount of qualified residence interest with respect to a secured debt, the amount of interest paid or accrued during the taxable year includes only interest paid or accrued while the debt is secured by a qualified residence.
Special rules for cash-basis taxpayers -
Points deductible in year paid under section 461(g)(2). If points described in section 461(g)(2) (certain points paid in respect of debt incurred in connection with the purchase or improvement of a principal residence) are paid with respect to a debt, the amount of such points is qualified residence interest.
Points and other prepaid interest described in section 461(g)(1). The amount of points or other prepaid interest charged to capital account under section 461(g)(1) (prepaid interest) that is qualified residence interest shall be determined under the rules of paragraphs (c) through (e) of this section in the same manner as any other interest paid with respect to the debt in the taxable year to which such payments are allocable under section 461(g)(1).
Examples.
Example 1.
T designates a vacation home as a qualified residence as of October 1, 1987. The home is encumbered by a mortgage during the entire taxable year. For purposes of determining the amount of qualified residence interest for 1987, T may take into account the interest paid or accrued on the secured debt from October 1, 1987, through December 31, 1987.
Example 2.
R purchases a principal residence on June 17, 1987. As part of the purchase price, R obtains a conventional 30-year mortgage, secured by the residence. At closing, R pays 2 1/2 points on the mortgage and interest on the mortgage for the period June 17, 1987 through June 30, 1987. The points are actually paid by R and are not merely withheld from the loan proceeds. R incurs no additional secured debt during 1987. Assuming that the points satisfy the requirements of section 461(g) (2), the entire amount of points and the interest paid at closing are qualified residence interest.
Example 3.
(i) On July 1, 1987, W borrows $120,000 to purchase a residence to use as a vacation home. W secures the debt with the residence. W pays 2 points, or$2,400. The debt has a term of 10 years and requires monthly payments of principal and interest. W is permitted to amortize the points at the rate of $20 per month over 120 months. W elects to treat the residence as a second residence. W has no other debt secured by the residence. The average balance of the debt in each taxable year is less than the adjusted purchase price of the residence. W sells the residence on June 30, 1990, and pays off the remaining balance of the debt. (ii) W is entitled to treat the following amounts of the points as interest paid on a debt secured by a qualified residence - 1987$120 = $20 × 6 months;$240 = $20 × 12 months;$120 = $20 × 6 months.$480
All of the interest paid on the debt, including the allocable points, is qualified residence interest. Upon repaying the debt, the remaining $1,920 ($2,400−$480) in unamortized points is treated as interest paid in 1990 and, because the average balance of the secured debt in 1990 is less than the adjusted purchase price, is also qualified residence interest. Determination of adjusted purchase price and fair market value - Adjusted purchase price - In general. For purposes of this section, the adjusted purchase price of a qualified residence is equal to the taxpayer's basis in the residence as initially determined under section 1012 or other applicable sections of the Internal Revenue Code, increased by the cost of any improvements to the residence that have been added to the taxpayer's basis in the residence under section 1016(a)(1). Any other adjustments to basis, including those required under section 1033(b) (involuntary conversions), and 1034(e) (rollover of gain or sale of principal residence) are disregarded in determining the taxpayer's adjusted purchase price. If, for example, a taxpayer's second residence is rented for a portion of the year and its basis is reduced by depreciation allowed in connection with the rental use of the property, the amount of the taxpayer's adjusted purchase price in the residence is not reduced. See paragraph (m) of this section for a rule that treats the sum of the grandfathered amounts of all secured debts as the adjusted purchase price of the residence. Adjusted purchase price of a qualified residence acquired incident to divorce. [Reserved] Examples. Example 1. X purchases a residence for$120,000. X's basis, as determined under section 1012, is the cost of the property, or $120,000. Accordingly, the adjusted purchase price of the residence is initially$120,000.
Example 2.
Y owns a principal residence that has a basis of $30,000. Y sells the residence for$100,000 and purchases a new principal residence for $120,000. Under section 1034, Y does not recognize gain on the sale of the former residence. Under section 1034(e), Y's basis in the new residence is reduced by the amount of gain not recognized. Therefore, under section 1034(e), Y's basis in the new residence is$50,000 ($120,000−$70,000). For purposes of section 163(h), however, the adjusted purchase price of the residence is not adjusted under section 1034(e). Therefore, the adjusted purchase price of the residence is initially $120,000. Example 3. Z acquires a residence by gift. The donor's basis in the residence was$30,000. Z's basis in the residence, determined under section 1015, is $30,000. Accordingly, the adjusted purchase price of the residence is initially$30,000.
Fair market value -
In general. For purposes of this section, the fair market value of a qualified residence on any date is the fair market value of the taxpayer's interest in the residence on such date. In addition, the fair market value determined under this paragraph (k)(2)(i) shall be determined by taking into account the cost of improvements to the residence reasonably expected to be made with the proceeds of the debt.
Example. In 1988, the adjusted purchase price of P's second residence is $65,000 and the fair market value of the residence is$70,000. At that time, P incurs an additional debt of $10,000, the proceeds of which P reasonably expects to use to add two bedrooms to the residence. Because the fair market value is determined by taking into account the cost of improvements to the residence that are reasonably expected to be made with the proceeds of the debt, the fair market value of the residence with respect to the debt incurred in 1988 is$80,000 ($70,000 +$10,000).
Allocation of adjusted purchase price and fair market value. If a property includes both a qualified residence and other property, the adjusted purchase price and the fair market value of such property must be allocated between the qualified residence and the other property. See paragraph (p)(4) of this section for rules governing such an allocation.
[Reserved]
Grandfathered amount -
Substitution for adjusted purchase price. If, for the taxable year, the sum of the grandfathered amounts, if any, of all secured debts exceeds the adjusted purchase price of the qualified residence, such sum may be treated as the adjusted purchase price of the residence under paragraphs (c), (d) and (e) of this section.
Determination of grandfathered amount -
In general. For any taxable year, the grandfathered amount of any secured debt that was incurred on or before August 16, 1986, and was secured by the residence continuously from August 16, 1986, through the end of the taxable year, is the average balance of the debt for the taxable year. A secured debt that was not incurred and secured on or before August 16, 1986, has no grandfathered amount.
Special rule for lines of credit and certain other debt. If, with respect to a debt described in paragraph (m)(2)(i) of this section, a taxpayer has borrowed any additional amounts after August 16, 1986, the grandfathered amount of such debt is equal to the lesser of -
(A) The average balance of the debt for the taxable year, or
(B) The principal balance of the debt as of August 16, 1986, reduced (but not below zero) by all principal payments after August 16, 1986, and before the first day of the current taxable year.
For purposes of this paragraph (m)(2)(ii), a taxpayer shall not be considered to have borrowed any additional amount with respect to a debt merely because accrued interest is added to the principal balance of the debt, so long as such accrued interest is paid by the taxpayer no less often than quarterly.
Fair market value limitation. The grandfathered amount of any debt for any taxable year may not exceed the fair market value of the residence on August 16, 1986, reduced by the principal balance on that day of all previously secured debt.
Examples.
Example 1.
As of August 16, 1986, T has one debt secured by T's principal residence. The debt is a conventional self-amortizing mortgage and, on August 16, 1986, it has an outstanding principal balance of $75,000. In 1987, the average balance of the mortgage is $73,000. The adjusted purchase price of the residence as of the end of 1987 is $50,000. Because the mortgage was incurred and secured on or before August 16, 1986 and T has not borrowed any additional amounts with respect to the mortgage, the grandfathered amount is the average balance, $73,000. Because the grandfathered amount exceeds the adjusted purchase price ($50,000), T may treat the grandfathered amount as the adjusted purchase price in determining the amount of qualified residence interest.
Example 2.
The facts are the same as in example (1), except that in May 1986, T also obtains a home equity line of credit that, on August 16, 1986, has a principal balance of $40,000. In November 1986, T borrows an additional $10,000 on the home equity line, increasing the balance to $50,000. In December 1986, T repays $5,000 of principal on the home equity line. The average balance of the home equity line in 1987 is $45,000.
Because T has borrowed additional amounts on the line of credit after August 16, 1986, the grandfathered amount for that debt must be determined under the rules of paragraph (m)(2)(ii) of this section. Accordingly, the grandfathered amount for the line of credit is equal to the lesser of $45,000, the average balance of the debt in 1987, and $35,000, the principal balance on August 16, 1986, reduced by all principal payments between August 17, 1986, and December 31, 1986 ($40,000 − $5,000). The sum of the grandfathered amounts with respect to the residence is $108,000 ($73,000 + $35,000). Because the sum of the grandfathered amounts exceeds the adjusted purchase price ($50,000), T may treat the sum as the adjusted purchase price in determining the qualified residence interest for 1987.
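The arithmetic of these two examples can be summarized in a short sketch. The following Python fragment is not part of the regulation; it is only an illustration of paragraph (m)(2) with illustrative variable names, and it ignores the fair market value limitation of paragraph (m)(2)(iii).

```python
# Illustrative sketch only (not regulatory text): grandfathered amount under paragraph (m)(2).
def grandfathered_amount(avg_balance, balance_aug_16_1986,
                         payments_after_aug_16_before_year, borrowed_after_aug_16):
    if not borrowed_after_aug_16:
        # (m)(2)(i): no post-August 16, 1986 borrowing, so use the average balance for the year.
        return avg_balance
    # (m)(2)(ii): lesser of the average balance and the August 16, 1986 balance
    # reduced (not below zero) by later principal payments made before the taxable year.
    reduced = max(0, balance_aug_16_1986 - payments_after_aug_16_before_year)
    return min(avg_balance, reduced)

mortgage = grandfathered_amount(73_000, 75_000, 0, borrowed_after_aug_16=False)       # 73,000
credit_line = grandfathered_amount(45_000, 40_000, 5_000, borrowed_after_aug_16=True)  # 35,000
print(mortgage + credit_line)  # 108,000, which exceeds the $50,000 adjusted purchase price
```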
Refinancing of grandfathered debt -
In general. A debt incurred and secured on or before August 16, 1986, is refinanced if some or all of the outstanding balance of such a debt (the “original debt”) is repaid out of the proceeds of a second debt secured by the same qualified residence (the “replacement debt”). In the case of a refinancing, the replacement debt is treated as a debt incurred and secured on or before August 16, 1986, and the grandfathered amount of such debt is the amount (but not less than zero) determined pursuant to paragraph (m)(3)(ii) of this section.
Determination of grandfathered amount -
(A) Exact refinancing. If -
(1) The entire proceeds of a replacement debt are used to refinance one or more original debts, and
(2) The taxpayer has not borrowed any additional amounts after August 16, 1986, with respect to the original debt or debts,
the grandfathered amount of the replacement debt is the average balance of the replacement debt. For purposes of the preceding sentence, the fact that proceeds of a replacement debt are used to pay costs of obtaining the replacement debt (including points or other closing costs) shall be disregarded in determining whether the entire proceeds of the replacement debt have been used to refinance one or more original debts.
(B) Refinancing other than exact refinancings -
(1) Year of refinancing. In the taxable year in which an original debt is refinanced, the grandfathered amount of the original and replacement debts is equal to the lesser of -
(i) The sum of the average balances of the original debt and the replacement debt, and
(ii) The principal balance of the original debt as of August 16, 1986, reduced by all principal payments on the original debt after August 16, 1986, and before the first day of the current taxable year.
(2) In subsequent years. In any taxable year after the taxable year in which an original debt is refinanced, the grandfathered amount of the replacement debt is equal to the least of -
(i) The average balance of the replacement debt for the taxable year,
(ii) The amount of the replacement debt used to repay the principal balance of the original debt, reduced by all principal payments on the replacement debt after the date of the refinancing and before the first day of the current taxable year, or
(iii) The principal balance of the original debt on August 16, 1986, reduced by all principal payments on the original debt after August 16, 1986, and before the date of the refinancing, and further reduced by all principal payments on the replacement debt after the date of the refinancing and before the first day of the current taxable year.
(C) Example.
(i) Facts. On August 16, 1986, T has a single debt secured by a principal residence with a balance of $150,000. On July 1, 1988, T refinances the debt, which still has a principal balance of $150,000, with a new secured debt. The principal balance of the replacement debt throughout 1988 and 1989 is $150,000. The adjusted purchase price of the residence is $100,000 throughout 1987, 1988 and 1989. The average balance of the original debt was $150,000 in 1987 and $75,000 in 1988. The average balance of the replacement debt is $75,000 in 1988 and $150,000 in 1989.
(ii) Grandfathered amount in 1987. The original debt was incurred and secured on or before August 16, 1986 and T has not borrowed any additional amounts with respect to the debt. Therefore, its grandfathered amount in 1987 is its average balance ($150,000). This amount is treated as the adjusted purchase price for 1987 and all of the interest paid on the debt is qualified residence interest.
(iii) Grandfathered amount in 1988. Because the replacement debt was used to refinance a debt incurred and secured on or before August 16, 1986, the replacement debt is treated as a grandfathered debt. Because all of the proceeds of the replacement debt were used in the refinancing and because no amounts have been borrowed after August 16, 1986, on the original debt, the grandfathered amount for the original debt is its average balance ($75,000) and the grandfathered amount for the replacement debt is its average balance ($75,000). Since the sum of the grandfathered amounts ($150,000) exceeds the adjusted purchase price of the residence, the sum of the grandfathered amounts may be substituted for the adjusted purchase price for 1988 and all of the interest paid on the debt is qualified residence interest.
(iv) Grandfathered amount in 1989. The grandfathered amount for the replacement debt is its average balance ($150,000). This amount is treated as the adjusted purchase price for 1989 and all of the interest paid on the mortgage is qualified residence interest.
Limitation on term of grandfathered debt -
In general. An original debt or replacement debt shall not have any grandfathered amount in any taxable year that begins after the date, as determined on August 16, 1986, that the original debt was required to be repaid in full (the “maturity date”). If a replacement debt is used to refinance more than one original debt, the maturity date is determined by reference to the original debt that, as of August 16, 1986, had the latest maturity date.
Special rule for nonamortizing debt. If an original debt was actually incurred and secured on or before August 16, 1986, and if as of such date the terms of such debt did not require the amortization of its principal over its original term, the maturity date of the replacement debt is the earlier of the maturity date of the replacement debt or the date 30 years after the date the original debt is first refinanced.
Example. C incurs a debt on May 10, 1986, the final payment of which is due May 1, 2006. C incurs a second debt on August 11, 1990, with a term of 20 years and uses the proceeds of the second debt to refinance the first debt. Because, under paragraph (m)(4)(i) of this section, a replacement debt will not have any grandfathered amount in any taxable year that begins after the maturity date of the original debt (May 1, 2006), the second debt has no grandfathered amount in any taxable year after 2006.
Qualified indebtedness (secured debt used for medical and educational purposes) -
In general -
Treatment of qualified indebtedness. The amount of any qualified indebtedness resulting from a secured debt may be added to the adjusted purchase price under paragraph (e)(2)(i)(B) of this section to determine the applicable debt limit for that secured debt and any other debt subsequently secured by the qualified residence.
Determination of amount of qualified indebtedness. If, as of the end of the taxable year (or the last day in the taxable year that the debt is secured), at least 90 percent of the proceeds of a secured debt are used (within the meaning of paragraph (n)(2) of this section) to pay for qualified medical and educational expenses (within the meaning of paragraphs (n)(3) and (n)(4) of this section), the amount of qualified indebtedness resulting from that debt for the taxable year is equal to the average balance of such debt for the taxable year.
Determination of amount of qualified indebtedness for mixed-use debt. If, as of the end of the taxable year (or the last day in the taxable year that the debt is secured), more than ten percent of the proceeds of a secured debt are used to pay for expenses other than qualified medical and educational expenses, the amount of qualified indebtedness resulting from that debt for the taxable year shall equal the lesser of -
(A) The average balance of the debt, or
(B) The amount of the proceeds of the debt used to pay for qualified medical and educational expenses through the end of the taxable year, reduced by any principal payments on the debt before the first day of the current taxable year.
Example.
(i) C incurs a $10,000 debt on April 20, 1987, which is secured on that date by C's principal residence.
C immediately uses (within the meaning of paragraph (n)(2) of this section) $4,000 of the proceeds of the debt to pay for a qualified medical expense. C makes no principal payments on the debt during 1987. During 1988 and 1989, C makes principal payments of $1,000 per year. The average balance of the debt during 1988 is $9,500 and the average balance during 1989 is $8,500.
(ii) Under paragraph (n)(1)(iii) of this section, C determines the amount of qualified indebtedness for 1988 as follows:
Average balance: $9,500
Amount of debt used to pay for qualified medical expenses: $4,000
Less principal payments before the first day of 1988: $0
Net qualified expenses: $4,000
The amount of qualified indebtedness for 1988 is, therefore, $4,000 (lesser of $9,500 average balance or $4,000 net qualified expenses). This amount may be added to the adjusted purchase price of C's principal residence under paragraph (e)(2)(i)(B) of this section for purposes of computing the applicable debt limit for this debt and any other debt subsequently secured by the principal residence.
(iii) C determines the amount of qualified indebtedness for 1989 as follows:
Average balance: $8,500
Amount of debt used to pay for qualified medical expenses: $4,000
Less principal payments before the first day of 1989: $1,000
Net qualified expenses: $3,000
The amount of qualified indebtedness for 1989 is, therefore, $3,000 (lesser of $8,500 average balance or $3,000 net qualified expenses).
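The mixed-use computation of paragraph (n)(1)(iii) reduces to a "lesser of" rule. The sketch below is illustrative only (it is not part of the regulation, and the function and variable names are invented for the example); it reproduces the $4,000 and $3,000 results above.

```python
# Illustrative sketch only: qualified indebtedness for a mixed-use debt, paragraph (n)(1)(iii).
def qualified_indebtedness(avg_balance, qualified_expenses_paid, prior_principal_payments):
    # Net qualified expenses: expenses paid through year end, reduced by principal
    # payments made before the first day of the current taxable year.
    net_qualified_expenses = qualified_expenses_paid - prior_principal_payments
    return min(avg_balance, net_qualified_expenses)

print(qualified_indebtedness(9_500, 4_000, 0))      # 1988: 4,000
print(qualified_indebtedness(8_500, 4_000, 1_000))  # 1989: 3,000
```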
Prevention of double counting in year of refinancing -
(A) In general. A debt used to pay for qualified medical or educational expenses is refinanced if some or all of the outstanding balance of the debt (the “original debt”) is repaid out of the proceeds of a second debt (the “replacement debt”). If, in the year of a refinancing, the combined qualified indebtedness of the original debt and the replacement debt exceeds the combined qualified expenses of such debts, the amount of qualified indebtedness for each such debt shall be determined by multiplying the amount of qualified indebtedness for each such debt by a fraction, the numerator of which is the combined qualified expenses and the denominator of which is the combined qualified indebtedness.
(B) Definitions. For purposes of paragraph (n)(1)(v)(A) of this section -
(1) The term “combined qualified indebtedness” means the sum of the qualified indebtedness (determined without regard to paragraph (n)(1)(v) of this section) for the original debt and the replacement debt.
(2) The term “combined qualified expenses” means the amount of the proceeds of the original debt used to pay for qualified medical and educational expenses through the end of the current taxable year, reduced by any principal payments on the debt before the first day of the current taxable year, and increased by the amount, if any, of the proceeds of the replacement debt used to pay such expenses through the end of the current taxable year other than as part of the refinancing.
(C) Example.
(i) On August 11, 1987, C incurs an $8,000 debt secured by a principal residence. C uses (within the meaning of paragraph (n)(2)(i) of this section) $5,000 of the proceeds of the debt to pay for qualified educational expenses. C makes no principal payments on the debt. On July 1, 1988, C incurs a new debt in the amount of $8,000 secured by C's principal residence and uses all of the proceeds of the new debt to repay the original debt. Under paragraph (n)(2)(ii) of this section, $5,000 of the new debt is treated as being used to pay for qualified educational expenses. C makes no principal payments (other than the refinancing) during 1987 or 1988 on either debt and pays all accrued interest monthly. The average balance of each debt in 1988 is $4,000.
(ii) Under paragraph (n)(1)(iii) of this section, the amount of qualified indebtedness for 1988 with respect to the original debt is $4,000 (the lesser of its average balance ($4,000) and the amount of the debt used to pay for qualified medical and educational expenses ($5,000)). Similarly, the amount of qualified indebtedness for 1988 with respect to the replacement debt is also $4,000. Both debts, however, are subject in 1988 to the limitation in paragraph (n)(1)(v)(A) of this section. The combined qualified indebtedness, determined without regard to the limitation, is $8,000 ($4,000 of qualified indebtedness from each debt). The combined qualified expenses are $5,000 ($5,000 from the original debt and $0 from the replacement debt). The amount of qualified indebtedness from each debt must, therefore, be reduced by a fraction, the numerator of which is $5,000 (the combined qualified expenses) and the denominator of which is $8,000 (the combined qualified indebtedness). After application of the limitation, the amount of qualified indebtedness for the original debt is $2,500 ($4,000 × 5/8). Similarly, the amount of qualified indebtedness for the replacement debt is $2,500. Note that the total qualified indebtedness for both the original and the replacement debt is $5,000 ($2,500 + $2,500). Therefore, C is entitled to the same amount of qualified indebtedness as C would have been entitled to if C had not refinanced the debt.
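The proration in paragraph (n)(1)(v)(A) can likewise be sketched. This fragment is illustrative only (not regulatory text, with hypothetical names) and reproduces the $2,500 amounts in the example above.

```python
# Illustrative sketch only: preventing double counting in the year of a refinancing, (n)(1)(v).
def prorate(qualified_indebtedness_by_debt, combined_qualified_expenses):
    combined_qualified_indebtedness = sum(qualified_indebtedness_by_debt)
    if combined_qualified_indebtedness <= combined_qualified_expenses:
        return qualified_indebtedness_by_debt
    fraction = combined_qualified_expenses / combined_qualified_indebtedness
    return [q * fraction for q in qualified_indebtedness_by_debt]

print(prorate([4_000, 4_000], 5_000))  # [2500.0, 2500.0], totaling 5,000
```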
Special rule for principal payments in excess of qualified expenses. For purposes of paragraph (n)(1)(iii)(B), (n)(1)(v)(B)(2) and (n)(2)(ii) of this section, a principal payment is taken into account only to the extent that the payment, when added to all prior payments, does not exceed the amount used on or before the date of the payment to pay for qualified medical and educational expenses.
Debt used to pay for qualified medical or educational expenses -
In general. For purposes of this section, the proceeds of a debt are used to pay for qualified medical or educational expenses to the extent that -
(A) The taxpayer pays qualified medical or educational expenses within 90 days before or after the date that amounts are actually borrowed with respect to the debt, the proceeds of the debt are not directly allocable to another expense under § 1.163-8T(c)(3) (allocation of debt; proceeds not disbursed to borrower) and the proceeds of any other debt are not allocable to the medical or educational expenses under § 1.163-8T(c)(3), or
(B) The proceeds of the debt are otherwise allocated to such expenditures under § 1.163-8T.
Special rule for refinancings. For purposes of this section, the proceeds of a debt are used to pay for qualified medical and educational expenses to the extent that the proceeds of the debt are allocated under § 1.163-8T to the repayment of another debt (the “original debt”), but only to the extent of the amount of the original debt used to pay for qualified medical and educational expenses, reduced by any principal payments on such debt up to the time of the refinancing.
Other special rules. The following special rules apply for purposes of this section.
(A) Proceeds of a debt are used to pay for qualified medical or educational expenses as of the later of the taxable year in which such proceeds are borrowed or the taxable year in which such expenses are paid.
(B) The amount of debt which may be treated as being used to pay for qualified medical or educational expenses may not exceed the amount of such expenses.
(C) Proceeds of a debt may not be treated as being used to pay for qualified medical or educational expenses to the extent that:
(1) The proceeds have been repaid as of the time the expense is paid;
(2) The proceeds are actually borrowed before August 17, 1986; or
(3) The medical or educational expenses are paid before August 17, 1986.
Examples -
Example 1.
A pays a $5,000 qualified educational expense from a checking account that A maintains at Bank 1 on November 9, 1987. On January 1, 1988, A incurs a $20,000 debt that is secured by A's residence and places the proceeds of the debt in a savings account that A also maintains at Bank 1. A pays another $5,000 qualified educational expense on March 15 from a checking account that A maintains at Bank 2. Under paragraph (n)(2) of this section, the debt proceeds are used to pay for both educational expenses, regardless of other deposits to, or expenditures from, the accounts, because both expenditures are made within 90 days before or after the debt was incurred.
Example 2.
B pays a $5,000 qualified educational expense from a checking account on November 1, 1987. On November 30, 1987, B incurs a debt secured by B's residence, and the lender disburses the debt proceeds directly to a person who sells B a new car. Although the educational expense is paid within 90 days of the date the debt is incurred, the proceeds of the debt are not used to pay for the educational expense because the proceeds are directly allocable to the purchase of the new car under § 1.163-8T(c)(3).
Example 3.
On November 1, 1987, C borrows $5,000 from C's college. The proceeds of this debt are not disbursed to C, but rather are used to pay tuition fees for C's attendance at the college. On November 30, 1987, C incurs a second debt and secures the debt by C's residence. Although the $5,000 educational expense is paid within 90 days before the second debt is incurred, the proceeds of the second debt are not used to pay for the educational expense, because the proceeds of the first debt are directly allocable to the educational expense under § 1.163-8T(c)(3).
Example 4.
On January 1, 1988, D incurs a $20,000 debt secured by a qualified residence. D places the proceeds of the debt in a separate account (i.e., the proceeds of the debt are the only deposit in the account). D makes payments of $5,000 each for qualified educational expenses on September 1, 1988, September 1, 1989, September 1, 1990, and September 1, 1991. Because the debt proceeds are allocated to educational expenses as of the date the expenses are paid, under the rules of § 1.163-8T(c)(4), the following amounts of the debt proceeds are used to pay for qualified educational expenses as of the end of each year:
1988: $5,000
1989: $10,000
1990: $15,000
1991: $20,000
Example 5.
During 1987 E incurs a $10,000 debt secured by a principal residence. E uses (within the meaning of paragraph (n)(2)(i) of this section) all of the proceeds of the debt to pay for qualified educational expenses. On August 20, 1988, at which time the balance of the debt is $9,500, E incurs a new debt in the amount of $9,500 secured by E's principal residence and uses all of the proceeds of the new debt to repay the original debt. Under paragraph (n)(2)(ii) of this section, all of the proceeds of the new debt are used to pay for qualified educational expenses.
Qualified medical expenses. Qualified medical expenses are amounts that are paid for medical care (within the meaning of section 213(d)(1)(A) and (B)) for the taxpayer, the taxpayer's spouse, or a dependent of the taxpayer (within the meaning of section 152), and that are not compensated for by insurance or otherwise.
Qualified educational expenses. Qualified educational expenses are amounts that are paid for tuition, fees, books, supplies and equipment required for enrollment, attendance or courses of instruction at an educational organization described in section 170(b)(1)(A)(ii) and for any reasonable living expenses while away from home while in attendance at such an institution, for the taxpayer, the taxpayer's spouse or a dependent of the taxpayer (within the meaning of section 152) and that are not reimbursed by scholarship or otherwise.
Secured debt -
In general. For purposes of this section, the term “secured debt” means a debt that is on the security of any instrument (such as a mortgage, deed of trust, or land contract) -
(i) That makes the interest of the debtor in the qualified residence specific security for the payment of the debt,
(ii) Under which, in the event of default, the residence could be subjected to the satisfaction of the debt with the same priority as a mortgage or deed of trust in the jurisdiction in which the property is situated, and
(iii) That is recorded, where permitted, or is otherwise perfected in accordance with applicable State law.
A debt will not be considered to be secured by a qualified residence if it is secured solely by virtue of a lien upon the general assets of the taxpayer or by a security interest, such as a mechanic's lien or judgment lien, that attaches to the property without the consent of the debtor.
Special rule for debt in certain States. Debt will not fail to be treated as secured solely because, under an applicable State or local homestead law or other debtor protection law in effect on August 16, 1986, the security interest is ineffective or the enforceability of the security interest is restricted.
Times at which debt is treated as secured. For purposes of this section, a debt is treated as secured as of the date on which each of the requirements of paragraph (o)(1) of this section are satisfied, regardless of when amounts are actually borrowed with respect to the debt. For purposes of this paragraph (o)(3), if the instrument is recorded within a commercially reasonable time after the security interest is granted, the instrument will be treated as recorded on the date that the security interest was granted.
Partially secured debt -
In general. If the security interest is limited to a prescribed maximum amount or portion of the residence, and the average balance of the debt exceeds such amount or the value of such portion, such excess shall not be treated as secured debt for purposes of this section.
Example. T borrows $80,000 on January 1, 1991.
T secures the debt with a principal residence. The security in the residence for the debt, however, is limited to $20,000. T pays $8,000 in interest on the debt in 1991 and the average balance of the debt in that year is $80,000. Because the average balance of the debt exceeds the maximum amount of the security interest, such excess is not treated as secured debt. Therefore, for purposes of applying the limitation on qualified residence interest, the average balance of the secured debt is $20,000 (the maximum amount of the security interest) and the interest paid or accrued on the secured debt is $2,000 (the total interest paid on the debt multiplied by the ratio of the average balance of the secured debt ($20,000) and the average balance of the total debt ($80,000)).
Election to treat debt as not secured by a qualified residence -
In general. For purposes of this section, a taxpayer may elect to treat any debt that is secured by a qualified residence as not secured by the qualified residence. An election made under this paragraph shall be effective for the taxable year for which the election is made and for all subsequent taxable years unless revoked with the consent of the Commissioner.
Example. T owns a principal residence with a fair market value of $75,000 and an adjusted purchase price of $40,000. In 1988, debt A, the proceeds of which were used to purchase the residence, has an average balance of $15,000. The proceeds of debt B, which is secured by a second mortgage on the property, are allocable to T's trade or business under § 1.163-8T, and debt B has an average balance of $25,000. In 1988, T incurs debt C, which is also secured by T's principal residence and which has an average balance in 1988 of $5,000. In the absence of an election to treat debt B as unsecured, the applicable debt limit for debt C in 1988 under paragraph (e) of this section would be zero dollars ($40,000 − $15,000 − $25,000) and none of the interest paid on debt C would be qualified residence interest. If, however, T makes or has previously made an election pursuant to paragraph (o)(5)(i) of this section to treat debt B as not secured by the residence, the applicable debt limit for debt C would be $25,000 ($40,000 − $15,000), and all of the interest paid on debt C during the taxable year would be qualified residence interest. Since the proceeds of debt B are allocable to T's trade or business under § 1.163-8T, interest on debt B may be deductible under other sections of the Internal Revenue Code.
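The effect of the election in the example above is simple arithmetic. The sketch below is illustrative only; it is not part of the regulation and deliberately omits the grandfathered-amount and qualified-indebtedness adjustments that paragraph (e) can also apply.

```python
# Illustrative sketch only: simplified applicable debt limit for a later-secured debt.
def applicable_debt_limit(adjusted_purchase_price, prior_secured_avg_balances):
    return max(0, adjusted_purchase_price - sum(prior_secured_avg_balances))

print(applicable_debt_limit(40_000, [15_000, 25_000]))  # 0: no election, debt B is counted
print(applicable_debt_limit(40_000, [15_000]))          # 25,000: debt B elected as unsecured
```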
Allocation of debt secured by two qualified residences. [Reserved]
Definition of qualified residence -
In general. The term “qualified residence” means the taxpayer's principal residence (as defined in paragraph (p)(2) of this section), or the taxpayer's second residence (as defined in paragraph (p)(3) of this section).
Principal residence. The term “principal residence” means the taxpayer's principal residence within the meaning of section 1034. For purposes of this section, a taxpayer cannot have more than one principal residence at any one time.
Second residence -
In general. The term “second residence” means -
(A) A residence within the meaning of paragraph (p)(3)(ii) of this section,
(B) That the taxpayer uses as a residence within the meaning of paragraph (p)(3)(iii) of this section, and
(C) That the taxpayer elects to treat as a second residence pursuant to paragraph (p)(3)(iv) of this section.
A taxpayer cannot have more than one second residence at any time.
Definition of residence. Whether property is a residence shall be determined based on all the facts and circumstances, including the good faith of the taxpayer. A residence generally includes a house, condominium, mobile home, boat, or house trailer, that contains sleeping space and toilet and cooking facilities. A residence does not include personal property, such as furniture or a television, that, in accordance with the applicable local law, is not a fixture.
Use as a residence. If a residence is rented at any time during the taxable year, it is considered to be used as a residence only if the taxpayer uses it during the taxable year as a residence within the meaning of section 280A(d). If a residence is not rented at any time during the taxable year, it shall be considered to be used as a residence. For purposes of the preceding sentence, a residence will be deemed to be rented during any period that the taxpayer holds the residence out for rental or resale or repairs or renovates the residence with the intention of holding it out for rental or resale.
Election of second residence. A taxpayer may elect a different residence (other than the taxpayer's principal residence) to be the taxpayer's second residence for each taxable year. A taxpayer may not elect different residences as second residences at different times of the same taxable year except as provided below -
(A) If the taxpayer acquires a new residence during the taxable year, the taxpayer may elect the new residence as a taxpayer's second residence as of the date acquired;
(B) If property that was the taxpayer's principal residence during the taxable year ceases to qualify as the taxpayer's principal residence, the taxpayer may elect that property as the taxpayer's second residence as of the date that the property ceases to be the taxpayer's principal residence; or
(C) If property that was the taxpayer's second residence is sold during the taxable year or becomes the taxpayer's principal residence, the taxpayer may elect a new second residence as of such day.
Allocations between residence and other property -
In general. For purposes of this section, the adjusted purchase price and fair market value of property must be allocated between the portion of the property that is a qualified residence and the portion that is not a qualified residence. Neither the average balance of the secured debt nor the interest paid or accrued on secured debt is so allocated. Property that is not used for residential purposes does not qualify as a residence. For example, if a portion of the property is used as an office in the taxpayer's trade or business, that portion of the property does not qualify as a residence.
Special rule for rental of residence. If a taxpayer rents a portion of his or her principal or second residence to another person (a “tenant”), such portion may be treated as used by the taxpayer for residential purposes if, but only if -
(A) Such rented portion is used by the tenant primarily for residential purposes,
(B) The rented portion is not a self-contained residential unit containing separate sleeping space and toilet and cooking facilities, and
(C) The total number of tenants renting (directly or by sublease) the same or different portions of the residence at any time during the taxable year does not exceed two. For this purpose, if two persons (and the dependents, as defined by section 152, of either of them) share the same sleeping quarters, they shall be treated as a single tenant.
Examples.
Example 1.
D, a dentist, uses a room in D's principal residence as an office which qualifies under section 280A(c)(1)(B) as a portion of the dwelling unit used exclusively on a regular basis as a place of business for meeting with patients in the normal course of D's trade or business. D's adjusted purchase price of the property is $65,000; $10,000 of which is allocable under paragraph (p)(4)(i) of this section to the room used as an office. For purposes of this section, D's residence does not include the room used as an office. The adjusted purchase price of the residence is, accordingly, $55,000. Similarly, the fair market value of D's residence must be allocated between the office and the remainder of the property.
Example 2.
J rents out the basement of property that is otherwise used as J's principal residence. The basement is a self-contained residential unit, with sleeping space and toilet and cooking facilities. The adjusted purchase price of the property is $100,000; $15,000 of which is allocable under paragraph (p)(4)(i) of this section to the basement. For purposes of this section, J's residence does not include the basement and the adjusted purchase price of the residence is $85,000. Similarly, the fair market value of the residence must be allocated between the basement unit and the remainder of the property.
Residence under construction -
In general. A taxpayer may treat a residence under construction as a qualified residence for a period of up to 24 months, but only if the residence becomes a qualified residence, without regard to this paragraph (p)(5)(i), as of the time that the residence is ready for occupancy.
Example. X owns a residential lot suitable for the construction of a vacation home. On April 20, 1987, X obtains a mortgage secured by the lot and any property to be constructed on the lot. On August 9, 1987, X begins construction of a residence on the lot. The residence is ready for occupancy on November 9, 1989. The residence is used as a residence within the meaning of paragraph (p)(3)(iii) of this section during 1989 and X elects to treat the residence as his second residence for the period November 9, 1989, through December 31, 1989. Since the residence under construction is a qualified residence as of the first day that the residence is ready for occupancy (November 9, 1989), X may treat the residence as his second residence under paragraph (p)(5)(i) of this section for up to 24 months of the period during which the residence is under construction, commencing on or after the date that construction is begun (August 9, 1987). If X treats the residence under construction as X's second residence beginning on August 9, 1987, the residence under construction would cease to qualify as a qualified residence under paragraph (p)(5)(i) on August 8, 1989. The residence's status as a qualified residence for future periods would be determined without regard to paragraph (p)(5)(i) of this section.
Special rule for time-sharing arrangements. Property that is otherwise a qualified residence will not fail to qualify as such solely because the taxpayer's interest in or right to use the property is restricted by an arrangement whereby two or more persons with interests in the property agree to exercise control over the property for different periods during the taxable year. For purposes of determining the use of a residence under paragraph (p)(3)(iii) of this section, a taxpayer will not be considered to have used or rented a residence during any period that the taxpayer does not have the right to use the property or to receive any benefits from the rental of the property.
Special rules for tenant-stockholders in cooperative housing corporations -
In general. For purposes of this section, a residence includes stock in a cooperative housing corporation owned by a tenant-stockholder if the house or apartment which the tenant-stockholder is entitled to occupy by virtue of owning such stock is a residence within the meaning of paragraph (p)(3)(ii) of this section.
Special rule where stock may not be used to secure debt. For purposes of this section, if stock described in paragraph (q)(1) of this section may not be used to secure debt because of restrictions under local or State law or because of restrictions in the cooperative agreement (other than restrictions the principal purpose of which is to permit the tenant-stockholder to treat unsecured debt as secured debt under this paragraph (q)(2)), debt may be treated as secured by such stock to the extent that the proceeds of the debt are allocated to the purchase of the stock under the rules of § 1.163-8T. For purposes of this paragraph (q)(2), proceeds of debt incurred prior to January 1, 1987, may be treated as allocated to the purchase of such stock to the extent that the tenant-stockholder has properly and consistently deducted interest expense on such debt as home mortgage interest attributable to such stock on Schedule A of Form 1040 in determining his taxable income for taxable years beginning before January 1, 1987. For purposes of this paragraph (q)(2), amended returns filed after December 22, 1987, are disregarded.
Treatment of interest expense of the cooperative described in section 216(a)(2). For purposes of section 163(h) and § 1.163-9T (disallowance of deduction for personal interest) and section 163(d) (limitation on investment interest), any amount allowable as a deduction to a tenant-stockholder under section 216(a)(2) shall be treated as interest paid or accrued by the tenant-stockholder. If a tenant-stockholder's stock in a cooperative housing corporation is a qualified residence of the tenant-stockholder, any amount allowable as a deduction to the tenant-stockholder under section 216(a)(2) is qualified residence interest.
Special rule to prevent tax avoidance. If the amount treated as qualified residence interest under this section exceeds the amount which would be so treated if the tenant-stockholder were treated as directly owning his proportionate share of the assets and liabilities of the cooperative and one of the principal purposes of the cooperative arrangement is to permit the tenant-stockholder to increase the amount of qualified residence interest, the district director may determine that such excess is not qualified residence interest.
Other definitions. For purposes of this section, the terms “tenant-stockholder,” “cooperative housing corporation” and “proportionate share” shall have the meaning given by section 216 and the regulations thereunder.
Effective date. The provisions of this section are effective for taxable years beginning after December 31, 1986.
[T.D. 8168, 52 FR 48410, Dec. 22, 1987]
https://www.nature.com/articles/s41377-018-0044-7
# Single-shot real-time femtosecond imaging of temporal focusing
Light: Science & Applicationsvolume 7, Article number: 42 (2018) | Download Citation
## Abstract
While the concept of focusing usually applies to the spatial domain, it is equally applicable to the time domain. Real-time imaging of temporal focusing of single ultrashort laser pulses is of great significance in exploring the physics of the space–time duality and finding diverse applications. The drastic changes in the width and intensity of an ultrashort laser pulse during temporal focusing impose a requirement for femtosecond-level exposure to capture the instantaneous light patterns generated in this exquisite phenomenon. Thus far, established ultrafast imaging techniques either struggle to reach the desired exposure time or require repeatable measurements. We have developed single-shot 10-trillion-frame-per-second compressed ultrafast photography (T-CUP), which passively captures dynamic events with 100-fs frame intervals in a single camera exposure. The synergy between compressed sensing and the Radon transformation empowers T-CUP to significantly reduce the number of projections needed for reconstructing a high-quality three-dimensional spatiotemporal datacube. As the only currently available real-time, passive imaging modality with a femtosecond exposure time, T-CUP was used to record the first-ever movie of non-repeatable temporal focusing of a single ultrashort laser pulse in a dynamic scattering medium. T-CUP’s unprecedented ability to clearly reveal the complex evolution in the shape, intensity, and width of a temporally focused pulse in a single measurement paves the way for single-shot characterization of ultrashort pulses, experimental investigation of nonlinear light-matter interactions, and real-time wavefront engineering for deep-tissue light focusing.
## Introduction
The space–time duality in optics originates from the mathematical equivalence between paraxial diffraction and dispersive propagation1. Remarkably, this duality enables one to translate spatial-domain optical techniques to the temporal domain, which has fostered the development of powerful temporal imaging approaches, such as temporal microscopy, to characterize optical signals2,3. Among the many temporal imaging phenomena, temporal focusing, as a time-domain counterpart of spatial focusing, describes an exquisite optical phenomenon—temporal compression of the duration of a chirped laser pulse to the shortest time possible at a designated location4,5,6. Temporal focusing has been leveraged in the temporal 4f processor7 and the dispersive Fourier transformer8 for analyzing optical waveforms with unprecedented bandwidths. Akin to spatial focusing—confining photons laterally—temporal focusing enables photon confinement in the longitudinal direction. This salient feature has powered depth-sectioning wide-field nonlinear microscopy for neuroimaging9. Recently, temporal focusing has been achieved through static scattering media10, which has sparked interest in deep biomedical imaging. In addition, the strong intensity localization has made it an attractive tool for material processing11, which has led to extensive studies of elusive physics mechanisms of the strong-field interaction with matter12. Considering the stochastic (e.g., time reversal of dynamic speckle patterns to produce temporal focusing in live biological tissue13) and non-repeatable (e.g., micromachining using temporal focusing in glass14) nature of these transient phenomena, visualizing temporal focusing in real time (i.e., at its actual time of occurrence) becomes a prerequisite for investigating and further utilizing them. In addition, since the width and intensity of the laser pulse experiences drastic changes during temporal focusing, a femtosecond-level exposure time is required to clearly resolve the evolving instantaneous spatiotemporal details of this phenomenon. Moreover, the nanometer-to-micrometer spatial scales of these transient events demand ultrafast imaging for blur-free observation [e.g., for imaging a light-speed event, an imaging speed of 1 trillion frames-per-second (Tfps) is required for a spatial resolution of 300 µm]15. Finally, since these events are often self-luminescent, a passive (i.e., receive-only) detector is highly desired for direct recording.
Existing ultrafast imaging techniques, however, are incapable of providing real-time, femtosecond, passive imaging capability. The current mainstream technique used in ultrafast imaging is based on pump–probe measurements16,17. Although having achieved femtosecond temporal resolution and passive detection, these multiple-shot imaging techniques depend on precise repetition of the targeted ultrafast event during temporal or spatial scanning. Hence, in cases where temporal focusing must be recorded in a single measurement, these imaging techniques are inapplicable.
Recently, a number of single-shot ultrafast imaging techniques18,19,20,21,22 have been developed. Among them, active-illumination-based approaches have achieved frame rates at the Tfps level20,21. However, such approaches are incapable of imaging luminescent transient events, so they are precluded from direct imaging of evolving light patterns in temporal focusing. The requirement of active illumination was recently eliminated by a new single-shot ultrafast imaging modality, termed compressed ultrafast photography (CUP)23,24,25. Synergizing compressed sensing and streak imaging, CUP works by first compressively recording a three dimensional (3D, i.e., x,y,t) scene into a two-dimensional (2D) snapshot and then computationally recovering it by solving an optimization problem. The resultant CUP system can passively receive photons scattered or emitted from dynamic scenes at frame rates of up to 100 billion fps. CUP has been applied to a number of applications, including fluorescence lifetime mapping23, real-time imaging of a propagating photonic Mach cone24, and time-of-flight volumetric imaging25. However, in these previous studies, the frame interval (defined as the reciprocal of the frame rate) was 10 ps, which has hindered the use of CUP for imaging spatiotemporal details of temporal focusing in the femtosecond regime.
## Results
### Principle and system of T-CUP
To enable real-time, ultrafast, passive imaging of temporal focusing, here, we have developed single-shot trillion-frame-per-second compressed ultrafast photography (T-CUP), which can image non-repeatable transient events at a frame rate of up to 10 Tfps in a receive-only fashion. The operation of T-CUP consists of data acquisition and image reconstruction (Fig. 1). For the data acquisition, the intensity distribution of a 3D spatiotemporal scene, I[m,n,k], is first imaged with a beam splitter to form two images. The first image is directly recorded by a 2D imaging sensor via spatiotemporal integration (defined as spatial integration over each pixel and temporal integration over the entire exposure time). This process, which forms a time-unsheared view with an optical energy distribution of Eu[m,n], can be expressed by
$$E_{\rm{u}}\left[ {m,n} \right] = \eta \mathop {\sum }\limits_k \left( {h_{\rm{u}} \ast I} \right)\left[ {m,n,k} \right]$$
(1)
where η is a constant, hu represents spatial low-pass filtering imposed by optics in the time-unsheared view, and * denotes the discrete 2D spatial convolution operation. Equation 1 can be regarded as a single-angle Radon transformation operated on I[m, n, k] (detailed in Supplementary Note 1).
The second image is spatially encoded by a pseudo-random binary pattern. Then the spatially encoded scene is relayed to a femtosecond shearing unit, where temporal frames are sheared on one spatial axis. Finally, the spatially encoded, temporally sheared frames are recorded by another 2D imaging sensor via spatiotemporal integration to form a time-sheared view with an optical energy distribution of Es[m,n]. This process can be described by
$$E_{\rm{s}}\left[ {m,n} \right] = \eta \mathop {\sum }\limits_k \left( {h_{\rm{s}} \ast I_{\rm{C}}} \right)\left[ {f_{\rm{D}},g_{\rm{D}} + k,k} \right]$$
(2)
where hs represents spatial low-pass filtering in the time-sheared view. IC[fD,gD+k,k] is the spatially encoded scene. fD and gD are the discrete coordinates transformed from m and n, according to the distortion in the time-sheared view24. Equation 2 can be regarded as the Radon transformation of the spatiotemporal datacube from an oblique angle determined by the shearing speed of the streak camera and pixel size of the sensor (detailed in Supplementary Note 1).
Combining the two views, the data acquisition of T-CUP can be expressed by a linear equation,
$$\left[ {E_{\rm{u}},\alpha E_{\rm{s}}} \right]^T = \left[ {{\boldsymbol{O}}_{\rm{u}},\alpha {\boldsymbol{O}}_{\rm{s}}} \right]^TI$$
(3)
where α is a scalar factor introduced to balance the energy ratio between the two views during measurement, and Ou and Os are the measurement operators for the two views (see Materials and methods and Supplementary Fig. S1). Thus T-CUP records a 3D dynamic scene into two 2D projections in a single exposure.
Image reconstruction of the scene can be done by solving the minimization problem of $$\min _I\left\{ {\frac{1}{2}\Vert \left[ {E_{\rm{u}},\alpha E_{\rm{s}}} \right]^T - \left[ {{\boldsymbol{O}}_{\rm{u}},\alpha {\boldsymbol{O}}_{\rm{s}}} \right]^TI \Vert_2^2 + \rho {\it{\Phi }}\left( I \right)} \right\}$$, where $$\left\| \cdot \right\|$$ denotes the l2 norm, Ф(I) is a regularization function that promotes sparsity in the dynamic scene, and ρ is the regularization parameter (detailed in Supplementary Notes 2). The solution to this minimization problem can be stably and accurately recovered, even with a highly compressed measurement26.
The integration of compressed sensing into the Radon transformation drastically reduces the required number of projections to two. The time-unsheared view, in which the projection is parallel to the time axis, losslessly retains spatial information while discarding all temporal information. The time-sheared view, on the other hand, preserves temporal information by projecting the spatiotemporal datacube from an oblique angle. As a result, these two views, as an optimal combination, enable one to record an optimum amount of information with the minimum number of measurements. However, a direct inversion of the Radon transform is not possible in this case due to the small number of projections and the fact that the linear system (Eq. 3) that needs to be inverted is under-determined. To solve this problem, compressed sensing is used. Leveraging the sparsity of the scene, as well as the random encoding in the time-sheared view as prior information, the compressed-sensing-based reconstruction algorithm uses the regularization-function-guided search to find a unique solution. Our simulation has demonstrated that this compressed-sensing-augmented two-view projection can retrieve a dynamic scene with a high reconstruction quality (Supplementary Fig. S2 and detailed in Supplementary Note 3).
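For concreteness, the two-view data-acquisition model of Eqs. 1-3 can be sketched in a few lines of code. The fragment below is only an illustrative sketch, not the authors' implementation: it omits the spatial low-pass filters hu and hs, the coordinate distortion of the time-sheared view, and detector noise, and it assumes an idealized shear of exactly one pixel per frame; all array and function names are ours.

```python
import numpy as np

def tcup_forward(I, C, alpha=1.0):
    """Simplified T-CUP forward model.
    I: dynamic scene, shape (Nt, Ny, Nx); C: binary encoding mask, shape (Ny, Nx)."""
    Nt, Ny, Nx = I.shape
    # Time-unsheared view (Eq. 1): spatiotemporal integration only.
    E_u = I.sum(axis=0)
    # Time-sheared view (Eq. 2): encode each frame, shear it by k pixels along y, then integrate.
    E_s = np.zeros((Ny + Nt, Nx))
    for k in range(Nt):
        E_s[k:k + Ny, :] += C * I[k]
    return E_u, alpha * E_s
```

Reconstruction then amounts to inverting this under-determined linear operator while enforcing sparsity, e.g., with a regularized least-squares solver such as a two-step iterative shrinkage/thresholding (TwIST)-type algorithm.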
In practice, T-CUP is embodied in an imaging system (Fig. 2 and detailed in Materials and methods) that uses several key devices to realize specific operations. Specifically, a charge-coupled device (CCD) camera performs spatiotemporal integration, a digital micromirror device (DMD) performs spatial encoding, and the time-varying voltage applied to the sweep electrodes in a femtosecond streak camera accomplishes femtosecond shearing. In addition, a compressed-sensing-based two-view reconstruction algorithm recovers the dynamic scene. The T-CUP system can capture a dynamic scene with spatial dimensions of 450 × 150 pixels and a sequence depth (i.e., number of frames per movie) of 350 frames in a single camera exposure. The frame rate of the reconstructed video is determined by v/d, where v is the temporal shearing velocity of the streak camera, and d is the pixel size of the internal CCD along the temporal shearing direction. By varying v, the frame rate can be widely adjusted from 0.5 to 10 Tfps. Thus, with single-shot data capture, a tunable ultrahigh frame rate, and an appreciable sequence depth, the T-CUP system is well suited for imaging single-event ultrafast transient phenomena occurring over a wide range of timescales (the characterization of the spatial and temporal resolutions of T-CUP is detailed in Supplementary Fig. S3 and Supplementary Note 4). The T-CUP temporal resolutions for 0.5, 1, 2.5, and 10 Tfps frame rates have been quantified to be 6.34, 4.53, 1.81, and 0.58 ps, respectively.
### Imaging temporal focusing of a single femtosecond laser pulse using the T-CUP system
A typical temporal focusing setup consists of a diffraction grating and a 4f imaging system (Fig. 3a). The incident laser pulse is first spatially dispersed by the grating and then collected by a collimation lens. Finally, a focusing lens recombines all the frequencies at the focal plane of the lens (Supplementary Fig. S4 and detailed in Supplementary Note 5). Temporal focusing has two major features: first, the shortest pulse width is at the focal plane of the focusing lens4; second, the angular dispersion of the grating creates a pulse front tilt so that the recombined pulse scans across the focal plane5. The pulse front tilt angle can be expressed by $$\gamma = \tan ^{ - 1}\left( {\lambda _{\rm{c}}/Md_{\rm{g}}} \right)$$ (refs. 27,28), where M is the overall magnification ratio, λc is the central wavelength of the ultrashort pulse, and dg is the grating period. The femtosecond pulse that undergoes temporal focusing presents a complex spatiotemporal profile (Supplementary Fig. S4) that can be revealed only in the captured instantaneous light patterns. Even a picosecond-level exposure time would erase these spatiotemporal details via significant temporal blurring. This speed requirement excludes previous CUP systems23,24,25 from visualizing this ultrafast optical phenomenon. In contrast, T-CUP can achieve unprecedented real-time visualization with a single camera exposure.
We imaged the temporal focusing from both the front and the side (Fig. 3a) at 2.5 Tfps. A collimated femtosecond laser pulse (800 nm central wavelength, 50 fs pulse duration, 1 × 3 mm2 spatial beam size) was used to illuminate a 1200 line mm−1 grating. The 4f imaging system had a magnification ratio of M=1/4. In theory, the tilt angle for the pulse front at the temporal focusing plane was 75.4°.
For front-view detection, T-CUP captured the impingement of the tilted laser pulse front sweeping along the y axis of the temporal focusing plane (Fig. 3b and Supplementary Movie S1). The pulse swept a distance of ~0.75 mm over 10 ps, corresponding to a pulse front tilt of ~76°, which closely matches the theoretical prediction.
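As a consistency check (ours, not part of the original text), both tilt values follow from the quoted parameters. The period of a 1200 line mm−1 grating is dg ≈ 833 nm, and light travels cΔt ≈ 3 mm during the Δt = 10 ps sweep over Δy ≈ 0.75 mm:

$$\gamma _{{\mathrm{theory}}} = \tan ^{ - 1}\left( {\frac{{\lambda _{\mathrm{c}}}}{{Md_{\mathrm{g}}}}} \right) = \tan ^{ - 1}\left( {\frac{{800\,{\mathrm{nm}}}}{{(1/4) \times 833\,{\mathrm{nm}}}}} \right) \approx 75.4^\circ$$

$$\gamma _{{\mathrm{measured}}} \approx \tan ^{ - 1}\left( {\frac{{c\Delta t}}{{\Delta y}}} \right) = \tan ^{ - 1}\left( {\frac{{3\,{\mathrm{mm}}}}{{0.75\,{\mathrm{mm}}}}} \right) \approx 76^\circ$$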
For side-view detection, weak water vapor was spread as a dynamic scattering medium. T-CUP revealed the full evolution of the pulse propagation across the temporal focusing plane (Fig. 3c, d, Supplementary Fig. S5, and Supplementary Movies S1 and S2): a tilted pulse propagates toward the right. As it approaches the temporal focusing plane, the pulse width continuously reduces, manifesting as an increasing intensity. At the temporal focusing plane, the focus of the pulse sweeps along the y axis at its peak intensity. The evolution after the temporal focusing plane mirrors the preceding process: the pulse width is elongated, and the intensity is continuously weakened. We then quantitatively analyzed the pulse compression effect of temporal focusing. Figure 3e shows the temporal profiles of the laser pulse on the z axis near the temporal focusing plane, demonstrating the sharp temporal focusing of the laser pulse. Figure 3f shows the pulse duration along the z axis near the temporal focusing plane. The full width at half maximum of the temporal profile is reduced from 10.4 ps to 1.9 ps—compressed by a factor of 5.5. It is notable that the measured pulse width is wider than the incident pulse, which is likely due to dispersion by optical elements and scattering, as well as to the temporal broadening caused by the finite temporal resolution of the T-CUP system.
T-CUP is currently the only technology capable of observing temporal focusing in real time. First, the entire process of the imaged temporal focusing event occurred in ~10 ps, which equals the previous state-of-the-art exposure time for a single frame23; hence, it could not be resolved previously. In contrast, T-CUP, using a frame interval of 0.4 ps, clearly resolved the intensity fluctuation, width compression, and structural change of the temporal focusing process. Second, the dynamic scattering induced by the water vapor makes the scattered temporal focusing pulse non-repeatable. In different measurements, the reconstructed results show a difference in spatial shape, compression ratio, and intensity fluctuation. To demonstrate the non-repeatability, another dataset for the sideways detection of temporal focusing is shown in Supplementary Fig. S6.
Although the ultrashort laser pulse was dispersed and converged in space by the 4f imaging system, it is worth noting that the effect of spatial focusing is limited. As the pulse approached the temporal focusing plane, the beam size fluctuated with a normalized standard deviation of 5.6% over a duration of 4.8 ps (Fig. 3d), while the peak on-axis intensity of the pulse increased approximately five-fold (Fig. 3e). Thus the intensity increase is caused dominantly by the temporal focusing.
### Imaging light-speed phenomena in real time in both the visible and near-infrared spectral ranges
Four fundamental optical phenomena, namely, a beam sweeping across a surface, spatial focusing, splitting, and reflection, were imaged by the T-CUP system in real time (Fig. 4). In the beam sweeping experiment, a collimated near-infrared ultrashort laser pulse (800 nm wavelength, 50 fs pulse duration) obliquely impinged on a scattering bar pattern. The T-CUP system was placed perpendicular to the target to collect the scattered photons (Fig. 4a). Imaging at 10 Tfps, the T-CUP system clearly reveals how the pulse front of the ultrashort laser pulse swept across the bar pattern (Fig. 4b and Supplementary Movie S3).
In addition, T-CUP enables real-time video recording of spatial focusing of a single picosecond pulse. This phenomenon has been previously documented by phase-contrast microscopy29 and interferometry30 using conventional pump–probe schemes. In contrast, here, T-CUP was used to capture the scattered light intensity in a single measurement. In the setup, a single laser pulse (532 nm wavelength, 7 ps pulse width) was focused by a 10× objective lens into a weakly scattering aqueous suspension. T-CUP imaged this phenomenon at 2.5 Tfps (Fig. 4c and Supplementary Movie S4). We analyzed the time course of the light intensity at the spatial focus. After normalization, the intensity profile (Fig. 4d) was fitted by a Gaussian function,$$\hat I\left( t \right) = {\rm{exp}}\left[ { - 2\left( {t - t_0} \right)^2/\tau _{\rm{g}}^2} \right]$$, where t0 = 24.76 ps, and τg = 4.94 ps. The fitted result yields a 1/e width of 6.99 ps, closely matching the experimental specifications.
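As a quick check (ours), the quoted 1/e width follows directly from the fitted Gaussian: setting the normalized intensity to e−1 gives

$$2\left( {t - t_0} \right)^2/\tau _{\mathrm{g}}^2 = 1 \Rightarrow \Delta t_{1/e} = \sqrt 2 \,\tau _{\mathrm{g}} \approx \sqrt 2 \times 4.94\,{\mathrm{ps}} \approx 6.99\,{\mathrm{ps}}$$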
Imaging at 2.5 Tfps, T-CUP also revealed the spatiotemporal details of the beam splitting process of a single laser pulse (Fig. 4e and Supplementary Movie S5). Impinging on a beam splitter, part of the laser pulse was reflected immediately, while the transmitted portion propagated into the beam splitter and appeared on the other side of the beam splitter after a finite time. To quantitatively analyze the time course of the incident and transmitted pulses, we calculated the average light intensities in the two dashed boxes on both sides of the beam splitter (Fig. 4f). The measured temporal separation between the incident and transmitted pulses was 9.6 ps. Given the 2-mm thickness of this float glass beam splitter (refractive index n=1.52 at 532 nm) and the incident angle of ~25°, in theory, the light pulse needs approximately 10 ps to pass through the beam splitter. Thus our measured result agrees well with the theoretical value. It is also noteworthy that the time latency for the reflected and transmitted pulse (9.6 ps) is beyond the imaging capability of previous techniques23. T-CUP’s unprecedented frame rate reveals for the first time the spatiotemporal details of this transient event.
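The ~10 ps figure can be reproduced with a minimal back-of-the-envelope sketch using only the thickness, refractive index, and incident angle quoted above; it neglects the group-index correction and the exact placement of the two analysis boxes, so it is a rough estimate rather than the exact model used for the comparison.

```python
import math

# Rough transit time of the pulse through the tilted beam splitter,
# using only the quantities quoted in the text.
n = 1.52                          # refractive index of float glass at 532 nm
d = 2e-3                          # slab thickness in metres
theta_i = math.radians(25.0)      # incident angle
c = 2.998e8                       # vacuum speed of light in m/s

theta_t = math.asin(math.sin(theta_i) / n)   # refraction angle from Snell's law
path = d / math.cos(theta_t)                 # geometric path length inside the glass
transit = n * path / c                       # optical transit time

print(f"transit time ~ {transit * 1e12:.1f} ps")   # ~10.6 ps, i.e. roughly 10 ps
```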
Finally, imaging at 1 Tfps, T-CUP was used to capture the reflection of a laser pulse by two mirrors over a sufficiently long time window (Supplementary Movie S6). In Fig. 4g, the first frame shows that the laser pulse has just entered the field of view (FOV). Subsequent frames show the propagating pulse being reflected by the two mirrors before finally traveling out of the FOV. It is noted that an inhomogeneous distribution of scatterers in the aqueous suspension led to increased scattered light intensity in the frames after 74 ps. For this reason, the pulse visually appears to be larger. However, the pulse width, when quantitatively measured via the cross-sectional full width at half maximum, was comparable to that in the rest of the frames.
## Discussion
### Current limitations
The performance of the streak camera, and not the principle of the technique, hinders further increases in the frame rate, as well as improvements in other important characteristics, such as the spatial resolution and spectral range. The limited performance of the streak camera also constrains the choice of a single-sheared view in the system design (detailed in Supplementary Note 6). Finally, the imaging duty cycle for T-CUP is currently limited to 5 × 10–9 to 10–7 due to the modest sweep frequency (100 fps) and the size of the internal sensor of the streak camera. Precise synchronization is therefore necessary to capture transient events within the time window. A new streak tube design and customized optical components would enable future implementations of a lossless-encoding scheme24, which is anticipated to improve the spatial and temporal resolutions in reconstructed images. In addition, the implementation of dual sweep-electrode pairs31 and an ultra-large-format camera32 is expected to largely increase the duty cycle, possibly even enabling continuous streaming.
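For orientation, a simple back-calculation (assuming the duty cycle is the recorded time window multiplied by the 100 sweeps acquired per second) suggests that the quoted range corresponds to recorded windows of roughly 50 ps to 1 ns per movie:

```python
# Back-calculated recorded time window implied by the quoted duty cycle,
# assuming duty cycle = (time window per sweep) x (sweeps per second).
sweep_rate = 100.0                       # sweeps per second, as quoted above
for duty_cycle in (5e-9, 1e-7):
    window_s = duty_cycle / sweep_rate   # seconds captured per sweep
    print(f"duty cycle {duty_cycle:g} -> window {window_s * 1e12:.0f} ps")
# duty cycle 5e-09 -> window 50 ps
# duty cycle 1e-07 -> window 1000 ps
```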
### Application potential
Single-shot real-time imaging of temporal focusing is expected to immediately benefit the study of nonlinear light–matter interactions. For example, in femtosecond laser 3D micromachining using transparent media (e.g., glass), it was found that temporal focusing can induce an anisotropic fabrication quality33 depending on the translation direction of the sample. Thus far, the underlying mechanism for this nonreciprocal writing effect remains elusive. Recent theoretical investigations have indicated a close relation to the plasma dynamics controlled by the tilted pulse front of the temporal focusing pulses34. The T-CUP system can substitute for the low-speed cameras that are currently employed in imaging the laser–glass interaction35. Specifically, by changing the current zoom imaging system to a 20×, high numerical aperture (NA) objective lens, the microscopic T-CUP system will provide a 10-Tfps frame rate, a 1-µm spatial resolution, and 150-µm FOV at the sample, which is sufficient to simultaneously capture the evolution of a temporally focused pulse and the induced plasma (using a 10×, 0.2-NA objective lens as the focusing lens in Fig. 3a)10. The measured spatiotemporal profiles will be analyzed using the established models36 to investigate how the pulse front tilt and laser pulse energy affect the transient structure, dispersion properties, and spatial density of the induced plasma. The advantages of single-shot and ultrafast imaging will also pave the way for studying the plasma dynamics generated at microscopically heterogeneous locations (e.g., impurities and defects) in these materials.
Single-shot real-time imaging of temporal focusing by T-CUP also opens up new routes for spatiotemporal characterization of optical waveforms. Currently, temporal microscopes are often deployed as ultrafast all-optical oscilloscopes2 to passively analyze optical waveforms with few picosecond temporal resolution37 at a specific spatial point. The resolution quantification and imaging experiments in our work have demonstrated that T-CUP, while achieving a comparable temporal resolution, outperforms these oscilloscopes by adding a passive two-spatial-dimensional imaging ability. Thus the large parallel characterization of T-CUP could enable simultaneous ultrafast optical signal processing at multiple wavelengths for telecommunication38.
In metrology, a spatiotemporal microscope developed from T-CUP could be well suited for characterizing spatiotemporally complex ultrashort pulses39. In many time-resolved high-field laser experiments, the laser systems employed usually have low repetition rates. Therefore, single-shot characterization powered by T-CUP is especially attractive for fast and precise alignment of the setup40 and for imaging samples that are difficult to deliver repeatedly41.
In biomedicine, T-CUP holds promise for in vivo tissue imaging. Living biological tissue is an example of dynamic scattering media with a millisecond-level speckle decorrelation time42. Thus far, owing to the limited speed of wavefront characterization in existing methods, spatiotemporal focusing beyond the optical diffusion limit has only been realized with static scattering media43,44. In contrast, T-CUP demonstrates single-shot femtosecond imaging of transient light patterns in a dynamic scattering medium (Fig. 3c). By integrating T-CUP with interferometry, it is possible to examine the scattered electric field of a broadband beam, which would assist in the design of phase conjugation of spatiotemporal focusing in living biological tissue. Therefore, our work, as an important step in imaging instrumentation, will open up new routes toward deep-tissue wide-field two-photon microscopy, photodynamic therapy, and optogenetics.
### Summary
By improving the frame rate by two orders of magnitude compared with the previous state-of-the-art23, T-CUP demonstrated that the everlasting pursuit of a higher frame rate is far from over. As the only detection solution thus far available for passively probing dynamic self-luminescent events at femtosecond timescales in real time, T-CUP was used to reveal spatiotemporal details of transient scattering events that were inaccessible using previous systems. The compressed-sensing-augmented projection extended the application of the Radon transformation to probing spatiotemporal datacubes. This general scheme can potentially be implemented in other imaging modalities, such as tomographic phase microscopy45 and time-of-flight volumography46. T-CUP’s unprecedented ability for real-time, wide-field, femtosecond-level imaging from the visible to the near-infrared will pave the way for future microscopic investigations of time-dependent optical and electronic properties of novel materials under transient out-of-equilibrium conditions47. With continuous improvement in streak camera technologies48, future development may enable a 1 quadrillion fps (10^15 fps) frame rate with a wider imaging spectral range, allowing direct visualization and exploration of irreversible chemical reactions49 and nanostructure dynamics50.
## Materials and methods
### Summary of the principle of operation of T-CUP
We first derive the expression for the data acquisition of T-CUP in a continuous model. For data acquisition, T-CUP records the intensity distribution of the dynamic scene, I(x, y, t), in two projected views (Supplementary Fig. S1 and detailed in Supplementary Note 1). The first view, termed the time-unsheared view, directly records the dynamic scene with an external CCD camera (Fig. 2). This recording process is expressed as
$$E_{\rm{u}} = {\boldsymbol{TF}}_{\mathbf{u}}I\left( {x,y,t} \right)$$
(4)
where Eu denotes the measured optical energy distribution on the external CCD camera, the linear operator Fu represents the spatial low-pass filtering in the time-unsheared view, and T represents the spatiotemporal integration.
The second view, termed the time-sheared view, records the projected view of the spatiotemporal scene from an oblique angle (Supplementary Fig. S1). Specifically, the dynamic scene is first spatially encoded by a pseudo-random binary mask, followed by femtosecond shearing along one spatial axis by a time-varying voltage applied to a pair of sweep electrodes before the scene is finally spatiotemporally integrated on an internal CCD camera in the streak camera. Mathematically, the optical energy measured by the internal CCD camera, Es, is related to I(x, y, t) by
$$E_{\rm{s}} = {\boldsymbol{TS}}_{\mathbf{f}}{\boldsymbol{DF}}_{\mathbf{s}}{\boldsymbol{C}}I\left( {x,y,t} \right)$$
(5)
where the linear operator C represents spatial encoding, Fs represents spatial low-pass filtering in the time-sheared view, D represents image distortion in the time-sheared view with respect to the time-unsheared view, and Sf represents femtosecond shearing.
With the two-view projection, the data acquisition of T-CUP can be described as
$$E = {\boldsymbol{O}}I$$
(6)
where E = [Eu, αEs]^T and O = [TFu, αTSfDFsC]^T are the measurement and the linear operators in their concatenated forms, respectively. The scalar factor α is the energy calibration ratio between the external CCD camera and the streak camera.
For image reconstruction, we discretized Eqs. 4–6 to obtain Eqs. 1–3 (detailed in Supplementary Note 1). Given the known measurement matrix and leveraging the intrinsic sparsity in the dynamic scene, we estimate the datacube of the transient scene by solving the inverse problem of Eq. 3. In practice, a two-view reconstruction method, aided by the two-step iterative shrinkage/thresholding algorithm, is implemented to recover the image (detailed in Supplementary Note 2). The T-CUP system greatly improved the reconstruction quality compared with a previously reported CUP system23 (illustrated in Supplementary Fig. S2 and detailed in Supplementary Note 3).
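To make the operator notation concrete, the following is a minimal illustrative sketch of the time-sheared measurement in Eq. 5, keeping only the spatial encoding C, the shearing Sf, and the temporal integration T, and omitting the distortion D and low-pass filtering Fs; the array sizes, mask, and function name are arbitrary placeholders for illustration only:

```python
import numpy as np

def time_sheared_measurement(scene, mask):
    """Toy forward model: scene is an (nt, ny, nx) datacube I(x, y, t);
    mask is an (ny, nx) pseudo-random binary code."""
    nt, ny, nx = scene.shape
    detector = np.zeros((ny + nt, nx))        # sheared axis grows with the frame count
    for t in range(nt):
        encoded = scene[t] * mask             # operator C: spatial encoding
        detector[t:t + ny, :] += encoded      # operators Sf and T: shear frame t by t pixels, then integrate
    return detector

rng = np.random.default_rng(0)
scene = rng.random((8, 16, 16))               # 8 frames of a 16 x 16 toy scene
mask = (rng.random((16, 16)) > 0.5).astype(float)
E_s = time_sheared_measurement(scene, mask)
print(E_s.shape)                              # (24, 16)
```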
### System configuration
The T-CUP system configuration is shown in Fig. 2. The dynamic scene is first imaged by a zoom imaging system built in-house, which supports tunable demagnification ratios of 2–5×. Following the intermediate image, a 50:50 beam splitter sends the incident light in two directions. The reflected beam is recorded by an external CCD camera (Point Grey, GS3-U3-28S4M-C). The transmitted beam is passed onto a DMD (Texas Instruments, LightCrafter 3000) by a 4f imaging system with a unit magnification ratio. A pseudo-random binary pattern is displayed on the DMD to encode the input image. As a binary-amplitude spatial light modulator, the DMD consists of hundreds of thousands of micromirrors; each mirror can be tilted to either +12° (as “on” pixels) or –12° (as “off” pixels). The light reflected by the “on” pixels is re-collected by the same 4f imaging system. After being reflected by the beam splitter, the spatially encoded dynamic scene is projected onto the entrance port of a femtosecond streak camera (Hamamatsu, C6138). To enable time-resolved measurement in two spatial dimensions, the entrance port is opened to its full width (3 mm). Inside the streak camera, the spatially encoded dynamic scene is first relayed to a photocathode that generates a number of photoelectrons proportional to the light intensity distribution. To temporally shear the spatially encoded dynamic scene, a sweep voltage deflects the photoelectrons to different vertical positions according to their time of flight. The deflected photoelectrons are multiplied by a micro-channel plate and then converted back into light by a phosphor screen. Relayed by output optics, the temporally sheared, spatially encoded dynamic scene is captured by an internal CCD camera (Hamamatsu, ORCA-R2) with 2 × 2 binning (672 × 512 binned pixels, 12.9 × 12.9 μm2 binned pixel size). With two-view recording, the light throughput for the T-CUP system is 62.5%.
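One plausible accounting of the quoted 62.5% throughput, assuming roughly half of the DMD micromirrors are “on” for the pseudo-random code: the external CCD receives the 50% reflected at the beam splitter, while the time-sheared arm transmits through the beam splitter, loses about half at the DMD, and loses half again on the return pass through the beam splitter.

```python
# Rough bookkeeping of the 62.5% light throughput, assuming ~50% "on" DMD pixels.
external_view = 0.5                  # reflected at the 50:50 beam splitter to the external CCD
sheared_view = 0.5 * 0.5 * 0.5       # transmit BS, ~50% "on" DMD pixels, reflect at BS again
print(external_view + sheared_view)  # 0.625
```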
Accepted article preview online: 27 June 2018
## References
1. Kolner, B. H. Space-time duality and the theory of temporal imaging. IEEE J. Quant. Electron. 30, 1951–1963 (1994).
2. Foster, M. A. et al. Silicon-chip-based ultrafast optical oscilloscope. Nature 456, 81–84 (2008).
3. Patera, G., Shi, J., Horoshko, D. B. & Kolobov, M. I. Quantum temporal imaging: application of a time lens to quantum optics. J. Opt. 19, 054001 (2017).
4. Zhu, G. H., van Howe, J., Durst, M., Zipfel, W. & Xu, C. Simultaneous spatial and temporal focusing of femtosecond pulses. Opt. Express 13, 2153–2159 (2005).
5. Oron, D., Tal, E. & Silberberg, Y. Scanningless depth-resolved microscopy. Opt. Express 13, 1468–1476 (2005).
6. Papagiakoumou, E. et al. Functional patterned multiphoton excitation deep inside scattering tissue. Nat. Photonics 7, 274–278 (2013).
7. Salem, R., Foster, M. A. & Gaeta, A. L. Application of space–time duality to ultrahigh-speed optical signal processing. Adv. Opt. Photonics 5, 274–317 (2013).
8. Goda, K. & Jalali, B. Dispersive Fourier transformation for fast continuous single-shot measurements. Nat. Photonics 7, 102–112 (2013).
9. Papagiakoumou, E. et al. Scanless two-photon excitation of channelrhodopsin-2. Nat. Methods 7, 848–854 (2010).
10. Katz, O., Small, E., Bromberg, Y. & Silberberg, Y. Focusing and compression of ultrashort pulses through scattering media. Nat. Photonics 5, 372–377 (2011).
11. Beresna, M., Gecevičius, M. & Kazansky, P. G. Ultrafast laser direct writing and nanostructuring in transparent materials. Adv. Opt. Photonics 6, 293–339 (2014).
12. Jing, C. R., Wang, Z. H. & Cheng, Y. Characteristics and applications of spatiotemporally focused femtosecond laser pulses. Appl. Sci. 6, 428 (2016).
13. Stockbridge, C. et al. Focusing through dynamic scattering media. Opt. Express 20, 15086–15092 (2012).
14. Kammel, R. et al. Enhancing precision in fs-laser material processing by simultaneous spatial and temporal focusing. Light Sci. Appl. 3, e169 (2014).
15. Mikami, H., Gao, L. & Goda, K. Ultrafast optical imaging technology: principles and applications of emerging methods. Nanophotonics 5, 98–110 (2016).
16. Schaffer, C. B., Nishimura, N., Glezer, E. N., Kim, A. M. T. & Mazur, E. Dynamics of femtosecond laser-induced breakdown in water from femtoseconds to microseconds. Opt. Express 10, 196–203 (2002).
17. Velten, A. et al. Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nat. Commun. 3, 745 (2012).
18. Li, Z. Y., Zgadzaj, R., Wang, X. M., Chang, Y. Y. & Downer, M. C. Single-shot tomographic movies of evolving light-velocity objects. Nat. Commun. 5, 3085 (2014).
19. Goda, K., Tsia, K. & Jalali, B. Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena. Nature 458, 1145–1149 (2009).
20. Nakagawa, K. et al. Sequentially timed all-optical mapping photography (STAMP). Nat. Photonics 8, 695–700 (2014).
21. Ehn, A. et al. FRAME: femtosecond videography for atomic and molecular dynamics. Light Sci. Appl. 6, e17045 (2017).
22. Kubota, T., Komai, K., Yamagiwa, M. & Awatsuji, Y. Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation. Opt. Express 15, 14348–14354 (2007).
23. Gao, L., Liang, J. Y., Li, C. Y. & Wang, L. V. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature 516, 74–77 (2014).
24. Liang, J. Y. et al. Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse. Sci. Adv. 3, e1601814 (2017).
25. Liang, J. Y., Gao, L., Hai, P. F., Li, C. Y. & Wang, L. V. Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography. Sci. Rep. 5, 15504 (2015).
26. Candès, E. J. The restricted isometry property and its implications for compressed sensing. C. R. Math. 346, 589–592 (2008).
27. Bor, Z., Racz, B., Szabo, G., Hilbert, M. & Hazim, H. A. Femtosecond pulse front tilt caused by angular dispersion. Opt. Eng. 32, 2501–2504 (1993).
28. Hebling, J. Derivation of the pulse front tilt caused by angular dispersion. Opt. Quant. Electron. 28, 1759–1763 (1996).
29. Mermillod-Blondin, A. et al. Time-resolved imaging of laser-induced refractive index changes in transparent media. Rev. Sci. Instrum. 82, 033703 (2011).
30. Sun, Q. et al. Measurement of the collision time of dense electronic plasma induced by a femtosecond laser in fused silica. Opt. Lett. 30, 320–322 (2005).
31. Lumpkin, A. H. & Early, J. W. First dual-sweep streak camera measurements of a photoelectric injector drive laser. Nucl. Instrum. Methods Phys. Res. A 318, 389–395 (1992).
32. Brady, D. J. et al. Multiscale gigapixel photography. Nature 486, 386–389 (2012).
33. Vitek, D. N. et al. Spatio-temporally focused femtosecond laser pulses for nonreciprocal writing in optically transparent materials. Opt. Express 18, 24673–24678 (2010).
34. Wang, Z. H. et al. Time-resolved shadowgraphs of transient plasma induced by spatiotemporally focused femtosecond laser pulses in fused silica glass. Opt. Lett. 40, 5726–5729 (2015).
35. Wang, X. F. et al. High-frame-rate observation of single femtosecond laser pulse propagation in fused silica using an echelon and optical polarigraphy technique. Appl. Opt. 53, 8395–8399 (2014).
36. Li, G. H. et al. Second harmonic generation in centrosymmetric gas with spatiotemporally focused intense femtosecond laser pulses. Opt. Lett. 39, 961–964 (2014).
37. Foster, M. A. et al. Ultrafast waveform compression using a time-domain telescope. Nat. Photonics 3, 581–585 (2009).
38. van Howe, J. & Xu, C. Ultrafast optical signal processing based upon space-time dualities. J. Light Technol. 24, 2649–2662 (2006).
39. Weiner, A. M. in Ultrafast Optics (ed Boreman, G.) Ch. 3 (John Wiley & Sons, Inc., Hoboken, NJ, 2008).
40. Durfee, C. G. & Squier, J. A. Breakthroughs in photonics 2014: spatiotemporal focusing: advances and applications. IEEE Photon J. 7, 0700806 (2015).
41. Poulin, P. R. & Nelson, K. A. Irreversible organic crystalline chemistry monitored in real time. Science 313, 1756–1760 (2006).
42. Gross, M. et al. Heterodyne detection of multiply scattered monochromatic light with a multipixel detector. Opt. Lett. 30, 1357–1359 (2005).
43. Mosk, A. P., Lagendijk, A., Lerosey, G. & Fink, M. Controlling waves in space and time for imaging and focusing in complex media. Nat. Photonics 6, 283–292 (2012).
44. McCabe, D. J. et al. Spatio-temporal focusing of an ultrafast pulse through a multiply scattering medium. Nat. Commun. 2, 447 (2011).
45. Choi, W. et al. Tomographic phase microscopy. Nat. Methods 4, 717–719 (2007).
46. Satat, G. et al. Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion. Nat. Commun. 6, 6796 (2015).
47. Horng, J. et al. Imaging electric field dynamics with graphene optoelectronics. Nat. Commun. 7, 13704 (2016).
48. Frühling, U. et al. Single-shot terahertz-field-driven X-ray streak camera. Nat. Photonics 3, 523–528 (2009).
49. Hockett, P., Bisgaard, C. Z., Clarkin, O. J. & Stolow, A. Time-resolved imaging of purely valence-electron dynamics during a chemical reaction. Nat. Phys. 7, 612–615 (2011).
50. Gorkhover, T. et al. Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles. Nat. Photonics 10, 93–97 (2016).
## Acknowledgements
The authors thank Dr. Zhengyan Li from the University of Ottawa, Dr. Shian Zhang from East China Normal University, and Dr. Liang Gao from the University of Illinois at Urbana-Champaign for fruitful discussion. The authors also acknowledge Yujia Chen and Chiye Li for experimental assistance and Professor James Ballard for close reading of the manuscript. This work was supported in part by National Institutes of Health grants DP1 EB016986 (NIH Director’s Pioneer Award) and R01 CA186567 (NIH Director’s Transformative Research Award).
## Author information
### Author notes
• Jinyang Liang
Present address: Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 Boulevard Lionel-Boulet, Varennes, QC, J3X1S2, Canada
1. These authors contributed equally: Jinyang Liang, Liren Zhu
### Affiliations
1. Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
Jinyang Liang, Liren Zhu & Lihong V. Wang
### Contributions
J.L. designed and built the system and conducted all the experiments. L.Z. developed the reconstruction algorithm. J.L. and L.Z. analyzed the data and drafted the manuscript. L.V.W. supervised the project. All authors were involved in revising the manuscript.
### Conflict of interest
The authors declare that they have no conflict of interest.
### Corresponding author
Correspondence to Lihong V. Wang. | 2018-12-17 19:19:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4340030550956726, "perplexity": 4267.4160877880695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829115.83/warc/CC-MAIN-20181217183905-20181217205905-00143.warc.gz"} |
http://cobweb.cs.uga.edu/~jwlee/teaching/csci3360/syllabus.html | # CSCI 3360 Data Science
## 1 Course info.
Welcome to CSCI 3360 Data Science! Data science is a rapidly growing field that combines traditional statistics, machine learning, data mining, and programming. It has been attracting a great deal of attention from both academia and industry. Data scientist has also been named the most promising job in the United States1.
• Instructor : Jaewoo Lee
• Email : [email protected]
• Office : BOYD 620
• Office hours : Wed. 12:10 pm - 1:10 pm, Thurs. 3 pm - 4 pm
• TA : Yang Shi (Mon. 9:00 am - 10 am, Tue. 2:15 pm to 3:15 pm), BOYD 307
## 2 Course description
This course is designed as an introductory study of the theory and practice of data science. Data science is about learning from data to extract insight and knowledge. This course introduces computational and statistical tools used in data analysis to answer questions from data. To be specific, we will investigate on tools and methods for
• data collection, data munging, cleaning
• data exploration, hypothesis testing
• statistical modeling
• making inference on data (regression, classification, and clustering)
• data visualization, and communication/interpretation of results.
### 2.1 Prerequisite
Students are expected to have a working knowledge of Python 2.7. All programming assignments must be completed using Python unless it is specified otherwise. Some elementary knowledge of statistics, linear algebra, and probability theory is expected, but not REQUIRED. Those fundamentals will be provided as they are needed.
### 2.2 Learning objectives
• Using Python, collect data from web and process the raw data into a form usable by data analysis algorithms.
• Summarize and visualize the data using statistical tools to quickly explore different aspects of complex data.
• Design a statistical experiment to test a hypothesis on data.
• Choose the most suitable statistical model for the given analysis task.
• Apply statistics and computational method (e.g., machine learning) to make predictions based on data.
• Implement (or modify) an analysis algorithm using python packages.
• Communicate with non-data science experts about analysis results, using effective statistics and visualizations.
## 3 Recommended textbooks
Here are some books that I recommend:
• Introduction to machine learning with python by Andreas C. Muller & Sarah Guido
• Elements of statistical learning by Trevor Hastie et al. PDF
• Pattern recognition and Machine learning by Chirstopher M. Bishop
• Convex optimization by Boyd and Vandenberghe PDF
While the first book may be listed as our main textbook, and all of these are great books, we will not follow the structure of any of them; our main focus will be on practical tools and techniques used in data science.
## 4 Evaluation criteria
| Component | Portion | Description |
| --- | --- | --- |
| Homework | 40% | 4 individual assignments involving problem solving, discussion, programming |
| Exams | 35% | midterm (15%) and final (20%) |
| Team project | 25% | implementation of data analysis program: interim progress presentation (10%), final report and presentation (15%) |
Each submitted item (for example, homework, report, or presentation) will be graded on a 100 point scale and then the numeric score may be curved to get a more reasonable grade distribution. In other words, rank is a more important metric than the raw score on the graded item.
| Grade | Score range |
| --- | --- |
| A | [82, 100) |
| A- | [80, 82) |
| B+ | [75, 80) |
| B | [60, 75) |
| B- | [50, 60) |
| C+ | [40, 50) |
| C | [25, 40) |
| C- | [15, 25) |
| D | [0, 15) |
For all students enrolled in this course, it is assumed that they will abide by UGA's academic honesty policy and procedures. Please refer to UGA's A CULTURE OF HONESTY.
For every individual assignment, students are welcome to discuss the problems and share ideas (at a high level), but the submitted item must be your own work. For example, you can discuss how to solve a homework problem and share an idea, but you have to write your own answer.
• Type your homework ($$\LaTeX$$ is recommended).
• Do not write (and submit) something you don't understand or can't explain.
• Do not provide or make available your answer to others (no matter whether they are enrolled in the course or not)
• If you can't meet the submission deadline due to an illness, first inform the instructor by email and attach a doctor's note.
## 6 Tentative Schedule
Week Topic Note
Jan. 5 Course overview
I. Foundations
Jan. 10 Probability theory review: random variable, conditional probability, Bayes theorem
Python: working with data - pandas, numpy
Jan. 17 Sampling and distributions
Visualizing data: matplotlib
HW1 OUT
Jan. 24 Introduction to statistical inference, MLE
II. Statistics
Jan. 31 Hypothesis testing, asymptotics, p-values
Python: chi-square test for independence
Type I, Type II errors
Feb. 7 Resampling technique (bootstrap), confidence interval HW2 OUT
III. Optimization
Feb. 14 Fundamentals of convex optimization: convex set, convex function
Feb. 21 Multivariate calculus HW3 OUT
IV. Machine Learning
Mar. 6 Spring break week (no class)
Mar. 14 Statistical learning theory: notation and setup
Linear regression: least squares, Overfitting, Generalization
Mar. 21 Regression in high dimensional space (ridge regression)
regularization
HW4 OUT
Mar. 28 Linear classification: logistic regression
Apr. 4 Naive Bayes
Lazy learning: k-nearest neighbor algorithm
Apr. 11 Data clustering, curse of dimensionality, K-means algorithm
GMM and EM algorithm
Apr. 18 Linear algebra review: SVD
Dimensionality reduction (PCA)
Apr. 25 Team project presentation Final exam (TBD) | 2017-11-18 10:14:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26322484016418457, "perplexity": 4538.281254238239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804724.3/warc/CC-MAIN-20171118094746-20171118114746-00768.warc.gz"} |
https://mathematica.stackexchange.com/questions/184941/understanding-conditional-replacement | # Understanding conditional replacement
I recently encountered a replacement, and I tried to look up documentation but could not quite find anything, so I hope to get some explanation from experts. The replacement is the following:
Gamma[2*x+c] /. Gamma[t:2*g_ + d_:0 ] -> 1/Sqrt[Pi]*2^(t-1)*Gamma[t/2]*Gamma[(t+1)/2]
I have never seen this kind of conditional replacement before. It gives the correct replacement for any
Gamma[2*x+c].
What I want to know is whether anyone can explain what this piece does:
Gamma[t:2*g_ + d_:0 ]
What I understood is that whenever it sees an argument of the type 2*x+c, it performs the replacement, and in the replaced output it does the following:
t-> 2*x+c
and +d_:0 is needed to recognise any argument of type
2*x+c
I would be grateful if anyone could explain this type of notation for replacement, what its scope is, and how it would look for replacements involving multiple variables.
• t:2*g_ + d_ matches 2 x + c, t:2*g_ matches 2 x, and t:2*g_ + d_:0 (or t:2*g_ + d_.) matches both 2x and 2 x +c. – kglr Oct 30 '18 at 17:42
• The symbol t:2*g_ means that “t” is a name that on the right hand side represents 2*g_. In the second example d_:0 means that “0” is a default value that will be used in case “d” is not supplied. – Jack LaVigne Oct 30 '18 at 18:10
Take your expression and wrap it in FullForm[HoldForm[expr]] and it will spell out the details:
FullForm[HoldForm[
Gamma[2*x + c] /.
Gamma[t : 2*g_ + d_: 0] ->
1/Sqrt[Pi]*2^(t - 1)*Gamma[t/2]*Gamma[(t + 1)/2]]]
The construct
t:2*g_
is a shortcut for a named pattern. t represents the pattern object 2*g_ on the right hand side during the replacement.
The construct
d_:0
is a shortcut for Optional. It means that if that part of the pattern is omitted it will use 0 for d during the replacement. | 2020-07-15 01:45:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3292062282562256, "perplexity": 2002.5336789814496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657154789.95/warc/CC-MAIN-20200715003838-20200715033838-00233.warc.gz"} |
http://www.imperial.ac.uk/people/m.ruzhansky/?respub-action=citation.html&id=907741&noscript=noscript | # Professor Michael Ruzhansky
Faculty of Natural Sciences, Department of Mathematics
Visiting Professor
### Contact
+44 (0)20 7594 8500, m.ruzhansky, CV
### Location
615 Huxley Building, South Kensington Campus
# Citation
## BibTex format
@article{Ruzhansky:2018:10.1016/j.jde.2018.06.033,
  author  = {Ruzhansky, M and Tokmagambetov, N},
  doi     = {10.1016/j.jde.2018.06.033},
  journal = {Journal of Differential Equations},
  title   = {Nonlinear damped wave equations for the sub-Laplacian on the Heisenberg group and for Rockland operators on graded Lie groups},
  url     = {http://dx.doi.org/10.1016/j.jde.2018.06.033},
  year    = {2018}
}
## RIS format (EndNote, RefMan)
TY  - JOUR
AB  - In this paper we study the Cauchy problem for the semilinear damped wave equation for the sub-Laplacian on the Heisenberg group. In the case of the positive mass, we show the global in time well-posedness for small data for power like nonlinearities. We also obtain similar well-posedness results for the wave equations for Rockland operators on general graded Lie groups. In particular, this includes higher order operators on and on the Heisenberg group, such as powers of the Laplacian or the sub-Laplacian. In addition, we establish a new family of Gagliardo-Nirenberg inequalities on graded Lie groups that play a crucial role in the proof but which are also of interest on their own: if $G$ is a graded Lie group of homogeneous dimension $Q$ and $a>0$, $1<r<\frac{Q}{a},$ and $1\leq p\leq q\leq \frac{rQ}{Q-ar},$ then we have the following Gagliardo-Nirenberg type inequality $$\|u\|_{L^{q}(G)}\lesssim \|u\|_{\dot{L}_{a}^{r}(G)}^{s} \|u\|_{L^{p}(G)}^{1-s}$$ for $s=\left(\frac1p-\frac1q\right)\left(\frac{a}Q+\frac1p-\frac1r\right)^{-1}\in [0,1]$ provided that $\frac{a}Q+\frac1p-\frac1r\not=0$, where $\dot{L}_{a}^{r}$ is the homogeneous Sobolev space of order $a$ over $L^r$. If $\frac{a}Q+\frac1p-\frac1r=0$, we have $p=q=\frac{rQ}{Q-ar}$, and then the above inequality holds for any $0\leq s\leq 1$.
AU  - Ruzhansky, M
AU  - Tokmagambetov, N
DO  - 10.1016/j.jde.2018.06.033
PY  - 2018///
SN  - 0022-0396
TI  - Nonlinear damped wave equations for the sub-Laplacian on the Heisenberg group and for Rockland operators on graded Lie groups
T2  - Journal of Differential Equations
UR  - http://dx.doi.org/10.1016/j.jde.2018.06.033
UR  - http://arxiv.org/abs/1703.07902v1
UR  - http://hdl.handle.net/10044/1/61851
ER  -
http://tug.org/pipermail/texhax/2005-September/004623.html | # [texhax] roman page numbers in acrobat
Fri Sep 9 18:34:43 CEST 2005
On Sep 9, 2005, at 12:02 PM, Florian Knorn wrote:
> short question. i'm using the hyperref package, together with the
> memoir
> class. i also use \frontmatter, \mainmatter and \backmatter.
>
> now how do i get the cute roman page numbers into acrobat as well? like
> in the manual from the memoir-class ?
>
> thanks for your help, i hope it's not just some little setup thing i
> oversaw,
Are you using pdftex?
The invocation for this from the memman.tex manual I have handy is:
\ifpdf
\pdfoutput=1
\usepackage[plainpages=false,pdfpagelabels,bookmarksnumbered]{hyperref}
\usepackage{memhfixc}
\fi
William
-- | 2017-10-18 20:33:46 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9738834500312805, "perplexity": 13752.47247039015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823114.39/warc/CC-MAIN-20171018195607-20171018215607-00385.warc.gz"} |
https://www.hackmath.net/en/math-problem/23541 | # The half life
The half-life of a radioactive isotope is the time it takes for a quantity of the isotope to be reduced to half its initial mass. Starting with 145 grams of a radioactive isotope, how much will be left after 3 half-lives?
Correct result:
m3 = 18.125 g
#### Solution:
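Each half-life halves the remaining mass, so after three half-lives only 1/2³ = 1/8 of the starting 145 g remains:

m1 = 145/2 = 72.5 g
m2 = 72.5/2 = 36.25 g
m3 = 36.25/2 = 145/8 = 18.125 g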
## Next similar math problems:
After 548 hours decreases the activity of a radioactive substance to 1/9 of the initial value. What is the half-life of the substance?
A radioactive material loses 10% of its mass each year. What proportion will be left there after n=6 years?
• Half life
Determine the half life of bismuth, when bismuth weight from the original weight of 32 g was only 2 grams in 242 minutes.
• Crystal
The crystal grows every month 1.2 permille of its mass. For how many months to grow a crystal from weight 177 g to 384 g?
• Suppose 2
Suppose that the half-life of a substance is 250 years. If there were initially 100 g of the substance, a. Give an exponential model for the situation b. How much will remain after 500 years?
• Deposit
If you deposit 719 euros the beginning of each year, how much money we have at 1.3% (compound) interest after 9 years?
• Geometric progression
In geometric progression, a1 = 7, q = 5. Find the condition for n to sum first n members is: sn≤217.
• The city 2
Today lives 298000 citizens in the city. How many citizens can we expect in 8 years if their annual increase is 2.4%?
• Interest
Calculate how much you earn for 10 years 43000 deposit if the interest rate is 1.3% and the interest period is a quarter.
• Virus
We have a virus that lives one hour. Every half hour produce two child viruses. What will be the living population of the virus after 3.5 hours?
• Compound interest
Compound interest: Clara deposited CZK 100,000 in the bank with an annual interest rate of 1.5%. Both money and interest remain deposited in the bank. How many CZK will be in the bank after 3 years?
• Loan
Apply for a $59000 loan, the loan repayment period is 8 years, the interest rate 7%. How much should I pay for every month (or every year if paid yearly). Example is for practise geometric progression and/or periodic payment for an annuity.
• Three members GP
The sum of three numbers in GP (geometric progression) is 21 and the sum of their squares is 189. Find the numbers.
• Wire
One pull of wire is reduced its diameter by 14%. What will be diameter of wire with diameter 19 mm over 10 pulls?
• Acid solution
By adding 250 grams of a 96% solution of sulfuric acid to its 3% solution its initial concentration was changed to 25%. How many grams of 3% of the acid were used for dilution?
• If you 3
If you deposit $4500 at 5% annual interest compound quarterly, how much money will be in the account after 10 years?
• Investment
1000\$ is invested at 10% compound interest. What factor is the capital multiplied by each year? How much will be there after n=12 years? | 2021-01-19 21:03:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4440460205078125, "perplexity": 1429.8394072946182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00338.warc.gz"} |
https://www.mail-archive.com/[email protected]/msg37375.html | # Re: [NTG-context] BNF grammar for ConTeXt (was: What happened with sectionworld?)
R. Bastian:
> CONTEXT_SOURCE ::= PREAMBLE "\starttext" TEXT "\stoptext" | CONTEXT_SOURCE
>>> TEXT ::= STARTSTOPS | SETUPS | DEFINES | OTHERS [ TEXT
>>
>> luigi:
> To be general, i think
>> MY_CONTEXT_SOURCE ::= MACRO* END
>>
>
R. Bastian:
> I dont understand the sense of "\end\starttext"
sense==semantic
"\end""\starttext" is a valid string for a hypothetical bnf grammar of
ConTeXt
which is not valid for your bnf ;
"\end""\starttext""\stoptext" is in your bnf grammar
and has the same semantic of "\end""\starttext" .
The point is : a bnf for Context can be hard to define
luigi:
think that a bnf or lpeg grammar is really useful for a sort of
>> standard-ConTeXt
>> or minimal-ConTeXt or light-ConTeXt
>> ie a ConTeXt to use as "reference"
>>
>
R. Bastian:
> Exactly what I need : standard, minimal and light
>
Exactly what can be hard to define and capture in a bnf .
wolfgang
>
> How could a BNF grammar help to learn ConTeXt,
a bnf can help to build a syntax checker, a highlighter etc.
Actually the only way to say that you have a valid ConTeXt string
is running context on that string .
The semantic is another story.
--
luigi
___________________________________________________________________________________ | 2021-04-10 12:46:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7849396467208862, "perplexity": 9818.759294683174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00623.warc.gz"} |
https://www.linstitute.net/archives/694839 | # IB DP Physics: SL Revision Notes 4.2.1 Properties of Waves
### Properties of Waves
• Travelling waves are defined as follows:
Oscillations that transfer energy from one place to another without transferring matter
• Energy is transferred by the waves, but matter is not
• The direction of the motion of the wave is the direction of the energy transfer
• Travelling waves can be of two types:
• Mechanical Waves, which propagate through a medium and cannot take place in a vacuum
• Electromagnetic Waves, which can travel through a vacuum
• Waves are generated by oscillating sources
• These oscillations travel away from the source
• Oscillations can propagate through a medium (e.g. air, water) or in vacuum (i.e. no particles), depending on the type of wave
• The key properties of travelling waves are as follows:
• Displacement (x) of a wave is the distance of a point on the wave from its equilibrium position
• It is a vector quantity; it can be positive or negative
• Measured in metres (m)
• Wavelength (λ) is the length of one complete oscillation measured from same point on two consecutive waves
• For example, two crests, or two troughs
• Measured in metres (m)
• Amplitude (x0) is the maximum displacement of an oscillating wave from its equilibrium position (x = 0)
• Amplitude can be positive or negative depending on the direction of the displacement
• Measured in metres (m)
• Period (T) is the time taken for a fixed point on the wave to undergo one complete oscillation
• Measured in seconds (s)
• Frequency (f) is the number of full oscillations per second
• Measured in Hertz (Hz)
• Wave speed (c) is the distance travelled by the wave per unit time
• Measured in metres per second (m s-1)
Diagram showing the amplitude and wavelength of a wave
• The frequency, f, and the period, T, of a travelling wave are related to each other by the equation:
f = 1/T

Period T and frequency f of a travelling wave
#### Worked Example
The graph below shows a travelling wave.
Determine:
(i) The amplitude A of the wave in metres (m)
(ii) The frequency f of the wave in hertz (Hz)
(i) Identify the amplitude A of the wave on the graph
• The amplitude is defined as the maximum displacement from the equilibrium position (x = 0)
• The amplitude must be converted from centimetres (cm) into metres (m)
A = 0.1 m
(ii) Calculate the frequency of the wave
Step 1: Identify the period T of the wave on the graph
• The period is defined as the time taken for one complete oscillation to occur
• The period must be converted from milliseconds (ms) into seconds (s)
T = 1 × 10–3 s
Step 2: Write down the relationship between the frequency f and the period T

f = 1/T

Step 3: Substitute the value of the period determined in Step 1

f = 1 / (1 × 10–3 s)

f = 1000 Hz
### The Wave Equation
• The wave equation describes the relationship between the wave speed, the wavelength and the frequency of the wave
c = fλ
• Where
• c = wave speed in metres per second (m s−1)
• f = frequency in hertz (Hz)
• λ = wavelength in metres (m)
#### Deriving the Wave Equation
• The wave equation can be derived using the equation for speed:

v = d / t
• Where
• v = velocity or speed in metres per second (m s−1)
• d = distance travelled in metres (m)
• t = time taken in seconds (s)
• When the source of a wave undergoes one complete oscillation, the travelling wave propagates forward by a distance equal to one wavelength λ
• The travelling wave covers this distance in the time it takes the source to complete one oscillation, the time period T
• Therefore, the wave speed is c = λ / T, and since f = 1 / T, combining these equations gives the wave equation

c = fλ
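A quick numerical check of these relationships (a minimal sketch using the values from the worked example below, not additional data):

```python
# Check f = 1 / T and the wave equation c = f * wavelength
# with T = 1.0 microsecond and c = 1.0 m/s (from the worked example below).
T = 1.0e-6                 # period in seconds
c = 1.0                    # wave speed in m/s

f = 1 / T                  # frequency: 1.0e6 Hz
wavelength = c / f         # wavelength: 1.0e-6 m

print(f, wavelength)
```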
#### Worked Example
A travelling wave has a period of 1.0 μs and travels at a velocity of 100 cm s–1. Calculate the wavelength of the wave. Give your answer in metres (m).
Step 1: Write down the known quantities
• Period, T = 1.0 μs = 1.0 × 10–6 s
• Velocity, c = 100 cm s–1 = 1.0 m s–1
Note the conversions:
• The period must be converted from microseconds (μs) into seconds (s)
• The velocity must be converted from cm s–1 into m s–1
Step 2: Write down the relationship between the frequency f and the period T

f = 1/T

Step 3: Substitute the value of the period into the above equation to calculate the frequency

f = 1 / (1.0 × 10–6 s)

f = 1.0 × 106 Hz
Step 4: Write down the wave equation
c = fλ
Step 5: Rearrange the wave equation to calculate the wavelength λ

λ = c / f

Step 6: Substitute the numbers into the above equation

λ = (1.0 m s–1) / (1.0 × 106 Hz)
λ = 1 × 10–6 m | 2022-08-15 00:20:10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8162487745285034, "perplexity": 1240.0499669285355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00379.warc.gz"} |
http://www.orafaq.com/aggregator/sources/450 | # Bobby Durrett's DBA Blog
Oracle database performance
Updated: 6 hours 26 min ago
### HugePages speeds up Oracle login process on Linux
Thu, 2016-10-20 13:28
We bumped a Linux 11.2.0.4 database up to a 12 gigabyte SGA and the login time went up to about 2.5 seconds. Then a Linux admin configured 12 gigabytes of HugePages to fit the SGA and login time went down to .13 seconds. Here is how I tested the login time. E.sql just has the exit command in it so this logs in as SYSDBA and immediately exits:
time sqlplus / as sysdba < e.sql

... edited out for space ...

real    0m0.137s
user    0m0.007s
sys     0m0.020s

So, then the question came up about our databases with 3 gig SGAs without HugePages. So I tested one of them:

real    0m0.822s
user    0m0.014s
sys     0m0.007s

Same version of Oracle/Linux/etc. Seems like even with a 3 gig SGA the page table creation is adding more than half a second to the login time. No wonder they came up with HugePages for Linux!

Bobby

Categories: DBA Blogs

### Quickly built new Python graph SQL execution by plan

Wed, 2016-10-19 17:51

I created a new graph in my PythonDBAGraphs to show how a plan change affected execution time. The legend in the upper left is plan hash value numbers. Normally I run the equivalent as a sqlplus script and just look for plans with higher execution times. I used it today for the SQL statement with SQL_ID c6m8w0rxsa92v. It has been running slow since 10/11/2016.

Since I just split up my Python graphs into multiple smaller scripts I decided to build this new Python script to see how easy it would be to show the execution time of the SQL statement for different plans graphically. It was not hard to build this. Here is the script (sqlstatwithplans.py):

import myplot
import util

def sqlstatwithplans(sql_id):
    q_string = """
select to_char(sn.END_INTERVAL_TIME,'MM-DD HH24:MI') DATE_TIME,
plan_hash_value,
ELAPSED_TIME_DELTA/(executions_delta*1000000) ELAPSED_AVG_SEC
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = '"""
    q_string += sql_id
    q_string += """'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id,plan_hash_value"""
    return q_string

database,dbconnection = util.script_startup('Graph execution time by plan')

# Get user input
sql_id=util.input_with_default('SQL_ID','acrg0q0qtx3gr')

mainquery = sqlstatwithplans(sql_id)
mainresults = dbconnection.run_return_flipped_results(mainquery)
util.exit_no_results(mainresults)

date_times = mainresults[0]
plan_hash_values = mainresults[1]
elapsed_times = mainresults[2]
num_rows = len(date_times)

# build list of distinct plan hash values
distinct_plans = []
for phv in plan_hash_values:
    string_phv = str(phv)
    if string_phv not in distinct_plans:
        distinct_plans.append(string_phv)

# build a list of elapsed times by plan
# create list with num plans empty lists
elapsed_by_plan = []
for p in distinct_plans:
    elapsed_by_plan.append([])

# update an entry for every plan
# None for ones that aren't
# in the row
for i in range(num_rows):
    plan_num = distinct_plans.index(str(plan_hash_values[i]))
    for p in range(len(distinct_plans)):
        if p == plan_num:
            elapsed_by_plan[p].append(elapsed_times[i])
        else:
            elapsed_by_plan[p].append(None)

# plot query
myplot.xlabels = date_times
myplot.ylists = elapsed_by_plan
myplot.title = "Sql_id "+sql_id+" on "+database+" database with plans"
myplot.ylabel1 = "Averaged Elapsed Seconds"
myplot.ylistlabels=distinct_plans
myplot.line()

Having all of the Python code for this one graph in a single file made it much faster to put together a new graph. Pretty neat.
Bobby

Categories: DBA Blogs

### Tim Gorman at AZORA meeting tomorrow in Scottsdale

Wed, 2016-10-19 10:34

Arizona Oracle User Group – October 20, 2016
Thursday, Oct 20, 2016, 12:30 PM
Republic Services – 3rd Floor Conference Room, 14400 N 87th St (AZ101 & Raintree), Scottsdale, AZ
16 AZORAS Attending

Change In Plans - Tim Gorman comes to Phoenix! Stephen Andert had a sudden business commitment making it impossible for him to speak at Thursday's meeting. Fortunately, Tim Gorman of Delphix will be coming from Denver to speak instead. Tim is an internationally-renowned speaker, performance specialist, member of the Oak Table, Oracle Ace Director, …

Phoenix area readers – I just found out that Oracle performance specialist and Delphix employee Tim Gorman will be speaking at the Arizona User Group meeting tomorrow in Scottsdale. I am looking forward to it.

Bobby

Categories: DBA Blogs

### Thinking about using Python scripts like SQL scripts

Fri, 2016-10-14 19:18

I've used Python to make graphs of Oracle database performance information. I put the scripts out on GitHub at https://github.com/bobbydurrett/PythonDBAGraphs. As a result I'm keeping my Python skills a little fresher and learning about git for version control and GitHub as a forum for sharing Open Source. Really, these Python scripts were an experiment. I don't claim that I have done any great programming or that I will. But, as I review what I have done so far it makes me think about how to change what I am doing so that Python would be more usable to me. I mainly use SQL scripts for Oracle database tuning. I run them through sqlplus on my laptop. I think I would like to make the way I'm using Python more like the way I use SQL scripts.
My idea is that all the pieces would be in place so that I could write a new Python script as easily and quickly as I would a SQL script. I started out with my PythonDBAGraphs project with a main script called dbgraphs.py that gives you several graphs to choose from. I also have a script called perfq.py that includes the code to build a select statement. To add a new graph I have added entries to both of these files. They are getting kind of long and unwieldy. I’m thinking of breaking up these two scripts into a separate script for each graph like ashcpu.py, onewait.py, etc.
You may wonder why I am talking about changes I might make to this simple set of scripts. I am thinking that my new approach is more in line with how businesses think about using Python. I have heard people say that business users could use Python and the same graphing library that I am using to build reports without having a developer work with them. Of course, people think the same about SQL and it is not always true. But, I think that my first approach to these Python scripts was to build it like a large standalone program. It is like I am building an app to sell or to publish like a compiler or new database system. But, instead I think it makes sense to build an environment where I can quickly write custom standalone scripts, just as I can quickly put together custom SQL scripts.
Anyway, this is my end of the week, end of the work day blogging thoughts. I’m thinking of changing my Python scripts from one big program to an environment that I can use to quickly build new smaller scripts.
Bobby
Categories: DBA Blogs
### Need classes directory to run ENCRYPT_PASSWORD on PeopleTools 8.53
Tue, 2016-10-11 18:57
I had worked on creating a Delphix virtual copy of our production PeopleTools 8.53 database and wanted to use ENCRYPT_PASSWORD in Datamover to change a user’s password. But I got this ugly error:

Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password.

What the heck! I have used Datamover to change passwords this way for 20 years and never seen this error. Evidently in PeopleTools 8.53 they increased the complexity of the encryption by adding a “salt” component. So, now when Datamover runs the ENCRYPT_PASSWORD command it calls Java for part of the calculation. For those of you who don’t know, Datamover is a Windows executable, psdmt.exe. But, now it is calling java.exe to run ENCRYPT_PASSWORD.
I looked at Oracle’s support site and tried the things they recommended but it didn’t resolve it. Here are a couple of the notes:

E-SEC: ENCRYPT_PASSWORD Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password. (Doc ID 2001214.1)

E-UPG PT8.53, PT8.54: PeopleTools Only Upgrade – ENCRYPT_PASSWORD Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password. (Doc ID 1532033.1)

They seemed to focus on a situation during an upgrade when you are trying to encrypt all the passwords and some have spaces in their passwords. But that wasn’t the case for me. I was just trying to change one user’s password and it didn’t have spaces in it.
Another recommendation was to put PS_HOME/jre/bin in the path. This totally made sense. I have a really stripped down PS_HOME and had the least number of directories that I need to do migrations and tax updates. I only have a 120 gig SSD C: drive on my laptop so I didn’t want a full multi-gigabyte PS_HOME.
So, I copied the jre directory down from our windows batch server and tried several ways of putting the bin directory in my path and still got the same error.
Finally, I ran across an idea that the Oracle support documents did not address, probably because no one else is using partial PS_HOME directories like me. I realized that I needed to download the classes directory. I found a cool documentation page about the Java class search path for app servers in PeopleTools 8.53. It made me guess that psdmt.exe would search the PS_HOME/classes directory for the classes it needed to do the ENCRYPT_PASSWORD command. So, I copied classes down from the windows batch server and put the jre/bin directory back in the path and success!

Password hashed for TEST
Ended: Tue Oct 11 16:36:55 2016
Successful completion
Script Completed.

So, I thought I would pass this along in the unusual case that someone like myself needs to not only put the jre/bin directory in their path but is also missing the classes directory.
Bobby
Categories: DBA Blogs
### JDBC executeBatch looks odd in AWR
Fri, 2016-10-07 19:18
A project team asked me to look at the performance of an Oracle database application that does a bunch of inserts into a table. But, when I started looking at the AWR data for the insert the data confused me. The SQL by elapsed time section looked like this:
So, 1514 executions of an insert with 1 second of elapsed time each, almost all of which was CPU. But then I looked at the SQL text:
Hmm. It is a simple insert values statement. Usually this means it is inserting one row. But 1 second is a lot of CPU time to insert a row.
So, I used my sqlstat.sql script to query DBA_HIST_SQLSTAT about this sql_id.

select ss.sql_id,
ss.plan_hash_value,
sn.END_INTERVAL_TIME,
ss.executions_delta,
ELAPSED_TIME_DELTA/(executions_delta*1000) "Elapsed Average ms",
CPU_TIME_DELTA/(executions_delta*1000) "CPU Average ms",
IOWAIT_DELTA/(executions_delta*1000) "IO Average ms",
CLWAIT_DELTA/(executions_delta*1000) "Cluster Average ms",
APWAIT_DELTA/(executions_delta*1000) "Application Average ms",
CCWAIT_DELTA/(executions_delta*1000) "Concurrency Average ms",
BUFFER_GETS_DELTA/executions_delta "Average buffer gets",
DISK_READS_DELTA/executions_delta "Average disk reads",
ROWS_PROCESSED_DELTA/executions_delta "Average rows processed"
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = 'fxtt03b43z4vc'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id;

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
fxtt03b43z4vc               0 29-SEP-16 07.00.34.682 PM              441         1100.68922     1093.06512     .32522449                  0                      0             .000492063           60930.449         .047619048             4992.20181
fxtt03b43z4vc               0 29-SEP-16 08.00.43.395 PM               91         1069.36489     1069.00231    .058494505                  0                      0                      0          56606.3846         .010989011                   5000
fxtt03b43z4vc               0 29-SEP-16 09.00.52.016 PM               75         1055.05561     1053.73324        .00172                  0                      0                      0          55667.1333                  0             4986.86667
fxtt03b43z4vc               0 29-SEP-16 10.00.01.885 PM              212         1048.44043     1047.14276    .073080189                  0                      0             .005287736          58434.6934         .004716981             4949.35377

Again it was about 1
second of cpu and elapsed time, but almost 5000 rows per execution. This seemed weird. How can a one row insert affect 5000 rows?
I found an entry in Oracle’s support site about AWR sometimes getting corrupt with inserts into tables with blobs so I thought that might be the case here. But then the dev team told me they were using some sort of app that did inserts in batches of 1000 rows each. I asked for the source code. Fortunately, and this was very cool, the app is open source and I was able to look at the Java code on GitHub. It was using executeBatch in JDBC to run a bunch of inserts at once. I guess you load up a bunch of bind variable values in a batch and execute them all at once. Makes sense, but it looked weird in the AWR.
Here is the Java test program that I hacked together to test this phenomenon:

import java.sql.*;
import oracle.jdbc.*;
import oracle.jdbc.pool.OracleDataSource;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.*;

public class InsertMil5k
{
  public static void main (String args []) throws SQLException
  {
    OracleDataSource ods = new OracleDataSource();
    ods.setUser("MYUSER");
    ods.setPassword("MYPASSWORD");
    ods.setURL("jdbc:oracle:thin:@MYHOST:1521:MYSID");
    OracleConnection conn = (OracleConnection)(ods.getConnection ());
    conn.setAutoCommit(false);

    PreparedStatement stmt = conn.prepareStatement("insert into test values (:1,:2,:3,:4)");

    byte [] bytes = new byte[255];
    int k;
    for (k=0;k<255;k++)
      bytes[k]=(byte)k;

    /* loop 200 times. Make sure i is unique */
    int i,j;
    for (j=0;j < 200; j++) {

      /* load 5000 sets of bind variables */
      for (i=j*5000;i < (j*5000)+5000; i++) {
        stmt.setString(1, Integer.toString(i));
        stmt.setInt(2, 1);
        stmt.setBinaryStream(3, new ByteArrayInputStream(bytes), bytes.length);
        stmt.setLong(4, 1);
        stmt.addBatch();
      }

      stmt.executeBatch();
      conn.commit();
    }

    conn.close();
  }
}

I started with one of the Oracle JDBC samples and grabbed the batch features from the github site. I just made up some random data which wasn't super realistic. It took me a while to realize that they were actually, at times, doing 5000 row batches. The other AWR entries had 1000 rows per execution so that finally makes sense with what the dev team told me.
I guess the lesson here is that the AWR records each call to executeBatch as an execution but the number of rows is the size of the batch. So, that explains why a simple one row insert values statement showed up as 5000 rows per execution.
Bobby
Categories: DBA Blogs
### Ask Tom table about NOLOGGING and redo generation
Wed, 2016-09-07 14:34
I was googling for things related to NOLOGGING operations and found this useful post on the Ask Tom web site: url
There is a nice table in the post that shows when insert operations generate redo log activity. But it isn't formatted very well so I thought I would format the table here so it lines up better.

Table Mode   Insert Mode    ArchiveLog mode     result
-----------  -------------  ------------------  --------------
LOGGING      APPEND         ARCHIVE LOG         redo generated
NOLOGGING    APPEND         ARCHIVE LOG         no redo
LOGGING      no append      ""                  redo generated
NOLOGGING    no append      ""                  redo generated
LOGGING      APPEND         noarchive log mode  no redo
NOLOGGING    APPEND         noarchive log mode  no redo
LOGGING      no append      noarchive log mode  redo generated
NOLOGGING    no append      noarchive log mode  redo generated

All of this is from Ask Tom. My contribution here is just the formatting.
I ran a couple of tests whose results agree with this table.
I ran insert append on a database that was not in archivelog mode and the insert ran for the same amount of time with the table set for LOGGING as it did with the table set for NOLOGGING. I ran the same test on a database that is in archivelog mode and saw a big difference in run time between LOGGING and NOLOGGING. I didn't prove it but I assume that the redo generation caused the difference in run time.

No archivelog and logging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.36

No archivelog and nologging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.38

Archivelog and logging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.84

Archivelog and nologging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.53

I haven't tested all the table options but I thought it was worth formatting for my reference and for others who find it useful.
Bobby
Categories: DBA Blogs
### New graph: Average Active Sessions per minute
Thu, 2016-09-01 17:25
I am working on a production issue. I do not think that we have a database issue but I am graphing some performance metrics to make sure. I made a new graph in my PythonDBAGraphs program. It shows the average number of active sessions for a given minute. It prompts you for start and stop date and time. It works best with a relatively small interval or the graph gets too busy. Red is sessions active on CPU and blue is all active sessions.
This graph is a production database today. Activity peaked around mid day. It is kind of like the OEM performance screen but at least having it in Python lets me tinker with the graph to meet my needs.
Check out the README on the GitHub link above if you want to run this in your environment.
Bobby
Categories: DBA Blogs
### Bulk collect workaround for memory bug
Fri, 2016-08-19 16:42
A coworker passed a test script on to me that was failing with the following memory error:

ORA-04030: out of process memory when trying to allocate 4088 bytes (PLS CGA hp,pdzgM64_New_Link)

The error occurred when initializing a PL/SQL table variable with 7500 objects. Here is my sanitized version of the code:

CREATE OR REPLACE TYPE ARRAY_ELEMENT
AS OBJECT
(
  n1 NUMBER,
  n2 NUMBER,
  n3 NUMBER,
  n4 NUMBER
);
/

CREATE OR REPLACE TYPE MY_ARRAY
IS TABLE OF ARRAY_ELEMENT;
/

DECLARE
  MY_LIST MY_ARRAY;
BEGIN
  MY_LIST := MY_ARRAY(
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234),
...
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234)
  );

The real code had different meaningful constants for each entry in the table. Here is the error:

8004   ARRAY_ELEMENT(1234,5678,1314,245234)
8005   );
8006
8007 END;
8008 /
DECLARE
*
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 4088 bytes
(PLS CGA hp,pdzgM64_New_Link)

Elapsed: 00:02:51.31

I wrapped the error code manually so it would fit on the page.
The solution looks like this:

create table MY_OBJECTS
(
  o ARRAY_ELEMENT
);

DECLARE
  MY_LIST MY_ARRAY;
BEGIN
  MY_LIST := MY_ARRAY( );

  insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
  insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
  insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
...
  insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
  insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
  insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));

  commit;

  SELECT o
  BULK COLLECT INTO MY_LIST
  FROM MY_OBJECTS;

END;
/

Here is what the successful run looks like:

8004   insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
8005   insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
8006
8007   commit;
8008
8009   SELECT o
8010   BULK COLLECT INTO MY_LIST
8011   FROM MY_OBJECTS;
8012
8013 END;
8014 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:21.36

SQL>

There is an Oracle document about this bug:

ORA-4030 (PLSQL Opt Pool,pdziM01_Create: New Set), ORA-4030 (PLS CGA hp,pdzgM64_New_Link) (Doc ID 1551115.1)

It doesn't list using bulk collect as a workaround. My workaround may only be useful in very specific cases, but I thought it was worth sharing.
Here are my scripts and their logs: zip
This is on HP-UX Itanium Oracle 11.2.0.3.
Bobby
Categories: DBA Blogs
### Finished Mathematics for Computer Science class
Sat, 2016-08-13 17:07
Today I finally finished the Mathematics for Computer Science class that I have worked on since December. For the last year or two I have wanted to do some general Computer Science study in my free time that is not directly related to my work. I documented a lot of this journey in an earlier blog post. The math class is on MIT's OpenCourseWare (OCW) web site. It was an undergraduate semester class and I spent about 9 months on it mostly in my spare time outside of work. I wanted to test out OCW as a source for training just as I had experimented with edX before. So, I thought I would share my thoughts on the experience.
The class contained high quality material. It was an undergraduate class so it may not have been as deep as a graduate level class could be but world-class MIT professors taught the class. Some of my favorite parts of the video lectures were where professor Leighton made comments about how the material applied in the real world.
The biggest negative was that a lot of the problems did not have answers. Also, I was pretty much working through this class on my own. There were some helpful people on a Facebook group that some of my edX classmates created that helped keep me motivated. But there wasn't a large community of people taking the same class.
Also, it makes me wonder where I should spend time developing myself. Should I be working more on my communication and leadership skills through Toastmasters? Should I be working on my writing? Should I be learning more Oracle features? I spent months studying for Oracle's 12c OCP certification exam and I kind of got burnt out on that type of study. The OCP exam has a lot of syntax. To me syntax, which you can look up in a manual, is boring. The underlying computer science is interesting. It is fun to try to understand the Oracle optimizer and Oracle internals, locking, backup and recovery, etc. There is a never-ending well of Oracle knowledge that I could pursue. Also, there is a lot of cloud stuff going on. I could dive into Amazon and other cloud providers. I also have an interest in open source. MySQL and PostgreSQL intrigue me because I could actually have the source code. But, there is only so much time in the day and I can't do everything.
I don't regret taking the math for computer science class even if it was a diversion from my Toastmasters activities and not directly related to work.
Now I have a feel for the kind of materials that you have on OCW: high quality, general computer science, mostly self-directed. Now I just have to think about what is next.
Bobby
Categories: DBA Blogs
### Trying VirtualBox
Fri, 2016-08-05 23:49
I have been using VMware Player to build test virtual machines on my laptop with an external drive for some time now. I used to use the free VMware Server. My test VMs weren't fast because of the slow disk drive but they were good enough to run small Linux VMs to evaluate software. I also had one VM to do some C hacking of the game Nethack for fun. I got a lot of good use out of these free VMware products and VMware is a great company so I'm not knocking them.
But, this week I accidentally wiped out all the VMs that I had on my external drive so I tried to rebuild one so I at least have one to boot up if I need a test Linux VM. I spent several hours trying to get the Oracle Linux 6.8 VM that I created to work with a screen resolution that matched my monitor. I have a laptop with a smaller 14 inch 1366 x 768 resolution built-in monitor and a nice new 27 inch 1920 x 1080 resolution external monitor. VMware player wouldn't let me set the resolution to more than 1366 x 768 no matter what I did. Finally after a lot of googling and trying all kinds of X Windows and VMware settings I finally gave up and decided to try VirtualBox.
I was able to quickly install it and get my OEL 6.8 VM up with a larger resolution with no problem. It still didn't give me 1920 x 1080 for some reason but had a variety of large resolutions to choose from. After getting my Linux 6.8 machine to work acceptably I remembered that I was not able to get Linux 7 to run on VMware either. I had wanted to build a VM with the latest Linux but couldn't get it to install. So, I downloaded the 7.2 iso and voilà it installed like a charm in VirtualBox. Plus I was able to set the resolution to exactly 1920 x 1080 and run in full screen mode taking up my entire 27 inch monitor. Very nice!
I have not yet tried it, but VirtualBox seems to come with the ability to take a snapshot of a VM and to clone a VM. To get these features on VMware I'm pretty sure you need to buy the $249 VMware Workstation. I have a feeling that Workstation is a good product but I think it makes sense to try VirtualBox and see if the features that it comes with meet all my needs.
I installed VirtualBox at the end of the work day today so I haven’t had a lot of time to find its weaknesses and limitations. But so far it seems to have addressed several weaknesses that I found in VMware Player so it may have a lot of value to me. I think it is definitely worth trying out before moving on to the commercial version of VMware.
Bobby
P.S. Just tried the snapshot and clone features. Very neat. Also I forgot another nuisance with VMware Player. It always took a long time to shut down a machine. I think it was saving the current state. I didn’t really care about saving the state or whatever it was doing. Usually I just wanted to bring something up real quick and shut it down fast. This works like a charm on VirtualBox. It shuts down a VM in seconds. So far so good with VirtualBox.
P.P.S This morning I easily got both my Linux 6.8 and 7.2 VM’s to run with a nice screen size that takes up my entire 27 inch monitor but leaves room so I can see the menu at the top of the VM window and my Windows 7 status bar below the VM’s console window. Very nice. I was up late last night tossing and turning in bed thinking about all that I could do with the snapshot and linked clone features.
Categories: DBA Blogs
### Modified IO CPU+IO Elapsed Graph (sigscpuio)
Wed, 2016-07-06 18:16
Still tweaking my Python based Oracle database performance tuning graphs.
I kind of like this new version of my “sigscpuio” graph:
The earlier version plotted IO, CPU, and Elapsed time summed over a group of force matching signatures. It showed the components of the time spent by the SQL statements represented by those signatures. But the IO and CPU lines overlapped and you really could not tell how the elapsed time related to IO and CPU. I thought of changing to a stacked graph where the graph layered all three on top of each other but that would not work. Elapsed time is a separate measure of the total wall clock time and could be more or less than the total IO and CPU time. So, I got the idea of tweaking the chart to show IO time on the bottom, CPU+IO time in the middle, and let the line for elapsed time go wherever it falls. It could be above the CPU+IO line if there was time spent that was neither CPU nor IO. It could fall below the line if CPU+IO added up to more than the elapsed time.
So, this version of sigscpuio kind of stacks CPU and IO and just plots elapsed time wherever it falls. Might come in handy.
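As a rough illustration of that layering idea, here is a minimal matplotlib sketch with made-up numbers. It is not the real sigscpuio code; it just shows IO on the bottom, CPU stacked on top of IO, and elapsed time plotted wherever it lands.

# Hypothetical sketch of the sigscpuio layering idea with made-up data.
import matplotlib.pyplot as plt

snapshots = list(range(24))                      # fake hourly snapshots
io_secs = [100 + 5*i for i in snapshots]         # fake IO seconds
cpu_secs = [80 + 3*i for i in snapshots]         # fake CPU seconds
elapsed_secs = [200 + 10*i for i in snapshots]   # fake elapsed seconds

# bottom line: IO time by itself
plt.plot(snapshots, io_secs, label='IO')
# middle line: CPU stacked on top of IO
cpu_plus_io = [c + i for c, i in zip(cpu_secs, io_secs)]
plt.plot(snapshots, cpu_plus_io, label='CPU+IO')
# elapsed time is plotted as-is and can land above or below the CPU+IO line
plt.plot(snapshots, elapsed_secs, label='Elapsed')

plt.xlabel('Snapshot')
plt.ylabel('Seconds')
plt.title('IO, CPU+IO, and Elapsed time')
plt.legend()
plt.show()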
Bobby
Categories: DBA Blogs
### Graph frequently executed SQL by FORCE_MATCHING_SIGNATURE
Thu, 2016-06-16 15:10
I made a new graph in my PythonDBAGraphs program. Here is an example with real data but the database name blanked out:
My graphs are all sized for 1920 x 1080 monitors so I can see all the detail in the lines using my entire screen. The idea for this graph is to show how the performance of the queries that matter to the users changes as we add more load and data to this production database. I knew that this database had many queries with literals in their where clauses. I decided to pick a group of SQL by FORCE_MATCHING_SIGNATURE and to graph the average elapsed run time against the total number of executions.
I used this query to list all the SQL by signature:
column FORCE_MATCHING_SIGNATURE format 99999999999999999999
select FORCE_MATCHING_SIGNATURE,
sum(ELAPSED_TIME_DELTA)/1000000 total_seconds,
sum(executions_delta) total_executions,
count(distinct sql_id) number_sqlids,
count(distinct snap_id) number_hours,
min(PARSING_SCHEMA_NAME)
from DBA_HIST_SQLSTAT
group by FORCE_MATCHING_SIGNATURE
order by number_hours desc;
This is an edited version of the output – cut down to fit the page:
FORCE_MATCHING_SIGNATURE TOTAL_SECONDS TOTAL_EXECUTIONS NUMBER_HOURS
------------------------ ------------- ---------------- ------------
14038313233049026256 22621.203 68687024 1019
18385146879684525921 18020.9776 157888956 1013
2974462313782736551 22875.4743 673687 993
12492389898598272683 6203.78985 66412941 992
14164303807833460050 4390.32324 198997 980
10252833433610975622 6166.07675 306373 979
17697983043057986874 17391.0907 25914398 974
15459941437096211273 9869.31961 7752698 967
2690518030862682918 15308.8561 5083672 952
1852474737868084795 50095.5382 3906220 948
6256114255890028779 380.095915 4543306 947
16226347765919129545 9199.14289 215756 946
13558933806438570935 394.913411 4121336 945
12227994223267192558 369.784714 3970052 945
18298186003132032869 296.887075 3527130 945
17898820371160082776 184.125159 3527322 944
10790121820101128903 2474.15195 4923888 943
2308739084210563004 265.395538 3839998 941
13580764457377834041 2807.68503 62923457 934
12635549236735416450 1023.42959 702076 918
17930064579773119626 2423.03972 61576984 914
14879486686694324607 33.253284 17969 899
9212708781170196788 7292.5267 126641 899
357347690345658614 6321.51612 182371 899
15436428048766097389 11986.082 334125 886
5089204714765300123 6858.98913 190700 851
11165399311873161545 4864.60469 45897756 837
12042794039346605265 11223.0792 179064 835
15927676903549361476 505.624771 3717196 832
9120348263769454156 12953.0746 230090 828
10517599934976061598 311.61394 3751259 813
6987137087681155918 540.565595 3504784 809
11181311136166944889 5018.309 59540417 808
187803040686893225 3199.87327 12788206 800
I picked the ones that had executed in 800 or more hours. Our AWR has about 1000 hours of history so 800 hours represents about 80% of the AWR snapshots. I ended up pulling one of these queries out because it was a select for update and sometimes gets hung on row locks and skews the graph. So, the graph above has that one pulled out.
I based the graph above on this query:
select
sn.END_INTERVAL_TIME,
sum(ss.executions_delta) total_executions,
sum(ELAPSED_TIME_DELTA)/((sum(executions_delta)+1))
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.snap_id=sn.snap_id
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
and ss.FORCE_MATCHING_SIGNATURE in
(
14038313233049026256,
18385146879684525921,
2974462313782736551,
12492389898598272683,
14164303807833460050,
10252833433610975622,
17697983043057986874,
15459941437096211273,
2690518030862682918,
6256114255890028779,
16226347765919129545,
13558933806438570935,
12227994223267192558,
18298186003132032869,
17898820371160082776,
10790121820101128903,
2308739084210563004,
13580764457377834041,
12635549236735416450,
17930064579773119626,
14879486686694324607,
9212708781170196788,
357347690345658614,
15436428048766097389,
5089204714765300123,
11165399311873161545,
12042794039346605265,
15927676903549361476,
9120348263769454156,
10517599934976061598,
6987137087681155918,
11181311136166944889,
187803040686893225
)
group by sn.END_INTERVAL_TIME
order by sn.END_INTERVAL_TIME;
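To show how the two measures from this query can go on one graph, here is a simplified matplotlib sketch using a second y-axis. It is not the actual PythonDBAGraphs code and the data in it is made up; in practice the two lists would come from the query above.

# Hypothetical sketch: total executions and average elapsed ms per snapshot
# on two y-axes. The lists below are made-up placeholders.
import matplotlib.pyplot as plt

date_times = ['06-14 0' + str(h) + ':00' for h in range(1, 10)]  # fake labels
total_execs = [5000 + 300*i for i in range(9)]                   # fake executions
avg_elapsed_ms = [20 + 2*i for i in range(9)]                    # fake averages

fig, ax1 = plt.subplots()
ax1.plot(range(9), total_execs, 'b-')
ax1.set_ylabel('Total executions', color='b')

ax2 = ax1.twinx()   # second y-axis sharing the same x-axis
ax2.plot(range(9), avg_elapsed_ms, 'r-')
ax2.set_ylabel('Average elapsed ms', color='r')

ax1.set_xticks(range(9))
ax1.set_xticklabels(date_times, rotation=45)
ax1.set_title('Executions versus average elapsed time')
fig.tight_layout()
plt.show()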
Only time will tell if this really is a helpful way to check system performance as the load grows, but I thought it was worth sharing what I had done. Some part of this might be helpful to others.
Bobby
Categories: DBA Blogs
### Understanding query slowness after platform change
Thu, 2016-05-12 14:54
We are moving a production database from 10.2 Oracle on HP-UX 64 bit Itanium to 11.2 Oracle on Linux on 64 bit Intel x86. So, we are upgrading the database software from 10.2 to 11.2. We are also changing endianness from Itanium’s byte order to that of Intel’s x86-64 processors. Also, my tests have shown that the new processors are about twice as fast as the older Itanium CPUs.
Two SQL queries stand out as being a lot slower on the new system although other queries are fine. So, I tried to understand why these particular queries were slower. I will just talk about one query since we saw similar behavior for both. This query has sql_id = aktyyckj710a3.
First I looked at the way the query executed on both systems using a query like this:
select ss.sql_id,
ss.plan_hash_value,
sn.END_INTERVAL_TIME,
ss.executions_delta,
ELAPSED_TIME_DELTA/(executions_delta*1000),
CPU_TIME_DELTA/(executions_delta*1000),
IOWAIT_DELTA/(executions_delta*1000),
CLWAIT_DELTA/(executions_delta*1000),
APWAIT_DELTA/(executions_delta*1000),
CCWAIT_DELTA/(executions_delta*1000),
BUFFER_GETS_DELTA/executions_delta,
ROWS_PROCESSED_DELTA/executions_delta
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = 'aktyyckj710a3'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id;
It had a single plan on production and averaged a few seconds per execution:
PLAN_HASH_VALUE END_INTERVAL_TIME EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
--------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
918231698 11-MAY-16 06.00.40.980 PM 195 1364.80228 609.183405 831.563728 0 0 0 35211.9487 1622.4 6974.40513
918231698 11-MAY-16 07.00.53.532 PM 129 555.981481 144.348698 441.670271 0 0 0 8682.84496 646.984496 1810.51938
918231698 11-MAY-16 08.00.05.513 PM 39 91.5794872 39.6675128 54.4575897 0 0 0 3055.17949 63.025641 669.153846
918231698 12-MAY-16 08.00.32.814 AM 35 178.688971 28.0369429 159.676629 0 0 0 1464.28571 190.8 311.485714
918231698 12-MAY-16 09.00.44.997 AM 124 649.370258 194.895944 486.875758 0 0 0 13447.871 652.806452 2930.23387
918231698 12-MAY-16 10.00.57.199 AM 168 2174.35909 622.905935 1659.14223 0 0 .001303571 38313.1548 2403.28571 8894.42857
918231698 12-MAY-16 11.00.09.362 AM 213 3712.60403 1100.01973 2781.68793 0 0 .000690141 63878.1362 3951 15026.2066
918231698 12-MAY-16 12.00.21.835 PM 221 2374.74486 741.20133 1741.28251 0 0 .000045249 44243.8914 2804.66063 10294.81
On the new Linux system the query was taking 10 times as long to run as in the HP system.
PLAN_HASH_VALUE END_INTERVAL_TIME EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
--------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2834425987 10-MAY-16 07.00.09.243 PM 41 39998.8871 1750.66015 38598.1108 0 0 0 50694.1463 11518.0244 49379.4634
2834425987 10-MAY-16 08.00.13.522 PM 33 44664.4329 1680.59361 43319.9765 0 0 0 47090.4848 10999.1818 48132.4242
2834425987 11-MAY-16 11.00.23.769 AM 8 169.75075 60.615125 111.1715 0 0 0 417.375 92 2763.25
2834425987 11-MAY-16 12.00.27.950 PM 11 14730.9611 314.497455 14507.0803 0 0 0 8456.63636 2175.63636 4914.90909
2834425987 11-MAY-16 01.00.33.147 PM 2 1302.774 1301.794 0 0 0 0 78040 0 49013
2834425987 11-MAY-16 02.00.37.442 PM 1 1185.321 1187.813 0 0 0 0 78040 0 49013
2834425987 11-MAY-16 03.00.42.457 PM 14 69612.6197 2409.27829 67697.353 0 0 0 45156.8571 11889.1429 45596.7143
2834425987 11-MAY-16 04.00.47.326 PM 16 65485.9254 2232.40963 63739.7442 0 0 0 38397.4375 12151.9375 52222.1875
2834425987 12-MAY-16 08.00.36.402 AM 61 24361.6303 1445.50141 23088.6067 0 0 0 47224.4426 5331.06557 47581.918
2834425987 12-MAY-16 09.00.40.765 AM 86 38596.7262 1790.56574 37139.4262 0 0 0 46023.0349 9762.01163 48870.0465
The query plans were not the same but they were similar. Also, the number of rows in our test cases were more than the average number of rows per run in production but it still didn’t account for all the differences.
We decided to use an outline hint and SQL Profile to force the HP system’s plan on the queries in the Linux system to see if the same plan would run faster.
It was a pain to run the query with bind variables that are dates for my test so I kind of cheated and replaced the bind variables with literals. First I extracted some example values for the variables from the original system:
select * from
(select distinct
to_char(sb.LAST_CAPTURED,'YYYY-MM-DD HH24:MI:SS') DATE_TIME,
sb.NAME,
sb.VALUE_STRING
from
DBA_HIST_SQLBIND sb
where
sb.sql_id='aktyyckj710a3' and
sb.WAS_CAPTURED='YES')
order by
DATE_TIME,
NAME;
Then I got the plan of the query with the bind variables filled in with the literals from the original HP system. Here is how I got the plan without the SQL query itself:
truncate table plan_table;
explain plan into plan_table for
-- problem query here with bind variables replaced
/
set markup html preformat on
select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));
This plan outputs an outline hint similar to this:
/*+
BEGIN_OUTLINE_DATA
INDEX_RS_ASC(@"SEL$683B0107" ... NO_ACCESS(@"SEL$5DA710D3" "VW_NSO_1"@"SEL$5DA710D3") OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2") UNNEST(@"SEL$2")
OUTLINE_LEAF(@"SEL$5DA710D3") OUTLINE_LEAF(@"SEL$683B0107")
ALL_ROWS
OPT_PARAM('query_rewrite_enabled' 'false')
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/
Now, to force aktyyckj710a3 to run on the new system with the same plan as on the original system I had to run the query on the new system with the outline hint and get the plan hash value for the plan that the query uses.
explain plan into plan_table for
SELECT
/*+
BEGIN_OUTLINE_DATA
...
END_OUTLINE_DATA
*/
*
FROM
...
Plan hash value: 1022624069
So, I compared the two plans and they were the same but the plan hash values were different. Plan hash value 1022624069 on Linux corresponded to the same plan as 918231698 on the original HP system. I think that endianness differences caused the plan_hash_value differences for the same plan.
Then we forced the original HP system plan on to the real sql_id using coe_xfr_sql_profile.sql.
-- build script to load profile
@coe_xfr_sql_profile.sql aktyyckj710a3 1022624069
-- run generated script
@coe_xfr_sql_profile_aktyyckj710a3_1022624069.sql
Sadly, even after forcing the original system’s plan on the new system, the query still ran just as slow. But, at least we were able to remove the plan difference as the source of the problem.
We did notice a high I/O time on the Linux executions. Running AWR reports showed about a 5 millisecond single block read time on Linux and about 1 millisecond on HP. I also graphed this over time using my Python scripts:
HP-UX db file sequential read graph:
So, in general our source HP system was seeing sub millisecond single block reads but our new Linux system was seeing multiple millisecond reads. So, this led us to look at differences in the storage system. It seems that the original system was on flash or solid state disk and the new one was not. So, we are going to move the new system to SSD and see how that affects the query performance.
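For reference, the single block read times behind graphs like the one above can be pulled from DBA_HIST_SYSTEM_EVENT. Here is a simplified sketch of that kind of script. It is not the exact PythonDBAGraphs code, the connection details are placeholders, and it assumes a single instance database.

# Hypothetical sketch: average db file sequential read time per AWR snapshot.
# DBA_HIST_SYSTEM_EVENT values are cumulative, so lag() computes per-snapshot
# deltas; instance restarts (negative deltas) are simply filtered out below.
import cx_Oracle
import matplotlib.pyplot as plt

query = """
select to_char(sn.END_INTERVAL_TIME,'MM-DD HH24:MI') date_time,
(e.time_waited_micro - lag(e.time_waited_micro) over (order by e.snap_id)) /
nullif(e.total_waits - lag(e.total_waits) over (order by e.snap_id),0) avg_micro
from DBA_HIST_SYSTEM_EVENT e, DBA_HIST_SNAPSHOT sn
where e.snap_id = sn.snap_id
and e.INSTANCE_NUMBER = sn.INSTANCE_NUMBER
and e.event_name = 'db file sequential read'
order by e.snap_id
"""

connection = cx_Oracle.connect('MYUSER', 'MYPASSWORD', 'MYHOST:1521/MYSERVICE')
cursor = connection.cursor()
cursor.execute(query)
rows = [r for r in cursor.fetchall() if r[1] is not None and r[1] >= 0]
connection.close()

labels = [r[0] for r in rows]
avg_micro = [r[1] for r in rows]

plt.plot(range(len(avg_micro)), avg_micro)
plt.ylabel('Average wait (microseconds)')
plt.title('db file sequential read average wait time')
step = max(1, len(labels)//10)
plt.xticks(range(0, len(labels), step), labels[::step], rotation=45)
plt.tight_layout()
plt.show()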
Even though this led to a possible hardware issue I thought it was worth sharing the process I took to get there including eliminating differences in the query plan by matching the plan on the original platform.
Bobby
Postscript:
Our Linux and storage teams moved the new Linux VM to solid state disk and resolved these issues. The query ran about 10 times faster than it did on the original system after moving Linux to SSD.
HP Version:
END_INTERVAL_TIME EXECUTIONS_DELTA Elapsed Average ms
------------------------- ---------------- ------------------
02.00.03.099 PM 245 5341.99923
03.00.15.282 PM 250 1280.99632
04.00.27.536 PM 341 3976.65855
05.00.39.887 PM 125 2619.58894
Linux:
END_INTERVAL_TIME EXECUTIONS_DELTA Elapsed Average ms
------------------------- ---------------- ------------------
16-MAY-16 09.00.35.436 AM 162 191.314809
16-MAY-16 10.00.38.835 AM 342 746.313994
16-MAY-16 11.00.42.366 AM 258 461.641705
16-MAY-16 12.00.46.043 PM 280 478.601618
The single block read time is well under 1 millisecond now that
the Linux database is on SSD.
END_INTERVAL_TIME number of waits ave microseconds
-------------------------- --------------- ----------------
15-MAY-16 11.00.54.676 PM 544681 515.978687
16-MAY-16 12.00.01.873 AM 828539 502.911935
16-MAY-16 01.00.06.780 AM 518322 1356.92377
16-MAY-16 02.00.10.272 AM 10698 637.953543
16-MAY-16 03.00.13.672 AM 193 628.170984
16-MAY-16 04.00.17.301 AM 112 1799.3125
16-MAY-16 05.00.20.927 AM 1680 318.792262
16-MAY-16 06.00.24.893 AM 140 688.914286
16-MAY-16 07.00.28.693 AM 4837 529.759768
16-MAY-16 08.00.32.242 AM 16082 591.632508
16-MAY-16 09.00.35.436 AM 280927 387.293204
16-MAY-16 10.00.38.835 AM 737846 519.94157
16-MAY-16 11.00.42.366 AM 1113762 428.772997
16-MAY-16 12.00.46.043 PM 562258 510.357372
Sweet!
Categories: DBA Blogs
### Comparing Common Queries Between Test and Production
Thu, 2016-05-05 13:58
The developers complained that their test database was so much slower than production that they could not use it to really test whether their batch processes would run fast enough when migrated to production. They did not give me any particular queries to check. Instead they said that the system was generally too slow. So, I went through a process to find SQL statements that they had run in test and that normally run in production and compare their run times. I thought that I would document the process that I went through here.
First I found the top 100 queries by elapsed time on both the test and production databases using this query:
column FORCE_MATCHING_SIGNATURE format 99999999999999999999
select FORCE_MATCHING_SIGNATURE from
(select
FORCE_MATCHING_SIGNATURE,
sum(ELAPSED_TIME_DELTA) total_elapsed
from DBA_HIST_SQLSTAT
where
FORCE_MATCHING_SIGNATURE is not null and
FORCE_MATCHING_SIGNATURE <>0
group by FORCE_MATCHING_SIGNATURE
order by total_elapsed desc)
where rownum < 101;
The output looked like this:
FORCE_MATCHING_SIGNATURE
------------------------
944718698451269965
4634961225655610267
15939251529124125793
15437049687902878835
2879196232471320459
12776764566159396624
14067042856362022182
...
Then I found the signatures that were in common between the two lists.
insert into test_sigs values (944718698451269965);
insert into test_sigs values (4634961225655610267);
insert into test_sigs values (15939251529124125793);
...
insert into prod_sigs values (3898230136794347827);
insert into prod_sigs values (944718698451269965);
insert into prod_sigs values (11160330134321800286);
...
select * from test_sigs
intersect
select * from prod_sigs;
This led to 32 values of FORCE_MATCHING_SIGNATURE which represented queries that ran on both test and production, except for the possible difference in constants.
Next I looked at the overall performance of these 32 queries in test and production using this query:
create table common_sigs
(FORCE_MATCHING_SIGNATURE number);
insert into common_sigs values (575231776450247964);
insert into common_sigs values (944718698451269965);
insert into common_sigs values (1037345866341698119);
...
select
sum(executions_delta) total_executions,
sum(ELAPSED_TIME_DELTA)/(sum(executions_delta)*1000),
sum(CPU_TIME_DELTA)/(sum(executions_delta)*1000),
sum(IOWAIT_DELTA)/(sum(executions_delta)*1000),
sum(CLWAIT_DELTA)/(sum(executions_delta)*1000),
sum(APWAIT_DELTA)/(sum(executions_delta)*1000),
sum(CCWAIT_DELTA)/(sum(executions_delta)*1000),
sum(BUFFER_GETS_DELTA)/sum(executions_delta),
sum(ROWS_PROCESSED_DELTA)/sum(executions_delta)
from DBA_HIST_SQLSTAT ss,common_sigs cs
where
ss.FORCE_MATCHING_SIGNATURE = cs.FORCE_MATCHING_SIGNATURE;
Here is part of the output:
TOTAL_EXECUTIONS Elapsed Average ms CPU Average ms IO Average ms
---------------- ------------------ -------------- -------------
5595295 366.185529 241.92785 59.8682797
430763 1273.75822 364.258421 1479.83294
The top line is production and the bottom is test.
This result supported the development team’s assertion that test was slower than production. The 32 queries averaged about 3.5 times longer run times in test than in production. Also, the time spent on I/O was about 25 times worse. I am not sure why the I/O time exceeded the elapsed time on test. I guess it has something to do with how Oracle measures I/O time. But clearly on average these 32 queries are much slower on test and I/O time probably caused most of the run time difference.
After noticing this big difference between test and production I decided to get these same sorts of performance metrics for each signature to see if certain ones were worse than others. The query looked like this:
select
ss.FORCE_MATCHING_SIGNATURE,
sum(executions_delta) total_executions,
sum(ELAPSED_TIME_DELTA)/(sum(executions_delta)*1000),
sum(CPU_TIME_DELTA)/(sum(executions_delta)*1000),
sum(IOWAIT_DELTA)/(sum(executions_delta)*1000),
sum(CLWAIT_DELTA)/(sum(executions_delta)*1000),
sum(APWAIT_DELTA)/(sum(executions_delta)*1000),
sum(CCWAIT_DELTA)/(sum(executions_delta)*1000),
sum(BUFFER_GETS_DELTA)/sum(executions_delta),
sum(ROWS_PROCESSED_DELTA)/sum(executions_delta)
from DBA_HIST_SQLSTAT ss,common_sigs cs
where ss.FORCE_MATCHING_SIGNATURE = cs.FORCE_MATCHING_SIGNATURE
having
sum(executions_delta) > 0
group by
ss.FORCE_MATCHING_SIGNATURE
order by
ss.FORCE_MATCHING_SIGNATURE;
I put together the outputs from running this query on test and production and lined the result up like this:
FORCE_MATCHING_SIGNATURE PROD Average ms TEST Average ms
------------------------ ------------------ ------------------
575231776450247964 20268.6719 16659.4585
944718698451269965 727534.558 3456111.6 *
1037345866341698119 6640.87641 8859.53518
1080231657361448615 3611.37698 4823.62857
2879196232471320459 95723.5569 739287.601 *
2895012443099075884 687272.949 724081.946
3371400666194280661 1532797.66 761762.181
4156520416999188213 109238.997 213658.722
4634693999459450255 4923.8897 4720.16455
5447362809447709021 2875.37308 2659.5754
5698160695928381586 17139.6304 16559.1932
6260911340920427003 290069.674 421058.874 *
7412302135920006997 20039.0452 18951.6357
7723300319489155163 18045.9756 19573.4784
9153380962342466451 1661586.53 1530076.01
9196714121881881832 5.48003488 5.13169472
9347242065129163091 4360835.92 4581093.93
11140980711532357629 3042320.88 5048356.99
11160330134321800286 6868746.78 6160556.38
12212345436143033196 5189.7972 5031.30811
12776764566159396624 139150.231 614207.784 *
12936428121692179551 3563.64537 3436.59365
13637202277555795727 7360.0632 6410.02772
14067042856362022182 859.732015 771.041714
14256464986207527479 51.4042938 48.9237251
14707568089762185958 627.586095 414.14762
15001584593434987669 1287629.02 1122151.35
15437049687902878835 96014.9782 996974.876 *
16425440090840528197 48013.8912 50799.6184
16778386062441486289 29459.0089 26845.8327
17620933630628481201 51199.0511 111785.525 *
18410003796880256802 581563.611 602866.609
I put an asterisk (*) beside the six queries that were much worse on test than production. I decided to focus on these six to get to the bottom of the reason between the difference. Note that many of the 32 queries ran about the same on test as prod so it really isn’t the case that everything was slow on test.
Now that I had identified the 6 queries I wanted to look at what they were spending their time on including both CPU and wait events. I used the following query to use ASH to get a profile of the time spent by these queries on both databases:
select
case SESSION_STATE
when 'WAITING' then event
else SESSION_STATE
end TIME_CATEGORY,
(count(*)*10) seconds
from DBA_HIST_ACTIVE_SESS_HISTORY
where
FORCE_MATCHING_SIGNATURE in
('944718698451269965',
'2879196232471320459',
'6260911340920427003',
'12776764566159396624',
'15437049687902878835',
'17620933630628481201')
group by SESSION_STATE,EVENT
order by seconds desc;
The profile looked like this in test:
TIME_CATEGORY SECONDS
------------------------ -------
ON CPU 141010
direct path write temp 23110
The profile looked like this in production:
TIME_CATEGORY SECONDS
------------------------ -------
ON CPU 433260
PX qref latch 64200
direct path write temp 12000
So, I/O waits dominate the time on test but not production. Since db file parallel read and db file sequential read were the top I/O waits for these 6 queries I used ash to see which of the 6 spent the most time on these waits.
select
2 sql_id,
3 (count(*)*10) seconds
4 from DBA_HIST_ACTIVE_SESS_HISTORY
5 where
6 FORCE_MATCHING_SIGNATURE in
7 ('944718698451269965',
8 '2879196232471320459',
9 '6260911340920427003',
10 '12776764566159396624',
11 '15437049687902878835',
12 '17620933630628481201') and
14 group by sql_id
15 order by seconds desc;
SQL_ID SECONDS
------------- ----------
ak2wk2sjwnd34 159020
95b6t1sp7y40y 37030
brkfcwv1mqsas 11370
7rdc79drfp28a 30
select
2 sql_id,
3 (count(*)*10) seconds
4 from DBA_HIST_ACTIVE_SESS_HISTORY
5 where
6 FORCE_MATCHING_SIGNATURE in
7 ('944718698451269965',
8 '2879196232471320459',
9 '6260911340920427003',
10 '12776764566159396624',
11 '15437049687902878835',
12 '17620933630628481201') and
14 group by sql_id
15 order by seconds desc;
SQL_ID SECONDS
------------- ----------
95b6t1sp7y40y 26840
ak2wk2sjwnd34 22550
6h0km9j5bp69t 13300
brkfcwv1mqsas 170
7rdc79drfp28a 130
Two queries stood out as the top waiters on these two events: 95b6t1sp7y40y and ak2wk2sjwnd34. Then I just ran my normal sqlstat query for both sql_ids for both test and production to find out when they last ran. Here is what the query looks like for ak2wk2sjwnd34:
select ss.sql_id,
ss.plan_hash_value,
sn.END_INTERVAL_TIME,
ss.executions_delta,
ELAPSED_TIME_DELTA/(executions_delta*1000) "Elapsed Average ms",
CPU_TIME_DELTA/(executions_delta*1000) "CPU Average ms",
IOWAIT_DELTA/(executions_delta*1000) "IO Average ms",
CLWAIT_DELTA/(executions_delta*1000) "Cluster Average ms",
APWAIT_DELTA/(executions_delta*1000) "Application Average ms",
CCWAIT_DELTA/(executions_delta*1000) "Concurrency Average ms",
BUFFER_GETS_DELTA/executions_delta "Average buffer gets",
ROWS_PROCESSED_DELTA/executions_delta "Average rows processed"
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = 'ak2wk2sjwnd34'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id;
I found two time periods where both of these queries were recently run on both test and production and got an AWR report for each time period to compare them.
Here are a couple of pieces of the AWR report for the test database:
Here are similar pieces for the production database:
What really stood out to me was that the wait events were so different. In production the db file parallel read waits averaged around 1 millisecond and the db file sequential reads averaged under 1 ms. On test they were 26 and 5 milliseconds, respectively. The elapsed times for sql_ids 95b6t1sp7y40y and ak2wk2sjwnd34 were considerably longer in test.
This is as far as my investigation went. I know that the slowdown is most pronounced on the two queries and I know that their I/O waits correspond to the two wait events. I am still trying to find a way to bring the I/O times down on our test database so that it more closely matches production. But at least I have a more narrow focus with the two top queries and the two wait events.
Bobby
Categories: DBA Blogs
### Jonathan Lewis
Tue, 2016-04-19 18:09
I am finally getting around to finishing my four-part blog series on people who have had the most influence on my Oracle performance tuning work. The previous three people were Craig Shallahamer, Don Burleson, and Cary Millsap. The last person is Jonathan Lewis. These four people, listed and blogged about in chronological order, had the most influence on my understanding of how to do Oracle database performance tuning. There are many other great people out there and I am sure that other DBAs would produce their own, different, list of people who influenced them. But this list reflects my journey through my Oracle database career and the issues that I ran into and the experiences that I had. I ran into Jonathan Lewis’ work only after years of struggling with query tuning and getting advice from others. I ran into his material right around the time that I was beginning to learn about how the Oracle optimizer worked and some of its limits. Jonathan was a critical next step in my understanding of how Oracle’s optimizer worked and why it sometimes failed to pick the most efficient way to run a query.
Jonathan has produced many helpful tuning resources including his blog, his participation in online forums, and his talks at user group conferences, but the first and most profound way he taught me about Oracle performance tuning was through his query tuning book Cost-Based Oracle Fundamentals. It’s $30 on Amazon and that is an incredibly small amount of money to pay compared to the value of the material inside the book. I had spent many hours over several years trying to understand why the Oracle optimizer sometimes chooses the wrong way to run a query. In many cases the fast way to run something was clear to me and the optimizer’s choices left me stumped. The book helped me better understand how the Oracle optimizer chooses what it thinks is the best execution plan. Jonathan’s book describes the different parts of a plan – join types, access methods, etc. – and how the optimizer assigns a cost to the different pieces of a plan. The optimizer chooses the plan with the least cost, but if some mistake causes the optimizer to calculate an unrealistic cost then it might choose a poor plan. Understanding why the optimizer would choose a slow plan helped me understand how to resolve performance issues or prevent them from happening, a very valuable skill.
There is a lot more I could say about what I got from Jonathan Lewis’ book including just observing how he operated. Jonathan filled his book with examples which show concepts that he was teaching. I think that I have emulated the kind of building of test scripts that you see throughout his book and on his blog and community forums. I think I have emulated not only Jonathan’s approach but the approaches of all four of the people who I have spotlighted in this series. Each has provided me with profoundly helpful technical information that has helped me in my career. But they have also provided me with a pattern of what an Oracle performance tuning practitioner looks like. What kind of things do they do? To this point in my career I have found the Oracle performance tuning part of my job to be the most challenging and interesting and probably the most valuable to my employers. Jonathan Lewis and the three others in this four-part series have been instrumental in propelling me along this path and I am very appreciative.
Bobby
Categories: DBA Blogs
### Log file parallel write wait graph
Thu, 2016-03-31 10:50
I got a chance to use my onewait Python based graph to help with a performance problem. I’m looking at slow write time from the log writer on Thursday mornings. Here is the graph with the database name erased:
We are still trying to track down the source of the problem but there seems to be a backup on another system that runs at times that correspond to the spike in log file parallel write wait times. The nice thing about this graph is that it shows you activity on the top and average wait time on the bottom so you can see if the increased wait time corresponds to a spike in activity. In this case there does not seem to be any increase in activity on the problematic database. But that makes sense if the real problem is contention by a backup on another system.
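The layout of that graph – activity on top and average wait time on the bottom – can be sketched with two matplotlib subplots. This is a simplified, hypothetical example with made-up data, not the real onewait script.

# Hypothetical two-panel layout: wait counts on top, average wait time below.
import matplotlib.pyplot as plt

snapshots = list(range(48))                          # fake half-hourly snapshots
wait_counts = [1000 + 50*i for i in snapshots]       # fake number of waits
avg_wait_ms = [2 + (5 if 20 <= i <= 24 else 0) for i in snapshots]  # fake spike

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)

top.plot(snapshots, wait_counts, 'b')
top.set_ylabel('Number of waits')
top.set_title('log file parallel write')

bottom.plot(snapshots, avg_wait_ms, 'r')
bottom.set_ylabel('Average wait (ms)')
bottom.set_xlabel('Snapshot')

fig.tight_layout()
plt.show()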
Anyway, my Python graphs are far from perfect but still helpful in this case.
Bobby
Categories: DBA Blogs
### Python DBA Graphs Github Repository
Tue, 2016-03-29 17:40
I decided to get rid of the Github repository that I had experimented with and to create a new one. The old one had a dump of all my SQL scripts but without any documentation. But, I have updated my Python graphing scripts a bit at a time and have had some recent value from these scripts in my Oracle database tuning work. So, I created a Github repository called PythonDBAGraphs. I think it will be more valuable to have a repository that is more focused and is being actively updated and documented.
It is still very simple but I have gotten real value from the two graphs that are included.
Bobby
Categories: DBA Blogs
### Another SQL Profile to the rescue!
Mon, 2016-03-28 18:57
We have had problems with a set of databases over the past few weeks. Our team does not support these databases, but my director asked me to help. These are 11.2.0.1 Windows 64 bit Oracle databases running on Windows 2008. The incident reports said that the systems stop working and that the main symptom was that the oracle.exe process uses all the CPU. They were bouncing the database server when they saw this behavior and it took about 30 minutes after the bounce for the CPU to go back down to normal. A Windows server colleague told me that at some point in the past a new version of virus software had apparently caused high CPU from the oracle.exe process.
At first I looked for some known bugs related to high CPU and virus checkers without much success. Then I got the idea of just checking for query performance. After all, a poorly performing query can eat up a lot of CPU. These Windows boxes only have 2 cores so it would not take many concurrently running high CPU queries to max it out. So, I got an AWR report covering the last hour of a recent incident. This was the top SQL:
The top query, sql id 27d8x8p6139y6, stood out as very inefficient and all CPU. It seemed clear to me from this listing that the 2 core box had a heavy load and a lot of waiting for CPU queuing. %IO was zero but %CPU was only 31%. Most likely the rest was CPU queue time.
I also looked at my sqlstat report to see which plans 27d8x8p6139y6 had used over time.
PLAN_HASH_VALUE END_INTERVAL_TIME EXECUTIONS Elapsed ms
--------------- --------------------- ---------- -----------
3067874494 07-MAR-16 09.00.50 PM 287 948.102286
3067874494 07-MAR-16 10.00.03 PM 292 1021.68191
3067874494 07-MAR-16 11.00.18 PM 244 1214.96161
3067874494 08-MAR-16 12.00.32 AM 276 1306.16222
3067874494 08-MAR-16 01.00.45 AM 183 1491.31307
467860697 08-MAR-16 01.00.45 AM 125 .31948
467860697 08-MAR-16 02.00.59 AM 285 .234073684
467860697 08-MAR-16 03.00.12 AM 279 .214354839
467860697 08-MAR-16 04.00.25 AM 246 .17147561
467860697 08-MAR-16 05.00.39 AM 18 .192
2868766721 13-MAR-16 06.00.55 PM 89 159259.9
3067874494 13-MAR-16 06.00.55 PM 8 854.384125
2868766721 13-MAR-16 07.00.50 PM 70 1331837.56
Plan 2868766721 seemed terrible but plan 467860697 seemed great.
Our group doesn’t support these databases so I am not going to dig into how the application gathers statistics, what indexes it uses, or how the vendor designed the application. But, it seems possible that forcing the good plan with a SQL Profile could resolve this issue without having any access to the application or understanding of its design.
But, before plunging headlong into the use of a SQL Profile I looked at the plan and the SQL text. I have edited these to hide any proprietary details:
SELECT T.*
FROM TAB_MYTABLE1 T,
TAB_MYTABLELNG A,
TAB_MYTABLE1 PIR_T,
TAB_MYTABLELNG PIR_A
WHERE A.MYTABLELNG_ID = T.MYTABLELNG_ID
AND A.ASSIGNED_TO = :B1
AND A.ACTIVE_FL = 1
AND T.COMPLETE_FL = 0
AND T.SHORTED_FL = 0
AND PIR_T.MYTABLE1_ID = T.PIR_MYTABLE1_ID
AND ((PIR_A.FLOATING_PIR_FL = 1
AND PIR_T.COMPLETE_FL = 1)
OR PIR_T.QTY_PICKED IS NOT NULL)
AND PIR_A.MYTABLELNG_ID = PIR_T.MYTABLELNG_ID
AND PIR_A.ASSIGNED_TO IS NULL
ORDER BY T.MYTABLE1_ID
The key thing I noticed is that there was only one bind variable. The innermost part of the good plan uses an index on the column that the query equates with the bind variable. The rest of the plan is a nice nested loops plan with range and unique index scans. I see plans in this format in OLTP queries where you are looking up small numbers of rows using an index and join to related tables.
-----------------------------------------------------------------
Id | Operation | Name
-----------------------------------------------------------------
0 | SELECT STATEMENT |
1 | SORT ORDER BY |
2 | NESTED LOOPS |
3 | NESTED LOOPS |
4 | NESTED LOOPS |
5 | NESTED LOOPS |
6 | TABLE ACCESS BY INDEX ROWID| TAB_MYTABLELNG
7 | INDEX RANGE SCAN | AK_MYTABLELNG_BY_USER
8 | TABLE ACCESS BY INDEX ROWID| TAB_MYTABLE1
9 | INDEX RANGE SCAN | AK_MYTABLE1_BY_MYTABLELNG
10 | TABLE ACCESS BY INDEX ROWID | TAB_MYTABLE1
11 | INDEX UNIQUE SCAN | PK_MYTABLE1
12 | INDEX UNIQUE SCAN | PK_MYTABLELNG
13 | TABLE ACCESS BY INDEX ROWID | TAB_MYTABLELNG
-----------------------------------------------------------------
Plan hash value: 2868766721
----------------------------------------------------------------
Id | Operation | Name
----------------------------------------------------------------
0 | SELECT STATEMENT |
1 | NESTED LOOPS |
2 | NESTED LOOPS |
3 | MERGE JOIN CARTESIAN |
4 | TABLE ACCESS BY INDEX ROWID | TAB_MYTABLE1
5 | INDEX FULL SCAN | PK_MYTABLE1
6 | BUFFER SORT |
7 | TABLE ACCESS BY INDEX ROWID| TAB_MYTABLELNG
8 | INDEX RANGE SCAN | AK_MYTABLELNG_BY_USER
9 | TABLE ACCESS BY INDEX ROWID | TAB_MYTABLE1
10 | INDEX RANGE SCAN | AK_MYTABLE1_BY_MYTABLELNG
11 | TABLE ACCESS BY INDEX ROWID | TAB_MYTABLELNG
12 | INDEX RANGE SCAN | AK_MYTABLELNG_BY_USER
----------------------------------------------------------------
Reviewing the SQL made me believe that there was a good chance that a SQL Profile forcing the good plan would resolve the issue. Sure, there could be some weird combination of data and bind variable values that make the bad plan the better one. But, given that this was a simple transactional application it seems most likely that the straightforward nested loops with index on the only bind variable plan would be best.
We used the SQL Profile to force these plans on four servers and so far the SQL Profile has resolved the issues. I’m not saying that forcing a plan using a SQL Profile is the only or even best way to resolve query performance issues. But, this was a good example of where a SQL Profile makes sense. If modifying the application, statistics, parameters, and schema is not possible then a SQL Profile can come to your rescue in a heartbeat.
Bobby
### Math Resources
Thu, 2016-03-17 17:51
I feel like I have not been posting very much on this blog lately. I have been focused on things outside of Oracle performance so I haven’t had a lot of new scripts to post. I have been quietly updating my Python source code on GitHub so check that out. I have spent a lot of time educating myself in various ways including through the leadership and communication training program that comes from Toastmasters. My new job title is “Technical Architect” which is a form of technical leadership so I’m trying to expand myself beyond being an Oracle database administrator that specializes in performance tuning.
In addition to developing my leadership and communication skills I have gotten into a general computer science self-education kick. I took two introductory C.S. classes on edX. I also read a book on Linux hacking and a book on computer history. I was thinking of buying one of the Donald Knuth books or going through MIT’s free online algorithms class, 6.006. I have a computer science degree and spent two years in C.S. graduate school but that was a long time ago. It is kind of fun to refresh my memory and catch up with the latest trends. But the catch is that both the Knuth book and MIT’s 6.006 class require math that I either never learned or have forgotten. So, I am working my way through some math resources that I wanted to share with those who read this blog.
The first thing I did was to buy a computer math book, called Concrete Mathematics, that seemed to cover the needed material. Reviews on Amazon.com recommended this book as good background for the Knuth series and one of the Oracle performance experts that I follow on Twitter recommended it for similar reasons. But, after finishing my second edX class I began exploring the MIT OCW math class that was a prerequisite to MIT’s 6.006 algorithms class. MIT calls the math class 6.042J and I am working through the Fall 2010 version of the class. There is a lot of overlap between the class and the book but they are not a perfect match. The book has some material that is more difficult to follow than the class. It is probably more advanced. The class covers some topics, namely graph theory, that the book does not. The free online class has some very good lecture videos by a top MIT professor, Tom Leighton. I even had my wife and daughters sit down and watch his first lecture with me on our family television for fun on my birthday.
The book led me to a great free math resource called Maxima. Maxima has all kinds of great math built into it such as solving equations, factoring integers, etc. Plus, it is free. There are other similar and I think more popular programs that are not free but for my use it was great to simply download Maxima and have its functionality at my fingertips. | 2016-10-22 05:52:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21224218606948853, "perplexity": 4579.7818004841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718426.35/warc/CC-MAIN-20161020183838-00225-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/disk-washer-method-check-my-set-up.465482/ | # Disk/Washer Method - check my set up?
## Homework Statement
choose between the washer and disk method, and find the volume of the solid generated by revolving the region bounded by the following curves around a) the y-axis b) the x-axis c) y=8, and d) x=2
y=2x2
y=0
x=2
## The Attempt at a Solution
So I set up a) from 0 to 8 [ 2-sqrt(y/2)]^2 dy (washer method)
I don't know how to do the notation around here, so I hope that is clear? And I did remember the PI out front in these. I got 16pi/3
b) from 0 to 2 (2x^2)^2 dx (disk method) for 128pi/5
c) from 0 to 2 (x-2x^2)^2 dx (washer method) for 184pi/15
d) from 0 to 8 (sqrt(y/2))^2 dy (disk method) for 16pi
I have no clue if I'm doing this right.... all the values seem so different... :(
HallsofIvy
Yes, for rotation around the y-axis, since the y-axis is not a boundary of the region being rotated, use the "washer method". However, you have the integrand wrong. $\pi (r_1- r_2)^2$ is the area of a full circle of radius $r_1- r_2$. A "washer" with outer radius $r_1$ and inner radius $r_2$ can be thought of as the area of the outer circle, $\pi r_1^2$, minus the area of the inner circle, $\pi r_2^2$: the area of the washer is $\pi(r_1^2- r_2^2)$.
The radius will be along the x- direction and, since $y= 2x^2$ but we only need x positive, $x= y^{1/2}/\sqrt{2}$ and the area of the "washer" from that x to x= 2 is $\pi(r_1^2- r_2^2)= \pi(4- y/2)$. The volume is $\pi \int_0^8 (4- y/2)dy$. | 2021-05-08 04:53:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8953796625137329, "perplexity": 521.7897305664214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00253.warc.gz"} |
http://mymathforum.com/topology/4767-discrete-topology-product-topology.html | My Math Forum discrete topology, product topology
December 2nd, 2008, 01:04 PM #1 Newbie Joined: Nov 2008 Posts: 13 Thanks: 0
discrete topology, product topology
For each $n \in \omega$, let $X_n$ be the set $\{0, 1\}$, and let $\tau_n$ be the discrete topology on $X_n$. For each of the following subsets of $\prod_{n \in \omega} X_n$, say whether it is open or closed (or neither or both) in the product topology.
(a) $\{f \in \prod_{n \in \omega} X_n \mid f(10)=0 \}$
(b) $\{f \in \prod_{n \in \omega} X_n \mid \exists n \in \omega\ f(n)=0 \}$
(c) $\{f \in \prod_{n \in \omega} X_n \mid \forall n \in \omega\ f(n)=0 \Rightarrow f(n+1)=1 \}$
(d) $\{f \in \prod_{n \in \omega} X_n \mid |\{ n \in \omega \mid f(n)=0 \}| = 5 \}$
(e) $\{f \in \prod_{n \in \omega} X_n \mid |\{ n \in \omega \mid f(n)=0 \}| \leq 5 \}$
Recall that $\omega = \mathbb{N} \cup \{0\}$.
Contact - Home - Forums - Cryptocurrency Forum - Top | 2018-06-25 05:59:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 12, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5045293569564819, "perplexity": 2223.9298333757565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867493.99/warc/CC-MAIN-20180625053151-20180625073151-00537.warc.gz"} |
https://www.nature.com/articles/s41467-021-27750-2?error=cookies_not_supported |
Yeast-derived nanoparticles remodel the immunosuppressive microenvironment in tumor and tumor-draining lymph nodes to suppress tumor growth
Abstract
Microbe-based cancer immunotherapy has recently emerged as a hot topic for cancer treatment. However, serious limitations remain including infection associated side-effect and unsatisfactory outcomes in clinic trials. Here, we fabricate different sizes of nano-formulations derived from yeast cell wall (YCW NPs) by differential centrifugation. The induction of anticancer immunity of our formulations appears to inversely correlate with their size due to the ability to accumulate in tumor-draining lymph node (TDLN). Moreover, we use a percolation model to explain their distribution behavior toward TDLN. The abundance and functional orientation of each effector component are significantly improved not only in the microenvironment in tumor but also in the TDLN following small size YCW NPs treatment. In combination with programmed death-ligand 1 (PD-L1) blockade, we demonstrate anticancer efficiency in melanoma-challenged mice. We delineate potential strategy to target immunosuppressive microenvironment by microbe-based nanoparticles and highlight the role of size effect in microbe-based immune therapeutics.
Introduction
Cancer immunotherapy is an unprecedented way of utilizing the body’s own immune system to fight tumors1,2,3. However, only a relatively limited (~20%) fraction of patients can benefit from this treatment, including immune checkpoint inhibitor (ICI)-based immunotherapy4. A tumor that resists cancer immunotherapy, a so-called ‘cold’ tumor5, generally shows poor cancer cell antigenicity and adjuvanticity6, as well as a highly immunosuppressive tumor microenvironment (TME) with limited infiltration by cytotoxic T cells (CTLs) and dendritic cells (DCs), coupled to the accumulation of various myeloid cell populations such as myeloid-derived suppressor cell (MDSC) and tumor-associated macrophage (TAM) subsets7. Remodeling of the immunosuppressive TME has been considered an important target for the development of combinatorial immunotherapeutic regimens with superior efficiency8,9.
Microbe-based cancer immunotherapy has recently emerged as an approach for inducing anticancer immunity. Over the last decade, various microbes, including bacteria10,11,12,13, oncolytic viruses14,15 and fungi16, have been utilized to activate innate immunity and improve adaptive immunity, thereby augmenting the antitumor immune response. For example, it has been reported that engineered attenuated Salmonella typhimurium can induce immune cell infiltration and proinflammatory cytokine production in the TME, thereby augmenting antitumor immunity17. Several recombinant yeast-based vaccines have been tested in clinical trials18. Despite the accumulating data, serious limitations remain19. Firstly, because of their potential to cause infections in patients, live microbes may cause the immune system to attack healthy cells, and their use may be accompanied by the risk of deadly infection. In addition, the facultative anaerobe Salmonella typhimurium VNP20009 failed in a phase I clinical trial because the monotherapy was not sufficient to eliminate tumors effectively20. Microbe-based cancer immunotherapy is still in its early stage21.
Yeast is one of the most common types of fungi, which has been widely used in fermentation and leavening in food manufacture. As the structure of yeast cell wall includes proteins and polysaccharides, such as β-glucan and chitin22,23 that do not exist in mammalians, they have been considered as ‘danger signals’, potentially resulting in the activation of potent, multiepitope immune response in our body24,25. In addition, yeast-derived β-glucan has been reported to activate DCs and macrophages, thus activating T cells to enhance the anti-tumor efficacy26. However, the micro-size of yeast limited their distribution and uptake efficiency in tumor and tumor-draining lymph node. Hence, in this work, we fabricate different sizes of nanoparticles derived from yeast (Saccharomyces Cerevisiae) cell wall, which have no reproduction ability. Intriguingly, the induction of anticancer immunity appears to inversely correlate with the size of yeast cell wall nanoparticles (YCW NPs). Small size of YCW NPs (~50 nm) showed better efficiency in controlling tumor growth after intratumor injection compared with middle (~200 nm) and large (~500 nm) size of YCW NPs. By observation of their distribution, we find a high accumulation of small size of YCW NPs in TDLN due to their size effects, as compared with middle and large size of YCW NPs. The mathematic percolation model is introduced to explain their accumulation behavior toward TDLNs. In addition, not only the microenvironment in tumors but also that in the TDLNs are remarkably rebuilt in the abundance and functional orientation of each cellular component following small size YCW NPs treatment. Furthermore, in combination with immune checkpoint blockade therapy, our technology enables complete tumor regression in 90–100% of treated mice with limited side effects. The systematic anticancer immune response is also induced to inhibit the growth of distant tumors. Our study develops microbe-based nanoparticles for enhancing cancer ICI-immunotherapy by remodeling the immunosuppressive microenvironment in tumors and TDLNs, and highlights the role of size effect in microbe-based immune therapeutics.
Results
Preparation and characterization of YCW NPs
Different nano-size yeast cell walls (YCW NPs) were obtained from the yeast cells. In brief, micro-size yeast cell walls (YCW MPs) were prepared first (Fig. 1A). After being broken and washed, yeast cell walls were centrifuged at 2400 g, 9600 g, and 21,100 g for 10 min to obtain three different YCW NPs, named large size of YCW NPs, middle size of YCW NPs, and small size of YCW NPs (Fig. 1A). We measured the compositions of YCW NPs, which contained ~88.20% β-glucan, 2.88% proteins and 8.92% others (Supplementary Fig. 1, Supplementary Table 2, Supplementary Table 3). As shown in scanning electron microscope (SEM) and transmission electron microscope (TEM) imaging, the diameter of YCW MPs was about 4 μm (Fig. 1B, left), and three different sizes of YCW NPs with diameters of ~50 nm, ~200 nm, and ~500 nm were obtained, respectively (Fig. 1B, right). Although irregular shapes of NPs were observed (Supplementary Fig. 2A–D), most of them displayed a spherical or quasi-spherical morphology with a uniform distribution (Fig. 1B, right, Supplementary Fig. 2A–D). The underlying causes likely include (1) spherical particles are more stable than nonspherical particles because they have lower surface free energy compared to nonspherical particles; (2) particles with irregular shape and surface are more prone to aggregate due to attractive van der Waals interactions during the ultrasonic treatment and are removed by centrifugation-based separation procedures. Dynamic light scattering (DLS) showed similar results to TEM, and the three kinds of YCW NPs had a similar zeta potential around −12 mV (Fig. 1C, Supplementary Fig. 2E). In addition, sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) assay further indicated that all of them had a similar profile of protein contents (Fig. 1D). The changes in diameter and zeta potential at room temperature and 4 °C within 2 weeks were not obvious, indicating great stability of the NPs (Supplementary Fig. 2F–G). We next studied their potential cytotoxicity on DCs (DC2.4), macrophages (RAW264.7), and melanoma (B16-F10) using the methyl thiazolyl tetrazolium (MTT) assay, which revealed that the three kinds of YCW NPs at the indicated concentrations displayed no significant toxicity on either immune cells or tumor cells after incubation for 24 h (Supplementary Fig. 3A–C). These results indicated that the properties of YCW NPs with three different sizes were similar in those aspects.
YCW particles for activation of immune cells
We then studied the interactions of YCW particles, including YCW MPs and YCW NPs, with DCs and macrophages. Confocal fluorescence imaging and flow cytometry analysis of Cy5.5 indicated that after incubation for 24 h, Cy5.5-labelled YCW particles were taken up by both DCs and macrophages effectively (Fig. 1E,F, Supplementary Fig. 4A–F). It was observed that with increasing sizes, fluorescence intensity of Cy5.5-labelled YCW NPs decreased on DCs, but not obviously on macrophages. It has been reported that β-glucan could be recognized by Dectin-1 expression on DCs and macrophages, thereby activating DCs and macrophages. To confirm the cellular uptake of YCW NPs was related to Dectin-1, we applied Dectin-1 competitor laminarin incubation with immune cells for 2 h before incubation with Cy5.5-labelled YCW NPs. Flow cytometry analysis showed that the recognition and phagocytosis of YCW NPs by DCs and macrophages was, at least in part, related to Dectin-1 (Supplementary Fig. 4G–J).
DCs, as a kind of professional antigen-presenting cell, play a vital role in activating a series of adaptive immune responses27,28. Therefore, we tested the maturation of bone marrow-derived dendritic cells (BMDCs) treated with three different sizes of YCW NPs. Intriguingly, small size of YCW NPs showed the highest upregulation in co-stimulatory molecules including CD80, CD86, CD40, and MHCII compared with those treated with middle or large size of YCW NPs or MPs (Fig. 2A,B, Supplementary Fig. 5, Supplementary Fig. 6). In line with the BMDC maturation results, the highest levels of pro-inflammatory factors such as TNF-α, IL-1β, IL-12p70, and IL-6 were also produced after incubation with small size YCW NPs (Fig. 2C–F, Supplementary Fig. 5C–F). These results proved that YCW particles could efficiently activate BMDCs in a size-dependent manner, probably due to their uptake efficiency. At the same time, we also observed that the expression of PD-L1 on BMDCs was significantly increased after incubation with YCW NPs (Supplementary Fig. 5I); however, the expression of PD-1, CTLA-4, LAG-3, and TIM-3 did not show such a dramatic change (Supplementary Fig. 5J–M). We next sought to unravel the mechanism whereby YCW NPs activate DCs. Studies have elucidated that yeast cell wall activates DCs and macrophages via the Dectin-1/Syk and TLR2/MyD88 pathways29,30. We performed a western blotting assay to explore whether YCW NP activation of DCs was also related to these pathways. A dramatic increase was observed in the expression level of TLR2, p-Syk, p-P65, MyD88, and Dectin-1 in BMDCs after incubation with three different sizes of YCW NPs. In addition, with the decrease of size, the expression level was increased (Fig. 2G–L, Supplementary Fig. 7A). Next, we applied Dectin-1 competitor laminarin and TLR2 inhibitor C29 to pretreat BMDCs for 2 h. The results further confirmed that activation of BMDCs by YCW NPs was related, at least in part, to the Dectin-1/Syk and TLR2/MyD88 pathways (Fig. 2M–U, Supplementary Fig. 7B,C).
T cells play significant roles in immune response. To explore whether YCW NPs directly affect T cells, we incubated T cells with various YCW NPs for 24 h. The results of flow cytometry showed that T cells, unlike DCs and macrophages, could not effectively engulf YCW NPs (Supplementary Fig. 8A,B). At the same time, in vitro experiments revealed that three different sizes of YCW NPs did not promote the activation of T cells and the expression of PD-1 on T cells (Supplementary Fig. 8C–I), suggesting that YCW NPs did not affect T cells directly.
YCW NPs inhibited tumor growth by remodeling immunosuppressive tumor microenvironment
We next questioned whether YCW NPs could inhibit established tumor growth by altering immunosuppressive TME. In our experiment, B16-luc tumor cells were inoculated on the back of C57BL/6 mice and then we intratumorally injected three kinds of YCW NPs (0.375 mg/kg) at a frequency of once every two days for a total of five administrations (Fig. 3A). It was revealed that all YCW NPs inhibited the growth of B16-luc tumor significantly. Intriguingly, among the different size of YCW NPs, the small size of YCW NPs exhibited greatest antitumor efficiency (Fig. 3B,C), which was consistent with our in vitro data. Hematoxylin-eosin staining (H&E staining) of tumor in small size YCW NPs treated group confirmed that tumor cells were eradicated remarkably, which directly proved the remarkable anti-tumor efficacy of small size of YCW NPs (Fig. 3D). The weight of mice remained almost unchanged during the treatment compared to that of untreated control, indicating the limited side-effect of YCW NPs administration (Fig. 3E). Dose–response experiments also identified 0.375 mg/kg YCW NPs effectively inhibited tumor growth (Supplementary Fig. 9). We posited that the antitumor effect of YCW NPs was due to the alteration of immunosuppressive TME. We tested this hypothesis by analyzing the changes in tumor infiltrating immune cells after administration. The total and proportion of tumor infiltrating CD8+ T cells and CD4+ T cells was significantly increased in small size of YCW NPs treated group (Fig. 3F–H, Supplementary Fig. 10A, B), culminating with an inflamed tumor immune phenotype. However, we also observed the expression of PD-1 on T cells upregulated synchronously by YCW NPs treatment (Fig. 3I–J). Immunosuppressive cellular components including regulatory T cells (Tregs), MDSCs, and TAMs were further analyzed. Flow cytometry data showed that small size of YCW NPs depleted Tregs, MDSCs and TAMs considerably (Fig. 3K–P, Supplementary Fig. 10C–G). In TME, the number and function of DCs are in an impaired state. The presence of MDSCs, Tregs can inhibit the maturation of DCs, resulting in DCs not being able to secret appropriate co-stimulation and cytokine signals to T cells. The maturation of DCs is imperative to provide co-stimulatory signals to T cells, thereby effectively activating naïve T cells. Hence, we assessed the maturation of DCs within tumor. Compared with untreated tumors, the maturity of DCs was as high as about 35% in treated group, which was essential for promoting T cell priming and recruitment (Fig. 3Q–R, Supplementary Fig. 10H–I). Taken together, all these results indicated that immunosuppressive TME was dramatically reversed by YCW NPs treatment (Supplementary Fig. 11).
Small size of YCW NPs showed great ability to distribute to tumor draining lymph nodes
We next questioned why small size of YCW NPs showed better efficiency in controlling tumor growth compared to middle and large size of YCW NPs. The innate immune cells within TDLNs are critical for the generation of adaptive responses31,32. We reasoned that the distribution efficiency of YCW NPs towards TDLNs might influence their antitumor ability. To validate this hypothesis, three kinds of Cy5.5-labelled YCW NPs were intratumorally injected in B16 tumors. After 48 h, mice were sacrificed and the TDLNs were collected for fluorescence imaging ex vivo. Obviously, we observed that small size of YCW NPs facilitated their entry and targeting in TDLNs, followed by middle size of YCW NPs, while large size of YCW NPs or MPs had poorest ability to target TDLNs (Fig. 4A, Supplementary Fig. 12). Moreover, quantification data revealed that small size of YCW NPs treated mice showed 2.5-fold and 5-fold greater fluorescence in the TDLNs than those treated with the middle size of YCW NPs and large size of YCW NPs, respectively (Fig. 4B). Fluorescent quantitative data further indicated that about 5% of injected small NPs were accumulated in TDLN (Supplementary Fig. 13A,B). Confocal imaging of TDLNs also coincided with the ex vivo imaging (Fig. 4C). We therefore speculated that the distribution efficiency of YCW NPs toward TDLNs was negatively related to the sizes of YCW NPs (Fig. 4D).
Our results were consistent with previous reports. Owing to their size, nanoparticles have a natural tendency to passively drain to lymph nodes. For example, studies showed that particles below 200 nm could accumulate in draining lymph nodes quickly, whereas particles above 500 nm were restricted by the extracellular matrix and could not reach the draining lymph nodes33,34. Other research reported that small nanoparticles (10–100 nm) are absorbed by the lymphatic vessels and diffuse to lymph nodes to target DCs, while large nanoparticles (>100 nm) and microparticles (MPs) are mostly embedded in the interstitial matrix and need to be captured by surrounding immune cells, indicating that large nanoparticles are mainly delivered to the lymph nodes in a cell-mediated manner35. Data fitting showed that the distribution capacity of nanoparticles was inversely proportional to the square root of the particle diameter. To explain this phenomenon, the percolation model was introduced. This model is based on stochastic processes, describes the diffusion of hypothetical liquid particles in a random medium, and has been widely used in subsurface hydrology36, petroleum engineering, materials science37, etc. Lymph nodes are composed of a large number of tiny lymphatic vessels connected to each other, so a simplified network is used in place of the complex structure of the internal circulation, including blood vessels and lymphatic vessels. In a bond percolation problem, the bonds of the network are either occupied (i.e., open to flow, diffusion, and reaction) or vacant. The bonds are occupied randomly and independently of each other with probability $$p$$. One important concept is the bond percolation threshold $${p}_{c}$$. Below the threshold, there are no large clusters (for site percolation) and no long-range connectivity (for bond percolation), and the nanoparticles would not be able to reach the inside of the lymph nodes. For nanoparticles of different diameters, the probability of each edge can be estimated from the percolation coefficient $$K$$. For linear percolation, Darcy’s experimental law gives $$V=-\frac{K'}{\mu }\left(\frac{\partial p}{\partial z}+\rho g\right)$$. According to the model, we inferred that the radiation efficiency decreased nonlinearly with the diameter over a certain range. After testing the nanoparticle diameters from each group for normality, the mean diameter of each group was fitted by logarithmic least squares against the radiation efficiency to obtain the quantitative relationship (Fig. 4E).
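Written out explicitly (as an illustrative form consistent with the description above, rather than the exact regression reported in Fig. 4E), the fitted size dependence is
$$E(d)\propto d^{-1/2},\qquad \mathrm{ln}\,E=\mathrm{ln}\,a-\frac{1}{2}\,\mathrm{ln}\,d,$$
so that plotting the logarithm of the radiation efficiency $$E$$ against the logarithm of the mean diameter $$d$$ of each group gives a straight line with slope close to −1/2.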
To characterize the change of TDLNs immune microenvironment, the distribution of YCW NPs in immune cells of lymph nodes was firstly evaluated. Similar with tumor (Supplementary Fig. 13G), signals of Cy5.5 were detected in main TDLNs immune cells (Fig. 4Fand Supplementary Fig. 13C–F), including DCs, macrophages, B cells and T cells, indicating that YCW NPs were distributed in immune cells of TDLNs and its contents appeared to inversely correlate with size of YCW NPs. The small size of YCW NPs had the stronger capacity to entry into these immune cells compared to middle and large ones. In contrast, non-TDLNs did not show any significant Cy5.5 intensity (Supplementary Fig. 13H). In view of distribution of YCW NPs in tumor infiltrating immune cells (Supplementary Fig. 13G), the transport of YCW NPs to lymph nodes was not only confined to simple diffusion, but also can be mediated by local immune cells in the tumor. Next, we explored the ability of three sizes of YCW NPs to regulate the microenvironment of TDLNs. Both T (CD4+ and CD8+) cells and B (CD19+) cells upregulated the expression of CD69 after 48 h of injection, indicating that both T cells and B cells were activated by the treatment (Fig. 4G). PD-1 is also expressed during the early phase of T cell activation38. Consistent with the results of tumor analysis, the expression of PD-1 on T cells upregulated synchronously by YCW NPs treatment, indicating YCW NPs could effectively activate T cells in both tumor and TDLNs (Fig. 4H). Next, we analyzed the expression of co-stimulatory molecules on DCs of TDLNs. The results showed that co-stimulatory molecules, such as MHCII, CD40, CD80, and CD86 were all increased in the treatment group compared with those in control group (Fig. 4I–L). Meanwhile, the expression of PD-L1 on DCs reflected the same trend (Fig. 4M), which might limit T cell responses. Together, these results suggested that compared with middle and large size of YCW NPs, small size of YCW NPs induced higher stimulatory markers on various immune cells in the mice, mainly due to their highest accumulation in TDLNs.
T-cell-mediated anti-tumor immune responses induced by small size of YCW NPs
As the YCW NPs encompassed ‘danger signals’ which may result in the activation of various innate immune cells, we next applied anti-CD4 antibody and anti-CD8a antibody to deplete T cells in vivo to see if the anticancer immune responses caused by YCW NPs were related to T-cell-mediated adaptive immunity. Before treatment, we performed T cell depletion by applying the antibodies on day 4 and day 7, respectively (Fig. 5A). Flow cytometry analysis of ratio of CD4+ T cells and CD8+ T cells in the serum was carried out to determine whether the corresponding T cells were completely depleted (Fig. 5B, Supplementary Fig. 14). After depletion of T cells successfully, we started administrating small NPs. The growth of tumor in both CD4+ and CD8+ T cell depletion groups increased faster than treated group (Fig. 5C–E), leading to diminished immunotherapy efficiency as determined by survival analysis (Fig. 5F–G). These data indicated that T-cell-mediated anti-tumor immune responses was indispensable for YCW NPs-based cancer immunotherapy.
Therapeutic efficacy of YCW NPs in combination with PD-L1 blockade
In our previous data, we found that both PD-1 on T cells and PD-L1 on DCs were significantly upregulated following the YCW NPs treatment (Fig. 3J, Fig. 4H,M). We speculated that blockade of PD-1/PD-L1 by antibody in combination with small size of YCW NPs might induce a synergistic antitumor effect. To test this, we established a B16-luc melanoma model in C57BL/6 mice. Small size YCW NPs were administered intratumorally at a frequency of once every two days for a total of four times. Anti-PD-L1 mAbs were injected intravenously three times at three-day intervals (Fig. 6A). As expected, YCW NPs-based immunotherapy showed great synergy with PD-L1 blockade in controlling tumor growth as compared with monotherapy (Fig. 6B–D). More encouragingly, the combination treatment enabled complete tumor regression in 100% of treated mice (Fig. 6B–D), which translated into significantly lengthened survival (Fig. 6E). Notably, the combination treatment was well tolerated in the mice as evidenced by the slight weight change of mice and H&E staining of main organs (Fig. 6F–G, Supplementary Fig. 15A). By analysis of various immune components in tumor, we found that the combination treatment was sufficient to augment the infiltration of CD8+ T cells and CD4+ T cells into the tumors, indicating a robust T-cell mediated anti-tumor immunity (Fig. 6H–J). In addition, the frequencies of MDSCs and TAMs in the TME were remarkably reduced following the combination treatment (Fig. 6K–N, Supplementary Fig. 15B–C). All these data suggested that small size YCW NPs significantly synergized with PD-L1 blockade in inducing a robust antitumor immunity.
YCW NPs in combination with PD-L1 blockade inhibited metastatic tumor growth
The combination of small size of YCW NPs with anti-PD-L1 significantly inhibited the growth of B16-luc locally by destroying tumor to generate tumor cell lysates (TCLs). TCLs and small size of YCW NPs were jointly delivered to TDLNs to promote the maturation of DCs and activation of T cells and B cells (Fig. 7A). Next, we wondered whether local immunotherapy could be sufficient to induce systemic immune response to dampen the growth of distant tumors. Therefore, on day 0, we inoculated B16-luc tumor cells on both sides of the back of C57BL/6 mice. When the tumors grew up to 50 mm3, the same approaches and dosage were applied. It was observed from the growth curve that by local therapy, not only the treated tumors, but also the tumors on the opposite side regressed remarkably (Fig. 7B–G, Supplementary Fig. 16). After mice were sacrificed, the tumors were collected and weighed. It was showed that the weights of tumors on both sides of the combination treatment group were lower than that of the other treatment groups and the control group (Fig. 7H–J), indicating that the combination treatment induced systemic anti-tumor immune responses. Flow cytometry analysis reflected that the frequency of CD8+ T cells and CD4+ T cells in distant tumors increased after combination treatment (Fig. 7K–M). We further explored whether injected small size YCW NPs locally could treat lung metastasis of melanoma. Encouragingly, we found that the lung metastatic tumor growth was also regressed as indicated by the bioluminescence imaging (Fig. 7N) and reduced the number of metastatic tumor foci on lungs after necropsy (Fig. 7O–P). The results intuitively indicated that local administration of small size YCW NPs combined with PD-L1 blockade could dramatically produce systemic anti-tumor immune responses to promote metastatic tumor regression.
To further demonstrate that our proposed YCW NPs-based immunotherapy could be broadly applicable, we subsequently established a CT26 tumor model in BALB/c mice. On day 0, we injected CT26 tumor cells on both sides of the back of BALB/c mice. On the sixth day, we administered small size YCW NPs locally at one tumor and injected the anti-PD-L1 antibody intravenously. The frequency and dosage of the drugs were the same as mentioned above. We again observed a sharp inhibition of tumor growth in the CT26 model. Tumor growth curves for primary and distant tumors indicated that tumors on both sides were remarkably suppressed (Fig. 8A–D). On day 17, we collected tumors for weighing, and the result agreed with the tumor growth curves (Fig. 8E–G). During the course of therapy, the weight of the mice remained almost unchanged (Fig. 8H), indicating that our therapy approach did not produce obvious side effects in the mice.
Discussion
In recent years, microbe-based cancer therapy has attracted much attention due to its rapid progress. Nevertheless, it still has not achieved satisfactory effects in clinic. It has been documented that bacterial therapy merely resulted in a low tumor regression rate but produced undesirable dose-dependent side effects11,39. Oncolytic viruses can induce potent immune responses through generating antitumor cytokines. However, they can also cause serious systemic toxicity40,41,42. Fungi, as one kind of eukaryotic microbe, own similar structure with mammals. Various fungi have been utilized as food, and for use of food manufacture in the long history of humans. Yeast is one of the most common types of fungi with a relatively good biosafety. In addition, yeast-derived β-glucan has been considered as immuno-stimulating agent to treat cancer 43.
To address the challenges of microbe-based cancer therapy, in this work, we prepared nanoparticles derived from yeast cell wall. Compared with live yeast cells, the YCW NPs have no activity of reproduction, as well as a much smaller size than yeast cells. To study whether the size effect influences the therapeutic efficiency, three different sizes of YCW NPs were prepared by differential centrifugation. They had no obvious differences in loading of proteins and polysaccharides. Different from macrophages, DCs showed cellular uptake of YCW NPs in a size-dependent manner in the range of 50–500 nm. Previous studies showed that smaller nanoparticles are more easily uptake into the DCs. For example, compared with PLGA nanoparticles with a diameter of more than 1 μm, PLGA NPs with a diameter of 300 nm have a higher DCs uptake efficiency44,45. Perhaps due to the greater ability of macrophages to engulf and digest foreign particles46, NPs uptake by macrophages is not affected by the particle size significantly in the range from 50 to 500 nm. Nevertheless, we also observed that macrophages showed greater ability to phagocytosis YCW NPs than YCW MPs.
Regulation of the immune microenvironment in the tumor and TDLNs is mediated by components of YCW NPs, such as β-glucan, an immunomodulatory compound that activates DCs and macrophages within the tumor and TDLNs47. Intriguingly, we found significantly improved tumor regression following intratumoral injection of YCW NPs. Furthermore, we found that this anticancer effect of YCW NPs was inversely correlated with their size; that is, as the diameter decreased, tumors were suppressed more efficiently. The innate immune cells within TDLNs are critical for the generation of adaptive responses31. Therefore, we were interested to explore whether the small size of YCW NPs could promote their TDLN accumulation. As expected, we observed that small size YCW NPs showed the greatest entry into and targeting of TDLNs, followed by middle size YCW NPs, while large size YCW NPs had the poorest ability to target TDLNs. The alteration of the TDLN immune microenvironment was also evaluated. As a result, the abundance and functional orientation of each effector subset following small size YCW NPs treatment were significantly improved not only in the tumor microenvironment but also in the TDLN, the latter of which might be the key site for initiating the greatest anticancer efficiency in response to small size YCW NPs. As a natural nanoparticle, all of its components can act together to trigger antitumor immune responses. For example, yeast-derived β-glucan has been considered a major immuno-stimulating agent to treat cancer26. Other yeast-derived components, including proteins, lipids, and nucleic acids48,49, may also play roles in remodeling the immunosuppressive TME, which should be investigated in future work. Glucan particles (GPs), which have already been described for a variety of applications by the Levitz group and others50,51,52, can serve as an effective adjuvant and vaccine platform for cancer immunotherapy. Our nano-formulated yeast cell wall (YCW NPs) is produced from “GPs” (Fig. 1A-B). However, the GPs have a diameter of ~4 μm, limiting their entry into and targeting of TDLNs (Supplementary Fig. 12) as well as their therapeutic outcomes (Supplementary Fig. 17) compared to YCW NPs (~50 nm). This indicates the role of particle size in inducing anticancer immunity.
We then used the mathematical percolation model to explain this phenomenon, which visually demonstrated the influence of size effects with a mathematical model. To our knowledge, this is the first study of size-dependent immunotherapy based on nanoparticles derived from yeast, delineating a theoretical basis for how the size effect influences the efficiency of cancer immunotherapy. Although in this work we mainly studied the size effect of YCW NPs, the role of other properties, such as charge density, in controlling tumor growth and activating immune cells warrants further investigation.
Given that both PD-1 on T cells and PD-L1 on DCs were significantly upregulated following the YCW NPs treatment (Fig. 3J, Fig. 4H, M), we speculated that blockade of PD-1/PD-L1 in combination with small size YCW NPs was capable to induce synergetic antitumor effect. To maximize the therapeutic outcomes, PD-L1 blockade therapy was then introduced to synergy with YCW NPs. Strikingly, the combination treatment enabled complete tumor regression in 100% of treated mice, significantly lengthened survival. Moreover, the systematic anticancer immune response was also triggered to restrain the growth of distant tumor following combination therapy. Importantly, we found the mice could well tolerate the combination treatment through monitoring the change in body weight together with histological examination of main organs, suggesting that the combination treatment did not elicit apparent cytotoxicity and thus appeared safe for the mice.
In conclusion, we developed a yeast-based nanoparticle for enhancing cancer immunotherapy. We also studied how size effect of NPs influenced the anticancer efficiency. The small size of YCW NPs dramatically inhibited tumor growth by remodeling the immunosuppressive microenvironment in both the tumors and TDLN. In combination with PD-L1 blockade, we demonstrated an excellent anticancer efficiency in treating melanoma-bearing mice. We delineated potential strategy to target immunosuppressive microenvironment with microbe-based nanoparticles and highlighted the role of size effect in microbe-based immune therapeutics. Nanoparticles comprised of polysaccharides also act as danger signals to provoke immune response53,54. However, most of them are synthesized nanoparticles and need complicated chemical reactions with high level of skills and cost. In contrast, the preparation of nano-formulate YCW is very simple, including ultrasonic treatment and centrifugation-based separation procedures. In addition, the size of YCW is tunable by differential centrifugation. Due to its ease of preparation, low-cost, feasibility and biosafety of yeast-derived nanoparticles, this technology may have promising clinical translation potential.
Methods
This research complies with all relevant ethical regulations approved by Soochow University.
Materials
Yeast and antibodies applied in this study were shown in Supplementary Table 1.
Cell lines
Melanoma B16-luc (catalog number: fh1123), CT26 (catalog number: TCM37), RAW264.7 (catalog number: TCM13), and DC2.4 (catalog number: fh1026) were purchased from Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences. B16-luc cells and RAW264.7 cells were maintained in Dulbecco’s modified Eagle’s medium (DMEM) containing 10% fetal bovine serum (Gibco), penicillin (100 U/ml; Invitrogen), and streptomycin (100 U/ml; Invitrogen). CT26 cells and DC2.4 cells were maintained in RPMI 1640 medium containing 10% fetal bovine serum (Gibco), penicillin (100 U/ml; Invitrogen), and streptomycin (100 U/ml; Invitrogen). BMDCs were isolated from bone marrow cavities of 7–8-week-old C57BL/6 mice according to an established approach55,56. In brief, bone marrow cells were isolated from female C57BL/6 mice, then incubated in RPMI 1640 medium with GM-CSF (20 ng/mL) for seven days. At last, bone marrow DCs were utilized at 7 days.
Mice
C57BL/6 and BALB/c female mice (6–8 weeks) were obtained from Nanjing Peng Sheng Biological Technology Co. Ltd. We performed all mice studies in accordance with the animal protocol approved by our university laboratory animal center (Approval number: SUDA20210201A02). Experimental group sizes were approved by the regulatory authorities for animal welfare after being defined to balance statistical power, feasibility and ethical aspects. Maximal tumor burden permitted is 1200 mm3. In some cases, this limit has been exceeded the last day of measurement and the mice were immediately euthanized. The actual tumor size (even if larger than 1200 mm3) has been recorded and presented in the Article and Source data file.
Preparation and characterization of nanoparticles with different sizes
To form nanoparticles, the yeast cells were incubated in NaOH at 80 °C for 1 h. After cooled down to room temperature, the lysis of yeast cells was centrifuged at 2000 g for 10 min to obtain insoluble matter containing cell walls. Then it was washed with H2O, isopropanol, and acetone, and centrifuged to obtain micro-sized cell walls. After broken by ultrasonic cell disruptor, NPs with different sizes were obtained by differential centrifugation. Specifically, 25 mg micro-sized cell walls were dissolved in PBS (pH = 7.4), and after crushing, precipitate obtained at 2400 g for 10 min was resuspended in PBS to form large size of YCW NPs. Insoluble matter obtained by centrifuging supernatant at 9600 g for 10 min was middle size of YCW NPs. Next, supernatant centrifuged at 21,100 g for 10 min to form small size of YCW NPs. The morphologies of yeast cell wall (YCW MPs) were observed by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Three YCW NPs were characterized by TEM. In detail, we activated the carbon-coated 400-mesh grid firstly, then dropped YCW particles on the grids, and washed with H2O. Finally, we observed YCW particles by TEM after dried. We utilized double sticky carbon tape to deposit the silicon wafer dripped with YCW particles on the sample stage, then dried it overnight for SEM. Size distribution and zeta potential of three YCW NPs in aqueous solution were observed by Zetasizer nano ZS instrument. The stability of different YCW NPs was monitored at 4 °C and room temperature in 2 weeks. SDS-PAGE and BCA assay were utilized to evaluate the expression and concentration of protein of different YCW NPs. To determine the composition of polysaccharides, the polysaccharides were first hydrolyzed into monosaccharides. Then the kinds and proportions of monosaccharides were determined by high performance liquid chromatography. We converted proteins into peptides with enzyme digestion assay, and then utilized LC-MS/MS to determine proteins contained in YCW NPs by searching through the protein database. The composition of YCW NPs was determined and analyzed by Shanghai Fuda Analytical Testing Group.
In vitro cytotoxicity assessment
The cytotoxicity of three YCW NPs to DCs (DC2.4), macrophages (RAW264.7) and B16 was assessed by MTT assay following the standard protocol. In brief, the cells were seeded into 96-well plate at 105 cells/mL for 12 h, and then incubated with three YCW NPs for 24 h. The concentrations of YCW NPs incubated with DC2.4 and RAW264.7 were from 0 μg/mL to 150 μg/mL. The concentration of YCW NPs to B16 was 150 μg/mL. Subsequently, MTT solution was added to each well and the plates were incubated for 4 h. At the end of incubation, supernatants were aspirated, and 100 µL of dimethyl sulfoxide was added to each well with shaking for 10 min at dark to dissolve formazan crystals. The absorbance was measured at 570 nm by a microplate reader.
In vitro cellular uptake and BMDCs maturation assay
To evaluate the ability of cellular uptake of YCW particles, including MPs, large NPs, middle NPs, and small NPs, DC2.4, BMDCs and RAW264.7 were incubated with Cy5.5, or Cy5.5-labelled YCW particles for 24 h. The amount of Cy5.5-labelled YCW particles phagocytosed by antigen-presenting cells (APCs) was judged by flow cytometry and confocal microscopy analysis. In experiment of flow cytometry, DC2.4 and BMDCs were first stained with FITC-CD11c, then the mean fluorescence intensity (MFI) of Cy5.5 on FITC-CD11c+ cells was measured. Similarly, after staining RAW264.7 with FITC-F4/80, MFI of Cy5.5 was also measured. In confocal assay, DC2.4, BMDCs and RAW264.7 were incubated with different preparations, including Cy5.5 and Cy5.5-labelled YCW particles, for 24 h. After washing with PBS for three times, cells were fixed with 4% paraformaldehyde for 30 min. Next, cell nuclei were stained with DAPI (4′,6-diamidino-2-phenylindole) for 10 min. The phagocytic ability of APCs (such as DCs and macrophages) was observed by confocal microscope (Zeiss LSM 800). In order to verify whether cellular uptake of YCW NPs by DCs and macrophages was related to Dectin-1, we applied Dectin-1 competitor laminarin (100 μM) to pretreat DCs and macrophages for 2 h. Then DCs and macrophages were incubated with YCW NPs for 24 h. The results of cellular uptake were analyzed by flow cytometry and confocal assay as before. To explore the effect of YCW NPs, including large NPs, middle NPs, and small NPs, on T cells, we incubated cells obtained from inguinal lymph nodes with YCW NPs for 24 h. Then we analyzed the influences of YCW NPs on T cells. Specifically, after cells were incubated with Cy5.5-labelled YCW NPs for 24 h, the MFI of Cy5.5 was analyzed by flow cytometry to determine the phagocytosis of YCW NPs by T cells. After cells obtained from inguinal lymph nodes were incubated with YCW NPs for 24 h, the activation of T cells was detected by flow cytometry, including the expression of CD69 and PD-1 on CD4 T cells and CD8 T cells.
In the assay of maturation of BMDCs, 5 × 105 immature BMDCs were seeded on 24-well plates, after incubated with PBS, lipopolysaccharides (LPS, positive control), YCW particles, including MPs, large NPs, middle NPs, and small NPs, for 24 h, BMDCs were collected at 100 g for 3 min, and supernatants were harvested for enzyme-linked immunosorbent assay (ELISA). The collected BMDCs were resuspended in FACS buffer (PBS containing 1% FBS) and stained with FITC-CD11c, APC-CD86, PE-CD80, PE-MHCII, APC-CD40, PE-PD-L1, APC-PD-1, PE-CTLA-4, PE-LAG-3, PE-TIM-3 at room temperature for 30 min before analyzing by flow cytometry (BD Accurit C6 Plus). The concentration of inflammatory factors secreted in supernatants, such as TNF-a, IL-1β, IL-12p70, IL-6, was measured by ELISA (BioLegend) according to the manufacturer’s instructions.
To explore the mechanism of activation of DCs by YCW NPs, we carried out western blotting assay. The total proteins of BMDCs pre-incubated with three different sizes of YCW NPs for 24 h were extracted, and the expression of TLR-2, p-Syk, p-P65, MyD88, Dectin-1 was detected by western blotting. In brief, cells were lysed, using RIPA lysis buffer with PMSF protease inhibitor and phosphatase inhibitor obtain total proteins, which were then separated the protein by 12.5% SDS-PAGE gel electrophoresis. Next, proteins were transferred to PVDF membrane by wet method. The membrane was blocked with 3% BSA solution, and incubated with anti-p-P65, anti-TLR-2, anti-p-Syk, anti-MyD88, anti-Dectin-1 and anti-GAPDH antibody overnight, and then incubated with rabbit secondary antibody (Goat anti-Rabbit IgG (H+L)-HRP) for 1 h. The bands were developed by chemiluminescence, and the image was quantitatively analyzed with Image J software. To further confirm the activation of DCs via Dectin-1/Syk pathway and TLR2/MyD88 pathway, we pretreated BMDCs with Dectin-1 competitor laminarin (100 μM) and TLR2 inhibitor C29 (50 μM) for 2 h, respectively. After incubated with small NPs for 24 h, proteins of BMDCs were collected for western blotting assay as before.
Analysis of lymph nodes ex vivo
To analyze the accumulation of YCW particles, including MPs, large NPs, middle NPs and small NPs toward tumor draining lymph nodes (TDLNs), C57BL/6 mice were intratumorally injected with Cy5.5-labelled YCW particles. After 48 h, mice were sacrificed and TDLNs were harvested for imaging by IVIS Spectrum Imaging System and confocal microscopy. At the same time, tumors, TDLNs and non-TDLNs were collected for flow cytometry. The tumors, TDLNs and non-TDLNs were grinded and digested, and then stained with FITC-CD11c, FITC-F4/80, FITC-CD19, FITC-CD3 antibodies to evaluate the number of different NPs in DCs, macrophages, B cells and T cells. Meanwhile, to observe the distribution of YCW particles toward TDLNs more intuitively, TDLNs from different groups were buried in optimal cutting temperature compound, then cut into micrometer slices and fixed on slides. After staining cell nucleus with DAPI, the sections were observed using confocal microscope. In order to analyze the activation status of immune cells in TDLNs, PE-MHCII, APC-CD40, PE–CD80, APC–CD86 were stained to observe the maturation of DCs. APC–CD69 staining was used to evaluate the activation state of T cells and B cells. At the same time, APC–PD-1 and APC–PD-L1 staining was applied to examine the expression levels of PD-1 and PD-L1 on T cells and APCs, respectively.
Therapeutic experiments of YCW NPs
To determine the administration dose, we carried out the dose-dependent experiment. In detail, 1 × 106 B16-luc cells were implanted on the right flank of depilated C57BL/6 mice to establish tumor model of melanoma on day 0. On day 7, mice were divided into five groups (UnTx, 0.09 mg/kg, 0.18 mg/kg, 0.375 mg/kg, 0.75 mg/kg; the number of female C57BL/6 was 25) and were treated with different dose of small NPs intratumorally every 2 days. The tumor volumes were observed every 2 days. Tumor volume was calculated using the formulation: length × width × width/2.
The treatment efficiency of three different size of YCW NPs was evaluated on model of B16-luc tumor bearing C57BL/6 mice. 1 × 106 B16-luc cells were implanted on the right flank of depilated C57BL/6 mice to establish tumor model of melanoma on day 0. On day 6, mice were divided into four groups (UnTx, large NPs, middle NPs, small NPs; the number of female C57BL/6 was 23) and were treated with 7.5 µg (50 µL per mouse, 0.375 mg/kg) of YCW NPs intratumorally every 2 days. To monitor the therapeutic effect of YCW NPs, the tumor volume and weight of tumor were observed every 2 days. Tumor volume was calculated using the formulation: length × width × width/2.
In experiment of treatment of B16-luc by combining small size of YCW NPs with anti-PD-L1, mice were divided into four groups (UnTx, small NPs, anti-PD-L1, combination; the number of female C57BL/6 was 20). The dosage and frequency of administration of small size of YCW NPs were the same as above. Anti-PD-L1 was injected intravenously every 3 days (50 µg per mouse). The volume of tumor and body weight of mice were recorded every 2 days. Hematoxylin-eosin staining (H&E staining) of major organs was used for assessing the safety of therapeutic. Mice were sacrificed on day 20 to analyze microenvironment of primary and distant tumor. The relative frequency of various immune cell subsets including T cells, MDSCs, Tregs, TAMs, as well as PD-L1 expression on tumor cells were examined by flow cytometry analysis.
In another treatment experiment, CT26 tumor cells (2 × 106 per mouse) were injected at both sides of back of BALB/c mice. On day 6, small size YCW NPs and anti-PD-L1 were administrated as described above. The tumor volume and weight were monitored to evaluate the therapeutic effect. In lung metastasis model, 1 × 106 B16-luc tumor cells were injected intravenously instead. The number of black spots representative of metastatic lesions in lung was counted to evaluate the treatment effect. The number of female BALB/c mice was 16.
To compare the treatment effect of our system (YCW NPs) and existing system (YCW MPs - GPs), we carried out therapeutic experiments with B16. 1 × 106 B16-luc cells were implanted on the right flank of depilated C57BL/6 mice to establish tumor model of melanoma on day 0. On day 8, mice were divided into three groups (UnTx, YCW MPs – GPs, YCW small NPs; the number of female C57BL/6 was 12) and were treated with 7.5 µg (50 µL per mouse, 0.375 mg/kg) of GPs and NPs intratumorally every 2 days. To monitor the therapeutic effect, the tumor volume and weight of mice were observed every 2 days. Tumor volume was calculated using the formulation: length × width × width/2.
Depletion of T cells
C57BL/6 mice were inoculated with B16-luc and divided into four groups (UnTx, anti-CD4 group, anti-CD8a group, no depletion group; 16 female C57BL/6 mice in total). On day 4 and day 7, mice were injected intravenously with anti-CD4 (20 µg per mouse) or anti-CD8a (20 µg per mouse), respectively. On day 8, peripheral blood mononuclear cells were collected to verify the efficiency of T cell depletion (CD4 and CD8a). On the same day, all mice except the UnTx group were treated with small YCW NPs every 2 days. Tumor size and weight were recorded every 2 days.
In vivo bioluminescence imaging
Tumor growth of B16-luc was monitored using the IVIS Spectrum in vivo imaging system (PerkinElmer Ltd). Mice were injected intraperitoneally with D-luciferin (1.5 mg/mL); bioluminescence images were acquired 10 min later with an exposure time of 30 s. Bioluminescence intensity was quantified as average radiance (photons s⁻¹ cm⁻² sr⁻¹) with IVIS Living Image 4.2.
Statistical analysis
All results are expressed as mean ± SD. Unless otherwise stated, all experiments used biological replicates. A two-tailed Student's t test was used for comparisons between two groups, and one-way ANOVA with Tukey's post-test was used for comparisons among more than two groups. All statistical analyses were performed using GraphPad Prism (8.0). For animal survival analysis, statistical differences were determined by the log-rank test. Results were considered significant at ****P < 0.0001; ***P < 0.005; **P < 0.01; *P < 0.05. All experiments were run at least in triplicate.
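For illustration only (the original analyses were performed in GraphPad Prism), the tests above can be sketched in Python with SciPy and statsmodels; the group values below are placeholders, not study data:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# placeholder measurements for three hypothetical groups
group_a = np.array([120.0, 135.0, 150.0, 128.0])
group_b = np.array([80.0, 95.0, 88.0, 102.0])
group_c = np.array([60.0, 72.0, 65.0, 70.0])

# two-tailed Student's t test for two groups
t_stat, p_two_groups = stats.ttest_ind(group_a, group_b)

# one-way ANOVA followed by Tukey's post-test for more than two groups
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * len(group_a) + ["B"] * len(group_b) + ["C"] * len(group_c)
print(p_two_groups, p_anova)
print(pairwise_tukeyhsd(values, labels))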
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Source data are provided with this paper. The authors declare that all other data supporting the findings of this study are available within the paper, Supplementary Information or Source Data file.
Acknowledgements
We thank the graduate student interdisciplinary innovation training program of Soochow University. This work was supported by National Natural Science Foundation of China (No. 31900988 C.W., 81770216 J.C., and 11971021 Jin.C.), the Natural Science Foundation of Jiangsu Province (No. SBK2019040088 C.W.), Jiangsu Province Six Talent Peaks Project (No. SWYY-110 C.W.). This work was also supported by the Program for Jiangsu Specially-Appointed Professors to C.W. This work was partly supported by Collaborative Innovation Center of Suzhou Nano Science & Technology, the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), the 111 Project. This work was partly supported by Undergraduate Training Program for Innovation and Entrepreneurship, Soochow University (Project 201910285013Z).
Author information
Authors
Contributions
C.W. and J.X. designed the project. J.X. performed the experiments, collected the data, and analyzed and interpreted the data. All authors contributed to the writing of the paper, discussed the results and implications, and edited the paper at all stages.
Corresponding authors
Correspondence to Jianhong Chu, Jingrun Chen or Chao Wang.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review information
Nature Communications thanks Jianxiang Zhang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Xu, J., Ma, Q., Zhang, Y. et al. Yeast-derived nanoparticles remodel the immunosuppressive microenvironment in tumor and tumor-draining lymph nodes to suppress tumor growth. Nat Commun 13, 110 (2022). https://doi.org/10.1038/s41467-021-27750-2
• Microparticles: biogenesis, characteristics and intervention therapy for cancers in preclinical and clinical research
• Yan Hu
• Yajie Sun
• Kunyu Yang
Journal of Nanobiotechnology (2022) | 2022-07-03 07:55:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5482433438301086, "perplexity": 12668.425921936023}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215805.66/warc/CC-MAIN-20220703073750-20220703103750-00622.warc.gz"} |
http://stackoverflow.com/questions/1048805/compressing-a-directory-of-files-with-php | # Compressing a directory of files with PHP
I am creating a php backup script that will dump everything from a database and save it to a file. I have been successful in doing that but now I need to take hundreds of images in a directory and compress them into one simple .tar.gz file.
What is the best way to do this and how is it done? I have no idea where to start.
If you are using PHP 5.2 or later, you could use the Zip Library
and then do something along the lines of:
$images = '/path/to/images'; // this folder must be writeable by the server
$backup = '/path/to/backup';
$zip_file = $backup.'/backup.zip';

if ($handle = opendir($images))
{
    $zip = new ZipArchive();
    if ($zip->open($zip_file, ZIPARCHIVE::CREATE) !== TRUE) {
        exit("cannot open <$zip_file>\n");
    }

    while (false !== ($file = readdir($handle)))
    {
        // skip the "." and ".." directory entries
        if ($file == '.' || $file == '..') {
            continue;
        }
        $zip->addFile($images.'/'.$file);
        echo "$file\n";
    }
    closedir($handle);

    echo "numfiles: " . $zip->numFiles . "\n";
    echo "status: " . $zip->status . "\n";

    $zip->close();
    echo 'Zip File: ' . $zip_file . "\n";
}
Thanks Kev for the answer. I just need to clarify one little thing: did you mean to ask if I had a version of PHP lower than 5.2 or higher than 5.2. I currently have 5.2.6. As I understand your answer, I will not be able to run this because I have a PHP version to high. Is this correct? – VinkoCM Jun 26 '09 at 12:43
He means that you need a PHP version that's 5.2.0 or later, so you'll be fine. – alexn Jun 26 '09 at 13:00
Yeah sorry as Alexander says its for 5.2.0 or later – Paul Dixon Jun 26 '09 at 13:08
I just tried this and I get this error: Fatal error: Class 'ZipArchive' not found in [...] on line 7. What could cause this? I just contacted my web hosting provider to tell me if PHP is installed with the appropriate libraries. – VinkoCM Jun 26 '09 at 13:11
Thank you everybody for all the help. I finally go the script to work. All I had to do was change this line: $zip->addFile($file); to this: $zip->addFile('path/to/images/'.$file); – VinkoCM Jun 26 '09 at 14:43
You can also use something like this:
exec('tar -czf backup.tar.gz /path/to/dir-to-be-backed-up/');
Be sure to heed the warnings about using PHP's exec() function.
Does this code run on a server where functions such as exec, system and shell_exec are disabled? – Amir Jul 1 at 0:46
$archive_name = 'path\to\archive\arch1.tar';
$dir_path = 'path\to\dir';

$archive = new PharData($archive_name);
$archive->buildFromDirectory($dir_path); // make path\to\archive\arch1.tar
$archive->compress(Phar::GZ);            // make path\to\archive\arch1.tar.gz
unlink($archive_name);                   // delete path\to\archive\arch1.tar
You can easily gzip a folder using this command:
tar -cvzpf backup.tar.gz /path/to/folder
This command can be run through PHP's system() function.
Don't forget to escapeshellarg() all commands.
Thank you Alexander for you answer. How would this look like in php? Would it go something along the lines of: system('tar -cvzpf my_backup.tar.gz /images_water/images_rivers/'); – VinkoCM Jun 26 '09 at 12:45
Yes, that would do it. – alexn Jun 26 '09 at 13:00
Unfortunately this does not work for me because my hosting provider has set security restrictions on exec/system commands. – VinkoCM Jun 26 '09 at 13:32 | 2014-07-10 06:30:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.704372763633728, "perplexity": 2701.8711798415984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776404630.61/warc/CC-MAIN-20140707234004-00002-ip-10-180-212-248.ec2.internal.warc.gz"} |
https://byjus.com/question-answer/the-frequency-of-a-radar-is-780-mhz-after-getting-reflected-from-an-approaching-aeroplane-2/ | Question
# The frequency of a radar is $$780\,MHz$$ . After getting reflected from an approaching aeroplane, the apparent frequency is more than the actual frequency by $$2.6\,kHz$$ . The aeroplane has a speed of
A. 2 km/s
B. 1 km/s
C. 0.5 km/s
D. 0.25 km/s
Solution
## The correct option is C: $$0.5\,km/s$$

$$f' = \left(\dfrac{c + \nu_a}{c - \nu_a}\right) f$$

where $$c$$ is the velocity of the radio wave, an electromagnetic wave, i.e., $$c = 3 \times 10^8\,m/s$$, and $$\nu_a$$ is the velocity of the aeroplane.

$$f' - f = \left[\dfrac{c + \nu_a}{c - \nu_a} - 1\right] f \Rightarrow \Delta f = \dfrac{2\nu_a f}{c - \nu_a}$$

Since the approaching aeroplane cannot have a speed comparable to the speed of the electromagnetic wave, $$\nu_a \ll c$$, so

$$\Delta f = \dfrac{2\nu_a f}{c} \Rightarrow 2.6 \times 10^3 = \dfrac{2\nu_a (780 \times 10^6)}{3 \times 10^8} \Rightarrow \nu_a = 0.5 \times 10^3\,m/s = 0.5\,km/s$$
View More | 2022-01-28 21:50:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246156811714172, "perplexity": 4874.245791683944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306346.64/warc/CC-MAIN-20220128212503-20220129002503-00690.warc.gz"} |
https://stats.stackexchange.com/questions/165/how-would-you-explain-markov-chain-monte-carlo-mcmc-to-a-layperson | # How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
Maybe the concept, why it's used, and an example.
First, we need to understand what a Markov chain is. Consider the following weather example from Wikipedia. Suppose that the weather on any given day can be classified into two states only: sunny and rainy. Based on past experience, we know the following:
$P(\text{Next day is Sunny}\,\vert \,\text{Given today is Rainy)}=0.50$
Since, the next day's weather is either sunny or rainy it follows that:
$P(\text{Next day is Rainy}\,\vert \,\text{Given today is Rainy)}=0.50$
Similarly, let:
$P(\text{Next day is Rainy}\,\vert \,\text{Given today is Sunny)}=0.10$
Therefore, it follows that:
$P(\text{Next day is Sunny}\,\vert \,\text{Given today is Sunny)}=0.90$
The above four numbers can be compactly represented as a transition matrix which represents the probabilities of the weather moving from one state to another state as follows:
$P = \begin{bmatrix} & S & R \\ S& 0.9 & 0.1 \\ R& 0.5 & 0.5 \end{bmatrix}$
Q1: If the weather is sunny today then what is the weather likely to be tomorrow?
A1: Since we do not know for sure what is going to happen, the best we can say is that there is a $90\%$ chance that it will be sunny and a $10\%$ chance that it will be rainy.
Q2: What about two days from today?
A2: One day prediction: $90\%$ sunny, $10\%$ rainy. Therefore, two days from now:
First day it can be sunny and the next day also it can be sunny. Chances of this happening are: $0.9 \times 0.9$.
Or
First day it can be rainy and second day it can be sunny. Chances of this happening are: $0.1 \times 0.5$.
Therefore, the probability that the weather will be sunny in two days is:
$P(\text{Sunny 2 days from now}) = 0.9 \times 0.9 + 0.1 \times 0.5 = 0.81 + 0.05 = 0.86$
Similarly, the probability that it will be rainy is:
$P(\text{Rainy 2 days from now}) = 0.1 \times 0.5 + 0.9 \times 0.1 = 0.05 + 0.09 = 0.14$
In linear algebra (transition matrices) these calculations correspond to all the permutations in transitions from one step to the next (sunny-to-sunny ($S_2S$), sunny-to-rainy ($S_2R$), rainy-to-sunny ($R_2S$) or rainy-to-rainy ($R_2R$)) with their calculated probabilities:
On the lower part of the image we see how to calculate the probability of a future state ($t+1$ or $t+2$) given the probabilities (probability mass function, $PMF$) for every state (sunny or rainy) at time zero (now or $t_0$) as simple matrix multiplication.
If you keep forecasting weather like this you will notice that eventually the $n$-th day forecast, where $n$ is very large (say $30$), settles to the following 'equilibrium' probabilities:
$P(\text{Sunny}) = 0.833$
and
$P(\text{Rainy}) = 0.167$
In other words, your forecasts for the $n$-th day and the $(n+1)$-th day are the same. In addition, you can also check that the 'equilibrium' probabilities do not depend on the weather today. You would get the same forecast for the weather if you start off by assuming that the weather today is sunny or rainy.
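A quick way to check these numbers yourself is to raise the transition matrix to a high power; a minimal Python sketch (using numpy, with states ordered as [Sunny, Rainy]):

import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# forecasts 30 days out, starting from a sunny day and from a rainy day
P30 = np.linalg.matrix_power(P, 30)
print(np.array([1.0, 0.0]) @ P30)   # ~ [0.833, 0.167]
print(np.array([0.0, 1.0]) @ P30)   # ~ [0.833, 0.167]; same equilibrium either way

Either starting point gives the same long-run forecast, which is exactly the equilibrium property described above.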
The above example will only work if the state transition probabilities satisfy several conditions which I will not discuss here. But, notice the following features of this 'nice' Markov chain (nice = transition probabilities satisfy conditions):
Irrespective of the initial starting state we will eventually reach an equilibrium probability distribution of states.
Markov Chain Monte Carlo exploits the above feature as follows:
We want to generate random draws from a target distribution. We then identify a way to construct a 'nice' Markov chain such that its equilibrium probability distribution is our target distribution.
If we can construct such a chain then we arbitrarily start from some point and iterate the Markov chain many times (like how we forecast the weather $n$ times). Eventually, the draws we generate would appear as if they are coming from our target distribution.
We then approximate the quantities of interest (e.g. mean) by taking the sample average of the draws after discarding a few initial draws which is the Monte Carlo component.
There are several ways to construct 'nice' Markov chains (e.g., Gibbs sampler, Metropolis-Hastings algorithm).
• It is nicely written answer. Though, it would probably loose layperson's attention at the point where transition matrices are discussed. – rraadd88 Jun 11 '18 at 3:27
• Great answer. I think it would be helpful to explain earlier (or in more detail) about the fact that the ultimate goal is to determine some quantity of interest (e.g. the mean or mode of inferred parameters). This is correct right? – Austin Shin Apr 2 '19 at 21:40
I think there's a nice and simple intuition to be gained from the (independence-chain) Metropolis-Hastings algorithm.
First, what's the goal? The goal of MCMC is to draw samples from some probability distribution without having to know its exact height at any point. The way MCMC achieves this is to "wander around" on that distribution in such a way that the amount of time spent in each location is proportional to the height of the distribution. If the "wandering around" process is set up correctly, you can make sure that this proportionality (between time spent and height of the distribution) is achieved.
Intuitively, what we want to do is to walk around on some (lumpy) surface in such a way that the amount of time we spend (or # samples drawn) in each location is proportional to the height of the surface at that location. So, e.g., we'd like to spend twice as much time on a hilltop that's at an altitude of 100m as we do on a nearby hill that's at an altitude of 50m. The nice thing is that we can do this even if we don't know the absolute heights of points on the surface: all we have to know are the relative heights. e.g., if one hilltop A is twice as high as hilltop B, then we'd like to spend twice as much time at A as we spend at B.
The simplest variant of the Metropolis-Hastings algorithm (independence chain sampling) achieves this as follows: assume that in every (discrete) time-step, we pick a random new "proposed" location (selected uniformly across the entire surface). If the proposed location is higher than where we're standing now, move to it. If the proposed location is lower, then move to the new location with probability p, where p is the ratio of the height of that point to the height of the current location. (i.e., flip a coin with a probability p of getting heads; if it comes up heads, move to the new location; if it comes up tails, stay where we are). Keep a list of the locations you've been at on every time step, and that list will (asymptotically) have the right proportion of time spent in each part of the surface. (And for the A and B hills described above, you'll end up with twice the probability of moving from B to A as you have of moving from A to B).
There are more complicated schemes for proposing new locations and the rules for accepting them, but the basic idea is still: (1) pick a new "proposed" location; (2) figure out how much higher or lower that location is compared to your current location; (3) probabilistically stay put or move to that location in a way that respects the overall goal of spending time proportional to height of the location.
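Here is a minimal Python sketch of that idea; the "surface" is just a made-up list of relative heights:

import random

heights = [1.0, 2.0, 4.0, 2.0, 1.0]   # relative heights of five locations
current = 0
visits = [0] * len(heights)

for step in range(100000):
    proposal = random.randrange(len(heights))            # propose a location uniformly
    accept_prob = min(1.0, heights[proposal] / heights[current])
    if random.random() < accept_prob:                     # always move uphill, sometimes downhill
        current = proposal
    visits[current] += 1

# fraction of time spent at each location, roughly proportional to 1:2:4:2:1
print([v / sum(visits) for v in visits])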
What is this useful for? Suppose we have a probabilistic model of the weather that allows us to evaluate A*P(weather), where A is an unknown constant. (This often happens--many models are convenient to formulate in a way such that you can't determine what A is). So we can't exactly evaluate P("rain tomorrow"). However, we can run the MCMC sampler for a while and then ask: what fraction of the samples (or "locations") ended up in the "rain tomorrow" state. That fraction will be the (model-based) probabilistic weather forecast.
• +1. In my opinion, the 'wandering about' is the most intuitive analogy among the ones listed on this page. – Zhubarb Oct 2 '13 at 13:04
• "without having to know its exact height at any point" This is not the core restriction of MCMC. – JeremyKun May 16 '19 at 19:56
• I do wish this explanation is in textbooks so that we do not have to spend time banging heads so much time to understand what MH is doing. – mon May 19 '19 at 13:16
I'd probably say something like this:
"Anytime we want to talk about probabilities, we're really integrating a density. In Bayesian analysis, a lot of the densities we come up with aren't analytically tractable: you can only integrate them -- if you can integrate them at all -- with a great deal of suffering. So what we do instead is simulate the random variable a lot, and then figure out probabilities from our simulated random numbers. If we want to know the probability that X is less than 10, we count the proportion of simulated random variable results less than 10 and use that as our estimate. That's the "Monte Carlo" part, it's an estimate of probability based off of random numbers. With enough simulated random numbers, the estimate is very good, but it's still inherently random.
"So why "Markov Chain"? Because under certain technical conditions, you can generate a memoryless process (aka a Markovian one) that has the same limiting distribution as the random variable that you're trying to simulate. You can iterate any of a number of different kinds of simulation processes that generate correlated random numbers (based only on the current value of those numbers), and you're guaranteed that once you pool enough of the results, you will end up with a pile of numbers that looks "as if" you had somehow managed to take independent samples from the complicated distribution you wanted to know about.
"So for example, if I want to estimate the probability that a standard normal random variable was less than 0.5, I could generate ten thousand independent realizations from a standard normal distribution and count up the number less than 0.5; say I got 6905 that were less than 0.5 out of 10000 total samples; my estimate for P(Z<0.5) would be 0.6905, which isn't that far off from the actual value. That'd be a Monte Carlo estimate.
"Now imagine I couldn't draw independent normal random variables, instead I'd start at 0, and then with every step add some uniform random number between -0.5 and 0.5 to my current value, and then decide based on a particular test whether I liked that new value or not; if I liked it, I'd use the new value as my current one, and if not, I'd reject it and stick with my old value. Because I only look at the new and current values, this is a Markov chain. If I set up the test to decide whether or not I keep the new value correctly (it'd be a random walk Metropolis-Hastings, and the details get a bit complex), then even though I never generate a single normal random variable, if I do this procedure for long enough, the list of numbers I get from the procedure will be distributed like a large number of draws from something that generates normal random variables. This would give me a Markov Chain Monte Carlo simulation for a standard normal random variable. If I used this to estimate probabilities, that would be a MCMC estimate."
• That's a good explanation, but not for a nontechnical layperson. I suspect the OP wanted to know how to explain it to, say, the MBA who hired you to do some statistical analysis! How would you describe MCMC to someone who, at best, sorta understands the concept of a standard deviation (variance, though, may be too abstract)? – Harlan Jul 20 '10 at 2:18
• @Harlan: It's a hard line to straddle; if someone doesn't at least know what a random variable is, why we might want to estimate probabilities, and have some hazy idea of a density function, then I don't think it is possible to meaningfully explain the how or why of MCMC to them, only the "what", which in this case would boil down to "it's a way of numerically solving an otherwise impossible problem by simulation, like flipping a coin a lot to estimate the probability that it lands on heads". – Rich Jul 20 '10 at 2:39
• +1 for the last paragraph. With a minimum of technicalities it conveys the idea well. – whuber Nov 3 '10 at 0:52
• Cool explanation. I think this is great for a nontechnical person. – SmallChess Jun 11 '15 at 3:54
• Doubt - In the last para, why would the list of numbers seem like coming from a normal distribution? Is it because of the Central Limit Theorem? Further, what if we wanted to simulate some other distribution? – Manoj Kumar Nov 25 '15 at 2:27
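A minimal Python sketch of the random-walk procedure described in the last paragraph of this answer (the "particular test" being the usual Metropolis acceptance rule, which only needs the density up to a constant):

import math
import random

def unnormalized_normal(x):
    # standard normal density up to a constant; that's all the test needs
    return math.exp(-0.5 * x * x)

current = 0.0
samples = []
for step in range(100000):
    proposal = current + random.uniform(-0.5, 0.5)
    accept_prob = min(1.0, unnormalized_normal(proposal) / unnormalized_normal(current))
    if random.random() < accept_prob:
        current = proposal
    samples.append(current)

# MCMC estimate of P(Z < 0.5); the true value is about 0.6915
print(sum(1 for s in samples if s < 0.5) / len(samples))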
Imagine you want to find a better strategy to beat your friends at the board game Monopoly. Simplify the stuff that matters in the game to the question: which properties do people land on most? The answer depends on the structure of the board, the rules of the game and the throws of two dice.
One way to answer the question is this. Just follow a single piece around the board as you throw the dice and follow the rules. Count how many times you land on each property (or program a computer to do the job for you). Eventually, if you have enough patience or you have programmed the rules well enough in your computer, you will build up a good picture of which properties get the most business. This should help you win more often.
What you have done is a Markov Chain Monte Carlo (MCMC) analysis. The board defines the rules. Where you land next only depends on where you are now, not where you have been before and the specific probabilities are determined by the distribution of throws of two dice. MCMC is the application of this idea to mathematical or physical systems like what tomorrow's weather will be or where a pollen grain being randomly buffeted by gas molecules will end up.
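A stripped-down Python sketch of "follow a single piece around the board" (ignoring jail, cards and the other special rules, so the counts are only illustrative):

import random

NUM_SQUARES = 40
visits = [0] * NUM_SQUARES
position = 0

for turn in range(1000000):
    roll = random.randint(1, 6) + random.randint(1, 6)   # throw two dice
    position = (position + roll) % NUM_SQUARES           # move around the board
    visits[position] += 1

# squares ranked by how often the piece landed on them
ranking = sorted(range(NUM_SQUARES), key=lambda sq: visits[sq], reverse=True)
print(ranking[:5])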
• The explanation sounds like simple Monte Carlo Simulation, but what about Markov Chain ? How Markov Chain is related to this Monte Carlo Simulation ? – Emran Hussain Nov 15 '15 at 4:50
• @Emran Graham Cookson's answer seems to explain a connection between Monopoly and Markov chains already. – Glen_b Feb 10 '16 at 21:37
• Monopoly can be modelled as a Markov chain where each property/space is a node/state. When you're on any particular space, you've got various probabilities of moving to the next 12 spaces (if using 2 dice) - these are the edges/connections in the Markov chain. It's easy to work out the probability of each edge/connection: gwydir.demon.co.uk/jo/probability/calcdice.htm#sum – user136692 Nov 1 '16 at 6:21
OK here's my best attempt at an informal and crude explanation.
A Markov Chain is a random process that has the property that the future depends only on the current state of the process and not the past i.e. it is memoryless. An example of a random process could be the stock exchange. An example of a Markov Chain would be a board game like Monopoly or Snakes and Ladders where your future position (after rolling the die) would depend only on where you started from before the roll, not any of your previous positions. A textbook example of a Markov Chain is the "drunkard's walk". Imagine somebody who is drunk and can move only left or right by one pace. The drunk moves left or right with equal probability. This is a Markov Chain where the drunk's future/next position depends only upon where he is at present.
Monte Carlo methods are computational algorithms (simply sets of instructions) which randomly sample from some process under study. They are a way of estimating something which is too difficult or time consuming to find deterministically. They're basically a form of computer simulation of some mathematical or physical process. The Monte Carlo moniker comes from the analogy between a casino and random number generation. Returning to our board game example earlier, perhaps we want to know if some properties on the Monopoly board are visited more often than others. A Monte Carlo experiment would involve rolling the dice repeatedly and counting the number of times you land on each property. It can also be used for calculating numerical integrals. (Very informally, we can think of an integral as the area under the graph of some function.) Monte Carlo integration works great on high-dimensional functions by taking a random sample of points of the function and calculating some type of average at these various points. By increasing the sample size, the law of large numbers tells us we can increase the accuracy of our approximation by covering more and more of the function.
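As a tiny illustration of Monte Carlo integration in Python, here is an estimate of the area under f(x) = x² between 0 and 1 (the exact answer is 1/3), obtained by averaging the function at random points:

import random

n = 1000000
total = 0.0
for i in range(n):
    x = random.random()    # uniform sample in [0, 1]
    total += x * x         # evaluate the function at the sampled point
print(total / n)           # approaches 1/3 as n grows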
These two concepts can be put together to solve some difficult problems in areas such as Bayesian inference, computational biology, etc., where multi-dimensional integrals need to be calculated to solve common problems. The idea is to construct a Markov Chain which converges to the desired probability distribution after a number of steps. The state of the chain after a large number of steps is then used as a sample from the desired distribution and the process is repeated. There are many different MCMC algorithms which use different techniques for generating the Markov Chain. Common ones include the Metropolis-Hastings and the Gibbs Sampler.
• A good explanation indeed. Only one confusion is not cleared. As you said, "the idea is to construct a Markov Chain which converges to the desired Probability Distribution.". Sounds like we already know the Stead State Probability Distribution for the states, then why would we need to construct a markov chain. Markov chain's purpose is to Provide us with the steady state distribution, which we already have at the first place, is not it ? Unless you meant, getting a Markov Chain is still necessary to compute n + 1's state probability based on current state. – Emran Hussain Nov 15 '15 at 6:17
Excerpt from Bayesian Methods for Hackers
## The Bayesian landscape
When we set up a Bayesian inference problem with $N$ unknowns, we are implicitly creating an $N$-dimensional space for the prior distributions to exist in. Associated with the space is an additional dimension, which we can describe as the surface, or curve, of the space, that reflects the prior probability of a particular point. The surface of the space is defined by our prior distributions. For example, if we have two unknowns $p_1$ and $p_2$, and both are uniform on [0,5], the space created is the square of length 5 and the surface is a flat plane that sits on top of the square (representing that every point is equally likely).
Alternatively, if the two priors are $\text{Exp}(3)$ and $\text{Exp}(10)$, then the space is all positive numbers on the 2-D plane, and the surface induced by the priors looks like a waterfall that starts at the point (0,0) and flows over the positive numbers.
The visualization below demonstrates this. The more dark red the color, the more prior probability that the unknowns are at that location. Conversely, areas with darker blue represent that our priors assign very low probability to the unknowns being there.
These are simple examples in 2D space, where our brains can understand surfaces well. In practice, spaces and surfaces generated by our priors can be much higher dimensional.
If these surfaces describe our prior distributions on the unknowns, what happens to our space after we have observed data $X$. The data $X$ does not change the space, but it changes the surface of the space by pulling and stretching the fabric of the surface to reflect where the true parameters likely live. More data means more pulling and stretching, and our original shape becomes mangled or insignificant compared to the newly formed shape. Less data, and our original shape is more present. Regardless, the resulting surface describes the posterior distribution. Again I must stress that it is, unfortunately, impossible to visualize this in larger dimensions. For two dimensions, the data essentially pushes up the original surface to make tall mountains. The amount of pushing up is resisted by the prior probability, so that less prior probability means more resistance. Thus in the double exponential-prior case above, a mountain (or multiple mountains) that might erupt near the (0,0) corner would be much higher than mountains that erupt closer to (5,5), since there is more resistance near (5,5). The mountain, or perhaps more generally, the mountain ranges, reflect the posterior probability of where the true parameters are likely to be found.
Suppose the priors mentioned above represent different parameters $\lambda$ of two Poisson distributions. We observe a few data points and visualize the new landscape.
The plot on the left is the deformed landscape with the $\text{Uniform}(0,5)$ priors, and the plot on the right is the deformed landscape with the exponential priors. The posterior landscapes look different from one another. The exponential-prior landscape puts very little posterior weight on values in the upper right corner: this is because the prior does not put much weight there, whereas the uniform-prior landscape is happy to put posterior weight there. Also, the highest point, corresponding to the darkest red, is biased towards (0,0) in the exponential case, which results from the exponential prior putting more prior weight in the (0,0) corner.
The black dot represents the true parameters. Even with 1 sample point, as was simulated above, the mountains attempt to contain the true parameter. Of course, inference with a sample size of 1 is incredibly naive, and choosing such a small sample size was only illustrative.
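A rough numerical sketch of such a landscape (with flat priors on the two Poisson parameters and a couple of made-up observations, so the numbers are purely illustrative):

import numpy as np
from scipy import stats

observations = [2, 3]                         # made-up data, one point per parameter
lambda1 = np.linspace(0.01, 5, 100)
lambda2 = np.linspace(0.01, 5, 100)
L1, L2 = np.meshgrid(lambda1, lambda2)

# unnormalized posterior = likelihood x (flat) prior, evaluated on the grid
landscape = stats.poisson.pmf(observations[0], L1) * stats.poisson.pmf(observations[1], L2)

# location of the "peak" of the landscape
row, col = np.unravel_index(np.argmax(landscape), landscape.shape)
print(lambda1[col], lambda2[row])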
## Exploring the landscape using the MCMC
We should explore the deformed posterior space generated by our prior surface and observed data to find the posterior mountain ranges. However, we cannot naively search the space: any computer scientist will tell you that traversing $N$-dimensional space is exponentially difficult in $N$: the size of the space quickly blows up as we increase $N$ (see the curse of dimensionality). What hope do we have to find these hidden mountains? The idea behind MCMC is to perform an intelligent search of the space. To say "search" implies we are looking for a particular object, which is perhaps not an accurate description of what MCMC is doing. Recall: MCMC returns samples from the posterior distribution, not the distribution itself. Stretching our mountainous analogy to its limit, MCMC performs a task similar to repeatedly asking "How likely is this pebble I found to be from the mountain I am searching for?", and completes its task by returning thousands of accepted pebbles in hopes of reconstructing the original mountain. In MCMC and PyMC lingo, the returned sequence of "pebbles" are the samples, more often called the traces.
When I say MCMC intelligently searches, I mean MCMC will hopefully converge towards the areas of high posterior probability. MCMC does this by exploring nearby positions and moving into areas with higher probability. Again, perhaps "converge" is not an accurate term to describe MCMC's progression. Converging usually implies moving towards a point in space, but MCMC moves towards a broader area in the space and randomly walks in that area, picking up samples from that area.
At first, returning thousands of samples to the user might sound like an inefficient way to describe the posterior distributions. I would argue that this is extremely efficient. Consider the alternative possibilities:
1. Returning a mathematical formula for the "mountain ranges" would involve describing an N-dimensional surface with arbitrary peaks and valleys.
2. Returning the "peak" of the landscape, while mathematically possible and a sensible thing to do as the highest point corresponds to the most probable estimate of the unknowns, ignores the shape of the landscape, which we have previously argued is very important in determining posterior confidence in unknowns.
Besides computational reasons, likely the strongest reason for returning samples is that we can easily use The Law of Large Numbers to solve otherwise intractable problems. I postpone this discussion for the next chapter.
### Algorithms to perform MCMC
There is a large family of algorithms that perform MCMC. At their simplest, most algorithms can be expressed at a high level as follows:
1. Start at current position.
2. Propose moving to a new position (investigate a pebble near you).
3. Accept the position based on the position's adherence to the data and prior distributions (ask if the pebble likely came from the mountain).
4. If you accept: Move to the new position and return to Step 1. If you reject: Stay at the current position and return to Step 1.
5. After a large number of iterations, return the positions.
This way we move in the general direction towards the regions where the posterior distributions exist, and collect samples sparingly on the journey. Once we reach the posterior distribution, we can easily collect samples as they likely all belong to the posterior distribution.
If the current position of the MCMC algorithm is in an area of extremely low probability, which is often the case when the algorithm begins (typically at a random location in the space), the algorithm will move to positions that are likely not from the posterior but better than everything else nearby. Thus the first moves of the algorithm are not reflective of the posterior.
• I understand that the problem was related to specifically MCMC, and not Bayesian inference, but in the context of Bayesian landscapes I find MCMC to be very understandable. – Cam.Davidson.Pilon Mar 6 '13 at 13:10
So there are plenty of answers here paraphrased from statistics/probability textbooks, Wikipedia, etc. I believe we have "laypersons" where I work; I think they are in the marketing department. If I ever have to explain anything technical to them, I apply the rule "show don't tell." With that rule in mind, I would probably show them something like this.
The idea here is to try to code an algorithm that I can teach to spell--not by learning all of the hundreds (thousands?) of rules like "When adding an ending to a word that ends with a silent e, drop the final e if the ending begins with a vowel." One reason that won't work is I don't know those rules (i'm not even sure the one I just recited is correct). Instead I am going to teach it to spell by showing it a bunch of correctly spelled words and letting it extract the rules from those words, which is more or less the essence of Machine Learning, regardless of the algorithm--pattern extraction and pattern recognition.
The success criterion is correctly spelling a word the algorithm has never seen before (i realize that can happen by pure chance, but that won't occur to the marketing guys, so i'll ignore--plus I am going to have the algorithm attempt to spell not one word, but a lot, so it's not likely we'll be deceived by a few lucky guesses).
An hour or so ago, I downloaded (as a plain text file) from the excellent Project Gutenberg Site, the Herman Hesse novel Siddhartha. I'll use the words in this novel to teach the algorithm how to spell.
So I coded the algorithm below that scanned this novel, three letters at a time (each word has one additional character at the end, which is 'whitespace', or the end of the word). Three-letter sequences can tell you a lot--for instance, the letter 'q' is nearly always followed by 'u'; the sequence 'ty' usually occurs at the end of a word; z rarely does, and so forth. (Note: I could just as easily have fed it entire words in order to train it to speak in complete sentences--exactly the same idea, just a few tweaks to the code.)
None of this involves MCMC though, that happens after training, when we give the algorithm a few random letters (as a seed) and it begins forming 'words'. How does the algorithm build words? Imagine that it has the block 'qua'; what letter does it add next? During training, the algorithm constructed a massive *letter-sequence frequency matrix* from all of the thousands of words in the novel. Somewhere in that matrix is the three-letter block 'qua' and the frequencies for the characters that could follow the sequence. The algorithm selects the next letter based on those frequencies. So the letter that the algorithm selects next depends on--and solely on--the last three in its word-construction queue.
So that's a Markov Chain Monte Carlo algorithm.
I think perhaps the best way to illustrate how it works is to show the results based on different levels of training. Training level is varied by changing the number of passes the algorithm makes through the novel--the more passes through, the greater the fidelity of its letter-sequence frequency matrices. Below are the results--in the form of 100-character strings output by the algorithm--after training on the novel 'Siddhartha'.
A single pass through the novel, Siddhartha:
then whoicks ger wiff all mothany stand ar you livid theartim mudded sullintionexpraid his sible his
(Straight away, it's learned to speak almost perfect Welsh; I hadn't expected that.)
After two passes through the novel:
the ack wor prenskinith show wass an twor seened th notheady theatin land rhatingle was the ov there
After 10 passes:
despite but the should pray with ack now have water her dog lever pain feet each not the weak memory
And here's the code (in Python, i'm nearly certain that this could be done in R using an MCMC package, of which there are several, in just 3-4 lines)
import re
from random import sample

def create_words_string(raw_string):
    """ in case I wanted to use training data in sentence/paragraph form;
    this function will parse a raw text string into a nice list of words;
    filtering: keep only words having more than 3 letters and remove
    punctuation, etc.
    """
    pattern = r'\b[A-Za-z]{3,}\b'
    pat_obj = re.compile(pattern)
    words = [word.lower() for word in pat_obj.findall(raw_string)]
    # drop words made only of v, i, x, l, m (roman-numeral chapter headings)
    pattern = r'\b[vixlm]+\b'
    pat_obj = re.compile(pattern)
    return " ".join([word for word in words if not pat_obj.search(word)])

def create_markov_dict(words_string):
    # initialize variables
    wb1, wb2, wb3 = " ", " ", " "
    l1, l2, l3 = wb1, wb2, wb3
    dx = {}
    # map each three-letter context to the list of letters observed right after it
    for ch in words_string:
        dx.setdefault((l1, l2, l3), []).append(ch)
        l1, l2, l3 = l2, l3, ch
    return dx

def generate_newtext(markov_dict):
    simulated_text = ""
    l1, l2, l3 = " ", " ", " "
    for c in range(100):
        # draw the next letter with probability proportional to its observed frequency
        next_letter = sample(markov_dict[(l1, l2, l3)], 1)[0]
        simulated_text += next_letter
        l1, l2, l3 = l2, l3, next_letter
    return simulated_text

if __name__ == "__main__":
    # raw_str should already hold the text of the novel (e.g. read from the file)
    # n = number of passes through the training text
    n = 1
    q1 = create_words_string(n * raw_str)
    q2 = create_markov_dict(q1)
    q3 = generate_newtext(q2)
    print(q3)
• You've built a Markov model for spelling in English and fit it to data. But sampling from the fitted model is not MCMC. (What's the "desired distribution" it samples from? Clearly, not the distribution over "properly spelled words in English", since the model still makes mistakes after training). I don't mean to criticize the exercise; it's a nice demonstration of a Markov chain model for language. But the key idea of MCMC is to design a Markov Chain so that its equilibrium distribution corresponds to some distribution you have in mind, and it's not obvious that this achieves that. – jpillow Jul 9 '11 at 5:36
MCMC is typically used as an alternative to crude Monte Carlo simulation techniques. Both MCMC and other Monte Carlo techniques are used to evaluate difficult integrals but MCMC can be used more generally.
For example, a common problem in statistics is to calculate the mean outcome relating to some probabilistic/stochastic model. Both MCMC and Monte Carlo techniques would solve this problem by generating a sequence of simulated outcomes that we could use to estimate the true mean.
Both MCMC and crude Monte Carlo techniques work because the long-run proportion of simulations that are equal to a given outcome will be equal* to the modelled probability of that outcome. Therefore, by generating enough simulations, the results produced by both methods will be accurate.
*I say equal although in general I should talk about measurable sets. A layperson, however, probably wouldn't be interested in this*
However, while crude Monte Carlo involves producing many independent simulations, each of which is distributed according to the modelled distribution, MCMC involves generating a random walk that in the long-run "visits" each outcome with the desired frequency.
The trick to MCMC, therefore, is picking a random walk that will "visit" each outcome with the desired long-run frequencies.
A simple example might be to simulate from a model that says the probability of outcome "A" is 0.5 and of outcome "B" is 0.5. In this case, if I started the random walk at position "A" and prescribed that in each step it switched to the other position with probability 0.2 (or any other probability that is greater than 0), I could be sure that after a large number of steps the random walk would have visited each of "A" and "B" in roughly 50% of steps--consistent with the probabilities prescribed by our model.
This is obviously a very boring example. However, it turns out that MCMC is often applicable in situations in which it is difficult to apply standard Monte Carlo or other techniques.
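A tiny Python sketch of that two-outcome walk:

import random

position = "A"
counts = {"A": 0, "B": 0}

for step in range(100000):
    if random.random() < 0.2:                  # switch with probability 0.2
        position = "B" if position == "A" else "A"
    counts[position] += 1

# after many steps, roughly half the visits are to each outcome
print(counts["A"] / 100000, counts["B"] / 100000)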
You can find an article that covers the basics of what it is and why it works here:
http://wellredd.uk/basics-markov-chain-monte-carlo/
• We are trying to build a permanent repository of high-quality statistical information in the form of questions & answers. We try to avoid link-only answers which are subject to link-rot; as such this is more of a comment than an answer in its own right. If you're able, could you expand it, perhaps by giving a summary of the information at the link (or we could convert it into a comment for you). – Glen_b Aug 17 '16 at 8:48
I'm a DNA analyst that uses fully continuous probabilistic genotyping software to interpret DNA evidence and I have to explain how this works to a jury. Admittedly, we over simplify and I realize some of this over simplification sacrifices accuracy of specific details in the name of improving overall understanding. But, within the context of a jury understanding how this process is used in DNA interpretation without academic degrees and years of professional experience, they get the gist :)
Background: The software uses metropolis Hastings MCMC and a biological model that mimics the known behavior of DNA profiles (model is built based upon validation data generated by laboratory analyzing many DNA profiles from known conditions representing the range encountered in the unknown casework). There's 8 independent chains and we evaluate the convergence to determine whether to re-run increasing the burn in and post accepts (default burnin 100k accepts and post 400k accepts)
When asked by prosecution/defense about MCMC: we explain it stands for markov chain Monte Carlo and represents a special class/kind of algorithm used for complex problem-solving and that an algorithm is just a fancy word referring to a series of procedures or routine carried out by a computer... mcmc algorithms operate by proposing a solution, simulating that solution, then evaluating how well that simulation mirrors the actual evidence data being observed... a simulation that fits the evidence observation well has a higher probability than a simulation that does not fit the observation well... over many repeated samplings/guesses of proposed solutions, the Markov chains move away from the low probability solutions toward the high probability solutions that better fit/explain the observed evidence profile, until eventually equilibrium is achieved, meaning the algorithm has limited ability to sample new proposals yielding significantly increased probabilities
When asked about metropolis Hastings: we explain it's a refinement to MCMC algorithm describing its decision-making process accepting or rejecting a proposal... usually this is explained with an analogy of "hot/cold" children's game but I may have considered using "swipe right or left" when the jury is especially young!! :p But using our hot/cold analogy, we always accept a hot guess and will occasionally accept a cold guess a fraction of the time and explain the purpose of sometimes accepting the cold guess is to ensure the chains sample a wider range of possibilities as opposed to getting stuck around one particular proposal before actual equilibrium
Edited to add/clarify: with the hot/cold analogy we explain that in the children's game, the leader picks a target object/area within the room and the players take turns making guesses which direction to move relative to their current standing/position. The leader tells them to change their position/make the move if it's a hot guess and they lose their turn/stay in position if it's a cold guess. Similarly, within our software, the decision to move/accept depends only on the probability of the proposal compared to the probability of currently held position... HOWEVER, the target is pre-defined/known by the leader in the children's game whereas the target within our software isn't pre-defined--it's completely unknown (also why it's more important for our application to adequately sample the space and occasionally accept cold guesses)
Like I said, super super basic and absolutely lacking technical detail for sake of improving comprehension--we strive for explaining at about a middle-school level of education. Feel free to make suggestions. I'll incorporate them.
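To make the hot/cold accept/reject step concrete, here is a minimal Metropolis-Hastings sketch in Python. It is purely illustrative and is not the genotyping software's actual model; the toy target density, the Gaussian proposal width, and the chain length are all made-up placeholders.

```python
import math, random

def metropolis_hastings(target, x0, n_steps=10_000, step=1.0):
    """Sample from an unnormalized density `target` using a random-walk proposal."""
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)   # propose a nearby "guess"
        ratio = target(proposal) / target(x)     # how much better/worse the guess explains the data
        if random.random() < min(1.0, ratio):    # always accept a "hot" guess, sometimes a "cold" one
            x = proposal
        samples.append(x)
    return samples

# toy target: an unnormalized normal density centred at 3
samples = metropolis_hastings(lambda v: math.exp(-0.5 * (v - 3.0) ** 2), x0=0.0)
burned_in = samples[1_000:]                      # discard burn-in, as described above
print(sum(burned_in) / len(burned_in))           # roughly 3
```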
This question is broad yet the answers are often quite casual. Alternatively, you can see this paper which gives a concise mathematical description of a broad class of MCMC algorithms including Metropolis-Hastings algorithms, Gibbs sampling, Metropolis-within-Gibbs and auxiliary variables methods, slice sampling, recursive proposals, directional sampling, Langevin and Hamiltonian Monte Carlo, NUTS sampling, pseudo-marginal Metropolis-Hastings algorithms, and pseudo-marginal Hamiltonian Monte Carlo, as discussed by the authors.
A credible review is given here
I'll find more time to elaborate its content in the format of stackexchange.
First, we should explain Monte-Carlo sampling to the layperson. Imagine you don't have the exact form of a function (for example, $$z = f(x,y) = x^2 + 2xy$$) but there is a machine in Europe (and Los Alamos) that replicates this function (numerically). We can put as many $$(x,y)$$ pairs into it as we like and it will give us the value $$z$$. This numerical repetition is sampling, and this process is a Monte-Carlo simulation of $$f(x,y)$$. After 10,000 iterations, we almost know what the function $$f(x,y)$$ is.
Assuming the layperson knows the Monte-Carlo, in MCMC you don't want to waste your CPU efforts / time when you are sampling from a multi-dimensional space $$f(x,y,z,t,s,...,zzz)$$, as the standard Monte-Carlo sampling does. The key difference is that in MCMC you need to have a Markov-chain as a map to guide your efforts.
This video (starting at 5:50) has a very good statement of intuition.
Imagine you want to sample points that are on the green (multi-dimensional) branches in this picture. If you throw points all over the black super-space and check their value, you are WASTING a lot of sampling (searching) energy. So it would make more sense to control your sampling strategy (which can be automated) to pick points closer to the green branches (where it matters). Green branches can be found by being hit once accidentally (or in a controlled way), and the rest of the sampling effort (red points) will be generated afterward. The reason the red points get attracted to the green line is the Markov chain transition matrix that works as your sampling engine.
So in layman's terms, MCMC is an energy-saving (low cost) sampling method, especially when working in a massive and 'dark' (multi-dimensional) space.
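To make the "machine" analogy concrete, here is a tiny plain Monte-Carlo sketch in Python (illustrative only; the function and sample count are arbitrary). It evaluates $f(x,y)$ at uniformly random inputs to estimate its average over the unit square, which is exactly the kind of blanket sampling that MCMC tries to avoid when only a small region of the space matters.

```python
import random

def f(x, y):                 # the "machine": we can only evaluate it point by point
    return x ** 2 + 2 * x * y

n = 10_000
total = 0.0
for _ in range(n):
    x, y = random.random(), random.random()   # uniform guesses over [0,1]^2
    total += f(x, y)

print(total / n)             # Monte-Carlo estimate of the average of f; the exact value is 5/6
```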
• I think we have a different definition of "layman" – Neil McGuigan Apr 30 '17 at 0:13
• hahaha. I can add the Monte-Carlo for a "layperson" too, but sampling/Monte-Carlo was not a question. – Amir Apr 30 '17 at 1:52 | 2020-08-10 22:14:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6975955963134766, "perplexity": 652.4006411797367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738699.68/warc/CC-MAIN-20200810205824-20200810235824-00211.warc.gz"} |
http://www.dominicberry.org/ |
# Dominic Berry
## Dominic Berry - Associate Professor
### Macquarie University
My research is in the areas of quantum information and quantum optics. In quantum information, I developed many of the most efficient known algorithms for simulation of physical systems, which has been used as the basis for important new quantum algorithms. In the area of quantum optics, I invented the most accurate known methods to measure optical phase by using adaptive techniques, and am collaborating with experimental groups for demonstration of these methods.
My CV is available here.
PhD and Masters projects
Are you interested in doing a PhD or Masters project in quantum algorithms? PhD scholarships are available through the Sydney Quantum Academy. Applications are currently closed, but will reopen early next year. For more details please contact me at .
News
• 17/11/2021: We've now achieved the optimal quantum algorithm for solving linear equations. Our method is based on a quantum walk, and is relatively simple to perform, but the difficult part is showing that it works. To do that we proved a form of the adiabatic theorem for quantum walks and bounded the error.
• 12/11/2021: Our work on simulation of quantum chemistry using plane waves is now published in PRX Quantum. It is a complete cost analysis of quantum algorithms adapted to first quantisation, showing that the approach is unrivaled for high accuracy simulations of solid-state materials and certain chemical compounds.
• 13/10/2021: We've released new work on how to more efficiently simulate quantum field theory. This one is using a much faster method for preparing the Gaussians, as well as using wavelets for systems that vary over space.
• 9/07/2021: Our work on how to perform highly efficient simulation of quantum chemistry using tensor hypercontraction is now published in PRX Quantum.
• 28/05/2021: We have released a new analysis of quantum simulation of quantum chemistry using plane waves, qubitisation and the interaction picture. We give explicit Toffoli counts for these methods, which have the best asymptotic scaling, showing how they perform in practice with realistic numbers. We introduced a whole host of improvements, which together reduce the complexity by around a factor of 1000 over naive implementations.
• 28/04/2021: The paper on how to perform Boson-sampling inspired QKD is now published in the journal Quantum.
• 5/01/2021: I am now on Twitter, and will be making these announcements there as well.
• 10/11/2020: Our work on quantum algorithms for optimisation is now published in PRX Quantum.
• 9/11/2020: In the most exciting news regarding counts this week, we have the best Toffoli count yet for quantum chemistry, even better than the recent work of von Burg et al. We achieve the highest efficiency yet using the THC decomposition with Majorana operators.
• 27/10/2020: We have shown that the usual limit to laser coherence is not a true limit, and it is possible to achieve coherence that is quadratically better. This work is now published in Nature Physics.
• 16/07/2020: We have shown how to very effectively use Trotterisation for simulation of quantum systems. Unlike optimisation, this problem has an exponential speedup over classical computing, so is a realistic application for quantum computing. This work is now published in Quantum.
• 16/07/2020: We have released new results on the arXiv showing how to efficiently perform quantum optimisation algorithms. Even using the best techniques, our results indicate that quantum computers will not be able to beat classical computing on problems with only a square root speedup.
• 20/04/2020: Our results showing how to speed up simulation of time-dependent systems using a randomised algorithm are now published in Quantum.
• 24/01/2020: Our result showing that there is a $\pi$ factor in the Heisenberg limit is now published in Physical Review Letters. This shows that the usual form of the limit to phase estimation is not achievable in a single-shot scenario.
• 3/12/2019: Our method of simulating quantum chemistry for sparse systems, with a 700 times speedup for FeMoco, is now published in Quantum.
• 1/11/2019: Our work showing how to use the interaction picture for quantum chemistry simulations with $N^{3.5}$ complexity is now published in npj Quantum Information.
• 15/07/2019: In work with Rafal Demkowicz-Dobrzanski and others we have shown that the ultimate limit to phase measurement includes a $\pi$ factor, regardless of any prior knowledge. This work is available on the arXiv.
• 18/06/2019: We have shown how to perform simulations of time-dependent Hamiltonians with complexity scaling as the integral of the norm of the Hamiltonian, rather than the maximum value. This is useful for Hamiltonians arising from collisions where the size of the Hamiltonian can vary dramatically over time. This work is available on the arXiv.
• 9/05/2019: We have proposed a new technique for quantum encryption using Boson sampling. This work is available on the arXiv.
• 5/04/2019: Our paper showing how to simulate time-dependent quantum systems with exponential precision is now published in Physical Review A.
• 4/04/2019: Our new algorithm for solving SYK models with many orders of magnitude improvement is now published in Physical Review A as a Rapid Communication.
• 12/03/2019: Our work showing how to more accurately perform sensing with NV centres at room temperature is now published in Physical Review B.
• 27/02/2019: We have analysed the Trotter approach to quantum simulation, providing several improvements and showing that simulations may be performed with a surprisingly small number of quantum gates.
• 6/02/2019: We have shown how to take advantage of the low rank nature of Hamiltonians to provide a significant speed improvement for simulating FeMoco.
• 17/01/2019: Our work on removing arithmetic from state preparation is now published in Physical Review Letters. It is an Editors' Suggestion! | 2022-01-16 09:32:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29553383588790894, "perplexity": 833.3193143287903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00524.warc.gz"} |
http://mathhelpforum.com/calculus/160573-rolles-theorem-print.html | # Rolle's Theorem!
• October 21st 2010, 07:26 PM
drewbear
Rolle's Theorem!
Use Rolle's Theorem and argue the case that $f(x)=x^5-7x+c$ has at most one real root in the interval [-1,1]
• October 21st 2010, 07:44 PM
Jhevon
Quote:
Originally Posted by drewbear
Use Rolle's Theorem and argue the case that $f(x)=x^5-7x+c$ has at most one real root in the interval [-1,1]
You can show it is possible to have a root using the intermediate value theorem.
To use Rolle's theorem to argue there is at most one, you can proceed thusly:
Assume, to the contrary, there are two roots (or more, but at least 2 is fine), say for $\displaystyle x = x_1$ and $\displaystyle x = x_2$, both in the interval you are considering, and you may also assume that $x_1 < x_2$. Then that means
$\displaystyle f(x_1) = f(x_2) = 0$
and so according to Rolle's theorem, there must be a point $\displaystyle x = x_3$, such that $\displaystyle x_1 < x_3 < x_2$ and $\displaystyle f'(x_3) = 0$.
Where can you get with that?
• October 21st 2010, 08:12 PM
drewbear
I am sorry but I am still a little confused. I understand the IVT and the proof by contradiction, but the c variable is throwing me off. Do I have to solve for that ever, or is it just a constant of no importance?
• October 21st 2010, 08:50 PM
TheEmptySet
c is a constant so when you take the derivative you get
$f'(x)=5x^4-7$
As Jhevon suggested where are the zero's of $f'(x)$? | 2014-03-15 14:53:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9097934365272522, "perplexity": 345.45608655603786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678697956/warc/CC-MAIN-20140313024457-00008-ip-10-183-142-35.ec2.internal.warc.gz"} |
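One way to finish the argument: $f'(x) = 5x^4 - 7 = 0$ forces $x^4 = \tfrac{7}{5}$, i.e. $x = \pm\left(\tfrac{7}{5}\right)^{1/4} \approx \pm 1.09$, so $f'$ has no zeros inside $[-1,1]$. If $f$ had two roots $x_1 < x_2$ in $[-1,1]$, Rolle's theorem would give a point $x_3$ strictly between them with $f'(x_3) = 0$, a contradiction. Hence $f(x) = x^5 - 7x + c$ has at most one real root in $[-1,1]$, whatever the constant $c$ is.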
https://math.stackexchange.com/questions/1386721/continuity-of-the-inverse-map | # Continuity of the inverse map
If we have a function $F(x): \mathbb{R^4} \rightarrow \mathbb{R^3}$. Defined as \begin{align} x_1\, x_4&=y_1 \\ x_2\, x_4&=y_2 \\ x_1^2+x_2^2-x_3^2&=y_3 \end{align} Can a continuous inverse map exist? I'm intuitively guessing that the problem with continuity can occur at $y=(0,0,1)$ but I'm not being able to prove it.
• You might have to go for parts of $\mathbb{R}^4$ instead of the full domain. – mvw Aug 6 '15 at 15:13
• @mvw I endorse your comment as a way to make this question most interesting. – James S. Cook Aug 6 '15 at 16:01
A continuous inverse map cannot exist because an inverse map cannot exist, as the function is not injective. For example $$F(1,0,0,0)=(0,0,1)=F(0,1,0,0).$$
• f(1,0,0,0)=f(-1,0,0,0)=(0,0,1). So we do not know how to go back. – Adelafif Aug 6 '15 at 15:04
• Thanks a lot for your comment! Could we possibly obtain a continuous function that could give us a value of x for every value of y such that F(x)=y even if a unique inverse was not available due to injectivity? – Leo Euler Aug 6 '15 at 20:02
The question as asked is easier than what I attack here. Certainly the lack of injectivity spells doom for an inverse, much less a continuous inverse. That said, it's fun to think about how we could suitably restrict the given function as to obtain an inverse. The Jacobian matrix essentially reveals when and what we can do in that regard. I give an illustration of this below:
Observe, $F(x_1,x_2,x_3,x_4) = (y_1,y_2,y_3)$ defined by: \begin{align} x_1\, x_4&=y_1 \\ x_2\, x_4&=y_2 \\ x_1^2+x_2^2-x_3^2&=y_3 \end{align} for all $(x_1,x_2,x_3,x_4) \in \mathbb{R}^4$ has Jacobian matrix: $$J_F = \left[ \frac{\partial F}{\partial x_1} \bigg{|} \frac{\partial F}{\partial x_2}\bigg{|}\frac{\partial F}{\partial x_3}\bigg{|}\frac{\partial F}{\partial x_4}\right] = \left[ \begin{array}{cccc} x_4 & 0 & 0 & x_1 \\ 0 & x_4 & 0 & x_2 \\ 2x_1 & 2x_2 & -2x_3 & 0 \end{array}\right]$$ This shows us what the dimension of the image is locally. In particular, the rank of the Jacobian shows us the dimension of the image near the point, as the component functions are polynomial and hence continuous. For example, if $x_3=0$ then we need the following determinant to be nonzero in order that $J_F$ have rank 3: $$\text{det}\left[ \begin{array}{ccc} x_4 & 0 & x_1 \\ 0 & x_4 & x_2 \\ 2x_1 & 2x_2 & 0 \end{array}\right] = -2x_4(x_2^2+x_1^2)$$ which is nonzero for $x_4 \neq 0$ and $(x_1,x_2) \neq (0,0)$. With this in mind, I return to the original system of equations and invert them with respect to the given conditions: \begin{align} x_1\, x_4&=y_1 \\ x_2\, x_4&=y_2 \\ x_1^2+x_2^2 &=y_3 \end{align} we wish to solve for $x_1,x_2,x_4$ in terms of $y_1,y_2,y_3$. Note, $x_1/x_2 = y_1/y_2$ thus $y_1^2+y_2^2 = x_4^2y_3$. Therefore, \begin{align} x_4 &= \pm \sqrt{(y_1^2+y_2^2)/y_3} \\ x_1 &= y_1/x_4 = \pm y_1/\sqrt{(y_1^2+y_2^2)/y_3} \\ x_2 &= y_2/x_4 = \pm y_2/\sqrt{(y_1^2+y_2^2)/y_3} \end{align} The formulas above give a pair of local inverses for $F$ restricted to the three-dimensional subset $S$ of $\mathbb{R}^4$ for which $(x_1,x_2,x_3,x_4)$ has $x_3=0$ and $x_4 \neq 0$ and $(x_1,x_2) \neq (0,0)$. In particular, if we denote $S=S_+ \cup S_-$ where $S_+$ has $x_4 >0$ and $S_-$ has points with $x_4<0$ then the formulas above define inverse functions for $F$ restricted to $S_\pm$.
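A quick numerical sanity check of these formulas (an illustrative Python sketch; the sample point is arbitrary):

```python
import math

def F(x1, x2, x3, x4):
    return (x1 * x4, x2 * x4, x1**2 + x2**2 - x3**2)

# Non-injectivity, as in the other answer: two different inputs, same output.
print(F(1, 0, 0, 0), F(0, 1, 0, 0))           # both give (0, 0, 1)

# Local inverse on S_+ (x3 = 0, x4 > 0) from the formulas above.
def local_inverse_plus(y1, y2, y3):
    x4 = math.sqrt((y1**2 + y2**2) / y3)
    return (y1 / x4, y2 / x4, 0.0, x4)

x = (1.5, -0.5, 0.0, 2.0)                      # a point of S_+
print(local_inverse_plus(*F(*x)))              # recovers (1.5, -0.5, 0.0, 2.0)
```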
• incidentally, it is worthwhile to remember, any function can be modified into a bijection by choosing a cross-section of the domain which hits each fiber of the function just once and by setting the range to be the codomain. Moreover, this is generally far from a unique process! – James S. Cook Aug 6 '15 at 16:00
• Thanks a lot for your comment! What I had in mind while asking was could we possibly obtain a continuous function that could give us a value of $x$ for every value of $y$ such that $F(x)=y$ even if a unique inverse was not available due to injectivity? – Leo Euler Aug 6 '15 at 17:52
• Glad to help, incidentally, what I claim here is a natural outgrowth of the inverse or implicit function theorem proof. You might get some ideas from en.wikipedia.org/wiki/… also, it's still a work in progress, but Chapter 5 of supermath.info/AdvancedCalculus13.pdf may be useful to you. – James S. Cook Aug 6 '15 at 18:42
Continuous bijections between $\mathbb R^m \rightarrow \mathbb R^n$ ( bijection is needed for the existence of a global inverse ) are not possible unless $n=m$: otherwise, restrict the domain to a closed, bounded ( so compact )ball) B .Then $h_|B \rightarrow h(B)$ is a continuous bijection between compact and Hausdorff, a homeomorphism, which cannot happen between subsets of $\mathbb R^n , \mathbb R^m ; n \neq m$ by, e.g., invariance of domain. | 2019-11-20 19:12:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989789724349976, "perplexity": 271.92888202307245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670601.75/warc/CC-MAIN-20191120185646-20191120213646-00168.warc.gz"} |
https://git.rockbox.org/cgit/rockbox.git/diff/manual/rockbox_interface/playback.tex?id=7bdd03a118a7f2f22e8ac03041e8f8b4e275adc5 |
Diffstat (limited to 'manual/rockbox_interface/playback.tex')
-rw-r--r--manual/rockbox_interface/playback.tex62
1 files changed, 35 insertions, 27 deletions
diff --git a/manual/rockbox_interface/playback.tex b/manual/rockbox_interface/playback.texindex 34a5c09a7e..2f01c22807 100644--- a/manual/rockbox_interface/playback.tex+++ b/manual/rockbox_interface/playback.tex@@ -22,7 +22,14 @@ setting. that are not available within the \setting{Tag Cache Browser}. Read more about \setting{Tag Cache} in \reference{ref:tagcache}. The remainder of this section deals with the \setting{File Browser}.} -\opt{ondio}{\fixme{Add information on hotplug/multivolume support}}+\opt{ondio}{+Unlike the Archos Firmware, Rockbox provides multivolume support for the +MultiMediaCard, this means the \dap{} can access both data volumes (internal +memory and the MMC), thus being able to for instance, build playlists with +files from both volumes.+In File Browser mode a new folder will appear as soon as the device has read +the content after inserting the card. This new folders name is generated as \fname{}, and will behave exactly as any other folder on the \dap{}.+} \subsection{\label{ref:controls}File Browser Controls} \begin{table}@@ -32,10 +39,10 @@ that are not available within the \setting{Tag Cache Browser}. Read more about entry, the cursor will wrap to the last/first entry.\\ % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,RECORDER_PAD}- {\ButtonOn+\ButtonUp/\ButtonDown}- \opt{PLAYER_PAD,IPOD_4G_PAD,IPOD_3G_PAD,IAUDIO_X5_PAD}{n/a}- \opt{ONDIO_PAD}{n/a}- & Move one page up/down on the list.\\+ {+ \ButtonOn+\ButtonUp/\ButtonDown+ & Move one page up/down on the list.\\+ } % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,RECORDER_PAD,IAUDIO_X5_PAD,ONDIO_PAD,IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonLeft} \opt{PLAYER_PAD}{\ButtonStop} @@ -158,21 +165,23 @@ invoked on a single track, it will put only that track into the playlist. On the other hand, if the \setting{Playlist Submenu} is invoked on a directory, Rockbox adds all of the tracks in that directory to the playlist. -\note{You can control whether or not Rockbox includes the contents of subdirectories-when adding an entire directory to a playlists. Set the \setting{Main Menu -$\rightarrow$ Playlist Options $\rightarrow$ Recusively Insert Directories} setting to -\setting{Yes} if you would like Rockbox to include tracks in subdirectories as well as tracks-in the currently-selected directory.}+\note{You can control whether or not Rockbox includes the contents of + subdirectories when adding an entire directory to a playlists. Set the + \setting{Main Menu $\rightarrow$ Playlist Options $\rightarrow$ Recusively + Insert Directories} setting to \setting{Yes} if you would like Rockbox to + include tracks in subdirectories as well as tracks in the currently-selected + directory.} -If you want to have Rockbox create a playlist of a whole folder (to play an entire -album, for example), use the \setting{File Browser} to select the song. When a single -song is selected from the \setting{File Browser}, Rockbox will automatically create a -playlist with all songs in the current folder. However, if you want to play only a single -song and then stop, stop playback, navigate to the song you want to play, and use the +If you want to have Rockbox create a playlist of a whole folder (to play an +entire album, for example), use the \setting{File Browser} to select the song. +When a single song is selected from the \setting{File Browser}, Rockbox will +automatically create a playlist with all songs in the current folder. 
However, +if you want to play only a single song and then stop, stop playback, navigate +to the song you want to play, and use the \setting{Playlist $\rightarrow$ Insert} function to select the song. -Dynamic playlists are saved so resume will restore them exactly as they were before-shutdown.+Dynamic playlists are saved so resume will restore them exactly as they were +before shutdown. \note{To view, save or reshuffle the current dynamic playlist, use the \setting{Playlist Options} setting in the WPS Context Menu.}@@ -253,31 +262,30 @@ This is the virtual keyboard that is used when entering file names in Rockbox. \end{table} } \opt{ondio}{- \textbf{Picker area}- \begin{table}- \begin{btnmap}{}{}+ \begin{table}+ \begin{btnmap}{Picker area}{} \ButtonUp/\ButtonDown/\ButtonLeft/\ButtonRight & Move about the virtual keyboard (moves the solid cursor). If you move out of the picker area with \ButtonUp/\ButtonDown, you get to the line edit mode. \\ \ButtonMenu & Selects the letter underneath the cursor. \\- Long press on \ButtonMenu - & Accepts the currently selected letter\\+ Hold \ButtonMenu + & Accepts the change and returns to the File Browser.\\ \ButtonOff - & Aborts the currently selected letter\\+ & Quit the virtual keyboard without saving the changes.\\ \end{btnmap} \end{table}- \textbf{Line edit mode} \begin{table}- \begin{btnmap}{}{}+ \begin{btnmap}{Line edit mode}{} \ButtonLeft/\ButtonRight & Move left and right\\ \ButtonMenu & Deletes the letter to the left of the cursor\\- Long press on \ButtonMenu & Accepts the deletion\\+ Hold \ButtonMenu & Accepts the deletion\\ \ButtonUp/\ButtonDown & Returns to the picker area\\ \end{btnmap} \end{table}-}\opt{player}{+}+\opt{player}{ The current filename is always listed on the first line of the display. The second line of the display can contain the character selection bar, as in the screenshot above, or one of a number of other options. | 2022-05-28 22:24:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5461095571517944, "perplexity": 12292.41019879877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663021405.92/warc/CC-MAIN-20220528220030-20220529010030-00216.warc.gz"} |
http://zbmath.org/?format=complete&q=an:05622792 | # zbMATH — the first resource for mathematics
The finite simple groups. (English) Zbl 1203.20012
Graduate Texts in Mathematics 251. London: Springer (ISBN 978-1-84800-987-5/hbk; 978-1-84800-988-2/ebook). xv, 298 p. EUR 49.95/net; £ 28.00; SFR 77.50 (2009).
This book is aimed at providing an overview of all the finite simple groups in one volume. The author intends to describe the groups in as much detail as possible within 300 pages, including concrete construction, calculation of the order and proof of simplicity, and discusses their actions on various geometrical, algebraic or combinatorial objects in order to study their properties. The author emphasizes connections between exceptional behavior of generic groups and the existence of sporadic groups.
After the publication of M. Aschbacher and S. D. Smith [The classification of quasithin groups. I, II. Mathematical Surveys and Monographs 111, 112. Providence, AMS (2004; Zbl 1065.20023, Zbl 1065.20024)], it has been widely believed that the proof of the classification of finite simple groups is complete. D. Gorenstein, R. Lyons and R. Solomon [The classification of the finite simple groups. Mathematical Surveys and Monographs, 40(1). Providence, AMS (1994; Zbl 0816.20016), 40(2) (1996; Zbl 0836.20011), 40(3) (1998; Zbl 0890.20012), 40(4) (1999; Zbl 0922.20021), 40(5) (2002; Zbl 1006.20012), 40(6) (2005; Zbl 1069.20011)] intend to present the whole proof of the classification theorem. However, the work is still in progress. The current status of the classification theorem can be found in Section 1.4 of the book.
The non-Abelian finite simple groups are divided into four types, namely,
(i) alternating groups ${A}_{n}$,
(ii) classical groups: linear ${\mathrm{PSL}}_{n}\left(q\right)$, $n\ge 2$, except ${\mathrm{PSL}}_{2}\left(2\right)$ and ${\mathrm{PSL}}_{2}\left(3\right)$; unitary ${\mathrm{PSU}}_{n}\left(q\right)$, $n\ge 3$, except ${\mathrm{PSU}}_{3}\left(2\right)$; symplectic ${\mathrm{PSp}}_{2n}\left(q\right)$, $n\ge 2$, except ${\mathrm{PSp}}_{4}\left(2\right)$; orthogonal $\text{P}{{\Omega }}_{2n+1}\left(q\right)$, $n\ge 3$, $q$ odd; $\text{P}{{\Omega }}_{2n}^{+}\left(q\right)$, $n\ge 4$; $\text{P}{{\Omega }}_{2n}^{-}\left(q\right)$, $n\ge 4$,
(iii) exceptional groups of Lie type: ${G}_{2}\left(q\right)$, $q\ge 3$, ${E}_{6}\left(q\right)$, ${E}_{7}\left(q\right)$, ${E}_{8}\left(q\right)$, ${F}_{4}\left(q\right)$, ${}^{3}{D}_{4}\left(q\right)$, ${}^{2}{E}_{6}\left(q\right)$, ${}^{2}{B}_{2}\left({2}^{2n+1}\right)$, $n\ge 1$, ${}^{2}{G}_{2}\left({3}^{2n+1}\right)$, $n\ge 1$, ${}^{2}{F}_{4}\left({2}^{2n+1}\right)$, $n\ge 1$, ${}^{2}{F}_{4}{\left(2\right)}^{\text{'}}$,
(iv) 26 sporadic groups: the five Mathieu groups ${M}_{11}$, ${M}_{12}$, ${M}_{22}$, ${M}_{23}$, ${M}_{24}$; the seven Leech lattice groups $C{o}_{1}$, $C{o}_{2}$, $C{o}_{3}$, $McL$, $HS$, $Suz$, ${J}_{2}$; the three Fischer groups $F{i}_{22}$, $F{i}_{23}$, $F{i}_{24}^{\text{'}}$; the five Monstrous groups $𝕄$, $𝔹$, $Th$, $HN$, $He$; the six pariahs ${J}_{1}$, ${J}_{3}$, ${J}_{4}$, ${O}^{\text{'}}N$, $Ly$, $Ru$.
Among the simple groups in the list, there are exceptional isomorphisms as follows: ${\mathrm{PSL}}_{2}\left(4\right)\cong {\mathrm{PSL}}_{2}\left(5\right)\cong {A}_{5}$, ${\mathrm{PSL}}_{2}\left(7\right)\cong {\mathrm{PSL}}_{3}\left(2\right)$, ${\mathrm{PSL}}_{2}\left(9\right)\cong {A}_{6}$, ${\mathrm{PSL}}_{4}\left(2\right)\cong {A}_{8}$, and ${\mathrm{PSU}}_{4}\left(2\right)\cong {\mathrm{PSp}}_{4}\left(3\right)$.
The book under review consists of five chapters. Chapter 1 is Introduction, which includes sections on the classification theorem, remarks on the proof of the classification theorem and how to read this book.
The author begins with the alternating groups in Chapter 2. Various properties of the alternating groups such as simplicity, determination of the outer automorphism group, construction of the non-split central extension and certain general subgroups are discussed. The author points out exceptional behavior of the symmetric group ${S}_{6}$ of degree 6 in the outer automorphism groups and that of the alternating groups ${A}_{6}$ and ${A}_{7}$ of degree 6 and 7 in the non-split central extensions. A proof of the O’Nan-Scott theorem concerning maximal subgroups of the symmetric groups is supplied. Moreover, the notion of reflection groups is introduced as a generalization of the symmetric groups, which are important both for the theory of groups of Lie type and for many sporadic groups. The classification of indecomposable finite real reflection groups by Coxeter is also explained.
The subsequent two chapters are devoted to the simple groups of Lie type. A remarkable feature is that the use of Lie algebras is kept minimal in describing those groups. Thus, it is different from R. W. Carter [Simple groups of Lie type. Pure and Applied Mathematics. Vol. XXVIII. London etc.: John Wiley & Sons (1972; Zbl 0248.20015)].
In Chapter 3 the author deals with classical groups, each of which is of the form ${G}^{\text{'}}/Z\left({G}^{\text{'}}\right)$ with $G$ a matrix group. Simplicity, the automorphism groups, certain subgroups and non-split central extensions of classical groups are studied. First, the author considers the general linear groups over finite fields. Their subgroups and flag varieties of underlying vector spaces are discussed. The notion of projective space is introduced. Using it the author proves the exceptional isomorphisms ${\mathrm{PSL}}_{2}\left(4\right)\cong {\mathrm{PSL}}_{2}\left(5\right)\cong {A}_{5}$ and ${\mathrm{PSL}}_{2}\left(9\right)\cong {A}_{6}$. Next, the author proceeds to bilinear, sesquilinear and quadratic forms which are used for the description of symplectic, unitary and orthogonal groups. A proof of a version of the Aschbacher-Dynkin theorem on maximal subgroups of classical groups is presented. The explicit statement for each family of classical groups is taken from P. Kleidman and M. Liebeck [The subgroup structure of the finite classical groups. London Mathematical Society Lecture Note Series 129. Cambridge etc.: Cambridge University Press (1990; Zbl 0697.20004)]. Furthermore, exceptional isomorphisms among low-dimensional classical groups are explained.
In Chapter 4 the ten families of the exceptional groups of Lie type are discussed. There are three ways to describe those groups, namely, the approaches via Lie algebras, via algebraic groups and via other algebras such as the octonion algebra and the exceptional Jordan algebra. The author takes the third one, an unconventional one for the sake of concrete calculations, and thus treats these ten families separately. First, the author gives an elementary description of the Suzuki group ${}^{2}{B}_{2}\left({2}^{2n+1}\right)$ as a group of $4×4$ matrices over the field $\mathrm{GF}\left({2}^{2n+1}\right)$ of ${2}^{2n+1}$ elements. Next, the author introduces the octonion algebras and studies the group ${G}_{2}\left(q\right)$. In particular, the exceptional isomorphism ${G}_{2}\left(2\right)\cong {\mathrm{PSU}}_{3}\left(3\right):2$ is proved by using an octonion algebra on the ${E}_{8}$-lattice. An octonion algebra in characteristic 3 is also used to construct ${}^{2}{G}_{2}\left({3}^{2n+1}\right)$ and show the isomorphism ${}^{2}{G}_{2}\left(3\right)\cong {\mathrm{PSL}}_{2}\left(8\right):3$. Moreover, the group ${}^{3}{D}_{4}\left(q\right)$ is constructed by using the twisted octonion algebra. As to the group ${F}_{4}\left(q\right)$, the author constructs it as the automorphism group of an exceptional Jordan algebra. Various properties of ${F}_{4}\left(q\right)$ are presented. Then the author proceeds to the group ${}^{2}{F}_{4}\left({2}^{2n+1}\right)$. The group ${E}_{6}\left(q\right)$ is constructed as the automorphism group of a cubic form, a similar construction as Dickson’s original one. The remaining groups ${}^{2}{E}_{6}\left(q\right)$, ${E}_{7}\left(q\right)$ and ${E}_{8}\left(q\right)$ are briefly mentioned.
Chapter 5, the final chapter, is devoted to the 26 sporadic simple groups. First, the author treats the five Mathieu groups in detail, together with the binary and ternary Golay codes and Steiner systems. Then the author proceeds to the Leech lattice and the seven members of Leech lattice groups. One section is used to explain the Suzuki chain. Next, the author describes the three Fischer groups as automorphism groups of the graphs whose vertices consist of transpositions. Parker’s loop is also discussed. Up to here the proofs for various properties of those sporadic groups are basically supplied. As to the remaining sporadic groups, namely, the five members of Monstrous groups and the six members of pariahs, the author restricts himself to state some important properties of the groups and indicate the outline of their proofs.
This book is a unique introductory overview of all the finite simple groups, and thus it is suitable not only for specialists who are interested in finite simple groups but also for advanced undergraduate and graduate students in algebra. The section entitled ‘Further reading’ at the end of each chapter is a nice guide to further study of the subjects.
##### MSC:
20D05 Finite simple groups and their classification 20-02 Research monographs (group theory) 20D08 Simple groups: sporadic finite groups 20D06 Simple groups: alternating groups and groups of Lie type | 2013-12-05 02:41:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 88, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7076107859611511, "perplexity": 621.3025486747063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163038307/warc/CC-MAIN-20131204131718-00034-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/50355/wanted-a-graph-g-without-bridges-whose-square-is-not-hamiltonian | # Wanted: a graph $G$ without bridges, whose square is not hamiltonian
Construct an example of a graph $G$ without bridges such that its square $G^2$ is non-Hamiltonian. Note: by Fleischner's Theorem (the square of every 2-connected graph is Hamiltonian), and since bridges are forbidden, the required graph must fail to be 2-connected, i.e. it should have at least one cut-vertex.
Please see mathoverflow.net/faq#whatnot – Yemon Choi Dec 25 '10 at 17:41
Or, if this is not homework/coursework, see mathoverflow.net/howtoask – Yemon Choi Dec 25 '10 at 17:42
No, the teacher said that this example exists, but he did not remember it. He also said that, as a conclusion, we obtain that Fleischner's Theorem cannot be improved. – Michael Dec 25 '10 at 17:44
I tried to find information on the Internet, but had no success – Michael Dec 25 '10 at 17:51
In this context the square of a graph $G$ has the same vertices but has edges between vertices if their distance in $G$ is 1 or 2. – Aaron Meyerowitz Dec 26 '10 at 6:30 | 2015-10-06 20:23:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458744525909424, "perplexity": 786.2698793312968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736679145.29/warc/CC-MAIN-20151001215759-00113-ip-10-137-6-227.ec2.internal.warc.gz"} |
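A small utility along these lines (an illustrative Python sketch) lets one test candidate graphs: it builds the square $G^2$ and brute-forces a Hamiltonian-cycle check, which is only practical for small vertex counts.

```python
import itertools

def graph_square(vertices, edges):
    """Square of a graph: join u and v whenever their distance in G is 1 or 2."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    sq = {v: set(adj[v]) for v in vertices}
    for u in vertices:
        for w in adj[u]:
            sq[u] |= adj[w] - {u}              # add the distance-2 neighbours
    return sq

def is_hamiltonian(vertices, adj):
    """Brute-force Hamiltonian-cycle test (factorial time; small graphs only)."""
    vs = list(vertices)
    first, rest = vs[0], vs[1:]
    for perm in itertools.permutations(rest):
        cycle = [first, *perm]
        if all(cycle[(i + 1) % len(cycle)] in adj[cycle[i]] for i in range(len(cycle))):
            return True
    return False

# sanity check on the 5-cycle, whose square is certainly Hamiltonian
V = range(5)
E = [(i, (i + 1) % 5) for i in range(5)]
print(is_hamiltonian(V, graph_square(V, E)))   # True
```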
https://socratic.org/questions/what-is-the-distance-between-4-7-2-9-and-2-6-5-3 | # What is the distance between (4.7, 2.9) and (-2.6, 5.3)?
The distance formula is $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$. When you plug in the given values, you get $d = \sqrt{(4.7 - (-2.6))^2 + (2.9 - 5.3)^2}$.
$d = \sqrt{53.29 + 5.76}$
$d = \sqrt{59.05}$
$d = 7.68$ | 2020-09-22 08:53:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9263656735420227, "perplexity": 451.7346695016768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400204410.37/warc/CC-MAIN-20200922063158-20200922093158-00481.warc.gz"} |
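A quick check of the arithmetic above (an illustrative Python sketch):

```python
import math

x1, y1 = 4.7, 2.9
x2, y2 = -2.6, 5.3
d = math.hypot(x2 - x1, y2 - y1)   # same as sqrt((x2-x1)**2 + (y2-y1)**2)
print(round(d, 2))                 # 7.68
```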
https://scicomp.stackexchange.com/questions/16179/discrete-optimization-on-a-cartesian-product-with-component-wise-increasing-obje | # Discrete optimization on a cartesian product with component-wise increasing objective function
The set-up is the following:
We have $K$ finite sets of real numbers,
i.e. sets $G_i$, $i = 1, \dotsc, K$, and $|G_i| = n_i < \infty$.
Furthermore, assume that we have a function
$$h: \mathbb R^K \to \mathbb R$$
which is monotonically increasing in each component.
Similarly, there is another function $$g: \mathbb R^K \to \mathbb R$$, which does not necessarily fulfill the monotonicity condition.
The optimization problem I want to solve is the following:
Find $$\max_{(x_1,\dotsc,x_K) \in G_1 \times \dotsc \times G_K} h(x_1,\dotsc, x_K)$$
subject to $$g(x_1,\dotsc, x_K) \leq c$$, where $c$ is a pre-specified constant. The naive approach takes $2\prod_{i=1}^{K}n_i$ function evaluations (evaluate $h$, check condition defined by $g$).
By how much can we improve this by using the componentwise monotonicity of $h$? What if we also assume that $g$ is also increasing in each component?
The monotonicity of $h$ doesn't help you much if you can't say anything about the shape of the feasible set defined by the constraint. Intuitively, for a monotonic objective function, you'd like to "go as far to the right as possible" in each coordinate, but if the constraint function has no properties, in each coordinate direction the feasible set may be disconnected intervals. On the other hand, if $g$ is also increasing in each component, then you know that the feasible set is connected and, I believe, in fact convex. That's a much easier problem to describe.
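To make the saving concrete when both functions are monotone, here is an illustrative Python sketch; it assumes $h$ and $g$ are increasing in every coordinate and that each $G_i$ is sorted ascending (the example objective and constraint at the bottom are arbitrary). For each choice of the first $K-1$ coordinates it prunes infeasible prefixes and binary-searches the largest feasible last coordinate, so the last factor costs about $\log n_K$ evaluations of $g$ plus one evaluation of $h$ instead of $2 n_K$ evaluations.

```python
import itertools

def constrained_max(G, h, g, c):
    """Maximize h over G_1 x ... x G_K subject to g(x) <= c,
    assuming h and g are increasing in every coordinate and each G[i] is sorted."""
    best_x, best_val = None, None
    *front, last = G
    for prefix in itertools.product(*front):
        # Prune: if even the smallest last coordinate violates the constraint,
        # monotonicity of g says no completion of this prefix is feasible.
        if g(*prefix, last[0]) > c:
            continue
        # Feasible last-coordinate indices form a prefix, so binary-search its end.
        lo, hi = 0, len(last) - 1
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if g(*prefix, last[mid]) <= c:
                lo = mid
            else:
                hi = mid - 1
        x = (*prefix, last[lo])
        val = h(*x)          # h increasing: the largest feasible x_K is best for this prefix
        if best_val is None or val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# toy example: h = sum, g = max, c = 5
G = [[1, 2, 3], [0, 4, 6], [2, 5, 7]]
print(constrained_max(G, lambda *x: sum(x), lambda *x: max(x), 5))   # ((3, 4, 5), 12)
```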
• "If $g$ is also increasing in each component, then you know that the feasible set is connected and, I believe, in fact convex." It's connected, but not necessarily convex: consider $g(x,y)=\sqrt x+\sqrt y$. – user3883 Nov 21 '14 at 9:52 | 2021-06-16 14:04:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8505187034606934, "perplexity": 206.45366701812847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00291.warc.gz"} |
https://socratic.org/questions/what-is-the-cross-product-of-3-0-5-and-3-6-4 | # What is the cross product of [3, 0, 5] and [3,-6,4] ?
Feb 19, 2016
$\left[3 , 0 , 5\right] \times \left[3 , - 6 , 4\right] = \left[30 , 3 , - 18\right]$
#### Explanation:
[i j k]
[3 0 5]
[3 -6 4]
To calculate the cross product, set the vectors out in a table as shown above. Then cover up the column for which you're calculating the value (e.g. if looking for the i value, cover the first column). Next take the product of the top value in the next column to the right and the bottom value of the remaining column. Subtract from this the product of the two remaining values. This has been carried out below, to show how it's done:
i = (0 × 4) - (5 × (-6)) = 0 - (-30) = 30
j = (5 × 3) - (3 × 4) = 15 - 12 = 3
k = (3 × (-6)) - (0 × 3) = -18 - 0 = -18
Therefore:
$\left[3 , 0 , 5\right] \times \left[3 , - 6 , 4\right] = \left[30 , 3 , - 18\right]$ | 2020-02-21 07:13:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7034896016120911, "perplexity": 534.4561830528071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145443.63/warc/CC-MAIN-20200221045555-20200221075555-00236.warc.gz"} |
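As a quick check of the computation above (an illustrative NumPy one-liner):

```python
import numpy as np

print(np.cross([3, 0, 5], [3, -6, 4]))   # [ 30   3 -18]
```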
http://www.mathworks.com/help/dsp/ref/dsp.lmsfilter-class.html?requestedDomain=www.mathworks.com&nocookie=true | # Documentation
# dsp.LMSFilter System object
Package: dsp
## Description
The `LMSFilter` implements an adaptive FIR filter object that returns the filtered output, the error vector, and filter weights. The LMS filter uses one of five different LMS algorithms.
To implement the adaptive FIR filter object:
1. Define and set up your adaptive FIR filter object. See Construction.
2. Call `step` to implement the filter according to the properties of `dsp.LMSFilter`. The behavior of `step` is specific to each object in the toolbox.
Note: Starting in R2016b, instead of using the `step` method to perform the operation defined by the System object™, you can call the object with arguments, as if it were a function. For example, ```y = step(obj,x)``` and `y = obj(x)` perform equivalent operations.
## Construction
`H = dsp.LMSFilter` returns an adaptive FIR filter object, `H`, that computes the filtered output, filter error and the filter weights for a given input and desired signal using the Least Mean Squares (LMS) algorithm.
`H = dsp.LMSFilter('PropertyName', PropertyValue,...)` returns an LMS filter object, `H`, with each property set to the specified value.
`H = dsp.LMSFilter(LEN,'PropertyName',PropertyValue,...)` returns an LMS filter object, `H`, with the `Length` property set to `LEN`, and other specified properties set to the specified values.
## Properties
`Method`: Method to calculate filter weights. Specify the method used to calculate filter weights as `LMS`, `Normalized LMS`, `Sign-Error LMS`, `Sign-Data LMS`, or `Sign-Sign LMS`. The default is `LMS`.

`Length`: Length of FIR filter weights vector. Specify the length of the FIR filter weights vector as a positive integer. The default is `32`.

`StepSizeSource`: How to specify adaptation step size. Choose how to specify the adaptation step size factor as `Property` or `Input port`. The default is `Property`.

`StepSize`: Adaptation step size. Specify the adaptation step size factor as a nonnegative real number. For convergence of the normalized LMS method, set the step size greater than `0` and less than `2`. This property only applies when the `StepSizeSource` property is `Property`. The default is `0.1`. This property is tunable.

`LeakageFactor`: Leakage factor used in LMS filter. Specify the leakage factor as a real number between `0` and `1` inclusive. A leakage factor of `1` corresponds to no leakage in the adapting method. The default is `1`. This property is tunable.

`InitialConditions`: Initial conditions of filter weights. Specify the initial values of the FIR filter weights as a scalar or vector of length equal to the `Length` property value. The default is `0`.

`AdaptInputPort`: Enable weight adaptation. Specify when the LMS filter should adapt the filter weights. By default, the value of this property is `false`, and the object continuously updates the filter weights. When this property is set to `true`, an adaptation control input is provided to the `step` method. If the value of this input is nonzero, the object continuously updates the filter weights. If the input is zero, the filter weights remain at their current value.

`WeightsResetInputPort`: Enable weight reset. Specify when the LMS filter should reset the filter weights. By default, the value of this property is `false`, and the object does not reset the weights. When this property is set to `true`, a reset control input is provided to the `step` method, and the `WeightsResetCondition` property applies. The object resets the filter weights based on the values of the `WeightsResetCondition` property and the `reset` input to the `step` method.

`WeightsResetCondition`: Reset trigger setting for filter weights. Specify the event to reset the filter weights as `Rising edge`, `Falling edge`, `Either edge`, or `Non-zero`. The LMS filter resets the filter weights based on the values of this property and the `reset` input to the `step` method. This property only applies when the `WeightsResetInputPort` property is `true`. The default is `Non-zero`.

`WeightsOutputPort`: Enable returning filter weights. Set this property to `true` to output the adapted filter weights. The default is `true`.
## Methods
`clone`: Create LMS filter object with same property values
`getNumInputs`: Number of expected inputs to step method
`getNumOutputs`: Number of outputs of step method
`isLocked`: Locked status for input attributes and nontunable properties
`maxstep`: Maximum step size for LMS adaptive filter convergence
`msepred`: Predicted mean-square error for LMS filter
`msesim`: Mean-squared error for LMS filter
`release`: Allow property value and input characteristics changes
`reset`: Reset filter states for LMS filter
`step`: Apply LMS adaptive filter to input
## Examples
```
lms1 = dsp.LMSFilter(11,'StepSize',0.01);
filt = dsp.FIRFilter;              % System to be identified
filt.Numerator = fir1(10,.25);
x = randn(1000,1);                 % input signal
d = filt(x) + 0.01*randn(1000,1);  % desired signal
[y,e,w] = lms1(x,d);
subplot(2,1,1);
plot(1:1000, [d,y,e]);
title('System Identification of an FIR filter');
legend('Desired', 'Output', 'Error');
xlabel('time index');
ylabel('signal value');
subplot(2,1,2);
stem([filt.Numerator.',w]);
legend('Actual','Estimated');
xlabel('coefficient #');
ylabel('coefficient value');
```
```
lms2 = dsp.LMSFilter('Length',11, ...
    'Method','Normalized LMS',...
    'AdaptInputPort',true, ...
    'StepSizeSource','Input port', ...
    'WeightsOutputPort',false);
filt2 = dsp.FIRFilter('Numerator', fir1(10,[.5, .75]));
x = randn(1000,1);                 % Noise
d = filt2(x) + sin(0:.05:49.95)';  % Noise + Signal
a = 1;                             % adaptation control
mu = 0.05;                         % step size
[y, err] = lms2(x,d,mu,a);
subplot(2,1,1);
plot(d);
title('Noise + Signal');
subplot(2,1,2);
plot(err);
title('Signal');
```
## Algorithms
This filter's algorithm is defined by the following equations.
$$\begin{aligned} y(n) &= w^{T}(n-1)\,u(n) \\ e(n) &= d(n) - y(n) \\ w(n) &= \alpha\, w(n-1) + f(u(n), e(n), \mu) \end{aligned}$$
The various LMS adaptive filter algorithms available in this System object are defined as:
• LMS:
$f(u(n), e(n), \mu) = \mu\, e(n)\, u^{*}(n)$
• Normalized LMS:
$f(u(n), e(n), \mu) = \mu\, e(n)\, \dfrac{u^{*}(n)}{\epsilon + u^{H}(n)\, u(n)}$
• Sign-Error LMS:
$f(u(n), e(n), \mu) = \mu\, \operatorname{sign}(e(n))\, u^{*}(n)$
• Sign-Data LMS:
$f(u(n), e(n), \mu) = \mu\, e(n)\, \operatorname{sign}(u(n))$
where u(n) is real.
• Sign-Sign LMS:
$f(u(n), e(n), \mu) = \mu\, \operatorname{sign}(e(n))\, \operatorname{sign}(u(n))$
where u(n) is real.
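To connect these update equations to code, the following is a minimal NumPy sketch of the plain LMS variant. It mirrors the equations above for real-valued signals (so $u^{*}(n) = u(n)$) but is only an illustration, not the `dsp.LMSFilter` implementation; the filter length, step size, and test signals are arbitrary.

```python
import numpy as np

def lms(x, d, L=32, mu=0.1, alpha=1.0):
    """Plain LMS: y(n) = w(n-1)^T u(n), e(n) = d(n) - y(n),
    w(n) = alpha*w(n-1) + mu*e(n)*u(n)."""
    w = np.zeros(L)
    y = np.zeros_like(x)
    e = np.zeros_like(x)
    u = np.zeros(L)                        # buffer of the L most recent input samples
    for n in range(len(x)):
        u = np.concatenate(([x[n]], u[:-1]))
        y[n] = w @ u
        e[n] = d[n] - y[n]
        w = alpha * w + mu * e[n] * u      # f(u, e, mu) = mu * e(n) * u(n) for plain LMS
    return y, e, w

# identify an unknown FIR system from its noisy output
rng = np.random.default_rng(0)
h_true = rng.standard_normal(8)
x = rng.standard_normal(4000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
_, _, w_hat = lms(x, d, L=8, mu=0.05)
print(np.round(w_hat - h_true, 2))         # close to zero once the filter has converged
```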
The variables are as follows:
Variable | Description
n | The current time index
u(n) | The vector of buffered input samples at step n
u*(n) | The complex conjugate of the vector of buffered input samples at step n
w(n) | The vector of filter weight estimates at step n
y(n) | The filtered output at step n
e(n) | The estimation error at step n
d(n) | The desired response at step n
µ | The adaptation step size
αThe leakage factor (0 < α ≤ 1) | 2016-10-24 04:05:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6710289120674133, "perplexity": 2336.431225022031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719465.22/warc/CC-MAIN-20161020183839-00540-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Divisor_(algebraic_geometry) | # Divisor (algebraic geometry)
In algebraic geometry, divisors are a generalization of codimension one subvarieties of algebraic varieties; two different generalizations are in common use, Cartier divisors and Weil divisors (named for Pierre Cartier and André Weil). Both are ultimately derived from the notion of divisibility in the integers and algebraic number fields.
Cartier divisors and Weil divisors are parallel notions. Weil divisors are codimension one objects, while Cartier divisors are locally described by a single equation. On non-singular varieties, these two are identical, but when the variety has singular points, the two can differ. An example of a surface on which the two concepts differ is a cone, i.e. a singular quadric. At the (unique) singular point, the vertex of the cone, a single line drawn on the cone is a Weil divisor, but is not a Cartier divisor (since it is not locally principal).
The divisor appellation is part of the history of the subject, going back to the Dedekind–Weber work which in effect showed the relevance of Dedekind domains to the case of algebraic curves.[1] In that case the free abelian group on the points of the curve is closely related to the fractional ideal theory.
An algebraic cycle is a higher-dimensional generalization of a divisor; by definition, a Weil divisor is a cycle of codimension one.
## Divisors in a Riemann surface
A Riemann surface is a 1-dimensional complex manifold, so its codimension 1 submanifolds are 0-dimensional. The divisors of a Riemann surface are the elements of the free abelian group over the points of the surface.
Equivalently, a divisor is a finite linear combination of points of the surface with integer coefficients. The degree of a divisor is the sum of its coefficients.
We define the divisor of a meromorphic function f as
$(f):=\sum_{z_\nu \in R(f)} s_\nu z_\nu$
where R(f) is the set of all zeroes and poles of f, and sν is given by
$s_\nu := \left\{ \begin{array}{rl} a & \ \text{if } z_\nu \text{ is a zero of order }a \\ -a & \ \text{if } z_\nu \text{ is a pole of order }a. \end{array} \right.$
A divisor that is the divisor of a meromorphic function is called principal. On a compact Riemann surface, a meromorphic function has as many poles as zeroes, and therefore on such surfaces the degree of a principal divisor is 0. Since the divisor of a product is the sum of the divisors, the set of principal divisors is a subgroup of the group of divisors. Two divisors that differ by a principal divisor are called linearly equivalent.
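As a concrete illustration (an added example): take $f(z) = \frac{z-1}{(z+1)^2}$ as a meromorphic function on the Riemann sphere. It has a simple zero at $z = 1$, a pole of order 2 at $z = -1$, and a simple zero at $z = \infty$, so
$(f) = [1] - 2[-1] + [\infty]$
and the degree is $1 - 2 + 1 = 0$, as expected for a principal divisor on a compact Riemann surface.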
We define the divisor of a meromorphic 1-form similarly. Since the space of meromorphic 1-forms is a 1-dimensional vector space over the field of meromorphic functions, any two meromorphic 1-forms yield linearly equivalent divisors. The equivalence class of these divisors is called the canonical divisor (usually denoted K).
The Riemann–Roch theorem is an important relation between the divisors of a Riemann surface and its topology.
## Weil divisor
### Definition
Let X be an algebraic variety over a field.[2] A Weil divisor on X is a finite linear combination with integral coefficients of irreducible closed subsets Z of X of codimension one:
$\sum_Z n_Z [Z]$
where only finitely many nZ are nonzero. For example, a divisor on an algebraic curve is a formal sum of its closed points. The degree of a divisor is the sum of its coefficients. An effective Weil divisor is one in which all the coefficients of the formal sum are non-negative. One writes D ≥ D' if the difference D - D' is effective.
If ZX is an irreducible closed subset of codimension one, then the local ring Oη at the generic point of Z comes with the order-of-vanishing function ordZ; namely, $\operatorname{ord}_Z(f) = \operatorname{length}_{\mathcal{O}_{\eta}}({\mathcal{O}_{\eta}}/f{\mathcal{O}_{\eta}})$ if f is regular at the generic point of Z.[3] If f is a nonzero rational function on X, one then puts:
$(f) = \sum_Z \operatorname{ord}_Z (f) [Z]$
This is called the principal (Weil) divisor generated by f. For it to be a divisor, it needs to be a finite sum; this can be seen from the geometric interpretation of the construction. If Z is a closed subscheme (need not be reduced) of X of codimension one, then Z determines an effective (Weil) divisor:
$[Z] = \sum_{V_i \subset Z} \operatorname{length}_{\mathcal{O}_Z}({\mathcal{O}_{Z, \eta_i}}) [V_i]$
where $V_i$ are the irreducible closed subsets of Z of codimension zero and $\eta_i$ is the generic point of $V_i$ in Z; in other words, [Z] counts irreducible components of Z with the multiplicities (the construction actually works for any closed subscheme Z and in general [Z] is called the fundamental cycle of Z.) In particular, if Z is the zero-locus of a regular function f, then (f) = [Z]. If $f = g/h$ is a fraction of regular functions (and a rational function is locally of such a fraction), then $(f) = (g) - (h)$, so $(f)$ is defined by counting zeros of f with positive multiplicity and poles of f with negative multiplicity.
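For instance (an added example): on the affine plane $X = \mathbf{A}^2$ with coordinates x, y, the rational function $f = x^2/y$ yields
$(f) = 2[\{x = 0\}] - [\{y = 0\}],$
a zero of multiplicity two along the line x = 0 and a simple pole along the line y = 0.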
If D is a Weil divisor on X and if X is normal, then the sheaf O(D) on X is defined by:[4] for any open subset U of X,
$\Gamma(U, O(D)) = \{ f \in k(X) | f = 0 \text{ or }(f) + D \ge 0 \text{ on } U \}$
where k(X) is the field of rational functions on X. If D is principal, given by, say, a rational function g, then O(D) is isomorphic to the structure sheaf OX of X via $f \mapsto fg$. Conversely, if O(D) is free, then D is principal. It follows that D is locally principal if and only if O(D) is locally free of rank one; i.e., an invertible sheaf.
If X is locally factorial; i.e., local rings are unique factorization domains, which is the case for example when X is smooth, then D is locally principal and so O(D) is invertible. In general, however, a Weil divisor need not be locally principal (which amounts to being Cartier). The standard example is the following:[5] Let X be the quadric cone $z^2 = xy$, and D the line y = z = 0, a ruling of the cone; D is not principal near the origin.
In general, when X is normal, the sheaf O(D) is a reflexive sheaf and any reflexive sheaf of the form O(D) is called a divisorial sheaf (for the non-normal case, see the "reflexive sheaf" article.)
### Divisor class group
Let Div(X) be the abelian group of Weil divisors on X. Since principal divisors form a subgroup, one can form the quotient group:
$\operatorname{Cl}(X) = \operatorname{Div}(X)/\{ (f) | f \in k(X)^* \}$
called the divisor class group of X. Two divisors are said to be linearly equivalent if they belong to the same divisor class.
Let Z be a closed subset of X. If Z is irreducible of codimension one, then there is an exact sequence
$\mathbb{Z} \overset{1 \mapsto [Z]}\to \operatorname{Cl}(X) \overset{j^*}\to \operatorname{Cl}(X - Z) \to 0$
where $j^*$ is the restriction map along the inclusion $j: X - Z \hookrightarrow X$. If Z has codimension ≥ 2, then $\operatorname{Cl}(X) \overset{j^*}\underset{\sim}\to \operatorname{Cl}(X - Z)$ is an isomorphism. (These facts are special cases of the excision exact sequence for Chow groups.[6] The sequence can be extended further to the left using Bloch's higher Chow groups.)
Example: Take X = Pn to be a projective space and H to be the coordinate hyperplane in it, say, x0 = 0. Then
$\mathbb{Z} \to \operatorname{Cl}(\mathbf{P}^n) \overset{j^*}\to \operatorname{Cl}(\mathbf{P}^n - H) \to 0$
Here, the last term $\operatorname{Cl}(\mathbf{P}^n - H) = \operatorname{Cl}(\mathbf{A}^n) = 0$ since the coordinate ring of the affine space $\mathbf{A}^n$ is a polynomial ring, which is a unique factorization domain and has trivial divisor class group. With a bit more work, one indeed gets: $\operatorname{Cl}(\mathbf{P}^n) \simeq \mathbb{Z}$.
If s is a nonzero section of some line bundle on X, then we let $(s)$ be the divisor defined by the zero-locus of s with multiplicities; it is called the divisor cut out by s. As for rational functions, this construction generalizes to rational sections s by defining $(s)$ to be:
$(s) = \sum_{V_i \subset X} \operatorname{ord}_{V_i}(s) [V_i],$
where $\operatorname{ord}_{V_i}(s)$ is the order of vanishing of s along Vi, which is defined by identifying s with a rational function by means of local trivialization. (This is well-defined; the transition functions locally have no zero and poles. Equivalently, changing the trivialization does not change the local ring $\mathcal{O}_{X, \eta_i}$ at the generic point.)
Any line bundle admits a nonzero rational section (by local triviality) and, moreover, different choices differ by a nonzero rational function. Thus, there is a well-defined group homomorphism:
$\operatorname{Pic}(X) \to \operatorname{Cl}(X)$
where Pic(X) is the Picard group of X and tensor product corresponds to addition. It is injective if X is normal. The image consists of classes of locally principal Weil divisors. In particular, X is locally factorial if and only if the map is an isomorphism; the inverse being $D \mapsto \mathcal{O}(D)$ (continuing the previous example one gets: Pic(Pn) = Cl(Pn) = Z.)
Let X be a normal variety over a perfect field. The smooth points (or the regular points) form an open dense subset $X_{\text{reg}}$. Writing j for the inclusion $X_{\text{reg}} \hookrightarrow X$, we have the restriction homomorphism:
$j^*: \operatorname{Cl}(X) \overset{\sim}\to \operatorname{Cl}(X_{\text{reg}}) = \operatorname{Pic}(X_{\text{reg}}),$
which is an isomorphism since $X - X_{\text{reg}}$ has codimension ≥ 2 as X is normal. For example, one can use this isomorphism to define the canonical divisor $K_X$ on X to be the one (up to linear equivalence) corresponding to the sheaf of differential forms of top degree; in symbols, $\mathcal{O}(j^*K_X) = \Omega^{\dim X}_{X_{\text{reg}}}$.
## Cartier divisor
A Cartier divisor in an algebraic variety X (see the paragraph below for the scheme case) can be represented by an open cover by affine subsets ${U_i}$ of X, and a collection of rational functions $f_i$ defined on $U_i$. The functions must be compatible in this sense: on the intersection of two sets in the cover, the quotient of the corresponding rational functions should be regular and invertible. A Cartier divisor is said to be effective if these $f_i$ can be chosen to be regular functions, and in this case the Cartier divisor defines an associated subvariety of codimension 1 by forming the ideal sheaf generated locally by the $f_i$.
The notion can also be described with the abstract function field instead of rational functions: in this setup X can be any scheme. For each affine open subset U, define M′(U) to be the total quotient ring of OX(U). Because the affine open subsets form a basis for the topology on X, this defines a presheaf on X. (This is not the same as taking the total quotient ring of OX(U) for arbitrary U, since that does not define a presheaf.[7]) The sheaf MX of rational functions on X is the sheaf associated to the presheaf M′, and the quotient sheaf MX* / OX* is the sheaf of local Cartier divisors.
A Cartier divisor is a global section of the quotient sheaf MX*/OX*. We have the exact sequence $1 \to \mathcal O^*_X \to M^*_X \to M^*_X / \mathcal O^*_X \to 1$, so, applying the global section functor $\Gamma (X, \bullet)$ gives the exact sequence $1 \to \Gamma (X, O^*_X) \to \Gamma (X, M^*_X) \to \Gamma (X, M^*_X / \mathcal O^*_X) \to H^1(X, \mathcal O^*_X)$.
A Cartier divisor is said to be principal if it is in the range of the morphism $\Gamma (X, M^*_X) \to \Gamma (X, M^*_X / \mathcal O^*_X)$, that is, if it is the class of a global rational function.
### Cartier divisors in nonrigid sheaves
Of course the notion of Cartier divisors exists in any sheaf (any ringed space). But if the sheaf is not rigid enough, the notion tends to lose some of its interest. For example in a fine sheaf (e.g. the sheaf of real-valued continuous, or smooth, functions on an open subset of a euclidean space, or locally homeomorphic, or diffeomorphic, to such a set, such as a topological manifold), any local section is a divisor of 0, so that the total quotient sheaves are zero, so that the sheaf contains no non-trivial Cartier divisor.
## From Cartier divisors to Weil divisor
There is a natural homomorphism from the group of Cartier divisors to that of Weil divisors, which is an isomorphism for integral separated Noetherian schemes provided that all local rings are unique factorization domains.
## From Cartier divisors to line bundles
The notion of transition map associates naturally to every Cartier divisor D a line bundle (strictly, invertible sheaf) commonly denoted by $\mathcal O_X(D)$ or sometimes also $\mathcal L(D)$.
The line bundle $\mathcal L (D)$ associated to the Cartier divisor D is the sub-bundle of the sheaf MX of rational fractions described above whose stalk at $x \in X$ is given by $D_x \in \Gamma (x, M^*_X/\mathcal O^*_X)$ viewed as a line on the stalk at x of $\mathcal O_X$ in the stalk at x of $M_X$. The subsheaf thus described is tautologically locally free of rank one (locally generated by a single section) over the structure sheaf $\mathcal O_X$.
The mapping $D \mapsto \mathcal L (D)$ is a group homomorphism: the sum of divisors corresponds to the tensor product of line bundles, and isomorphism of bundles corresponds precisely to linear equivalence of Cartier divisors. The group of divisor classes modulo linear equivalence therefore injects into the Picard group. The mapping is not surjective for all compact complex manifolds, but surjectivity does hold for all smooth projective varieties. The latter is true because, by the Kodaira embedding theorem, the tensor product of any line bundle with a sufficiently high power of any positive line bundle becomes ample; thus, on any such manifold, any line bundle is the formal difference between two ample line bundles, and any ample line bundle may be viewed as an effective divisor.
## Global sections of line bundles and linear systems
Recall that the local equations of a Cartier divisor $D$ in a variety $X$ give rise to transition maps for a line bundle $\mathcal L (D)$, and linear equivalences induce isomorphism of line bundles.
Loosely speaking, a Cartier divisor D is said to be effective if it is the zero locus of a global section of its associated line bundle $\mathcal L(D)$. In terms of the definition above, this means that its local equations coincide with the equations of the vanishing locus of a global section.
From the divisor linear equivalence/line bundle isomorphism principle, a Cartier divisor is linearly equivalent to an effective divisor if, and only if, its associated line bundle $\mathcal L (D)$ has non-zero global sections. Two collinear non-zero global sections have the same vanishing locus, and hence the projective space $\mathbb P \Gamma (X, \mathcal L (D))$ over k identifies with the set of effective divisors linearly equivalent to $D$.
If $X$ is a projective (or proper) variety over a field $k$, then $\Gamma (X, \mathcal L (D))$ is a finite-dimensional $k$-vector space, and the associated projective space over $k$ is called the complete linear system of $D$. Its linear subspaces are called linear systems of divisors. The Riemann-Roch theorem for algebraic curves is a fundamental identity involving the dimension of complete linear systems in the setup of projective curves.
## ℚ-divisors
Let X be a normal variety. A (Weil) $\mathbb{Q}$-divisor is a finite formal linear combination of irreducible subvarieties of codimension one of X with rational coefficients. (An $\mathbb{R}$-divisor is defined similarly.) A $\mathbb{Q}$-divisor is called effective if the coefficients are nonnegative. A $\mathbb{Q}$-divisor is called $\mathbb{Q}$-Cartier if some integral multiple of it is a Cartier divisor. If X is smooth, then any $\mathbb{Q}$-divisor is $\mathbb{Q}$-Cartier.
If $D = \sum a_j Z_j$ is a $\mathbb{Q}$-divisor, then its integer part is the divisor
$\sum [a_j] Z_j$
where $[a_j]$ are the integer parts of $a_j$. O(D) is then defined as O of the integer part of D.
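For instance (an added example): if $D = \tfrac{3}{2} Z_1 + \tfrac{1}{3} Z_2$ for distinct prime divisors $Z_1, Z_2$, then the integer part of D is $1 \cdot Z_1 + 0 \cdot Z_2 = Z_1$, and O(D) is O(Z_1).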
See also: multiplier ideal. For some examples, see the following MathOverflow Post
## Relative Cartier divisors
An effective Cartier divisor in a scheme X over a ring R is a closed subscheme D of X such that (1) D is flat over R and (2) the ideal sheaf $I(D)$ of D is locally free of rank one (i.e., an invertible sheaf). Equivalently, a closed subscheme D of X is an effective Cartier divisor if there is an open affine cover $U_i = \operatorname{Spec} A_i$ of X and nonzerodivisors $f_i \in A_i$ such that the intersection $D \cap U_i$ is given by the equation $f_i = 0$ (called local equations), the quotient $A_i / f_i A_i$ is flat over R, and the local equations are compatible on overlaps.
• If D and D' are effective Cartier divisors, then the sum $D + D'$ is the effective Cartier divisor defined locally as $fg = 0$ if f, g give local equations for D and D' .
• If D is an effective Cartier divisor and $R \to R'$ is a ring homomorphism, then $D \times_R R'$ is an effective Cartier divisor in $X \times_R R'$.
• If D is an effective Cartier divisor and $f: X' \to X$ a flat morphism over R, then $D' = D \times_X X'$ is an effective Cartier divisor in X' with the ideal sheaf $I(D') = f^* (I(D))$.
Taking $I(D)^{-1} \otimes_{\mathcal{O}_X} -$ of $0 \to I(D) \to \mathcal{O}_X \to \mathcal{O}_D \to 0$ gives the exact sequence
$0 \to \mathcal{O}_X \to I(D)^{-1} \to I(D)^{-1} \otimes \mathcal{O}_D \to 0$.
This allows one to see global sections of $\mathcal{O}_X$ as global sections of $I(D)^{-1}$. In particular, the constant 1 on X can be thought of as a section of $I(D)^{-1}$ and D is then the zero locus of this section. Conversely, if $L$ is a line bundle on X and s a global section of it that is a nonzerodivisor on $\mathcal{O}_X$ and if $L/\mathcal{O}_X$ is flat over R, then $s = 0$ defines an effective Cartier divisor whose ideal sheaf is isomorphic to the inverse of L.
From now on suppose X is a smooth curve (still over R). Let D be an effective Cartier divisor in X and assume it is proper over R (which is immediate if X is proper.) Then $\Gamma(D, \mathcal{O}_D)$ is a locally free R-module of finite rank. This rank is called the degree of D and is denoted by $\operatorname{deg} D$. It is a locally constant function on $\operatorname{Spec} R$. If D and D' are proper effective Cartier divisors, then $D + D'$ is proper over R and $\operatorname{deg}(D + D') = \operatorname{deg}(D) + \operatorname{deg}(D')$. Let $f: X' \to X$ be a finite flat morphism. Then $\operatorname{deg}(f^* D) = \operatorname{deg}(f) \operatorname{deg}(D)$.[8] On the other hand, a base change does not change degree: $\operatorname{deg}(D \times_R R') = \operatorname{deg}(D)$.[9]
A closed subscheme D of X is finite, flat and of finite presentation if and only if it is an effective Cartier divisor that is proper over R.[10]
## Notes
1. ^ Section VI.6 of Dieudonné (1985).
2. ^ More generally, one can take X to be a Noetherian integral scheme over a field.
3. ^ This is an algebraic fact. If A is a one-dimensional Noetherian local ring, then for any non-zerodivisor f, $A/fA$ is an Artinian A-module (the support has dimension zero) and thus the length
$\operatorname{ord}(f) = \operatorname{length}_A(A/fA)$
is a finite number. For any non-zerodivisor $f/g$ in the total ring of fractions Q(A) of A, one then puts ord(f/g) = ord(f) - ord(g). One can show it is well-defined and that $\operatorname{ord}: Q(A) - \{0\} \to \mathbb{Z}$ is a group homomorphism. Finally, if A is a discrete valuation ring, then this ord is the valuation defined by fixing a generator of the maximal ideal.
4. ^ Vakil, Math 216: Foundations of algebraic geometry, Definition 14.2.2.
5. ^ Hartshorne, Ch. II, Example 6.5.2.
6. ^ That is,
$A_k(Z) \to A_k(X) \to A_k(X - Z) \to 0$
where $A_k$ is the k-th Chow group.
7. ^ Kleiman, p. 203
8. ^ Katz–Mazur 1985, Lemma 1.2.8.
9. ^ Katz–Mazur 1985, Lemma 1.2.9.
10. ^ Katz–Mazur 1985, Lemma 1.2.3. | 2015-11-28 05:56:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 118, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963823676109314, "perplexity": 243.72055404791138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451648.66/warc/CC-MAIN-20151124205411-00159-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://emacs.stackexchange.com/questions/47874/fastest-way-to-draw-pixels-in-emacs/47875 | # Fastest way to draw pixels in Emacs
I'm currently working on a CHIP-8 emulator. This platform requires me to draw 64x32 black/white pixels, ideally at a speed between 30 and 60 frames per second. There's an extension to it that increases the possible resolution to 128x64 black/white pixels. At this speed I've found it necessary to avoid needless consing, this severely limits my options. My experiences so far:
• Generating SVG on the fly. My preferred library for this keeps creating tons of lists. The alternative would be using a template string, however I'm afraid editing it will create new strings, creating a comparable amount of garbage in the process.
• Inserting lines of colored text, then moving across them and deleting/inserting differently colored text if needed. This is my current approach. It works fast enough at 64x32 pixels, but turned out to be too slow at 128x64 pixels. I can imagine that changing the properties of text might help, but haven't tested it yet.
• Inserting a XBM image backed by a bool vector, then changing the bool vector's contents and forcing Emacs to redraw the image. The latter part is particularly painful as you need to (force-window-update BUFFER) and (image-flush SPEC) to make changes happen. Furthermore, not all changes are visible for a yet to be determined reason, the overall experience isn't nearly as smooth as with the previous solution. If drawn at a scale comparable to the text solution, the speed is in fact a bit worse. I've yet to test it at 128x64 pixels.
Are there any other possible solutions or tweaks I've overlooked? I'd prefer to stay within the options of vanilla Emacs before venturing into creating my own C module for creating a canvas I can quickly paint to... | 2019-10-16 10:41:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4740859568119049, "perplexity": 1459.4315524949252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00440.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-p-section-p-4-polynomials-exercise-set-page-56/8 | ## Precalculus (6th Edition) Blitzer
Write the terms polynomial in standard form (descending degree) to obtain $15x^4-8x^3+x^2+91.$ The degree of a polynomial is equal to the degree of the term with the highest degree. The degree of a term with one variable is equal to the exponent of the variable. Thus, the terms of the given polynomial have the following degrees: First term: $4$ Second Term: $3$ Third Term: $2$ Fourth Term: $0$ ( the degree of a constant is 0) The highest degree is 4, therefore the degree of the polynomial is $4$. | 2018-10-18 22:06:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6814751029014587, "perplexity": 176.83201297718747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512015.74/warc/CC-MAIN-20181018214747-20181019000247-00512.warc.gz"} |
http://hal.in2p3.fr/in2p3-00130927 | # Relativistic Transport Theory for Systems Containing Bound States
Abstract : Using a Lagrangian which contains quarks as elementary degrees of freedom and mesons as bound states, a transport formalism is developed, which allows for a dynamical transition from a quark plasma to a state, where quarks are bound into hadrons. Simultaneous transport equations for both particle species are derived in a systematic and consistent fashion. For the mesons a formalism is used which introduces off-shell corrections to the off-diagonal Green functions. It is shown that these off-shell corrections lead to the appearance of elastic quark scattering processes in the collision integral. The interference of the processes $q\bar q\to\pi$ and $q\bar q\to\pi\to q\bar q$ leads to a modification of the $s$-channel amplitude of quark-antiquark scattering.
Document type :
Journal articles
Contributor : Dominique Girod <>
Submitted on : Wednesday, February 14, 2007 - 2:20:00 PM
Last modification on : Friday, June 22, 2018 - 9:33:10 AM
### Citation
P. Rehberg. Relativistic Transport Theory for Systems Containing Bound States. Physical Review C, American Physical Society, 1998, 57, pp.3299-3313. ⟨in2p3-00130927⟩
Record views | 2021-06-13 06:04:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4211571514606476, "perplexity": 1921.4826820638696}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487600396.21/warc/CC-MAIN-20210613041713-20210613071713-00422.warc.gz"} |
https://mxnet.apache.org/versions/1.4.1/faq/distributed_training.html | # Distributed Training in MXNet¶
MXNet supports distributed training enabling us to leverage multiple machines for faster training. In this document, we describe how it works, how to launch a distributed training job and some environment variables which provide more control.
## Types of Parallelism¶
There are two ways in which we can distribute the workload of training a neural network across multiple devices (can be either GPU or CPU). The first way is data parallelism, which refers to the case where each device stores a complete copy of the model. Each device works with a different part of the dataset, and the devices collectively update a shared model. These devices can be located on a single machine or across multiple machines. In this document, we describe how to train a model with devices distributed across machines in a data parallel way.
When models are so large that they don’t fit into device memory, then a second way called model parallelism is useful. Here, different devices are assigned the task of learning different parts of the model. Currently, MXNet supports Model parallelism in a single machine only. Refer Training with multiple GPUs using model parallelism for more on this.
## How Does Distributed Training Work?¶
The following concepts are key to understanding distributed training in MXNet:
### Types of Processes¶
MXNet has three types of processes which communicate with each other to accomplish training of a model.
• Worker: A worker node actually performs training on a batch of training samples. Before processing each batch, the workers pull weights from servers. The workers also send gradients to the servers after each batch. Depending on the workload for training a model, it might not be a good idea to run multiple worker processes on the same machine.
• Server: There can be multiple servers which store the model’s parameters, and communicate with workers. A server may or may not be co-located with the worker processes.
• Scheduler: There is only one scheduler. The role of the scheduler is to set up the cluster. This includes waiting for messages that each node has come up and which port the node is listening on. The scheduler then lets all processes know about every other node in the cluster, so that they can communicate with each other.
### KV Store¶
MXNet provides a key-value store, which is a critical component used for multi-device training. The communication of parameters across devices on a single machine, as well as across multiple machines, is relayed through one or more servers with a key-value store for the parameters. Each value in this store is represented by a key and value, where each parameter array in the network is assigned a key, and value refers to the weights of that parameter array. Workers push gradients after processing a batch, and pull updated weights before processing a new batch. We can also pass in optimizers for the KVStore to use while updating each weight. Optimizers like Stochastic Gradient Descent define an update rule, essentially a mathematical formula to compute the new weight based on the old weight, gradient, and some parameters.
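As a rough single-machine illustration of this push/pull pattern (the key value and array shape below are arbitrary, and 'local' stands in for a distributed kvstore type):
import mxnet as mx

kv = mx.kv.create('local')            # becomes 'dist_sync' / 'dist_async' on a cluster
shape = (2, 3)
kv.init(7, mx.nd.ones(shape))         # register key 7 with an initial value
kv.push(7, mx.nd.ones(shape) * 2)     # workers push gradients after a batch
out = mx.nd.zeros(shape)
kv.pull(7, out=out)                   # workers pull the aggregated result before the next batch
# Optionally hand the update rule to the store itself:
kv.set_optimizer(mx.optimizer.SGD(learning_rate=0.1))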
If you are using a Gluon Trainer object or the Module API, it uses a kvstore object internally to aggregate gradients from multiple devices on the same machine as well as across different machines.
Although the API remains the same whether or not multiple machines are being used, the notion of kvstore server exists only during distributed training. In this case, each push and pull involves communication with the kvstore servers. When there are multiple devices on a single machine, gradients from these devices are first aggregated on the machine and then sent to the servers. Note that we need to compile MXNet with the build flag USE_DIST_KVSTORE=1 to use distributed training.
The distributed mode of KVStore is enabled by calling mxnet.kvstore.create function with a string argument which contains the word dist as follows:
kv = mxnet.kvstore.create('dist_sync')
### Distribution of Keys¶
Each server doesn’t necessarily store all the keys or parameter arrays. Parameters are distributed across different servers. The decision of which server stores a particular key is made at random. This distribution of keys across different servers is handled transparently by the KVStore. It ensures that when a key is pulled, that request is sent to the server which has the corresponding value. If the value of some key is very large, it may be sharded across different servers. This means that different servers hold different parts of the value. Again, this is handled transparently so that the worker does not have to do anything different. The threshold for this sharding can be controlled with the environment variable MXNET_KVSTORE_BIGARRAY_BOUND. See environment variables for more details.
### Split training data¶
When running distributed training in data parallel mode, we want each machine to be working on different parts of the dataset.
For data parallel training on a single worker, we can use mxnet.gluon.utils.split_and_load to split a batch of samples provided by the data iterator, and then load each part of the batch on the device which will process it.
In the case of distributed training though, we would need to divide the dataset into n parts at the beginning, so that each worker gets a different part. Each worker can then use split_and_load to again divide that part of the dataset across different devices on a single machine.
Typically, this split of data for each worker happens through the data iterator, on passing the number of parts and the index of parts to iterate over. Some iterators in MXNet that support this feature are mxnet.io.MNISTIterator and mxnet.io.ImageRecordIter. If you are using a different iterator, you can look at how the above iterators implement this. We can use the kvstore object to get the number of workers (kv.num_workers) and rank of the current worker (kv.rank). These can be passed as arguments to the iterator. You can look at example/gluon/image_classification.py to see an example usage.
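For example, a sketch of the iterator-based split (the record file path and shapes are placeholders):
import mxnet as mx

kv = mx.kvstore.create('dist_sync')
train_iter = mx.io.ImageRecordIter(
    path_imgrec='data/cifar10_train.rec',   # placeholder path
    data_shape=(3, 32, 32),
    batch_size=128,
    num_parts=kv.num_workers,               # total number of workers
    part_index=kv.rank)                     # shard assigned to this worker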
### Updating weights¶
The KVStore server supports two modes: one in which it aggregates the gradients and updates the weights using those gradients, and a second in which the server only aggregates gradients. In the latter case, when a worker process pulls from kvstore, it gets the aggregated gradients. The worker then uses these gradients to update the weights locally.
When using Gluon there is an option to choose between these modes by passing update_on_kvstore variable when you create the Trainer object like this:
trainer = gluon.Trainer(net.collect_params(), optimizer='sgd',
optimizer_params={'learning_rate': opt.lr,
'wd': opt.wd,
'momentum': opt.momentum,
'multi_precision': True},
kvstore=kv,
update_on_kvstore=True)
When using the symbolic interface, it performs the weight updates on the server without the user having to do anything special.
### Different Modes of Distributed Training¶
Distributed training itself is enabled when kvstore creation string contains the word dist.
Different modes of distributed training can be enabled by using different types of kvstore.
• dist_sync: In synchronous distributed training, all workers use the same synchronized set of model parameters at the start of every batch. This means that after each batch, the server waits to receive gradients from each worker before it updates the model parameters. This synchronization comes at a cost because the worker pulling parameters would have to wait till the server finishes this process. In this mode, if a worker crashes, then it halts the progress of all workers.
• dist_async: In asynchronous distributed training, the server receives gradients from one worker and immediately updates its store, which it uses to respond to any future pulls. This means that a worker who finishes processing a batch can pull the current parameters from server and start the next batch, even if other workers haven’t finished processing the earlier batch. This is faster than dist_sync because there is no cost of synchronization, but can take more epochs to converge. The update of weights is atomic, meaning no two updates happen on the same weight at the same time. However, the order of updates is not guaranteed. In async mode, it is required to pass an optimizer because in the absence of an optimizer kvstore would replace the stored weights with received weights and this doesn’t make sense for training in asynchronous mode. Hence, when using Gluon with async mode we need to set update_on_kvstore to True.
• dist_sync_device: Same as dist_sync except that when there are multiple GPUs being used on each node, this mode aggregates gradients and updates weights on GPU while dist_sync does so on CPU memory. This is faster than dist_sync because it reduces expensive communication between GPU and CPU, but it increases memory usage on GPU.
• dist_async_device : The analogue of dist_sync_device but in asynchronous mode.
When communication is expensive, and the ratio of computation time to communication time is low, communication can become a bottleneck. In such cases, gradient compression can be used to reduce the cost of communication, thereby speeding up training. Refer Gradient compression for more details.
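For reference, gradient compression is enabled on the kvstore itself; the threshold below is only an illustrative value:
kv = mx.kvstore.create('dist_sync')
kv.set_gradient_compression({'type': '2bit', 'threshold': 0.5})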
Note: For small models when the cost of computation is much lower than cost of communication, distributed training might actually be slower than training on a single machine because of the overhead of communication and synchronization.
## How to Start Distributed Training?¶
MXNet provides a script tools/launch.py to make it easy to launch a distributed training job. This supports various types of cluster resource managers like ssh, mpirun, yarn and sge. If you already have one of these clusters setup, you can skip the next section on setting up a cluster. If you want to use a type of cluster not mentioned above, skip ahead to Manually launching jobs section.
### Setting up the Cluster¶
An easy way to set up a cluster of EC2 instances for distributed deep learning is by using the AWS CloudFormation template. If you can not use the above, this section will help you manually set up a cluster of instances to enable you to use ssh for launching a distributed training job. Let us denote one machine as the master of the cluster through which we will launch and monitor the distributed training on all machines.
If the machines in your cluster are a part of a cloud computing platform like AWS EC2, then your instances should be using key-based authentication already. Ensure that you create all instances using the same key, say mxnet-key and in the same security group. Next, we need to ensure that master has access to all other machines in the cluster through ssh by adding this key to ssh-agent and forwarding it to master when we log in. This will make mxnet-key the default key on master.
ssh-add .ssh/mxnet-key
If your machines use passwords for authentication, see here for instructions on setting up password-less authentication between machines.
It is easier if all these machines have a shared file system so that they can access the training script. One way is to use Amazon Elastic File System to create your network file system. The options in the following command are the recommended options when mounting an AWS Elastic File System.
sudo mkdir efs && sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 NETWORK_FILE_SYSTEM_IP:/ efs
Tip: You might find it helpful to store large datasets on S3 for easy access from all machines in the cluster. Refer Using data from S3 for training for more information.
### Using Launch.py¶
MXNet provides a script tools/launch.py to make it easy to launch distributed training on a cluster with ssh, mpi, sge or yarn. You can fetch this script by cloning the mxnet repository.
git clone --recursive https://github.com/apache/incubator-mxnet
#### Example¶
Let us consider training a VGG11 model on the CIFAR10 dataset using example/gluon/image_classification.py.
cd example/gluon/
On a single machine, we can run this script as follows:
python image_classification.py --dataset cifar10 --model vgg11 --epochs 1
For distributed training of this example, we would do the following:
If the mxnet directory which contains the script image_classification.py is accessible to all machines in the cluster (for example if they are on a network file system), we can run:
../../tools/launch.py -n 3 -H hosts --launcher ssh python image_classification.py --dataset cifar10 --model vgg11 --epochs 1 --kvstore dist_sync
If the directory with the script is not accessible from the other machines in the cluster, then we can synchronize the current directory to all machines.
../../tools/launch.py -n 3 -H hosts --launcher ssh --sync-dst-dir /tmp/mxnet_job/ python image_classification.py --dataset cifar10 --model vgg11 --epochs 1 --kvstore dist_sync
Tip: If you don’t have a cluster ready and still want to try this out, pass the option --launcher local instead of ssh
#### Options¶
Here, launch.py is used to submit the distributed training job. It takes the following options:
• -n denotes the number of worker nodes to be launched.
• -s denotes the number of server nodes to be launched. If it is not specified, it is taken to be equal to the number of worker nodes. The script tries to cycle through the hosts file to launch the servers and workers. For example, if you have 5 hosts in the hosts file and you passed n as 3 (and nothing for s), the script will launch a total of 3 server processes, one each for the first three hosts, and a total of 3 worker processes, one each for the fourth, fifth and first host. If the hosts file has exactly n hosts, it will launch a server process and a worker process on each of the n hosts.
• --launcher denotes the mode of communication. The options are:
• ssh if machines can communicate through ssh without passwords. This is the default launcher mode.
• mpi if Open MPI is available
• sge for Sun Grid Engine
• yarn for Apache Yarn
• local for launching all processes on the same local machine. This can be used for debugging purposes.
• -H requires the path of the hosts file This file contains IPs of the machines in the cluster. These machines should be able to communicate with each other without using passwords. This file is only applicable and required when the launcher mode is ssh or mpi. An example of the contents of the hosts file would be:
172.30.0.172
172.31.0.173
172.30.1.174
• --sync-dst-dir takes the path of a directory on all hosts to which the current working directory will be synchronized. This only supports ssh launcher mode. This is necessary when the working directory is not accessible to all machines in the cluster. Setting this option synchronizes the current directory using rsync before the job is launched. If you have not installed MXNet system-wide then you have to copy the folder python/mxnet and the file lib/libmxnet.so into the current directory before running launch.py. For example, if you are in example/gluon, you can do this with cp -r ../../python/mxnet ../../lib/libmxnet.so .. This would work if your lib folder contains libmxnet.so, as would be the case when you use make. If you use CMake, this file would be in your build directory.
• python image_classification.py --dataset cifar10 --model vgg11 --epochs 1 --kvstore dist_sync is the command for the training job on each machine. Note the use of dist_sync for the kvstore used in the script.
#### Terminating Jobs¶
If the training job crashes due to an error or if we try to terminate the launch script while training is running, jobs on all machines might not have terminated. In such a case, we would need to terminate them manually. If we are using ssh launcher, this can be done by running the following command where hosts is the path of the hostfile.
while read -u 10 host; do ssh -o "StrictHostKeyChecking no" $host "pkill -f python" ; done 10<hosts
### Manually Launching Jobs¶
If for some reason, you do not want to use the script above to start distributed training, then this section will be helpful. MXNet uses environment variables to assign roles to different processes and to let different processes find the scheduler. The environment variables are required to be set correctly as follows for the training to start:
• DMLC_ROLE: Specifies the role of the process. This can be server, worker or scheduler. Note that there should only be one scheduler. When DMLC_ROLE is set to server or scheduler, these processes start when mxnet is imported.
• DMLC_PS_ROOT_URI: Specifies the IP of the scheduler
• DMLC_PS_ROOT_PORT: Specifies the port that the scheduler listens to
• DMLC_NUM_SERVER: Specifies how many server nodes are in the cluster
• DMLC_NUM_WORKER: Specifies how many worker nodes are in the cluster
Below is an example to start all jobs locally on Linux or Mac. Note that starting all jobs on the same machine is not a good idea. This is only to make the usage clear.
export COMMAND='python example/gluon/image_classification.py --dataset cifar10 --model vgg11 --epochs 1 --kvstore dist_sync'
DMLC_ROLE=server DMLC_PS_ROOT_URI=127.0.0.1 DMLC_PS_ROOT_PORT=9092 DMLC_NUM_SERVER=2 DMLC_NUM_WORKER=2 $COMMAND &
DMLC_ROLE=server DMLC_PS_ROOT_URI=127.0.0.1 DMLC_PS_ROOT_PORT=9092 DMLC_NUM_SERVER=2 DMLC_NUM_WORKER=2 $COMMAND &
DMLC_ROLE=scheduler DMLC_PS_ROOT_URI=127.0.0.1 DMLC_PS_ROOT_PORT=9092 DMLC_NUM_SERVER=2 DMLC_NUM_WORKER=2 $COMMAND &
DMLC_ROLE=worker DMLC_PS_ROOT_URI=127.0.0.1 DMLC_PS_ROOT_PORT=9092 DMLC_NUM_SERVER=2 DMLC_NUM_WORKER=2 $COMMAND &
DMLC_ROLE=worker DMLC_PS_ROOT_URI=127.0.0.1 DMLC_PS_ROOT_PORT=9092 DMLC_NUM_SERVER=2 DMLC_NUM_WORKER=2 $COMMAND
For an in-depth discussion of how the scheduler sets up the cluster, you can go here.
## Environment Variables¶
### For tuning performance¶
• MXNET_KVSTORE_REDUCTION_NTHREADS Value type: Integer Default value: 4 The number of CPU threads used for summing up big arrays on a single machine This will also be used for dist_sync kvstore to sum up arrays from different contexts on a single machine. This does not affect summing up of arrays from different machines on servers. Summing up of arrays for dist_sync_device kvstore is also unaffected as that happens on GPUs.
• MXNET_KVSTORE_BIGARRAY_BOUND Value type: Integer Default value: 1000000 The minimum size of a big array. When the array size is bigger than this threshold, MXNET_KVSTORE_REDUCTION_NTHREADS threads are used for reduction. This parameter is also used as a load balancer in kvstore. It controls when to partition a single weight to all the servers. If the size of a single weight matrix is less than this bound, then it is sent to a single randomly picked server; otherwise, it is partitioned to all the servers.
• MXNET_ENABLE_GPU_P2P GPU Peer-to-Peer communication Value type: 0(false) or 1(true) Default value: 1 If true, MXNet tries to use GPU peer-to-peer communication, if available on your device. This is used only when kvstore has the type device in it.
### Communication¶
• DMLC_INTERFACE Using a particular network interface Value type: Name of interface Example: eth0 MXNet often chooses the first available network interface. But for machines with multiple interfaces, we can specify which network interface to use for data communication using this environment variable.
• PS_VERBOSE Logging communication Value type: 1 or 2 Default value: (empty)
• PS_VERBOSE=1 logs connection information like the IPs and ports of all nodes
• PS_VERBOSE=2 logs all data communication information
When the network is unreliable, messages being sent from one node to another might get lost. The training process can hang when a critical message is not successfully delivered. In such cases, an additional ACK can be sent for each message to track its delivery. This can be done by setting PS_RESEND and PS_RESEND_TIMEOUT
• PS_RESEND Retransmission for unreliable network Value type: 0(false) or 1(true) Default value: 0 Whether or not to enable retransmission of messages
• PS_RESEND_TIMEOUT Timeout for ACK to be received Value type: Integer (in milliseconds) Default value: 1000 If ACK is not received in PS_RESEND_TIMEOUT milliseconds, then the message will be resent. | 2023-01-28 16:02:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26453787088394165, "perplexity": 1755.233072381888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00416.warc.gz"} |
https://www.physicsforums.com/threads/energy-loss-in-shm.113120/ | # Energy loss in SHM
1. Mar 5, 2006
### capslock
When middle C on a piano (frequency = 262Hz) is struck, the vibration of the piano string loses half its energy after 4s.
(i) What is the decay time for the energy?
(ii) What is the Q-factor for this piano wire?
(iii) What is the fractional energy loss per cycle?
SHM has been going great until this chapter on energy loss. I'm totally lost. I'd really appreciate it if someone could explain how to attempt these questions.
Best Regards, Jonathan.
2. Mar 5, 2006
### Hootenanny
Staff Emeritus
Decay time is usually defined as the time taken for a value to fall to $\frac{1}{e}$ times the original value.
Last edited: Mar 5, 2006
3. Mar 5, 2006
### capslock
But how do I use the information given to calculate it?
Best Regards, Jonathan.
4. Mar 5, 2006
### physicsprasanna
i'm not sure of this ... but
i think we can do this problem like radioactivity problems....
for the first part
given that half-life = 4s. Now calculate the decay constant, which is ln2/(half-life period)......
The decay constant is defined as the inverse of the time taken to decay to 1/e times the original.... so your answer should be (half-life)/ln2 ....
I'm not sure ... i think you better wait for some more replies
5. Mar 5, 2006
### Hootenanny
Staff Emeritus
physicsprasanna is right, you use the same process as in nuclear physics, but with energy instead of the number of radioactive isotopes.
$$T_{\frac{1}{2}} = \frac{\ln 2}{k}$$
Then you can work out $k$ which allows you to calculate $E_0$ and form an equation. Well thats how I understand in anyway.
6. Mar 5, 2006
### topsquark
The energy in a damped oscillator goes as:
$$E(t)=E_0e^{-t/ \tau}$$
Where tau is the "decay constant." This is the time it takes for the energy to be reduced by a factor of 1/e, as stated above. So we know that
$$E(4)=(1/2)E_0=E_0e^{-4/ \tau}$$.
From this we can find a value for tau.
Next, the "quality factor," or "Q factor," is given by:
$$Q=\omega_0 \tau$$
Where $$\omega_0$$ is the initial angular frequency (NOT frequency!) of the motion.
The fractional energy loss per cycle is defined as:
$$\left ( \frac{\Delta E}{E_0} \right )_{cycle} = \frac{E(T)-E_0}{E_0}$$
It turns out this is inversely proportional to Q, but as you can calculate it without the Q value, I leave finding that relation to you.
-Dan
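For concreteness, plugging the given numbers into the relations above (an added sketch, not an official answer):
$$\tau = \frac{4\ \text{s}}{\ln 2} \approx 5.8\ \text{s}, \qquad Q = \omega_0 \tau = 2\pi (262\ \text{Hz})(5.8\ \text{s}) \approx 9.5 \times 10^3$$
and since one cycle lasts $$T = \frac{1}{262}\ \text{s}$$, the fractional energy loss per cycle is roughly $$T/\tau \approx 6.6 \times 10^{-4} \approx 2\pi/Q$$.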
7. Mar 5, 2006
### capslock
Many thanks.
Have something to add? | 2016-12-07 20:56:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8372441530227661, "perplexity": 2027.9771071642572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542246.21/warc/CC-MAIN-20161202170902-00010-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://csinva.io/blog/compiled_notes/_build/html/intro.html | # overview 👋#
These are a series of notes (see csinva.io) to serve as useful reference for people in machine learning / ai / neuroscience.
Below are the high-level topics of each note, with an auto-generated graph showing their similarities (based on tf-idf).
*Topics labelled with an asterisk are research-level notes, not introductory.
The raw similarities are given in the plot below: | 2023-04-01 16:17:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48343709111213684, "perplexity": 4533.464419536614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00651.warc.gz"} |
http://gmatclub.com/forum/m03-18-ns-polynomial-equations-94485.html?sort_by_oldest=true | m03#18 - NS (Polynomial Equations) : Retired Discussions [Locked]
# m03#18 - NS (Polynomial Equations)
Director
Joined: 24 Aug 2007
Posts: 954
WE 1: 3.5 yrs IT
WE 2: 2.5 yrs Retail chain
m03#18 - NS (Polynomial Equations) [#permalink]
19 May 2010, 08:31
I couldnt locate dscussion of this question.
How many roots does the equation x^4 - 2x^2 +1=0 have?
a) 0
b) 1
c) 2
d) 3
e) 4
As per the solution given, x = 1 or x = -1.
My question is:
We are asked how many roots are there for this equation not how many different roots. So, total no. of roots should be 4.
_________________
Tricky Quant problems: http://gmatclub.com/forum/50-tricky-questions-92834.html
Important Grammer Fundamentals: http://gmatclub.com/forum/key-fundamentals-of-grammer-our-crucial-learnings-on-sc-93659.html
Kaplan GMAT Instructor
Joined: 25 Aug 2009
Posts: 644
Location: Cambridge, MA
Re: m03#18 - NS (Polynomial Equations) [#permalink]
19 May 2010, 13:03
Hi ykaiim,
'Roots' aren't like prime factors; repeating a root isn't meaningful. In prime factorization, 2 is the only distinct prime factor of 2 and 32, but since 2 goes into 2 once and 32 five times, 32 has five prime factors. With roots, however, either a given value of X does correctly balance the equation, or it doesn't; it's a meaningless distinction to say that -1 can 'solve the equation twice.' Thus, since there are exactly two x values that solve the equation, 1 and -1, we say there are exactly two roots, even if as a 4th degree equation each of those roots could be derived two ways.
_________________
Eli Meyer
Kaplan Teacher
http://www.kaptest.com/GMAT
Manager
Joined: 13 Dec 2009
Posts: 129
Re: m03#18 - NS (Polynomial Equations) [#permalink]
30 May 2010, 06:07
ykaiim wrote:
I couldnt locate dscussion of this question.
How many roots does the equation x^4 - 2x^2 +1=0 have?
a) 0
b) 1
c) 2
d) 3
e) 4
As per the solution given, x = 1 or x = -1.
My question is:
We are asked how many roots are there for this equation not how many different roots. So, total no. of roots should be 4.
Technically, the number of roots of a polynomial equals its maximum power (its degree), so by that reasoning the answer must be E.
That is, the number of roots of the given equation is 4, out of which some are repeated. Here we are not concerned with whether roots are repeated or not; we are asked to find the number of roots. Please comment. Thanks.
Intern
Joined: 30 May 2011
Posts: 9
Re: m03#18 - NS (Polynomial Equations)
13 Sep 2011, 03:39
I tried to solve it as below -
x^4 - 2x^2 +1=0
b= -2
a=1
c=1
Discriminant = b^2-4ac
= (-2)^2 - 4*1*1
=4-4
=0
Since b^2 - 4ac = 0, this equation should have 1 root, but the answer is 2 roots.
Math Forum Moderator
Joined: 20 Dec 2010
Posts: 2021
Re: m03#18 - NS (Polynomial Equations)
13 Sep 2011, 03:49
agarwalmanoj2000 wrote:
Discriminant = b^2 - 4ac = (-2)^2 - 4*1*1 = 0, so this equation should have 1 root, but the answer is 2 roots.
You have applied the discriminant formula of a quadratic equation to a polynomial of the 4th degree.
Senior Manager
Joined: 23 Oct 2010
Posts: 386
Location: Azerbaijan
Concentration: Finance
Schools: HEC '15 (A)
GMAT 1: 690 Q47 V38
Re: m03#18 - NS (Polynomial Equations)
13 Sep 2011, 04:57
How many roots does the equation x^4 - 2x^2 +1=0 have?
Let x^2 = a; then we have a^2 - 2a + 1 = 0, i.e. (a - 1)^2 = 0.
Now replace a with x^2:
(x^2 - 1)^2 = 0, or
((x - 1)(x + 1))^2 = 0,
so x = 1 or x = -1.
The answer is 2.
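A quick symbolic check (an aside; it assumes Python with the sympy package installed) agrees, and also shows where the "4 roots" intuition comes from: there are two distinct roots, each of multiplicity 2.
from sympy import symbols, roots
x = symbols("x")
# roots() maps each distinct root to its multiplicity
print(roots(x**4 - 2*x**2 + 1, x))  # {1: 2, -1: 2} -> two distinct values of x, each counted twice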
Intern
Joined: 30 May 2011
Posts: 9
Re: m03#18 - NS (Polynomial Equations)
13 Sep 2011, 05:42
Thanks, Fluke!
Sorry, I did not understand. Can you please provide some more details?
@lalab
Thanks for your response. I am trying to find the problem in my approach.
Math Forum Moderator
Joined: 20 Dec 2010
Posts: 2021
Re: m03#18 - NS (Polynomial Equations)
13 Sep 2011, 07:46
agarwalmanoj2000 wrote:
Sorry, I did not understand. Can you please provide some more details?
$$ax^2+bx+c=0$$ is a polynomial equation of degree 2, and for it the discriminant is $$D=b^2-4ac$$.
A polynomial of degree 4, i.e. one whose maximum power of x is 4, falls into a different category:
$$ax^4+bx^2+c=0$$ is a polynomial equation of degree 4, for which $$D = b^2-4ac$$ may not apply.
$$x^4 - 2x^2 +1=0$$ has maximum power of x equal to 4, so you simply don't know its discriminant from that formula. The discriminant formula you used is only for quadratic equations (x with highest power of 2).
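One way to reconcile the two viewpoints: the computation above is really the discriminant of the quadratic in $$x^2$$. Substituting $$a=x^2$$ turns the quartic into $$a^2-2a+1=0$$, whose discriminant is $$D=(-2)^2-4\cdot1\cdot1=0$$, so there is exactly one root in $$a$$, namely $$a=1$$. But each positive value of $$a$$ corresponds to two values of $$x$$ (from $$x^2=a$$), and $$x^2=1$$ gives $$x=1$$ and $$x=-1$$. That is why $$D=0$$ yields one root for the substituted quadratic and yet two distinct roots for the original quartic.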
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | 2017-01-21 03:53:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38046160340309143, "perplexity": 6410.291891273022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00207-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/430921/hermitian-holomorphic-line-bundle-and-curvature-chern-form-in-demaillys-book/430985 | # Hermitian holomorphic line bundle and curvature Chern form in Demailly's book
In Demailly's book p.272, Theorem 13.9, there is:
Let $$X$$ be an arbitrary complex manifold. (b) Let $$\omega$$ be a $$\mathcal C^\infty$$ closed real (1, 1)-form such that $$\{\omega\}\in H^2_{dR}(X,\mathbb R)$$ is the image of an integral class. Then there exists a hermitian line bundle $$E\to X$$ such that $$\frac{i}{2\pi}\Theta(E) = \omega$$.
It should be noted that the author does not assume $$X$$ to be a compact Kähler manifold, while in Voisin's book Hodge theory and complex algebraic geometry. I, p.163-164, $$X$$ is assumed to be a compact Kähler manifold; this is used to get the $$\partial\bar\partial$$-lemma, and then to deduce that $$\omega-\omega_{L,h}=\frac{1}{2\pi i}\partial\bar\partial\phi$$, where $$\omega_{L,h}$$ is the curvature form of a Hermitian holomorphic line bundle $$L$$.
So my question is: are the compactness and Kähler (or $$\partial\bar\partial$$-) assumptions necessary for the existence of a holomorphic line bundle satisfying $$\frac{i}{2\pi}\Theta(E) = \omega$$?
By the way, I find Demailly's proof a bit hard to understand since there are some typos; for example, I can't find Th. I-3.35 in his book. Can anybody help me figure out whether his statement is right or wrong?
• The inter-chapter references are a little borked, but Prop. III-1.20 should be a local $\partial\bar\partial$ lemma, valid on any complex manifold, and Th. I-3.35 should be Th. I-5.16. The statement (originally a lemma due to Weil) is correct as written; the proof is basically "get a Cech cocycle that represents the class, find local holomorphic representatives, lift those via the exponential function, and use those to cook up the transition functions for the line bundle". Sep 21 at 13:56
• @GunnarÞórMagnússon Please write that up as an answer! Sep 21 at 14:26
@GunnarÞórMagnússon, thanks for pointing out that "Th. I-3.35 should be Th. I-5.16" and for confirming that the statement in the question is correct; that is very helpful and gives me more courage to try to understand Demailly's proof.
– Tom
Sep 21 at 15:13
Let $$\cup_{i\in I}U_i$$ be a covering of $$X$$ such that $$U_i\cap U_j$$ is simply connected. Since $$\omega$$ is a closed real (1,1) form on $$X$$, by local $$\partial\bar\partial$$-lemma (see Demailly's book mentioned in the question, p.135. Prop 1.19), in $$U_i$$, there is a real-valued $$\mathcal C^{\infty}$$ function $$\phi_i$$ such that $$\frac{i}{2\pi}\partial\bar\partial\phi_i=\omega$$.
Note that $$\phi_i-\phi_j\in \mathcal C^{\infty}(U_i\cap U_j)$$ is $$\partial\bar\partial$$-closed. By Th. I-5.16 (in Demailly's book, p.42), there exists a holomorphic function $$f_{ij}\in \mathcal O(U_i\cap U_j)$$ satisfying $$\phi_i-\phi_j=2\text{Re}f_{ij}$$.
Let $$g_{ij}=e^{2\pi if_{ij}}\in\mathcal O^*(U_i\cap U_j)$$, it is easy to check $$g_{ij}\cdot g_{jk}\cdot g_{ki}=1$$ and $$g_{ij}\cdot g_{ji}=1$$. So the transition functions $$\{g_{ij}\}$$ determin a holomorphic line bundle $$L$$ over $$X$$. And $$e^{\phi_i}$$ is the Hermitian metric of $$L$$ which satsfies $$\frac{i}{2\pi}\partial\bar\partial\phi_i=\omega$$. | 2022-11-29 05:05:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 37, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9353926777839661, "perplexity": 294.1459841792334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00775.warc.gz"} |
https://projecteuclid.org/euclid.mmj/1490639817 | ## The Michigan Mathematical Journal
### A note on cabled slice knots and reducible surgeries
Jeffrey Meier
#### Article information
Source
Michigan Math. J. Volume 66, Issue 2 (2017), 269-276.
Dates
First available in Project Euclid: 27 March 2017
https://projecteuclid.org/euclid.mmj/1490639817
Digital Object Identifier
doi:10.1307/mmj/1490639817
Zentralblatt MATH identifier
1370.57004
#### Citation
Meier, Jeffrey. A note on cabled slice knots and reducible surgeries. Michigan Math. J. 66 (2017), no. 2, 269--276. doi:10.1307/mmj/1490639817. https://projecteuclid.org/euclid.mmj/1490639817
#### References
• [1] M. Doig and S. Wehrli, A combinatorial proof of the homology cobordism classification of lens spaces, preprint, 2015, arXiv:1505.06970.
• [2] M. Eudave Muñoz, Band sums of links which yield composite links. The cabling conjecture for strongly invertible knots, Trans. Amer. Math. Soc. 330 (1992), no. 2, 463–501.
• [3] D. Gabai, Foliations and surgery on knots, Bull. Amer. Math. Soc. (N.S.) 15 (1986), no. 1, 83–87.
• [4] F. González-Acuña and H. Short, Knot surgery and primeness, Math. Proc. Cambridge Philos. Soc. 99 (1986), no. 1, 89–102.
• [5] C. McA. Gordon and J. Luecke, Knots are determined by their complements, J. Amer. Math. Soc. 2 (1989), no. 2, 371–415.
• [6] C. Gordon, Dehn surgery and 3-manifolds, Low dimensional topology, IAS/Park City Math. Ser., 15, pp. 21–71, Amer. Math. Soc., Providence, RI, 2009.
• [7] C. McA. Gordon, Dehn surgery and satellite knots, Trans. Amer. Math. Soc. 275 (1983), no. 2, 687–708.
• [8] C. M. Gordon and J. Luecke, Reducible manifolds and Dehn surgery, Topology 35 (1996), no. 2, 385–409.
• [9] J. E. Greene, $L$-space surgeries, genus bounds, and the cabling conjecture, J. Differential Geom. 100 (2015), no. 3, 491–506.
• [10] C. Grove, Cabling Conjecture for small bridge number, preprint, 2015, arXiv:1507.01317.
• [11] C. Hayashi and K. Motegi, Dehn surgery on knots in solid tori creating essential annuli, Trans. Amer. Math. Soc. 349 (1997), no. 12, 4897–4930.
• [12] C. Hayashi and K. Shimokawa, Symmetric knots satisfy the cabling conjecture, Math. Proc. Cambridge Philos. Soc. 123 (1998), no. 3, 501–529.
• [13] M. Hedden, S.-G. Kim, and C. Livingston, Topologically slice knots of smooth concordance order two, J. Differential. Geom. 102 (2016), 353–393.
• [14] J. A. Hoffman, There are no strict great $x$-cycles after a reducing or $P^{2}$ surgery on a knot, J. Knot Theory Ramifications 7 (1998), no. 5, 549–569.
• [15] J. Hom, Bordered Heegaard Floer homology and the tau-invariant of cable knots, J. Topol. 7 (2014), no. 2, 287–326.
• [16] J. Howie, A proof of the Scott-Wiegold conjecture on free products of cyclic groups, J. Pure Appl. Algebra 173 (2002), no. 2, 167–176.
• [17] J. Howie, Can Dehn surgery yield three connected summands?, Groups Geom. Dyn. 4 (2010), no. 4, 785–797.
• [18] W. W. Menasco and M. B. Thistlethwaite, Surfaces with boundary in alternating knot exteriors, J. Reine Angew. Math. 426 (1992), 47–65.
• [19] K. Miyazaki, Nonsimple, ribbon fibered knots, Trans. Amer. Math. Soc. 341 (1994), no. 1, 1–44.
• [20] L. Moser, Elementary surgery along a torus knot, Pacific J. Math. 38 (1971), 737–745.
• [21] P. S. Ozsváth and Z. Szabó, Absolutely graded Floer homologies and intersection forms for four-manifolds with boundary, Adv. Math. 173 (2003), no. 2, 179–261.
• [22] P. S. Ozsváth and Z. Szabó, Holomorphic disks and topological invariants for closed three-manifolds, Ann. of Math. (2) 159 (2004), no. 3, 1027–1158.
• [23] P. S. Ozsváth and Z. Szabó, Knot Floer homology and integer surgeries, Algebr. Geom. Topol. 8 (2008), no. 1, 101–153.
• [24] P. S. Ozsváth and Z. Szabó, Knot Floer homology and rational surgeries, Algebr. Geom. Topol. 11 (2011), no. 1, 1–68.
• [25] N. Sayari, Reducible Dehn surgery and the bridge number of a knot, J. Knot Theory Ramifications 18 (2009), no. 4, 493–504.
• [26] L. G. Valdez Sánchez, Dehn fillings of $3$-manifolds and non-persistent tori, Topology Appl. 98 (1999), no. 1–3, 355–370, II Iberoamerican Conference on Topology and Its Applications (Morelia, 1997).
• [27] Y.-Q. Wu, Dehn surgery on arborescent knots, J. Differential Geom. 43 (1996), no. 1, 171–197.
• [28] Z. Wu, A cabling formula for $\upsilon^{+}$ invariant, preprint, 2015, arXiv:1501.04749.
• [29] N. Zufelt, Divisibility of great webs and reducible Dehn surgery, preprint, 2014, arXiv:1410.3442. | 2018-02-26 03:50:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31477588415145874, "perplexity": 1756.5686063437036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817999.51/warc/CC-MAIN-20180226025358-20180226045358-00191.warc.gz"} |
http://eprint.iacr.org/2007/409/20080706:103157 | ## Cryptology ePrint Archive: Report 2007/409
Building a Collision-Resistant Compression Function from Non-Compressing Primitives
Thomas Shrimpton and Martijn Stam
Abstract: We consider how to build an efficient compression function from a small number of random, non-compressing primitives. Our main goal is to achieve a level of collision resistance as close as possible to the optimal birthday bound. We present a $2n$-to-$n$ bit compression function based on three independent $n$-to-$n$ bit random functions, each called only once. We show that if the three random functions are treated as black boxes, finding collisions requires $\Theta(2^{n/2}/n^c)$ queries for $c\approx 1$. This result remains valid if two of the three random functions are replaced by a fixed-key ideal cipher in Davies-Meyer mode (i.e., $E_K(x)\oplus x$ for a permutation $E_K$). We also give a heuristic, backed by experimental results, suggesting that the security loss is at most four bits for block sizes up to 256 bits.
We believe this is the best result to date on the matter of building a collision resistant compression function from non-compressing functions. It also relates to an open question from Black et al. (Eurocrypt'05), who showed that compression functions that invoke a single non-compressing random function cannot suffice.
We also explore the relationship of our problem with that of doubling the output of a hash function and we show how our compression function can be used to double the output length of ideal hashes.
Category / Keywords: Hash Functions, Random Oracle Model, Compression Functions, Collision Resistance
Publication Info: Full version of paper appearing at ICALP'08
Date: received 25 Oct 2007, last revised 6 Jul 2008
Contact author: martijn stam at epfl ch
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2007/409
[ Cryptology ePrint archive ] | 2017-01-19 05:21:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5739949345588684, "perplexity": 1807.428928520769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00371-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://security.stackexchange.com/questions/81838/how-can-i-set-my-server-to-only-accept-requests-from-my-own-client-app-similar/81854 | # How can I set my server to only accept requests from my own client app? (similar to SSL Client Certificates)
I'd like to have my server APIs accept requests only from my own app, to prevent other, "rogue" clients from using my service.
As I understand it, the way to do this would be with a "client certificate" that the app sends, and that the web server is configured to verify.
However, I'm hosting my app on Heroku, and I believe it's not possible to do that there, so I'm looking for the closest alternative that would allow me to achieve this.
I thought that maybe I could have a key pair, where the client uses a private key to sign a certain agreed-upon token (plus some random salt, so that the encrypted string is always different), and the server uses the public key to verify the token (and rejects duplicate salts to prevent replay). Or maybe the client signs the whole request with its private key, or something like that.
The idea would be that even if someone manages to get around our SSL pinning (with something like iOS kill switch, potentially) and inspect our traffic and figure out how our API works, they still can't call us unless they reverse engineer the binary to extract that certificate / private key. I understand that it's possible to do that, I'm just trying to put up as many barriers as possible.
What are good ways of doing this?
Thanks!
Daniel
• If you can't use SSL client certs then do some additional crypto on top of SSL. – user42178 Feb 17 '15 at 13:03
• You can build an SSL-like process. But you shouldn't encrypt all data with asymmetric encryption because it's slow. Hardcode the server's public key into your app, create a random key, encrypt it with the server's public key, and send it to the server. On the server side you can decrypt the key that your client sent and start encrypting the data with it, since both client and server now know it. But use SSL even if you have your own encryption layer – Batuhan Feb 17 '15 at 13:25
• SSL was deprecated over a decade ago, I sure hope you aren't using SSL. TLSv1 or later is the only acceptable secure transport layer. – rook Feb 17 '15 at 14:46
• @Rook I'm probably using TLS. Whatever's "the default"? I bought a certificate and gave it to Heroku. Sorry, but I'm a n00b at the details of this. – Daniel Magliola Feb 18 '15 at 14:13
This type of problem lends itself to Cargo-Cult Security type "solutions".
In the real world there is no possible mechanism that can prevent a rogue client from connecting to your service. A VPN is a proven security system that allows trusted clients access to a trusted network, but the internet is inherently untrustworthy. The attacker will have access to any secret embedded in your app or stored in app memory, and TLS client certificates rely upon a secret.
When designing a web service never forget: "The client is the attacker, and can never be trusted." If you have made this mistake, then you need to go back to the drawing board.
• I understand this, but my intention is to make it as hard / annoying as possible to raise the "minimum motivation level" needed to actually do this. My server is secure and you can't do anything nasty to it from the outside (that I know of), but I still don't want people making their own clients. So far, if I don't check anything on the server, you don't need to touch the app binary to extract any secrets. Forcing an attacker to do so is one more hurdle, one more hoop, that will hopefully dissuade them. – Daniel Magliola Feb 18 '15 at 10:06
• @Daniel Magliola The only one you are fooling with cargo-cult security is yourself. – rook Feb 18 '15 at 13:44
• @DanielMagliola that's DRM, and will always eventually fail – Natanael Mar 20 '15 at 14:34
• DRM and code obfuscation will indeed never stop a motivated attacker, but if his objective is just to make it more annoying to create a custom client, it nevertheless is the valid approach. – Dillinur Apr 30 '15 at 12:27
Note
This answer is only applicable under the stated assumptions. I have made them based upon the explicit wording of the question, which explicitly allows for a known attack vector to not be mitigated.
Security is a relative balance between the value of loss should an asset be compromised, and the effort (incl. cost etc.) that an adversary is willing to make to achieve such compromise. It is the author's prerogative to decide upon such a balance as they have the full picture regarding stakeholders.
Assumptions
The threat you are attempting to mitigate is one of an adversary who can read / tamper with their own communications, impersonate the client, but not reverse engineer the contents of the binary (either due to a lack of technical capability, or a lack of willingness).
TL;DR
HMAC(message | nonce, key) along with the client message / nonce "in the clear" (over TLS, but without any further encryption).
Your question is one of authentication, but you appear to be attempting to solve it with encryption (yes there is some crossover in certain circumstances, but separating the two will make the answer easier).
If a secret key (not an asymmetric private key; just a sufficiently large, cryptographically secure, random number) is stored in the binary (as you were proposing with a public/private key-pair) then it is not revealed by knowledge of the HMAC/plaintext pair.
Your server also knows the key, computes the HMAC for the message received, and discards any client communications that are invalid. The nonce is provided by the server to the client before it sends the message, and is unique. It acts to protect against replays of intercepted messages.
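A minimal sketch of this scheme in Python (an illustration only: the names are made up for the example, the shared secret is assumed to be baked into the client at build time, and the transport is still TLS):
import hashlib, hmac, secrets

SHARED_KEY = b"..."  # placeholder: a large, cryptographically random secret embedded in the app

def issue_nonce():
    # Server side: hand a fresh nonce to the client before it sends its message.
    return secrets.token_bytes(16)

def sign_request(message, nonce):
    # Client side: tag = HMAC(message | nonce, key), hex-encoded.
    return hmac.new(SHARED_KEY, message + nonce, hashlib.sha256).hexdigest()

def verify_request(message, nonce, tag, issued_nonces):
    # Server side: accept each issued nonce at most once (blocks replays),
    # then recompute the HMAC and compare in constant time.
    if nonce not in issued_nonces:
        return False
    issued_nonces.discard(nonce)
    expected = hmac.new(SHARED_KEY, message + nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)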
Caveat
As noted earlier and in other answers / comments, one may still obtain the key from the binary. For those who are interested there is a really cool technique that calculates the entropy for all regions of the binary, and then maps the linear position to a Hilbert curve. Machine code regions will have a relatively lower entropy than 'key' regions. Chris Domas demonstrates this in one of his videos; possibly his TED talk although I can't remember (watch the TED video either way).
• again, that merely moves the client secret around - a malicious user would still be able to find it and use it for his own bogus app. – AviD Apr 30 '15 at 17:46
• I agree. However, as I stated in the paragraph regarding assumptions, this is a risk that the author is explicitly willing to take. Security is a relative balance between the value of loss should an asset be compromised, and the effort (incl. cost etc.) that an adversary is willing to make to achieve such compromise. It is the author's prerogative to make such a balance as they have the full picture regarding stakeholders. I will update my answer accordingly. – Arran Schlosberg Apr 30 '15 at 22:07
You can go with a tunneling mechanism that accepts only particular IPs on the server side, or else simply configure your firewall to accept a list of trusted clients.
• Unfortunately, my client is a mobile app, so I can't whitelist IPs... Any other ideas? – Daniel Magliola Feb 18 '15 at 10:02
• In that case you can try the Google OAuth mechanism, or send an OTP to the client as banks do. – user45475 Feb 18 '15 at 20:27
Use OAuth 2.0. It authorizes not only the user but also the client application, with the client_id and client_key properties.
• What prevents an application from spoofing those attributes? – Dillinur Apr 30 '15 at 12:28
• I am not sure, but maybe you can store them in a local DB in the client app, encrypted, at development time. At run time, the app reads the data and sends it to the server in the OAuth post body. On the server side you can tell where the request comes from. According to the credentials, you can send a token to the client. And if you use SSL, no network sniffer will catch it. – yeulucay May 8 '15 at 8:46
• "store them into local db in the client app" - Then I just copy the local db from the client app to my clone app. "with encryption in development time" - if your app can decrypt the data, so can my clone app. Whatever information I need to do that can be extracted from your app. If your app can not decrypt the data, then my app doesn't need to be able to do that either. – Philipp Jul 26 at 8:29 | 2019-10-16 14:08:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21407733857631683, "perplexity": 1599.0056681596413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00050.warc.gz"} |
http://mathoverflow.net/questions/116395/classification-of-quasi-split-unitary-groups | ## Classification of quasi-split unitary groups
Let $U$ be a unitary group defined with respect to an extension $E/F$ of non-archimedean local fields, and assume it is realised with respect to a pair $(V,q)$, where $V$ is an $n$-dimensional vector space over $E$ and $q$ is a hermitian form on $V$. By a decomposition theorem, $V$ decomposes as a sum of hyperbolic planes and another (possibly trivial) hermitian space of dimension at most 2. My question is as follows:
If $n$ is odd, this other hermitian space is in fact a line, and in that case $U$ is quasi-split. If $n$ is even, then $U$ is quasi-split if and only if this other space is trivial (that is, $V$ is really a sum of hyperbolic planes). Where could I find a reference for this characterisation of quasi-splitness?
This is often used in many papers, but I have never seen a reference for it. Any help would be greatly appreciated.
this has nothing to do with local fields. You may look up Tits' article in AMS symposium on algebraic groups and discontinuous subgroups, where he gives the quasi-split forms of the unitary group. – Aakumadula Dec 14 at 21:28
According to Tits, Classification of algebraic semisimple groups, Proc. Sympos. Pure Math. 9 (1966), a semisimple group $G$ over a field $F$ is quasi-split if and only if the centralizer of a maximal $F$-split torus $S$ of $G$ is a (maximal) torus. A simple calculation using this criterion shows that a special unitary group of a $2m+1$-dimensional space is quasi-split if and only if it is of $F$-rank $m$, and a special unitary group of a $2m$-dimensional space is quasi-split if and only if it is of $F$-rank $m$. This implies the desired assertion. – Mikhail Borovoi Dec 14 at 21:33
@Borovoi Thank you! This is most helpful! – M Turgeon Dec 16 at 2:57
There is a text by Scharlau about "Hermitian...". Also the older book by O'Meara.
The point is that, first, over non-archimedean local fields a quadratic form in five or more variables has an isotropic vector. In case the residue characteristic is not two, this has a reasonably elementary direct proof. Then note that a hermitian form is a (special type of) quadratic form in twice as many variables. Thus, there is no anisotropic hermitian form in more than two variables.
Edit: in response to questioner's comment, "quasi-split" means (reductive and) a "Borel subgroup" defined over the field. Then "Borel subgroup" means parabolic subgroup that remains minimal under extending scalars. If the whole space were decomposable as hyperbolic planes and an anisotropic two-dimensional space, any quadratic extension would produce a "smaller" parabolic (the Borel, here, because the minimal parabolic is still next-to-Borel).
About algebraic groups, J. Tits' article in Corvallis is good, as is the book by Platonov and Rapinchuk, "Algebraic groups and number theory". Both of these pay attention to such rationality properties, while many classics (such as Borel's "Linear algebraic groups") emphasize the algebraically closed groundfield case.
Thank you for your comment. Maybe my original question was not clear enough, but what I am looking for is a reference about being quasi-split. The decomposition I do understand. – M Turgeon Dec 14 at 21:52
Thank you for this addendum. It is quite enlightening. – M Turgeon Dec 16 at 2:56
As pointed out in comments, there is a detailed survey by Tits from the algebraic group viewpoint in his AMS proceedings paper, along with quite a few references to earlier literature. From the viewpoint of groups over local fields, it's worth looking closely at his table of "indices" at the end of the paper. There he shows concisely which labelled Dyhkin diagrams can occur over various kinds of fields. In particular, you are interested in the twisted type $^2 \! A_n$ with $n$ even. He indicates that in the local field case this diagram can occur only relative to a quadratic extension and corresponds then to a quasi-split special unitary group. In his set-up, "quasi-split" corresponds to the case where $n$ is twice the relative rank $r$ and the Dynkin diagram is folded accordingly. (In general, various special unitary groups over division algebras are possible.)
While Tits does not spell out all the details of how such groups are constructed, he does provide a nice overview of the basic algebraic group theory leading to such a classification list. (Some improvements are contained in his 1971 paper in the Crelle Journal.) Also, a student of his at Bonn named Martin Selbach elaborated further on the theory:
Klassifikationstheorie halbeinfacher algebraischer Gruppen. Diplomarbeit, Univ. Bonn, Bonn, 1973. Bonner Mathematische Schriften, Nr. 83. Mathematisches Institut der Universitat Bonn, Bonn, 1976. v+140 pp.
P.S. For the characteristic 0 theory, presented in a different style, you might try the lecture notes by I. Satake (which I haven't looked at in a long time): Classification theory of semi-simple algebraic groups. With an appendix by M. Sugiura. Notes prepared by Doris Schattschneider. Lecture Notes in Pure and Applied Mathematics, 3. Marcel Dekker, Inc., New York, 1971. viii+149. Satake also used labelled Dynkin diagrams, but with different conventions than Tits ("Satake-Tits diagrams").
Thank you for the references, I will look into it! – M Turgeon Dec 16 at 2:55 | 2013-05-20 05:41:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8576257824897766, "perplexity": 417.4910859362014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698354227/warc/CC-MAIN-20130516095914-00016-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://k.sfconservancy.org/website/changeset/9c4697993e9b5bd2f524e9afb5dd16d3fed03e8b?context=12 | Changeset - 9c4697993e9b
[Not reviewed]
0 1 2
FSA template is now publicly available online.
This version is r6334 from Conservancy's internal repository.
FAQ has been appropriately updated to link to this.
3 files changed with 378 insertions and 8 deletions:
www/conservancy/static/members/apply/ConservancyFSATemplate.pdf
new file 100644 binary diff not shown
www/conservancy/static/members/apply/conservancy-fsa-template.tex
www/conservancy/static/members/apply/index.html
... @@ -120,32 +120,39 @@ it comes across as a bit of a misnomer.
In this context, a fiscal sponsor is a non-profit organization that, rather than fund a project directly, provides the required infrastructure and facilitates the project's ability to raise its own funds. Conservancy therefore assists your project in raising funds, and allows your project to hold those funds and spend them on activities that simultaneously advance the non-profit mission of the Conservancy and the FLOSS development and documentation goals of the project.
What will the project leaders have to agree to if our project joins?
Once you're offered membership, we'll send you a draft fiscal sponsorship agreement. These aren't secret documents and many of our member projects have even chosen to put theirs online. However, we wait to send a draft of this document until the application process is complete, as we often tailor and modify the agreements based on individual project needs. This is painstaking work, and it's better to complete that work after both Conservancy and the project are quite sure that they both want the project to join Conservancy.
Once you're offered membership, Conservancy will send you a draft fiscal sponsorship agreement (FSA). A template of Conservancy's FSA is available in PDF (and in LaTeX). Please note that the preceding documents are only templates. Please do not try to fill one out and send it to Conservancy. The final FSA between Conservancy and your project needs to be negotiated between us, and as can been seen in the template, the Representation section needs substantial work. If your project is offered membership, Conservancy will work with you adapt the FSA template to suit the needs and specific circumstances of your project. This is painstaking work, and it's better to complete that work after both Conservancy and the project are quite sure that they both want the project to join Conservancy.
If my project joins the Conservancy, how will it change?
Substantively, member projects continue to operate in the same way as they did before joining the Conservancy. So long as the project remains devoted to software freedom and operates consistently with the Conservancy's tax-exempt status, the Conservancy does not intervene in the project's development other than to provide administrative assistance. For example, the Conservancy keeps and maintains books and records for the project and assists with the logistics of receiving donations, but does not involve itself with technical or artistic decision making. Projects
0 comments (0 inline, 0 general) | 2022-09-26 09:30:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5531490445137024, "perplexity": 4843.994726146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334855.91/warc/CC-MAIN-20220926082131-20220926112131-00641.warc.gz"} |
https://web2.0calc.com/questions/help_74853 | +0
# help
At a party, everyone shook hands with everybody else. There were 66 handshakes. How many people were at the party?
Oct 25, 2019
#1
If there are n people, you would count the number of handshakes as nC2.
nC2= $$\frac{n(n-1)}{2\cdot1}$$
If there were 10 people, there would be (10*9)/2 = 45 handshakes. That's not enough, so let's try a bigger number. With 12 people: (12*11)/2 = 66 handshakes, so there were 12 people.
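You can also skip the trial and error by solving n(n-1)/2 = 66 directly; a quick check (assuming Python):
import math
# n(n-1)/2 = 66  =>  n^2 - n - 132 = 0  =>  n = (1 + sqrt(1 + 8*66)) / 2
n = (1 + math.isqrt(1 + 8 * 66)) / 2   # isqrt(529) = 23 exactly, since 529 is a perfect square
print(n)  # 12.0 -> 12 people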
Oct 25, 2019 | 2020-01-26 23:43:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8025878667831421, "perplexity": 1568.0616428007631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694071.63/warc/CC-MAIN-20200126230255-20200127020255-00548.warc.gz"} |
https://stackoverflow.com/questions/10587621/how-to-print-to-paper-a-nicely-formatted-data-frame | # How to print (to paper) a nicely-formatted data frame
I'd like to print nicely-formatted data frames to paper, ideally from within a script. (I am trying to collect data using an instrument and automatically process and print it using an R script).
Right now I can write a data frame to a text file using write.table(), but this has two problems:
1. The resulting text file is poorly formatted (columns do not necessarily line up with their headings) and
2. I don't know how to print a text file from within R.
I'm looking more for general strategies than for specific code (although code would be great too!). Would Sweave be the most convenient solution? In principle can I use socketConnection() to print to a printer - and if so, where can I learn about how to use it (I didn't find the documentation to be very helpful).
• Are we talking MS Word I assume or is it a LaTeX paper? – Tyler Rinker May 14 '12 at 16:51
• Do you have LaTeX installed on your computer? I'm thinking a combination of xtable, sweave (or knitr), and possibly this: livedocs.adobe.com/acrobat_sdk/10/Acrobat10_HTMLHelp/wwhelp/… might help. But that does seem a little complex. I'm interested in seeing what others come up with for this. – Dason May 14 '12 at 16:53
• I actually don't want to place the data frame into a larger document - I just want to have a printed out piece of paper with a legible data frame, which I will then put into my lab notebook as a hard-copy record of the instrument output. – Drew Steen May 14 '12 at 16:59
• I don't have LaTeX installed on the machine, but I can do it easily enough I suppose. – Drew Steen May 14 '12 at 17:00
• system("lpr [filename]"), at least in the *nix world, may let you fire up the printer from within R. – Carl Witthoft May 14 '12 at 18:13
Here is a quick and easy possibility using grid.table from the gridExtra package:
library(gridExtra)
pdf("data_output.pdf", height=11, width=8.5)
grid.table(mtcars)
dev.off()
If your data doesn't fit on the page, you can reduce the text size grid.table(mtcars, gp=gpar(fontsize=8)). This may not be very flexible, nor easy to generalize or automate.
• @bdemarest, how do you put a title to this graph in pdf? – user1471980 Jan 14 '13 at 17:56
• @user1471980, One way to do this is grid.arrange(tableGrob(mtcars, gp=gpar(fontsize=6)), main="Main Title Here."). – bdemarest Jan 19 '13 at 20:35
• Is there a way to print a data frame has a very large number of rows that don't fit in just one page? – Nanami Mar 20 '13 at 23:26
• @Nanami, Try something like this: library(gridExtra); maxrow = 30; npages = ceiling(nrow(iris)/maxrow); pdf("iris_pages.pdf", height=11, width=8.5); for (i in 1:npages) {idx = seq(1+((i-1)*maxrow), i*maxrow); grid.newpage(); grid.table(iris[idx, ])}; dev.off() – bdemarest Mar 21 '13 at 2:48
• @Masi, mtcars is included in the datasets package in a standard R installation. It is loaded by default when you start a new R session. Try typing ?mtcars and mtcars at the R prompt to see what I mean. – bdemarest Oct 31 '16 at 5:04
I would suggest xtable in combination with LaTeX documents. Have a look at the examples in this pdf:
You could also directly combine this with Sweave or knitr.
• Please, no link only answer. It would be great to have a minimum code example with reproducible data and example output. – Léo Léopold Hertz 준영 Oct 30 '16 at 10:15
• I think the criticism should ahve been directed at the questioner. He offered no minimal reproducible example. The usual justification for not accepting link only answers doesn't seem to apply here. This question is 7 years old and the link to a CRAN-sited vignette appears quite stable. – IRTFM Sep 17 '19 at 1:04
Surprised nobody has mentioned the stargazer package for nice printing of data.
You can output a nice-looking text file:
stargazer(mtcars, type = 'text', out = 'out.txt')
============================================
Statistic N Mean St. Dev. Min Max
--------------------------------------------
mpg 32 20.091 6.027 10.400 33.900
cyl 32 6.188 1.786 4 8
disp 32 230.722 123.939 71.100 472.000
hp 32 146.688 68.563 52 335
drat 32 3.597 0.535 2.760 4.930
wt 32 3.217 0.978 1.513 5.424
qsec 32 17.849 1.787 14.500 22.900
vs 32 0.438 0.504 0 1
am 32 0.406 0.499 0 1
gear 32 3.688 0.738 3 5
carb 32 2.812 1.615 1 8
--------------------------------------------
Or even HTML:
stargazer(mtcars, type = 'html', out = 'out.html')
<table style="text-align:center"><tr><td colspan="6" style="border-bottom: 1px solid black"></td></tr><tr><td style="text-align:left">Statistic</td><td>N</td><td>Mean</td><td>St. Dev.</td><td>Min</td><td>Max</td></tr>
<tr><td colspan="6" style="border-bottom: 1px solid black"></td></tr><tr><td style="text-align:left">mpg</td><td>32</td><td>20.091</td><td>6.027</td><td>10.400</td><td>33.900</td></tr>
<tr><td style="text-align:left">cyl</td><td>32</td><td>6.188</td><td>1.786</td><td>4</td><td>8</td></tr>
<tr><td style="text-align:left">disp</td><td>32</td><td>230.722</td><td>123.939</td><td>71.100</td><td>472.000</td></tr>
<tr><td style="text-align:left">hp</td><td>32</td><td>146.688</td><td>68.563</td><td>52</td><td>335</td></tr>
<tr><td style="text-align:left">drat</td><td>32</td><td>3.597</td><td>0.535</td><td>2.760</td><td>4.930</td></tr>
<tr><td style="text-align:left">wt</td><td>32</td><td>3.217</td><td>0.978</td><td>1.513</td><td>5.424</td></tr>
<tr><td style="text-align:left">qsec</td><td>32</td><td>17.849</td><td>1.787</td><td>14.500</td><td>22.900</td></tr>
<tr><td style="text-align:left">vs</td><td>32</td><td>0.438</td><td>0.504</td><td>0</td><td>1</td></tr>
<tr><td style="text-align:left">am</td><td>32</td><td>0.406</td><td>0.499</td><td>0</td><td>1</td></tr>
<tr><td style="text-align:left">gear</td><td>32</td><td>3.688</td><td>0.738</td><td>3</td><td>5</td></tr>
<tr><td style="text-align:left">carb</td><td>32</td><td>2.812</td><td>1.615</td><td>1</td><td>8</td></tr>
<tr><td colspan="6" style="border-bottom: 1px solid black"></td></tr></table>
The printr package is a good option for printing data.frames, help pages, vignette listings, and dataset listings in knitr documents.
From the documentation page:
options(digits = 4)
set.seed(123)
x = matrix(rnorm(40), 5)
knitr::kable(x, digits = 2, caption = "A table produced by printr.")
• I've found this is the best option among all of the answers, if you're looking for printing a dataframe in a knitr-produced pdf. – snd Jan 14 '18 at 16:52
The grid.table solution will indeed be the quickest way to create PDF, but this may not be the optimal solution if you have a fairly long table. RStudio + knitr + longtable make it quite easy to create nicely formatted PDFs. What you'll need is something like:
\documentclass{article}
\usepackage{longtable}
\begin{document}
<<results='asis'>>=
library(xtable)
df = data.frame(matrix(rnorm(400), nrow=100))
xt = xtable(df)
print(xt,
tabular.environment = "longtable",
floating = FALSE
)
@
\end{document}
Please see this post for more details.
• This answer would be much much better with minimum example of data and output. Now, I feel it as a stub answer. – Léo Léopold Hertz 준영 Oct 30 '16 at 10:13
Not as fancy, but very utilitarian:
print.data.frame(iris)
• That gets it on screen, but does not show how to get that then onto paper. – Brian Diggs Sep 20 '13 at 20:42
The RStudio IDE gives another nice option to print out a data.table:
1. Open the data in the viewer, e.g. View(data_table) or via the GUI
2. Open the view in a separate window (icon at the top left corner: "Show in new window")
3. The separate window now supports a print dialog (incl. preview)
This works in RStudio V0.98.1103 (and probably newer versions)
• It looks like the print dialog for separate windows is gone with RStudio V0.99. – kirk Nov 13 '15 at 7:32
• You can still get it by right-clicking the view and selecting "Open Frame" (v0.99.887). – mpe Apr 14 '16 at 15:25
For long/wide tables you could use pander.
It will automatically split long tables into shorter parts that fit the page, e.g. using knitr insert this chunk into your Rmd file:
pander::pander(mtcars)
If you want something that looks more like Excel tables (even with editing options in html) then use rhandsontable. More info about usage and formatting in the vignette. You will need to knit your Rmd into an html file:
library(rhandsontable)
rhandsontable(mtcars)  # renders the data frame as an editable, Excel-like table widget
I came across this question when looking to do something similar. I found mention of the sink command elsewhere on stackoverflow that was useful in this context:
sink('myfile.txt')
print(mytable,right=F)
sink()
If you want to export as a png, you can do like this:
library(gridExtra)
png("test.png", height = 50*nrow(df), width = 200*ncol(df))
grid.table(df)
dev.off()
If you want to export as a pdf, you can do like this:
library(gridExtra)
pdf("test.pdf", height=11, width=10)
grid.table(df)
dev.off() | 2020-10-22 16:40:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4098103940486908, "perplexity": 3548.4447526528593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879673.14/warc/CC-MAIN-20201022141106-20201022171106-00137.warc.gz"} |
https://zbmath.org/authors/?q=ai%3Afabry.christian | ## Fabry, Christian
Author ID: fabry.christian Published as: Fabry, C.; Fabry, Christian; Fabry, Ch. External Links: MGP
Documents Indexed: 50 Publications since 1970 Co-Authors: 13 Co-Authors with 35 Joint Publications 310 Co-Co-Authors
### Co-Authors
14 single-authored 10 Fonda, Alessandro 6 Bonheure, Denis 5 Habets, Patrick 5 Munyamarere, François 3 De Coster, Colette 3 Mawhin, Jean L. 3 Smets, Didier 2 Berkovits, Juha 2 Fayyad, Dolly Khuwayri 1 Franchetti, Carlo 1 Manasevich, Raul F. 1 Nkashama, Mubenga N. 1 Ruiz, David
### Serials
5 Journal of Differential Equations 4 Rapport. Séminaire de Mathématique. Nouvelle Série 3 Differential and Integral Equations 3 Nonlinear Analysis. Theory, Methods & Applications 2 Journal of Mathematical Analysis and Applications 2 Nonlinearity 2 Annali di Matematica Pura ed Applicata. Serie Quarta 2 Archiv der Mathematik 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 2 Annales de la Société Scientifique de Bruxelles. Série I 2 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 2 Electronic Journal of Differential Equations (EJDE) 2 Discrete and Continuous Dynamical Systems 2 Bollettino della Unione Matematica Italiana. Series IV 1 International Journal of Non-Linear Mechanics 1 Bulletin of the London Mathematical Society 1 International Journal of Mathematics and Mathematical Sciences 1 Rendiconti dell’Istituto di Matematica dell’Università di Trieste 1 Atti della Accademia Nazionale dei Lincei. Serie Ottava. Rendiconti. Classe di Scienze Fisiche, Matematiche e Naturali 1 Topological Methods in Nonlinear Analysis 1 NoDEA. Nonlinear Differential Equations and Applications 1 Abstract and Applied Analysis 1 Electronic Journal of Qualitative Theory of Differential Equations 1 Portugaliae Mathematica. Nova Série 1 Advanced Nonlinear Studies 1 Communications on Pure and Applied Analysis
### Fields
41 Ordinary differential equations (34-XX) 15 Operator theory (47-XX) 7 Mechanics of particles and systems (70-XX) 6 Partial differential equations (35-XX) 3 Dynamical systems and ergodic theory (37-XX) 2 Global analysis, analysis on manifolds (58-XX) 1 Real functions (26-XX) 1 Special functions (33-XX) 1 General topology (54-XX) 1 Algebraic topology (55-XX)
### Citations contained in zbMATH Open
36 Publications have been cited 474 times in 352 Documents
A multiplicity result for periodic solutions of forced nonlinear second order ordinary differential equations. Zbl 0586.34038
Fabry, C.; Mawhin, J.; Nkashama, M. N.
1986
Nonresonance conditions for fourth order nonlinear boundary value problems. Zbl 0810.34017
de Coster, C.; Fabry, C.; Munyamarere, F.
1994
Landesman-Lazer conditions for periodic boundary value problems with asymmetric nonlinearities. Zbl 0816.34014
Fabry, C.
1995
Nonlinear resonance in asymmetric oscillators. Zbl 0915.34033
Fabry, Christian; Fonda, Alessandro
1998
Periodic solutions of second order differential equations with a $$p$$- Laplacian and asymmetric nonlinearities. Zbl 0824.34026
1992
Oscillations of a forced asymmetric oscillator at resonance. Zbl 0956.34028
Fabry, C.; Mawhin, J.
2000
Periodic solutions of second order differential equations with superlinear asymmetric nonlinearities. Zbl 0779.34019
Fabry, C.; Habets, P.
1993
Periodic solutions of forced isochronous oscillators at resonance. Zbl 1021.34032
Bonheure, D.; Fabry, C.; Smets, D.
2002
Periodic solutions of nonlinear differential equations with double resonance. Zbl 0729.34025
Fabry, C.; Fonda, A.
1990
Upper and lower solutions for second-order boundary value problems with nonlinear boundary conditions. Zbl 0612.34015
Fabry, Ch.; Habets, P.
1986
The Picard boundary value problem for nonlinear second order vector differential equations. Zbl 0439.34018
Fabry, Ch.; Habets, P.
1981
Periodic motions in impact oscillators with perfectly elastic bounces. Zbl 1019.34044
Bonheure, D.; Fabry, C.
2002
Structure of the Fučík spectrum and existence of solutions for equations with asymmetric nonlinearities. Zbl 0987.35118
Ben-Naoum, A. K.; Fabry, C.; Smets, D.
2001
Nagumo conditions for systems of second-order differential equations. Zbl 0604.34002
Fabry, C.
1985
Periodic solutions of perturbed isochronous Hamiltonian systems at resonance. Zbl 1080.34020
Fabry, Christian; Fonda, Alessandro
2005
Unbounded motions of perturbed isochronous Hamiltonian systems at resonance. Zbl 1088.34027
Fabry, Christian; Fonda, Alessandro
2005
Behavior of forced asymmetric oscillators at resonance. Zbl 0977.34030
Fabry, C.
2000
Properties of solutions of some forced nonlinear oscillators at resonance. Zbl 0987.34028
Fabry, C.; Mawhin, J.
2000
Equations with a $$p$$-Laplacian and an asymmetric nonlinear term. Zbl 0998.34035
Fabry, Christian; Manásevich, Raul
2001
Nonlinear equations at resonance and generalized eigenvalue problems. Zbl 0756.34079
Fabry, C.; Fonda, A.
1992
Nonlinear equations with growth restrictions on the nonlinear term. Zbl 0326.47060
Fabry, C.; Franchetti, C.
1976
Problems at resonance for equations with periodic nonlinearities. Zbl 1046.34034
Bonheure, D.; Fabry, C.; Ruiz, D.
2003
An extension of the topological degree in Hilbert space. Zbl 1117.47048
Berkovits, J.; Fabry, C.
2005
Bifurcations from infinity in asymmetric nonlinear oscillators. Zbl 0979.34033
Fabry, Christian; Fonda, Alessandro
2000
Large-amplitude oscillations of a nonlinear asymmetric oscillator with damping. Zbl 0996.34032
Fabry, C.
2001
Littlewood’s problem for isochronous oscillators. Zbl 1194.34057
Bonheure, Denis; Fabry, Christian
2009
Unbounded solutions of forced isochronous oscillators at resonance. Zbl 1030.70011
Bonheure, D.; Fabry, C.
2002
An elementary proof of Gorny’s inequality. Zbl 0635.26009
Fabry, C.
1987
Semilinear equations at resonance with non-symmetric linear part. Zbl 0859.47041
Fabry, C.; Fonda, A.; Munyamarere, F.
1993
Resonance with respect to the Fučík spectrum. Zbl 0977.70015
Ben-Naoum, A. K.; Fabry, C.; Smets, D.
2000
Semilinear problems with a non-symmetric linear part having an infinite dimensional kernel. Zbl 1082.47055
Berkovits, J.; Fabry, C.
2004
A variational approach to resonance for asymmetric oscillators. Zbl 1137.34016
Bonheure, Denis; Fabry, Christian
2007
Fučík spectra for vector equations. Zbl 0986.34069
Fabry, C.
2000
Asymmetric nonlinear oscillators. Zbl 0993.34037
Fabry, Christian; Fonda, Alessandro
2001
Inequalities verified by asymmetric nonlinear operators. Zbl 0932.47049
Fabry, C.
1998
Existence et stabilité de solutions périodiques pour des équations différentielles quasi-linéaires d’ordre n. Zbl 0208.11903
Fabry, C.
1970
Littlewood’s problem for isochronous oscillators. Zbl 1194.34057
Bonheure, Denis; Fabry, Christian
2009
A variational approach to resonance for asymmetric oscillators. Zbl 1137.34016
Bonheure, Denis; Fabry, Christian
2007
Periodic solutions of perturbed isochronous Hamiltonian systems at resonance. Zbl 1080.34020
Fabry, Christian; Fonda, Alessandro
2005
Unbounded motions of perturbed isochronous Hamiltonian systems at resonance. Zbl 1088.34027
Fabry, Christian; Fonda, Alessandro
2005
An extension of the topological degree in Hilbert space. Zbl 1117.47048
Berkovits, J.; Fabry, C.
2005
Semilinear problems with a non-symmetric linear part having an infinite dimensional kernel. Zbl 1082.47055
Berkovits, J.; Fabry, C.
2004
Problems at resonance for equations with periodic nonlinearities. Zbl 1046.34034
Bonheure, D.; Fabry, C.; Ruiz, D.
2003
Periodic solutions of forced isochronous oscillators at resonance. Zbl 1021.34032
Bonheure, D.; Fabry, C.; Smets, D.
2002
Periodic motions in impact oscillators with perfectly elastic bounces. Zbl 1019.34044
Bonheure, D.; Fabry, C.
2002
Unbounded solutions of forced isochronous oscillators at resonance. Zbl 1030.70011
Bonheure, D.; Fabry, C.
2002
Structure of the Fučík spectrum and existence of solutions for equations with asymmetric nonlinearities. Zbl 0987.35118
Ben-Naoum, A. K.; Fabry, C.; Smets, D.
2001
Equations with a $$p$$-Laplacian and an asymmetric nonlinear term. Zbl 0998.34035
Fabry, Christian; Manásevich, Raul
2001
Large-amplitude oscillations of a nonlinear asymmetric oscillator with damping. Zbl 0996.34032
Fabry, C.
2001
Asymmetric nonlinear oscillators. Zbl 0993.34037
Fabry, Christian; Fonda, Alessandro
2001
Oscillations of a forced asymmetric oscillator at resonance. Zbl 0956.34028
Fabry, C.; Mawhin, J.
2000
Behavior of forced asymmetric oscillators at resonance. Zbl 0977.34030
Fabry, C.
2000
Properties of solutions of some forced nonlinear oscillators at resonance. Zbl 0987.34028
Fabry, C.; Mawhin, J.
2000
Bifurcations from infinity in asymmetric nonlinear oscillators. Zbl 0979.34033
Fabry, Christian; Fonda, Alessandro
2000
Resonance with respect to the Fučík spectrum. Zbl 0977.70015
Ben-Naoum, A. K.; Fabry, C.; Smets, D.
2000
Fučík spectra for vector equations. Zbl 0986.34069
Fabry, C.
2000
Nonlinear resonance in asymmetric oscillators. Zbl 0915.34033
Fabry, Christian; Fonda, Alessandro
1998
Inequalities verified by asymmetric nonlinear operators. Zbl 0932.47049
Fabry, C.
1998
Landesman-Lazer conditions for periodic boundary value problems with asymmetric nonlinearities. Zbl 0816.34014
Fabry, C.
1995
Nonresonance conditions for fourth order nonlinear boundary value problems. Zbl 0810.34017
de Coster, C.; Fabry, C.; Munyamarere, F.
1994
Periodic solutions of second order differential equations with superlinear asymmetric nonlinearities. Zbl 0779.34019
Fabry, C.; Habets, P.
1993
Semilinear equations at resonance with non-symmetric linear part. Zbl 0859.47041
Fabry, C.; Fonda, A.; Munyamarere, F.
1993
Periodic solutions of second order differential equations with a $$p$$-Laplacian and asymmetric nonlinearities. Zbl 0824.34026
1992
Nonlinear equations at resonance and generalized eigenvalue problems. Zbl 0756.34079
Fabry, C.; Fonda, A.
1992
Periodic solutions of nonlinear differential equations with double resonance. Zbl 0729.34025
Fabry, C.; Fonda, A.
1990
An elementary proof of Gorny’s inequality. Zbl 0635.26009
Fabry, C.
1987
A multiplicity result for periodic solutions of forced nonlinear second order ordinary differential equations. Zbl 0586.34038
Fabry, C.; Mawhin, J.; Nkashama, M. N.
1986
Upper and lower solutions for second-order boundary value problems with nonlinear boundary conditions. Zbl 0612.34015
Fabry, Ch.; Habets, P.
1986
Nagumo conditions for systems of second-order differential equations. Zbl 0604.34002
Fabry, C.
1985
The Picard boundary value problem for nonlinear second order vector differential equations. Zbl 0439.34018
Fabry, Ch.; Habets, P.
1981
Nonlinear equations with growth restrictions on the nonlinear term. Zbl 0326.47060
Fabry, C.; Franchetti, C.
1976
Existence et stabilité de solutions périodiques pour des équations différentielles quasi-linéaires d’ordre n. Zbl 0208.11903
Fabry, C.
1970
### Cited by 327 Authors
15 Zanolin, Fabio 14 Fonda, Alessandro 13 Mawhin, Jean L. 13 Qian, Dingbian 12 Papageorgiou, Nikolaos S. 10 Yang, Xiaojing 8 Boscaggin, Alberto 8 Fabry, Christian 8 Garrione, Maurizio 8 Li, Yongxiang 8 Omari, Pierpaolo 7 Dambrosio, Walter 7 Sfecci, Andrea 7 Wang, Yuanming 7 Wang, Zaihong 6 Ge, Weigao 6 Li, Xiong 6 Lu, Shiping 5 Cabada, Alberto 5 Holubová, Gabriela 5 Liu, Bin 5 Nkashama, Mubenga N. 5 O’Regan, Donal 5 Ortega, Rafael 5 Rachůnková, Irena 5 Wang, Guangwa 5 Wang, Zhiguo 5 Zhou, Mingru 4 Andres, Jan 4 Li, Keqiang 4 Li, Shujie 4 Liu, Qihuai 4 Ma, Ruyun 4 Nečesal, Petr 4 Rynne, Bryan P. 4 Torres, Pedro José 4 Wang, Youyu 4 Wei, Zhongli 4 Wu, Xian 3 An, Yukun 3 Bai, Zhanbing 3 Bereanu, Cristian 3 Capietto, Anna 3 Chen, Hongbin 3 Dong, Yujun 3 Gasiński, Leszek 3 Habets, Patrick 3 Jiang, Meiyue 3 Khan, Rahmat Ali 3 Kyritsi, Sophia Th. 3 Li, Yong 3 Liu, Zhaoli 3 Ma, Tiantian 3 Malaguti, Luisa 3 Mariani, Maria Cristina 3 Minhós, Feliz Manuel 3 Pang, Changci 3 Rebelo, Carlota 3 Sovrano, Elisa 3 Sun, Li 3 Yang, Xue 3 Yang, Zhilin 3 Yu, Xingchen 2 Agarwal, Ravi P. 2 Aizicovici, Sergiu 2 Berkovits, Juha 2 Binding, Paul Anthony 2 Chen, Shaowei 2 Cheung, Wing-Sum 2 Chu, Jifeng 2 Colasuonno, Francesca 2 Ding, Wei 2 Drábek, Pavel 2 Feltrin, Guglielmo 2 Hu, Shouchuan 2 Iannacci, Rita 2 Jiang, Daqing 2 Kourogenis, Nikolaos C. 2 Kožušníková, Martina 2 Le, Vy Khoi 2 Li, Chong 2 Li, Sun 2 Li, Yi 2 Liang, Shuqing 2 Liu, Moxin 2 Liu, Yuji 2 Llibre, Jaume 2 Lo, Kueiming 2 López Pouso, Rodrigo 2 Ma, Shiwang 2 Manasevich, Raul F. 2 Matzakos, Nikolaos M. 2 Molle, Riccardo 2 Mukhigulashvili, S. V. 2 Noris, Benedetta 2 Ntouyas, Sotiris K. 2 Pan, Lijun 2 Passaseo, Donato 2 Perán, Juan 2 Reichel, Wolfgang ...and 227 more Authors
### Cited in 98 Serials
57 Journal of Mathematical Analysis and Applications 55 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 30 Journal of Differential Equations 13 Nonlinear Analysis. Theory, Methods & Applications 9 Acta Mathematica Sinica. English Series 9 Boundary Value Problems 8 Computers & Mathematics with Applications 8 Applied Mathematics and Computation 7 Applied Mathematics Letters 6 Journal of Computational and Applied Mathematics 6 Proceedings of the American Mathematical Society 6 NoDEA. Nonlinear Differential Equations and Applications 6 Advanced Nonlinear Studies 5 Rocky Mountain Journal of Mathematics 5 Annali di Matematica Pura ed Applicata. Serie Quarta 5 Czechoslovak Mathematical Journal 5 Advances in Difference Equations 4 Journal of Dynamics and Differential Equations 4 Abstract and Applied Analysis 3 ZAMP. Zeitschrift für angewandte Mathematik und Physik 3 Zeitschrift für Analysis und ihre Anwendungen 3 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 3 Calculus of Variations and Partial Differential Equations 3 Journal of Inequalities and Applications 3 Communications in Contemporary Mathematics 3 Qualitative Theory of Dynamical Systems 2 Mathematische Nachrichten 2 Acta Mathematicae Applicatae Sinica. English Series 2 Mathematical and Computer Modelling 2 Science in China. Series A 2 Journal of Global Optimization 2 Discrete and Continuous Dynamical Systems 2 Differential Equations 2 Communications on Pure and Applied Analysis 2 Mediterranean Journal of Mathematics 2 Open Mathematics 1 Applicable Analysis 1 Bulletin of the Australian Mathematical Society 1 Indian Journal of Pure & Applied Mathematics 1 Journal of Mathematical Physics 1 Chaos, Solitons and Fractals 1 Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica 1 Archiv der Mathematik 1 Glasgow Mathematical Journal 1 International Journal of Mathematics and Mathematical Sciences 1 International Journal of Circuit Theory and Applications 1 Journal of Functional Analysis 1 Nagoya Mathematical Journal 1 Numerical Functional Analysis and Optimization 1 Rendiconti del Seminario Matemàtico e Fisico di Milano 1 Results in Mathematics 1 SIAM Journal on Control and Optimization 1 Tôhoku Mathematical Journal. Second Series 1 Transactions of the American Mathematical Society 1 Chinese Annals of Mathematics. Series B 1 Acta Applicandae Mathematicae 1 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 1 Applied Numerical Mathematics 1 Mathematica Bohemica 1 Applications of Mathematics 1 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences 1 Automation and Remote Control 1 Communications in Partial Differential Equations 1 Linear Algebra and its Applications 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Acta Mathematica Sinica. 
New Series 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Journal of Nonlinear Science 1 Russian Mathematics 1 Set-Valued Analysis 1 Topological Methods in Nonlinear Analysis 1 Filomat 1 Advances in Differential Equations 1 Journal of Difference Equations and Applications 1 Differential Equations and Dynamical Systems 1 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 1 Mathematical Physics, Analysis and Geometry 1 Positivity 1 Discrete Dynamics in Nature and Society 1 Journal of the European Mathematical Society (JEMS) 1 Journal of Dynamical and Control Systems 1 Annales Mathematicae Silesianae 1 The ANZIAM Journal 1 Nonlinear Analysis. Real World Applications 1 Mathematical Modelling and Analysis 1 Differentsial’nye Uravneniya i Protsessy Upravleniya 1 Journal of Applied Mathematics and Computing 1 Journal of Function Spaces and Applications 1 Fixed Point Theory and Applications 1 Complex Variables and Elliptic Equations 1 Journal of Fixed Point Theory and Applications 1 Discrete and Continuous Dynamical Systems. Series S 1 International Journal of Differential Equations 1 Science China. Mathematics 1 Journal of Applied Analysis and Computation 1 Axioms 1 Mathematics for Applications 1 AIMS Mathematics
### Cited in 20 Fields
298 Ordinary differential equations (34-XX) 90 Operator theory (47-XX) 45 Dynamical systems and ergodic theory (37-XX) 40 Partial differential equations (35-XX) 19 Global analysis, analysis on manifolds (58-XX) 9 Mechanics of particles and systems (70-XX) 8 Mechanics of deformable solids (74-XX) 7 Difference and functional equations (39-XX) 7 Calculus of variations and optimal control; optimization (49-XX) 7 Numerical analysis (65-XX) 3 Functional analysis (46-XX) 2 General topology (54-XX) 2 Algebraic topology (55-XX) 2 Systems theory; control (93-XX) 2 Information and communication theory, circuits (94-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Approximations and expansions (41-XX) 1 Probability theory and stochastic processes (60-XX) 1 Quantum theory (81-XX) 1 Operations research, mathematical programming (90-XX) | 2022-11-28 21:00:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6366579532623291, "perplexity": 7005.372357201596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00457.warc.gz"} |
https://www.qb365.in/materials/stateboard/12th-computer-science-algorithmic-strategies-two-marks-questions-3061.html | #### Algorithmic Strategies Two Marks Questions
12th Standard EM
Computer Science
Time : 00:45:00 Hrs
Total Marks : 30
15 x 2 = 30
1. What is an Algorithm?
2. Define Pseudo code
3. Who is an Algorist?
4. What is Sorting?
5. What is searching? Write its types.
6. Give an example of data structures
7. What is an algorithmic strategy? Give an example.
8. What is algorithmic solution?
9. How is the efficiency of an algorithm defined?
10. How can the analysis of algorithms and performance evaluation be divided? Explain.
11. Name the two factors, which decide the efficiency of an algorithm.
12. Give an example of how the time efficiency of an algorithm is measured.
13. What is an algorithmic strategy?
14. Write a note on Big omega asymptotic notation
15. Write a note on memorization. | 2019-10-20 12:26:25 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8255427479743958, "perplexity": 5147.991555059095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986707990.49/warc/CC-MAIN-20191020105426-20191020132926-00373.warc.gz"} |
https://swmath.org/software/22883 | # Polyhedral
GAP package Polyhedral. The package polyhedral is designed to be used for doing all kinds of computations related to polytopes and use their symmetry groups in the course of the computation. The package Polyhedral is devoted to polytopes and lattices. Its main functionalities are: Computing dual description of polytopes by using symmetries for reducing the size of the computation. Compute the automorphism group of a polytope. Compute the volume of a polytope. Compute the K-skeleton of a polytope. Compute the Wythoff construction of a polytope. Compute the Delaunay tesselation corresponding to a lattice of Rn. Deal with L-type domains, i.e. spaces of lattices. Compute with unimodular vector systems. Recognize affine and spherical Coxeter Dynkin diagrams. Enumerate perfect forms in T-spaces.
## References in zbMATH (referenced in 4 articles )
Sorted by year (citations) | 2019-06-19 06:05:16 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8134969472885132, "perplexity": 1780.257937351434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00024.warc.gz"} |
http://class-specific.com/csf/csf_book/node79.html | ### Module
The function software/module_A_chisq.m implements a module for the feature calculation (5.1). The module uses a floating reference hypothesis for normalization (Section 2.3.4). The function software/module_A_chisq_synth.m implements UMS and function software/module_A_chisq_test.m tests both functions.
Example 8 We now provide an example of the application of the SPA to a linear combination of exponentials. We performed an acid test (see Section 2.3.8) by generating 1000 samples of a 100-by-1 vector of independent exponentially distributed RVs. The elements of were scaled such that the expected value of the -th element was , . Let the PDF of under these conditions be denoted by . Although the elements of have different means, they are independent, so is easily obtained from the joint PDF from product of chi-square distributions with 2 degrees of freedom (Section 16.1.2).
Next, we applied the linear transformation , where
Notice that the columns of form a linear subspace which contains both the special scaling function applied to under as well as constant scaling under . We can assume, therefore, that will be approximately sufficient for vs. . We then estimated the PDF using a Gaussian mixture model (Section 13.2.1).
Using the module software/module_A_chisq.m, we obtained the projected PDF:
where
Projected PDF values are plotted against the true values of in Figure 5.1. The agreement is very close. The script software/module_A_chisq_test.m runs the example with the following syntax:
module_A_chisq_test('acid',100,2,2);
We then changed matrix to include only the first column (a constant). This makes a scalar and no longer an approximate sufficient statistic for vs. . The result is shown in Figure 5.2. Note the worsening of the error. The script software/module_A_chisq_test.m runs this test with the following syntax:
module_A_chisq_test('acid',100,1,2);
Baggenstoss 2017-05-19 | 2017-12-17 06:05:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8389829993247986, "perplexity": 823.8750676696621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948593526.80/warc/CC-MAIN-20171217054825-20171217080825-00149.warc.gz"} |
http://openstudy.com/updates/4fe36224e4b06e92b8722920 | ## FoolForMath Group Title Fool's problem of the day, If $$f(x,y)$$ is a function that gives the remainder when $$x$$ divided by $$y$$. Prove that the value of $$f(124^{24},99) - f(173^{24},99) =0$$. Problems credits: IMS india. 2 years ago 2 years ago
1. BTaylor
f(124^24,99) = 37 f(173^24,99) = 37 37 - 37 = 0
2. BTaylor
wolframalpha gives that as a mixed fraction.
3. FoolForMath
I changed the problem statement to avoid the obvious electronic aided solution(s) :)
4. BTaylor
arggh...
5. ParthKohli
I think we have to search for a pattern here, and we must have to use a calculator here to some extent.
6. harharf
^ calculator is only as smart as the idiot pushing the buttons ^_^
7. m_charron2
Does it have anything to do with the fact that 124^24mod99 = 25^24mod99 = 31^12 mod 99 and so on so forth?
8. asnaseer
is this a valid method of solving this?\begin{align} 124&=99+25\\ \therefore124^{24}&=(99+25)^{24}\\ \therefore124^{24}|99&=(99+25)^{24}|99=25^{24}|99\\ \end{align}similarly:\begin{align} 173&=2*99-25\\ \therefore173^{24}&=(2*99-25)^{24}\\ \therefore173^{24}|99&=(2*99-25)^{24}|99=25^{24}|99 \end{align}therefore:$124^{24}|99-173^{24}|99=25^{24}|99-25^{24}|99=0$
9. FoolForMath
That's how I did it @asnaseer. Well done!
10. KingGeorge
How about... Notice that this function is the same as the mod function We want to show $124^{24}\equiv174^{24}\pmod{99}$Reduce, and get that this is equivalent to asking $25^{24}\equiv74^{24}\pmod{99}$Since $$74\equiv-25\pmod{99}$$, we can simplify to $25^{24}\overset{?}\equiv(-25)^{24}\pmod{99}$Since we have an even exponent, this is clearly true. This is basically the same thing asnaseer did.
11. KingGeorge
I should have written "173" in that first line.
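(Added note: the arithmetic above is easy to check numerically. A minimal Python sketch using the built-in three-argument pow, which performs modular exponentiation, confirms that both remainders equal 37, as BTaylor reported.)

    r1 = pow(124, 24, 99)   # remainder of 124^24 on division by 99
    r2 = pow(173, 24, 99)   # remainder of 173^24 on division by 99
    print(r1, r2, r1 - r2)  # 37 37 0

    # The reduction used in the proofs above: 124 ≡ 25 and 173 ≡ -25 (mod 99),
    # so both 24th powers are congruent to 25^24 modulo 99.
    assert r1 == r2 == pow(25, 24, 99)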
12. FoolForMath
That's seems like a valid approach. | 2014-08-02 02:39:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7351139783859253, "perplexity": 3080.109563174755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276250.57/warc/CC-MAIN-20140728011756-00044-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3144333/question-about-a-simple-field-extensions-equality | # Question about a Simple Field Extensions Equality
Let $$E\supseteq F$$ be an extension of fields. Show that $$\forall u \in E,$$ and nonzero $$a\in F,$$ $$F(u)=F(au)$$.
My first instinct was to argue with the fact that $$F(u)$$ is the smallest subfield that contains both $$F$$ and $$u$$, so the "$$\subseteq$$" inclusion is clear. Would the same approach work for the reverse inclusion? I'm not so sure if this method would work this time around, so any other suggestions would be appreciated!
You need to show $$F(u)\subseteq F(au)$$ and $$F(au)\subseteq F(u)$$.
Because $$F(au)$$ is the smallest subfield of $$E$$ containing $$F$$ and $$au$$, to show $$F(au)\subseteq F(u)$$ it suffices to show that $$au\in F(u)$$; but $$u\in F(u)$$ and $$a\in F\subseteq F(u)$$, so their product should be in $$F(u)$$ as well.
Now use similar logic to show $$F(u)\subseteq F(au)$$, using the fact that $$a$$ is invertible in $$F$$.
note that $$a^{-1}\in F\subset F(au)$$, so $$a^{-1}au=u\in F(au)$$, therefore $$F(u)=F(au)$$ | 2019-04-19 20:47:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8569562435150146, "perplexity": 55.283126028994765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528058.3/warc/CC-MAIN-20190419201105-20190419222049-00053.warc.gz"} |
https://www.deepdyve.com/lp/springer_journal/on-a-transport-problem-and-monoids-of-non-negative-integers-WlZ8KvYK5O | # On a transport problem and monoids of non-negative integers
, Volume 92 (4) – Jun 5, 2018
10 pages
/lp/springer_journal/on-a-transport-problem-and-monoids-of-non-negative-integers-WlZ8KvYK5O
Publisher
Springer International Publishing
Copyright © 2018 by Springer International Publishing AG, part of Springer Nature
Subject
Mathematics; Analysis; Combinatorics
ISSN
0001-9054
eISSN
1420-8903
D.O.I.
10.1007/s00010-018-0572-5
### Abstract
A problem about how to transport profitably a group of cars leads us to studying the set T formed by the integers n such that the system of inequalities, with non-negative integer coefficients, $$a_1x_1 +\cdots + a_px_p + \alpha \le n \le b_1x_1 +\cdots + b_px_p - \beta,$$ has at least one solution in $${\mathbb N}^p$$. We prove that $$T\cup \{0\}$$ is a submonoid of $$({\mathbb N},+)$$ and, moreover, we give algorithmic processes to compute T.
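The algorithmic processes mentioned in the abstract are not reproduced here. Purely as an illustration of the definition of T, a brute-force membership test for small instances could look like the following Python sketch; it assumes every coefficient a_i is at least 1 (so the search over x can be bounded by n), and the example coefficients are invented.

    from itertools import product

    def in_T(n, a, b, alpha, beta):
        # Is there x in N^p with a.x + alpha <= n <= b.x - beta ?
        p = len(a)
        for x in product(range(n + 1), repeat=p):   # a_i >= 1 bounds each x_i by n
            lo = sum(ai * xi for ai, xi in zip(a, x)) + alpha
            hi = sum(bi * xi for bi, xi in zip(b, x)) - beta
            if lo <= n <= hi:
                return True
        return False

    # Invented example: a = (2, 3), b = (5, 7), alpha = 1, beta = 2
    print([n for n in range(30) if in_T(n, (2, 3), (5, 7), 1, 2)])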
### Journal
aequationes mathematicae, Springer Journals
Published: Jun 5, 2018
PubMed
Create lists to
Export lists, citations | 2018-08-20 10:53:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5320591926574707, "perplexity": 1772.320638651471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216333.66/warc/CC-MAIN-20180820101554-20180820121554-00221.warc.gz"} |
https://socratic.org/questions/58b7a9cbb72cff613541336b | # Question 1336b
Mar 5, 2017
$\text{H}_2\text{C}_2\text{O}_{4(aq)} + \text{Ca(OH)}_{2(aq)} \rightarrow \text{CaC}_2\text{O}_{4(s)} \downarrow + 2\text{H}_2\text{O}_{(l)}$
#### Explanation:
The unbalanced chemical equation should look like this
$\text{H}_2\text{C}_2\text{O}_{4(aq)} + \text{Ca(OH)}_{2(aq)} \rightarrow \text{CaC}_2\text{O}_{4(s)} \downarrow + \text{H}_2\text{O}_{(l)}$
This reaction has oxalic acid, $\text{H}_2\text{C}_2\text{O}_4$, and calcium hydroxide, $\text{Ca(OH)}_2$, as the reactants and calcium oxalate, $\text{CaC}_2\text{O}_4$, and water as the products.
Calcium hydroxide is not very soluble in water, but the amount that does dissolve dissociates completely to produce calcium cations, ${\text{Ca}}^{2 +}$, and hydroxide anions, ${\text{OH}}^{-}$.
You can thus say that you're dealing with a neutralization reaction between a weak acid and a strong base.
You can balance this equation by taking a look at the ions involved in the reaction.
$2\text{H}^{+}_{(aq)} + \text{C}_2\text{O}^{2-}_{4(aq)} + \text{Ca}^{2+}_{(aq)} + 2\text{OH}^{-}_{(aq)} \rightarrow \text{CaC}_2\text{O}_{4(s)} \downarrow + \text{H}_2\text{O}_{(l)}$
Notice that the $1$ calcium cation and $1$ oxalate anion, $\text{C}_2\text{O}_4^{2-}$, present on the reactants' side are accounted for on the products' side.
This means that all you have to do in order to balance this chemical equation is to balance the hydrogen and oxygen atoms.
Excluding the aforementioned oxalate anion, you have
• $4 \times \text{H}$ on the reactants' side $\to 2\text{H}^{+} + 2\text{OH}^{-}$
• $2 \times \text{H}$ on the products' side $\to \text{H}_2\text{O}$
and
• $2 \times \text{O}$ on the reactants' side $\to 2\text{OH}^{-}$
• $1 \times \text{O}$ on the products' side $\to \text{H}_2\text{O}$
You can thus balance the hydrogen and oxygen atoms by adding a coefficient of $2$ to the water molecule.
The balanced chemical equation will thus be
$\textcolor{\mathrm{da} r k g r e e n}{\underline{\textcolor{b l a c k}{{\text{H"_ 2"C"_ 2"O"_ (4(aq)) + "Ca"("OH")_ (2(aq)) -> "CaC"_ 2"O"_ (4(s)) darr + 2"H"_ 2"O}}_{\left(l\right)}}}}$ | 2019-09-20 16:10:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 21, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8474399447441101, "perplexity": 2804.5137210700673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574050.69/warc/CC-MAIN-20190920155311-20190920181311-00302.warc.gz"} |
http://documenta.sagemath.org/vol-kato/bloch_esnault.dm.html | #### DOCUMENTA MATHEMATICA, Extra Volume: Kazuya Kato's Fiftieth Birthday (2003), 131-155
Spencer Bloch and Hélène Esnault
A notion of additive dilogarithm for a field $k$ is introduced, based on the $K$-theory and higher Chow groups of the affine line relative to $2(0)$. Analogues of the $K_2$-regulator, the polylogarithm Lie algebra, and the $\ell$-adic realization of the dilogarithm motive are discussed. The higher Chow groups of $0$-cycles in this theory are identified with the Kähler differential forms $\Omega^*_k$. It is hoped that these results will serve as a guide in developing a theory of contravariant motivic cohomology with modulus, modelled on the generalized Jacobians of Rosenlicht and Serre. | 2017-08-17 15:37:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7218807935714722, "perplexity": 887.6253355326005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103579.21/warc/CC-MAIN-20170817151157-20170817171157-00049.warc.gz"} |
https://codereview.stackexchange.com/questions/114523/polymorphic-customer-classes-for-employees-and-affiliates | # Polymorphic customer classes for employees and affiliates
I was trying to solve a simple problem:
• If the user is an employee of the store, he gets a 30% discount
• If the user is an affiliate of the store, he gets a 10% discount
I came up with the following :
The abstract super class :
abstract class Customer {
String cid;
String customer_name;
double fare;
Customer(){
this.cid="NA";
this.customer_name="NA";
this.fare=0.0d;
}
protected String getCid() {
return cid;
}
protected String getCustomer_name() {
return customer_name;
}
protected abstract double getFare();
protected abstract void printDetails();
}
The child class :
class CustomerAffiliated extends Customer{
CustomerAffiliated(String cid,String customer_name,double fare){
super();
this.cid=cid;
this.customer_name=customer_name;
this.fare=fare;
}
@Override
protected double getFare() {
return this.fare-((10.0/100.0) * this.fare);
}
@Override
protected void printDetails() {
System.out.println(this.getCid());
System.out.println(this.getCustomer_name());
System.out.println(this.getFare());
}
}
And likewise the other child class. Calling the methods something like:
Customer c;
c=new CustomerAffiliated("1","a",100);
c.printDetails();
Am I doing it the right way? I did not prefer composition since this is an "is a" relation. Moreover, I am able to use all the fields in the parent class. Is this the correct approach?
Your whitespaces are all messed up, sometimes there are spaces, sometimes not, sometimes there's a new line after a function declaration, sometimes not. Be consistent, even better, use a code formatter.
abstract class Customer {
String cid;
String customer_name;
double fare;
Why is everything package-private? That class should most likely be public and the variables private or protected.
String cid;
String customer_name;
Both names are not very good:
1. The first one is abbreviated, the second isn't, be consistent.
2. The customer prefix is superfluous, as it is a field in the class Customer.
double fare;
You're representing money with a double, this is dangerous! Use a BigDecimal or a similar class instead.
protected String getCid() {
protected String getCustomer_name() {
protected abstract double getFare();
protected abstract void printDetails();
Why are all these methods protected, shouldn't they be public?
CustomerAffiliated(String cid,String customer_name,double fare){
super();
this.cid=cid;
this.customer_name=customer_name;
this.fare=fare;
}
There should be a constructor in the Customer class which accepts these values. This makes sure that less knowledge is used when extending the class; additionally, you could transform the values in the original constructor.
@Override
protected void printDetails() {
System.out.println(this.getCid());
System.out.println(this.getCustomer_name());
System.out.println(this.getFare());
}
Printing directly to stdout is bad, you should return a String so that the system which invokes the method can decide where the output goes to. Ideally you could remove this function favor of overriding toString.
Customer c;
c=new CustomerAffiliated("1","a",100);
c.printDetails();
Why assign it on the next line if you could do it on the same line? Why abbreviate the variable name? It only makes the code unnecessarily hard to read; imagine the following, exaggerated, example:
for (Customer c : l) {
double f = c.getFare();
if (f > 0) {
}
}
Not exactly easy to read, is it? You can't take one line and take a guess what's going on. Let's try that again with long names:
for (Customer customer : customers) {
double customerFare = customer.getFare();
if (customerFare > 0) {
}
}
Your structure makes some sense, depending on what you're trying to achieve. It's hard to say if this is the best approach without knowing anything about the rest of the system. But as a more generic approach, having a Customer class with a discount and type field would be more efficient in the real world (as most of the time the data for customers comes from a database which has no idea about your class structure).
public class Customer
public String getId()
public String getName()
public BigDecimal getFare()
public BigDecimal getDiscount()
public CustomerType getType()
public enum CustomerType
NORMAL
AFFILIATED
This is not without drawbacks itself, but reduces the need to know about and create different classes. Technically you could also hardcode the discount value into the enum, always depending on what you're trying to achieve. | 2021-12-06 05:56:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20508122444152832, "perplexity": 3664.5678882550337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363290.39/warc/CC-MAIN-20211206042636-20211206072636-00038.warc.gz"} |
https://www.nature.com/articles/s41567-020-0890-0?error=cookies_not_supported&code=a43c65b9-88b0-4f6a-bd9f-ea623660fe8a | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Tunable bandgap renormalization by nonlocal ultra-strong coupling in nanophotonics
## Abstract
In quantum optics, great effort is being invested in enhancing the interaction of quantum emitters with light. The different approaches include increasing the number of emitters, the laser intensity or the local photonic density of states at the location of an atom-like localized emitter. In contrast, solid-state extended emitters hold an unappreciated promise of vastly greater enhancements through their large number of vacant electronic valence states. However, the majority of valence states are considered optically inaccessible by a conduction electron. We show that, by interfacing three-dimensional (3D) solids with 2D materials, we can unlock the unoccupied valence states by nonlocal optical interactions that lead to ultra-strong coupling for each conduction electron. Consequently, nonlocal optical interactions fundamentally alter the role of the quantum vacuum in solids and create a new type of tunable mass renormalization and bandgap renormalization, which reach tens of millielectronvolts in the example we show. To present quantitative predictions, we develop a non-perturbative macroscopic quantum electrodynamic formalism that we demonstrate on a graphene–semiconductor–metal nanostructure. We find new effects, such as nonlocal Rabi oscillations and femtosecond-scale optical relaxation, overcoming all other solid relaxation mechanisms and fundamentally altering the role of optical interactions in solids.
## Data availability
Source data are available for this paper. All other data that support the plots in this paper and other findings of this study are available from the corresponding author.
## Code availability
The code that supports the findings of this study is available from the corresponding author.
## Acknowledgements
We thank D. Podolsky, N. Lindner and J.C. Song for their advice and for fruitful discussions regarding this paper. The research is supported by the Azrieli Faculty Fellowship, by the GIF Young Scientists’ Program and by an ERC Starter Grant.
## Author information
Authors
### Contributions
All authors made significant contributions to the manuscript.
### Corresponding authors
Correspondence to Yaniv Kurman or Ido Kaminer.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Physics thanks Nicholas Christakis and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary information
### Supplementary Information
Supplementary text, methods, Figs. 1–6, Table 1 and references.
### Supplementary Video 1
The relaxation dynamics, showing the different stages by means of the map of optical excitation P(q,ω)(t) and the initial excited state probability $$P_{\bf{k}_{\mathrm{i}}}\left( t \right)$$.
## Source data
### Source Data Fig. 2
Source data for Fig. 2b. The coupling kernel for each frequency and wavenumber.
### Source Data Fig. 3
Source data for Fig. 3. a The time-dependent initial state population probability. b The optical excitations’ occupation probability.
### Source Data Fig. 4
Source data for Fig. 4. a The energy shift as a function of the initial momentum. b The tunable bandgap renormalization. c The shift in the photon absorption spectrum. d The highly expected nearfield emission spectra.
Kurman, Y., Kaminer, I. Tunable bandgap renormalization by nonlocal ultra-strong coupling in nanophotonics. Nat. Phys. 16, 868–874 (2020). https://doi.org/10.1038/s41567-020-0890-0
• ### Combining density functional theory with macroscopic QED for quantum light-matter interactions in 2D materials
• Mark Kamper Svendsen
• Yaniv Kurman
• Kristian S. Thygesen
Nature Communications (2021)
• ### Light–matter interactions with photonic quasiparticles
• Nicholas Rivera
• Ido Kaminer
Nature Reviews Physics (2020) | 2023-01-28 07:32:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7508386969566345, "perplexity": 13049.534660854162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00578.warc.gz"} |
http://math.stackexchange.com/questions/313306/permutation-strategy-for-sudoku-solver-np-complete | # Permutation Strategy for Sudoku solver NP-complete?
We know that Sudoku itself is $\mathbf{\mathsf{NP}}$-complete, but while trying to implement the "Permutation Rule" strategy in my solver, I was unable to find an efficient algorithm to do so. The problem is essentially:
Given $U=\{1,\ldots,n\}$, $n$ sets $Z_1,\ldots,Z_n$ with $\bigcup Z_i=U$, and an integer $k$, $1\leq k\leq n$, is there a subcollection of $k$ sets $Z_{i_1},\ldots,Z_{i_k}$ such that $\left|\bigcup_{j=1}^k Z_{i_j}\right| = k$?
Clearly we have membership in $\mathbf{\mathsf{NP}}$, and it seems $\mathbf{\mathsf{NP}}$-complete (since it's so similar to Set Cover), but I don't have any proof at the moment.
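For concreteness, here is the decision problem restated as exponential-time Python (just a brute-force check over all k-subsets, not an efficient algorithm; the example sets are made up):

    from itertools import combinations

    def has_exact_cover_of_size(Z, k):
        # Is there a choice of k of the sets whose union has exactly k elements?
        return any(len(set().union(*combo)) == k
                   for combo in combinations(Z, k))

    # Tiny example with n = 4, U = {1, 2, 3, 4}
    Z = [{1, 2}, {2}, {3, 4}, {1}]
    print(has_exact_cover_of_size(Z, 2))  # True: {2} and {1} give a union of size 2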
Peter Norvig's approach may help – Ben Feb 24 '13 at 21:10
@Ben, thanks, though my question is a theoretical one about the abstract problem I defined. I already have a fast Sudoku solver implemented. – Nick Feb 24 '13 at 21:42 | 2015-07-28 08:37:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881040632724762, "perplexity": 721.2384044804061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981753.21/warc/CC-MAIN-20150728002301-00313-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/basic-hydrogen-gravity-thermodynamics-question-from-a-noob.708786/ | # Basic Hydrogen/gravity/thermodynamics question from a noob.
1. Sep 5, 2013
### willdo
What altitude should a weight drop from to produce enough energy to electrolyse the hydrogen needed to lift it? Please keep it simple, I just want to know if it would work in theory.
2. Sep 5, 2013
### Staff: Mentor
Conservation of energy tells us this can't be done at all. Differing elevations doesn't change anything.
3. Sep 5, 2013
### jfizzix
the simplest example would be to consider the energy it takes to break apart a single hydrogen molecule, its bond energy, and compare it to the energy it takes to lift the molecule a certain height off of the earth.
It turns out that the bond energy (about 724 zeptojoules) is actually larger than its total gravitational potential energy (about 209 zeptojoules), so even if you lifted it arbitrarily high above the earth, it would not gain more kinetic energy than its bond energy.
The bond energy of $H_{2}$, I looked up.
The gravitational potential energy I calculated from Newton's law of gravity
$|U|= G\frac{M_{H_{2}}M_{E}}{R_{E}}$
It would be a different story if you were lifting it off of a more massive or more dense body, but Earth just isn't massive enough in this case.
Hope this helps:)
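(A quick numerical check of those two figures, added here as a sketch; the constants are standard values and the H–H bond energy is taken as roughly 436 kJ/mol.)

    G    = 6.674e-11                  # m^3 kg^-1 s^-2
    M_E  = 5.972e24                   # kg, mass of Earth
    R_E  = 6.371e6                    # m, radius of Earth
    m_H2 = 2 * 1.008 * 1.6605e-27     # kg, mass of one H2 molecule

    U_grav = G * M_E * m_H2 / R_E     # ~2.1e-19 J, i.e. ~209 zJ
    E_bond = 436e3 / 6.022e23         # ~7.2e-19 J, i.e. ~724 zJ
    print(U_grav, E_bond, E_bond / U_grav)   # the bond energy is ~3.5 times larger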
4. Sep 5, 2013
### willdo
Ok, i just read the basics of "conservation of energy" yet, wether i drop a weight from 1 metre or 10 metres the energy produced is not the same.I am not knowledgeable enough to explain it but in Hydroelectricity the height of the waterfall matters very much...So, does it mean that the weight speeds up over a certain distance and then stops accelarating at some point? In wich case, How can i figure that distance?
Edit: sorry jfizzix, we were writing at the same time... somehow this makes sense...
Edit2: Ok, I thought about it again and... what's stopping me from harnessing the energy produced more than once? (In theory of course, I know it would be impractical.)
Example: if a weight attains its maximum level of kinetic energy after 1000 metres (a wild guess, I have no clue what the real number would be like) and you carry this weight to 30000 metres, then you could harness the kinetic energy 30 times... no?
Last edited: Sep 5, 2013
5. Sep 5, 2013
### MrAnchovy
This is not correct, work is done by the atmosphere raising the weight attached to the hydrogen balloon.
6. Sep 5, 2013
### MrAnchovy
Unfortunately electrolysis of water requires huge amounts of energy - to get a cubic metre of hydrogen needs 11.67MJ even at 100% efficiency.
Now 1m3 of hydrogen should lift about 1kg. A 1kg mass at 50km up (about as high as a balloon will go) has about 0.5MJ of potential energy.
So even if we could convert all of the potential energy to electricity, and all of the electrical energy to extracting hydrogen we would still not have enough by a factor of more than 20x.
You need a more massive planet.
Last edited: Sep 5, 2013
7. Sep 5, 2013
### Staff: Mentor
The maximum kinetic energy is achieved by dropping the weight from infinity... so you only get to do it once.
8. Sep 5, 2013
### MrAnchovy
Yes but if you could this would be plenty enough energy - escape velocity is 11.2km/s which would give 1kg 188MJ. The problem is that you can only use the Earth's atmosphere to power the ascent to the limits of the Earth's atmosphere, not infinity.
9. Sep 5, 2013
### Khashishi
This is not possible even on a hypothetical massive planet. If you increase the gravity, you increase the pressure which will make it more difficult to electrolyze water.
10. Sep 5, 2013
### Staff: Mentor
Right, and the difference is a factor of 10 in energy.
The basic question is: how do you get your mass to that height? Lifting it by 10 m also needs ten times the energy needed to lift it by 1 m.
It is pointless, as you will need the same energy to lift it before you can drop it again.
11. Sep 5, 2013
### willdo
I must say, this is great, this question has bothered me for a while, thank you all for making it easy to understand!
Just to make sure, how would that change if we started in water, say 10km deep? (assuming no engineering issue)
12. Sep 5, 2013
### willdo
Edit2: Ok, I thought about it again and... what's stopping me from harnessing the energy produced more than once? (In theory of course, I know it would be impractical.)
Example: if a weight attains its maximum level of kinetic energy after 1000 metres (a wild guess, I have no clue what the real number would be like) and you carry this weight to 30000 metres, then you could harness the kinetic energy 30 times... no?
My idea was in the example: since the hydrogen keeps going up, there wouldn't be any need to get "back up". I have since understood that it would be pointless regardless, since the energy produced would be vastly insufficient unless I could reach some 1200 km, and there just isn't that much atmosphere to go through.
13. Sep 5, 2013
### MrAnchovy
You are missing the point mfb, the energy to raise the weight is provided by the buoyant force of the atmosphere.
14. Sep 5, 2013
### willdo
The following is a quote from Wikipedia:
Based on wind resistance, for example, the terminal velocity of a skydiver in a belly-to-earth (i.e., face down) free-fall position is about 195 km/h (122 mph or 54 m/s).[2] This velocity is the asymptotic limiting value of the acceleration process, because the effective forces on the body balance each other more and more closely as the terminal velocity is approached.In this example, a speed of 50% of terminal velocity is reached after only about 3 seconds, while it takes 8 seconds to reach 90%, 15 seconds to reach 99% and so on.[end of quote]
If this is true then there should be reason to break down the harnessing of power into multiple stages (as always, no engineering issues), since every 3 s one could harness 50% of terminal velocity: 5 x 50% in 15 s rather than 1 x 99% if we let it go... Basically, keep the acceleration at its maximum the whole time so as to accumulate as much kinetic energy as possible.
Even for a properly shaped weight, the fall from 50 km will last several minutes, giving the opportunity to harness the energy many times. Wouldn't the total energy harnessed be more important than the sole value of the object dropping without interference?
Last edited: Sep 5, 2013
15. Sep 5, 2013
### Staff: Mentor
It does not. There is no such number at all, if you can neglect effects of friction. And friction just makes it worse. The calculations here all assume no friction, the ideal case to extract energy.
Concerning buoyancy: Electrolysis at ground level needs a bit more energy than combustion of hydrogen releases if you do it a high altitude - the difference is exactly the reason why buoyancy exists at all.
16. Sep 5, 2013
### MrAnchovy
I don't think that's right - by far the largest energy requirement for electrolysis is breaking the chemical bond (which is invariant with pressure), not releasing the gas from solution. At what pressure would the latter become significant?
It's true that you would need a much larger planet and/or much thicker atmosphere which would create surface gravity (and pressure) which would not be survivable.
17. Sep 5, 2013
### MrAnchovy
Note of course that I have not taken into account the recovery of chemical energy from the hydrogen by burning it in the upper atmosphere, but I think it is reasonable to assume that at best this would compensate for the inefficiencies within the system.
18. Sep 5, 2013
### willdo
Isn't that what terminal velocity is? (sorry i just learned the name)
19. Sep 5, 2013
### willdo
That's funny, I wasn't counting it for the same reason (plus I wouldn't know how).
20. Sep 5, 2013
### Staff: Mentor
Friction will lower the velocity, so you heat up the air a bit and can extract less energy in a useful way.
If you have too much friction, the velocity won't increase any more after a while, and you just waste energy.
21. Sep 5, 2013
### MrAnchovy
Only a bit? We might be on to a winner here then, as long as we can get that combustion energy back down to ground level efficiently (how about spinning the weight that is dropped)?
22. Sep 5, 2013
### MrAnchovy
Yes, but dropping a weight and catching it again at near terminal velocity is a very inefficient way of extracting potential energy. I have all along assumed that a more efficient method would be used, perhaps using an auto-gyro/turbine/flywheel kind of thing.
23. Sep 5, 2013
### willdo
Ah! Could this be the answer to the following?
So the numbers you proposed earlier (0.5 MJ produced from a 50 km drop) were taking this into account and keeping the object in a constantly accelerating mode?! If so, it's a little disappointing that the production result is so far below "the cost", but at least it's a clear answer.
24. Sep 5, 2013
### Staff: Mentor
Any type of compression would need at least the energy difference you gain in the combustion process.
You cannot win against energy conservation. You cannot even get a draw.
25. Sep 5, 2013
### MrAnchovy
Yes I'm afraid so: 1 kg x 9.8 m/s² x 50,000 m ≈ 500,000 J is the gravitational potential energy of 1 kg at 50 km altitude, so it is the maximum you could ever extract from its descent to the earth at 100% efficiency.
I could see this working on Jupiter perhaps - much larger mass, deeper atmosphere, although the high winds would be a problem (hmmm its decades since I read Larry Niven's The Integral Trees - I can't remember how the alien inhabitants of that planet got their power). | 2018-06-24 00:10:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5784829258918762, "perplexity": 1202.9808610425373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865438.16/warc/CC-MAIN-20180623225824-20180624005824-00608.warc.gz"} |
https://www.matrix.edu.au/beginners-guide-year-11-maths-advanced/further-functions-and-relations-piecewise-functions/ | # Part 4: Piecewise Functions | Year 11 Further Functions and Relations
Piecewise functions are very useful in the real world. However, it can be quite confusing to understand! So, in this article, we will explain everything you need to know to ace piecewise functions!
Do you feel that your understanding of Piecewise Functions looks like a Piecewise Function? Don't worry! In this article, we're going to fill in the blanks of your understanding of Piecewise Functions.
## Year 11 Advanced Mathematics: Piecewise Functions
This topic allows us to understand functions which have different definitions for different input values.
This is a very useful idea as many real-world systems are not defined the same for all inputs, unlike most polynomials we have seen so far.
An example to illustrate this is tax: the formula for calculating tax is different depending on a person's income, rather than being the same for everyone.
## NESA Syllabus Outcomes
NESA expects students to demonstrate proficiency in the following syllabus outcomes;
• Model, analyse and solve problems involving quadratic functions
• Solve practical problems involving a pair of simultaneous linear and/or quadratic functions algebraically and graphically, with or without the aid of technology; including determining and interpreting the break-even point of a simple business problem
## Assumed Knowledge
Students should already be familiar with function notation, domain and range, inequalities and evaluating and sketching polynomials.
## Definition and Evaluating Piecewise Functions
A piecewise function is a function defined separately for different intervals of $$x$$ – values (different domains).
The simplest example to illustrate this is the absolute value function, defined by:
$$y = f(x)= \begin{cases} -x & ;x<0 \\ x & ;x \geq 0 \end{cases}$$
This means that:
1. $$f(x) = -x$$ when $$x$$ is less than $$0$$
2. $$f(x) = x$$ when $$x$$ is greater than or equal to $$0$$
Graphically this is:
Evaluating a piecewise function only has one extra step when compared to a regular function, which is to determine which region we are in.
Simply determine which of the inequalities our $$x$$ value is consistent with, and you have the equation of the function! Evaluating is now trivial, as all we need to do is substitute in our $$x$$ value.
Considering our absolute value example:
For $$x = 5$$ we are in the region $$x \geq 0$$, so the function is $$f(x) = x$$ and $$y = f(5) = 5$$
For $$x = -5$$ we are in the region $$x < 0$$, so the function is $$f(x) = -x$$ and $$y = f(-5) = 5$$
Always be careful when considering values at the edge of a region! Make sure that you choose the correct region based on whichever domain includes the $$x$$ value (i.e. $$\geq$$ rather than $$>$$).
This is illustrated by the discontinuity in the next section.
## Sketching Piecewise Functions
Sketching piecewise functions can similarly be made quite easy by considering it as sketching multiple separate functions.
As an example, we will go through the process of sketching:
$$y = f(x)= \begin{cases} -x^2 & ;x \leq 0 \\ 3 & ;0<x<3 \\ x & ;x \geq 3 \end{cases}$$
This may look complicated, but like our first example we can break this down to:
$$y = – x^2$$ when $$x \leq 0$$
$$y = 3$$ when $$x$$ is between $$0$$ and $$3$$
$$y = x$$ when $$x \geq 3$$
And now we have $$3$$ functions defined over $$3$$ separate domains. These are all simple functions and should be recognised as a parabola and $$2$$ straight lines, which we already know how to sketch.
Sketch each function in their respective domains, and you have sketched the piecewise function!
An important feature to note is the discontinuity, where the different parts of the function do not meet each other, as at $$x = 0$$ in our example.
It is important to indicate which point is a part of the function, and which is not, as we cannot have multiple $$y$$ values for the same $$x$$ value (a piecewise function is still a function).
This is denoted by using a closed or open circle as you would have done when graphing inequalities on a number line.
## Concept Check Questions
1. Evaluate the following:
$$y = f(x)= \begin{cases} x^5+3x^2+5 & ;x \leq 3 \\ 6-3x^3 & ;x >3 \end{cases}$$
2. Sketch the following:
$$f(x)= \begin{cases} -x^3 & ;x>0 \\ x^2 & ;x \leq 0 \end{cases}$$
3. Sketch the following:
$$f(x)= \begin{cases} 2 & ;x \geq 2 \\ x^2 & ;-2<x<2 \\ -2x & ;x \leq -2 \end{cases}$$
## Solutions
1. $$f(0) = 5, f(5) = -369, f(3) = 275$$ (working shown below the sketches)
2.
3.
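Working for Question 1, choosing the branch that matches each $$x$$ value: $$f(0)=0^5+3(0)^2+5=5$$ and $$f(3)=3^5+3(3)^2+5=243+27+5=275$$ both use the first branch since $$x \leq 3$$, while $$f(5)=6-3(5)^3=6-375=-369$$ uses the second branch since $$5>3$$.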
Our website uses cookies to provide you with a better browsing experience. If you continue to use this site, you consent to our use of cookies. Read our cookies statement. | 2021-01-22 21:24:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43539848923683167, "perplexity": 653.2049117535918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00106.warc.gz"} |
http://workflowforjotform.askbot.com/question/1888/musings-at-minkiewicz-studios-llc-april-2020/ | # Musings At Minkiewicz Studios LLC: April 2020
Kit Siang appears to have warmed as much as Tun Dr Mahathir Mohamad and not does Kit Siang consider Mahathir the father of racism, the godfather of corruption, the man who killed democracy, the man who destroyed the judiciary, the criminal who have to be put in jail for the remainder of his life once Pakatan Harapan comes to energy, and so forth. "Dr Mahathir has a whole lot of baggage during his 22 years in energy, such as Ops Lalang. There are quite a lot of explanation why P2P lending has grown so rapidly. The free cash they're freely giving is alleged to stimulate the economy. They're bluntly telling us popping their bubbles wouldn't be good for the economy so that they should prop the monetary markets up with damaging interest charges if essential and naturally extra money-printing for wall street (perhaps corporations)… I've hired you with my son's mandrakes." Leah bought pregnant and bore a son that she named Issachar. Shortly thereafter, she acquired pregnant again, and had a son that she named Zebulun. Then Leah stated: "God has endowed me with a good dowry, now will my husband dwell with me, as a result of I have borne him six sons .
We expect that diabetes care - and probably even prediabetes care - will inevitably move toward these programs, a lot the best way convenience made take a look at strips a mainstay of illness management. The one manner this could possibly be prevented is for Ambiga to out of the blue produce in courtroom the exhausting proof in opposition to Hadi which prompted her to make the allegation in the first place. Stearns was the primary person in his family to graduate from faculty. Avoid having investors which might be relations - it isn't advisable to mix enterprise with family. The hungover millionaire plans to use his cash to visit his family within the United Kingdom. But like I said, in case you don’t need them to go to you in this manner, you can inform them and be sure that they may cease. Now, who will Ambiga throw beneath the bus? Now, again to Mrs. Jones. HOUSTON , March 16, 2020 /PRNewswire/ -- Goodrich Petroleum Corporation (NYSE American: GDP) at this time announced it has lowered its 2020 preliminary capital expenditure funds by $15 million to$40 - $50 million, which is expected to generate free cash flow of an estimated$15 - $25 million at$2.00 - \$2.50 natural gas costs. Najib has to name for GE14 before August but alerts from the administration are that it'll likely happen in March or April.
Well, I imagine Najib and the entire of BN scored a giant one on this. Especially so when Sarawak Report has made its name worldwide as one of the main critiques of the Malaysian authorities and its leaders. Otherwise, Sarawak Report would had quoted her from the start as a substitute of placing her as just a source. Well, I severely assume Rewcastle-Brown and her Sarawak Report has now been TKOed by this whole thing. In line with her Defence and Counterclaim assertion that was supplied for The Malaysian Insight, Rewcastle-Brown said she spoke to the previous Bersih chairman in July 2016 concerning Najib, 1MDB and PAS. In contrast, the police have but to finish their investigations into 1MDB regardless of the many revelations of fraud and money laundering that began coming to light in mid-2015. They stated the Najib administration had doggedly pursued the enterprise of Bank Negara’s foreign exchange losses that occurred many years ago, however was sluggish in its response to the loss of billions in the 1Malaysia Development Bhd (1MDB) scandal that blew up a mere two years in the past.
International publications had quoted it in articles essential of Najib and the country. No western nation on this earth bans personal cover as a result of they know this is what happens. In fact, at this time Kit Siang considers Mahathir as the man who's going to save lots of Malaysia and he says he never had any personal grudge towards Mahathir but just disagreed with him on matters of coverage and the management of the nation. "BN has lost the legitimacy to discuss good governance, transparency, and corruption," stated Faisal of Universiti Kebangsaan Malaysia (UKM). Now that the whole allegation appears to be like increasingly like a faux information, it certainly doesn’t look good for Pakatan at all. I might invite these opposed to people assuming private accountability for their own safety to experience seven similar seconds themselves after which reevaluate their thoughts on gun control, and whether or not it (or the new nefarious term being rolled out, "gun security") is admittedly such a good idea in any case.
https://caddellprep.com/subjects/common-core-geometry/rhombuses/ | Watch a video lesson and learn about the properties of rhombuses including properties of sides, angles, diagonals and angles formed by diagonals.
Rhombus: A parallelogram with consecutive sides congruent.
A rhombus has a similar set of rules as a parallelogram does.
The special thing about a rhombus is that all 4 sides are congruent.
The diagonals are perpendicular bisectors of each other. When this happens, four congruent triangles are formed. The diagonals also bisect the angles at each vertex.
Video-Lesson Transcript
Let’s go over a rhombus.
A rhombus is another type of parallelogram.
In a parallelogram, we have opposite sides are parallel. And also opposite sides are congruent.
What’s special about a rhombus is that all four sides are congruent. Not only the opposite sides but all four of them are congruent.
Let’s label these vertices.
When we draw diagonals, they bisect each other.
Something else that happens in a rhombus is when diagonals intersect, we get right angles. The diagonals are perpendicular to each other.
Here, we end up with four congruent right triangles.
This diagonal bisects $\angle A$. So, angle $BAD$ is bisected. Angle $BCD$ also gets bisected.
And also, the second diagonal bisects these two angles. Angle $ABC$ and angle $ADC$ such that these two pairs of angles are congruent. | 2020-02-20 02:49:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5327129364013672, "perplexity": 682.4694392331359}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144498.68/warc/CC-MAIN-20200220005045-20200220035045-00230.warc.gz"} |
https://itecnotes.com/electrical/electronic-how-to-calculate-the-power-dissipation-in-a-transistor/ | # Electronic – How to calculate the power dissipation in a transistor
bjttransistors
Consider this simple sketch of a circuit, a current source:
I'm not sure how to calculate the power dissipation across the transistor.
I'm taking a class in electronics and have the following equation in my notes (not sure if it helps):
$$P = P_{CE} + P_{BE} + P_{base-resistor}$$
So the power dissipation is the power dissipation across the collector and emitter, the power dissipation across the base and emitter, and a mystery factor $$P_{base-resistor}$$. Note that the β of the transistor in this example was set to 50.
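For reference, a common reading of those three terms (an assumption here, since the course notes are not quoted in full) is $$P_{CE} = V_{CE}I_C$$, $$P_{BE} = V_{BE}I_B$$ and $$P_{base-resistor} = I_B^2R_{base}$$, with $$I_C = \beta I_B = 50I_B$$ for this transistor; note that the last term is dissipated in the external base resistor rather than in the transistor itself.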
I'm quite confused overall and the many questions here on transistors have been very helpful. | 2023-02-04 05:27:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392070531845093, "perplexity": 484.16865485574215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500094.26/warc/CC-MAIN-20230204044030-20230204074030-00508.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2001.1.103 | # American Institute of Mathematical Sciences
February 2001, 1(1): 103-124. doi: 10.3934/dcdsb.2001.1.103
## Analysis of IVGTT glucose-insulin interaction models with time delay
1 Department of Mathematics, Arizona State University, Tempe, Arizona 85287-1804, United States 2 Department of Mathematics, Arizona State University, Tempe, AZ 85287-1804, United States 3 Department of Mathematics, University of Utah, Salt Lake City, Utah 84112, United States
Received October 2000 Revised January 2001 Published January 2001
In the last three decades, several models of the glucose-insulin interaction have appeared in the literature; the most widely used one is generally known as the "minimal model", which was first published in 1979 and modified in 1986. Recently, this minimal model has been questioned by De Gaetano and Arino [4] from both physiological and modeling aspects. Instead, they proposed a new and mathematically more reasonable model, called the "dynamic model". Their model makes use of certain simple and specific functions and introduces time delay in a particular way. The outcome is that the model always admits a globally asymptotically stable steady state. The objective of this paper is to find out if and how this outcome depends on the specific choice of functions and the way delay is incorporated. To this end, we generalize the dynamic model to allow more general functions and an alternative way of incorporating time delay. Our findings show that, in theory, such models can possess unstable positive steady states. However, for all conceivable realistic data, such unstable steady states do not exist. Hence, our work indicates that the dynamic model does provide qualitatively robust dynamics for the purpose of clinical application. We also perform simulations based on data from a clinical study and point out some plausible but important implications.
Citation: Jiaxu Li, Yang Kuang, Bingtuan Li. Analysis of IVGTT glucose-insulin interaction models with time delay. Discrete & Continuous Dynamical Systems - B, 2001, 1 (1) : 103-124. doi: 10.3934/dcdsb.2001.1.103
[Back to Top] | 2019-10-15 08:53:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35336601734161377, "perplexity": 4346.785650735139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657949.34/warc/CC-MAIN-20191015082202-20191015105702-00065.warc.gz"} |
https://ntthung.wordpress.com/2016/09/14/useful-identities-from-eulers-formula/ | # Useful identities from Euler’s formula
Euler’s formula:
$e^{i\theta} =\cos\theta + i\sin\theta$
From the Euler’s formula, we can derive that
$e^{ik\pi} = \begin{cases} 1 & \text{if } k \text{ is even}\\ -1 & \text{if } k \text{ is odd} \end{cases}$
If we write the above equation in reverse, we get
$-1=e^{i(2k+1)\pi}$
$1=e^{i2k\pi}$
These identities are useful for quick manipulations of complex numbers. Below are two examples:
Example 1: Solve $z^n = 1$.
Solution
$z = 1^{1/n} = e^{i\frac{2k\pi}{n}} \qquad k = 0,1,...,n-1$
Example 2: Solve $z^3=-8$.
Solution
$z = (-8)^{1/3} = 2(-1)^{1/3} = 2e^{i\frac{(2k+1)\pi}{3}} \qquad k =0, 1, 2$
• $k=0 \Rightarrow z=1+\sqrt{3}i$
• $k=1 \Rightarrow z=-2$
• $k=2 \Rightarrow z=1-\sqrt{3}i$
Note that we can also write $k=-1, 0, 1$. On the trigonometric circle, $k = 2$ and $k = -1$ yield the same angle. | 2017-07-23 02:44:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9321542978286743, "perplexity": 562.097593853681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424239.83/warc/CC-MAIN-20170723022719-20170723042719-00024.warc.gz"} |
https://math.stackexchange.com/questions/2590470/fatous-lemma-proof-misunderstanding | # Fatou's Lemma Proof Misunderstanding
I am missing something fairly basic in the proof for Fatou's lemma.
# Statement
Suppose $<f_n>$ is a sequence of non-negative measurable functions, such that $f_n\to f$ almost everywhere. Then
$$\int f \leq \underline{\lim} \int f_n$$
# Proof
Suppose as above. Further, let $h$ be a bounded measurable function which is not greater than $f$ and which vanishes outside a set $E'$ of finite measure. Define
$$h_n = \min\{h,f_n\}$$
for each $x$. Then
$$\int_E h = \int_{E'} h = \lim_{n \to \infty} \int_{E'} h_n \leq \underline{\lim} \int f_n$$
Taking the supremum over $h$ gives the result.
# Question
I do not see how taking the supremum over $h$ gives the result. Do they mean $h$ as a function or $h$ over $x$?
• What is $f$ in your statement? – uniquesolution Jan 3 '18 at 17:00
• $lim_{n \to \infty} f_n$ – Aaron Zolotor Jan 3 '18 at 17:44
• Tell us what $f$ is in the statement – zhw. Jan 3 '18 at 18:22
• They mean over bounded measurable functions $h$. – jgon Jan 3 '18 at 20:00 | 2019-10-18 19:27:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8824437856674194, "perplexity": 394.63869963939084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00319.warc.gz"} |
https://dmoj.ca/problem/tle16c1p3 | ## TLE '16 Contest 1 P3 - Joey and Chemistry
View as PDF
Points: 7 (partial)
Time limit: 1.0s
Memory limit: 128M
Author:
Problem types
jlsajfj has been goofing off in his Science 10 class, so he has no idea what complete combustion is! Not wanting to disappoint his teacher, who thinks jlsajfj could do better in the class if he tried, jlsajfj has decided to bother you into doing his homework.
The homework sheet involves balancing hundreds of complete combustion chemical equations. In his homework, only three types of atoms (elements) are involved:
• Carbon (represented by C)
• Hydrogen (represented by H)
• and Oxygen (represented by O)
The general form of such an equation looks like this: aR + bO2 -> cCO2 + dH2O
R can be any combination of carbon, hydrogen, or oxygen.
In order for the chemical equation to be balanced, the number of atoms of each element (C, H, and O) must be equal on both sides of the arrow, since atoms cannot be created or destroyed during the reaction. This can be done by setting a, b, c, and d to some positive integer value, signifying the number of copies of the substance. A subscript after an element signifies the amount of that element in the substance. In the input and output, a subscript will simply be an integer after an element. If there is no subscript after an element, there is only one of it.
For example, 2CH₃OH (2CH3OH in input/output format) contains 2 carbon atoms, 8 hydrogen atoms, and 2 oxygen atoms.
Can you help jlsajfj finish his homework?
#### Input Specification
The first and only line of input will contain a single string, R. It is guaranteed that R will not begin with a number and will only contain numbers, C, H, and O.
The total amount of each element in R will not be greater than .
#### Output Specification
On a single line, output the balanced chemical equation in the form of aR + bO2 -> cCO2 + dH2O, where a, b, c, and d are in lowest terms and cause the equation to be balanced, and R is the exact copy of what was given in the input.
If it is not possible to balance the equation, simply output Impossible.
#### Sample Input 1
CH3CH2CH3
#### Sample Output 1
1CH3CH2CH3 + 5O2 -> 3CO2 + 4H2O
#### Sample Input 2
CH3OH
#### Sample Output 2
2CH3OH + 3O2 -> 2CO2 + 4H2O
#### Sample Input 3
H2O
#### Sample Output 3
Impossible
• commented on Sept. 22, 2016, 8:41 a.m.
This might be my favourite question yet
• commented on Sept. 22, 2016, 10:46 a.m.
no
• commented on Sept. 21, 2016, 7:24 p.m.
can you use 0 of an element?
• commented on Sept. 21, 2016, 7:40 p.m.
"a, b, c, and d to some positive integer value"
• commented on Sept. 21, 2016, 5:53 p.m.
you say R cannot begin with a number yet you say 2CH3OH in the sample input
• commented on Sept. 21, 2016, 5:55 p.m.
2CH3OH is not in any sample input.
• commented on Sept. 21, 2016, 3:10 p.m.
A little wrong in this problem and I don't have enough time to solve problem P4, P5 and P6. So sad :(
• commented on Sept. 21, 2016, 2:38 p.m.
On a single line, output the balanced chemical equation in the form of aR + bO2 -> cCO2 + dH2O, where a, b, c, and d are in lowest terms and cause the equation to be balanced, and R is the exact copy of what was given in the input. For example, 2CH3OH (2CH3OH in input/output) contains 2 carbon atoms, 8 hydrogen atoms, and 2 oxygen atoms. If R is 2CH3OH then how to print output?
1. 2CH3OH + 3O2 -> 2CO2 + 4H2O
2. 12CH3OH + 3O2 -> 2CO2 + 4H2O ( aR + bO2 -> cCO2 + dH2O, a = 1, R = 2CH3OH ) 1 or 2 ???
• commented on Sept. 21, 2016, 2:41 p.m.
It is guaranteed that R will not begin with a number.
• commented on Sept. 21, 2016, 1:56 p.m.
Can R is CH3(CH2)3(OH)6 ???
• commented on Sept. 21, 2016, 1:57 p.m.
No, there are no brackets in R. | 2021-06-22 16:48:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41299673914909363, "perplexity": 2511.071523924079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519183.85/warc/CC-MAIN-20210622155328-20210622185328-00284.warc.gz"} |
http://matthewlisp.com/set-up-clojure-api/ | # How to build APIs with Clojure
2019-09-16
## Introduction
What this guide will cover?
This guide aims to set you up to start writing your own API using Clojure. We will not write an entire API here; I'll just show how to set everything up so you can start writing it yourself.
Who is this guide for?
It’s aimed mainly for experienced developers who is coming to clojure but don’t yet know the Clojure way to create a simple API. I wont teach you how to create an api from the zero, nor the specific things that you need to understand in order to create an api, such as http requests, if you never created an api or don’t know how it works, wrong guide for you. But if you’re looking to fast get your head around the clojure way, that’s the place.
What exactly we are going to do?
Using a set of libraries, we will bootstrap the code for an API in Clojure. We're going to see how we deal with HTTP requests and responses, how we parse them to and from JSON, and how to route incoming requests.
Is this the correct way of building an API in clojure?
The short answer is always: It depends.
It depends on your project's needs. This is not the only way to create an API in Clojure, and it's absolutely not the standard way, because in Clojure we have no standard for how you should build your systems. It's just a simple and fast way to start messing around with it.
You’re going to see that the way we are doing is very manual and you have to set-up a lot of stuff to start, but this way of doing it and the libraries used here, is the very foundation for most of clojure frameworks i’ve seen, therefore if you truly understand this guide, youl’ll end up understanding lots of stuff related to clojure for the web.
What you will need
What if i don’t have yet what’s needed?
I recommend you take a look on the resources i’ve curated to learn clojure and at least understand the language core aspects: https://github.com/matthewlisp/learn-clojure
I strongly suggest that you follow the guide while replicating all steps on your machine. Here’s the github repo of the entire code
## How libraries are organized
It’s nice to know how libraries are organized in clojure, because this will help you understand your namespace definitions and how to navigate inside documentations.
Most libraries are just a set of functions defined inside files in the project's folder structure.
During this guide, I'll put links to the libraries' API docs. Remember that they are just functions defined inside files in the GitHub project folders; this helps clarify how we import functions into our namespace.
## Creating the project
Open your terminal and create a new clojure project using Leiningen:
lein new app
Enter in the app folder, here is the project structure:
.
├── CHANGELOG.md
├── doc
│ └── intro.md
├── project.clj
├── resources
├── src
│ └── app
│ └── core.clj
└── test
└── app
└── core_test.clj
## Import the library for HTTP
The first file we are going to edit is project.clj. This file is used by Leiningen to manage our project, and we can import libraries through it.
We are going to use the Ring library, Ring acts as an API for web servers.
Inside your project.clj edit the :dependencies key to the following:
:dependencies [[org.clojure/clojure "1.10.0"]
[ring "1.7.1"]]
Then, from a terminal inside the project folder, download the new dependency:

lein deps
As we move on in this guide, you should take a look on the Ring github page. If something is unclear or you get stucked, check their wiki page or it’s API docs to refer to what the functions does and expects as parameters.
## Let’s fire up the server
Remember that the Ring library talks with a web server under the hood, in our case, the web server that we will use is Jetty, because ring already has an adapter for jetty written and ready to use for us.
Before we start the server, we need to write how this will happen.
Open the core.clj file at: src/app/core.clj
replace everything with this:
(ns app.core
  (:require [ring.adapter.jetty :refer [run-jetty]]))

(defn handler [request]
  {:status 200
   :body "Hello world"})

(defn -main
  []
  (run-jetty handler {:port 3000}))
Allow me to explain what’s happening here.
It’s not enough to just import the libraries in our project.clj file. We need to explicitly call what we want from them inside our own project, in this case, our own namespaces.
Right after our namespace definition, we import the ring.adapter.jetty namespace functions, more specifically, the run-jetty function.
After this piece of code, we are defining what we call a Handler.
The handler is nothing more than a function: it accepts one argument, which will be the HTTP request from the client, and returns a Clojure hash-map containing basic information on how to respond to that request. This way our adapter can interpret it and give the response to the client.
Basically:
client request > handler function 'handles it' > returns hash-map describing the http response
The keys and values of the response hash-map are self-explanatory if you examine them closely.
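As a side note, a handler can also set response headers; a small sketch like this (not part of the guide's running code) declares the content type explicitly:

(defn html-handler [request]
  {:status  200
   :headers {"Content-Type" "text/html"} ; response headers go in a plain map
   :body    "<h1>Hello world</h1>"})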
Finally, we are defining a main function, which will be executed when we run our code. This function simply calls the run-jetty function, which is the adapter for the Jetty web server, using our handler as a parameter and specifying a port. As the Ring documentation says:
Adapters are used to convert Ring handlers into running web servers.
Alright. Before we finally turn the server on, we need to specify to Leiningen (which we will use to run our code) that we do have a main function that should run by default.
Open the project.clj file again, and add this right before the enclosing parenthesis:
:main app.core
This is telling Leiningen, as I said, that there is a main function in the app.core namespace.
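For reference, the whole project.clj at this point should look roughly like this (the project name and versions may differ on your machine):

(defproject app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.10.0"]
                 [ring "1.7.1"]]
  :main app.core)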
Remember, if you get stuck, check how the code is written in the GitHub repo of this guide.
Let’s try to run the server now, open the terminal inside your project folder and run the command:
lein run
If everything went smooth, open up your browser and go to: localhost:3000
You should see the “Hello world” message which was the body of our response to any given request.
## Understanding how the HTTP request is represented
Ok, before we proceed, it’s important to know how the HTTP requests are represented in the Ring library and more specifically in our application.
The easiest way to do this is by looking at the request. To do that, we're going to use a Clojure function that prints data to the terminal in a more readable way. Let's update our namespace require definitions:
(ns app.core
  (:require [ring.adapter.jetty :refer [run-jetty]]
            [clojure.pprint :refer [pprint]]))
And now, let’s update our handler function, because this is the function that receives our HTTP request:
(defn handler [request]
(clojure.pprint/pprint request) ; Prints the request on the console
{:status 200
:body "Hello world"})
Remember that what our function returns is its last expression, and that means we can execute other tasks before returning the result. Clojure is not a pure functional language; side effects have their utility when you are writing software for real use, such as debugging.
Now, while the server is running, you can look at the terminal window and see how the HTTP request is represented, let’s take a look:
{:ssl-client-cert nil,
:protocol "HTTP/1.1",
{"cache-control" "max-age=0",
"accept"
"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3",
"connection" "keep-alive",
"user-agent"
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
"host" "0.0.0.0:3000",
"accept-encoding" "gzip, deflate",
"accept-language" "en-US,en;q=0.9"},
:server-port 3000,
:content-length nil,
:content-type nil,
:character-encoding nil,
:uri "/",
:server-name "0.0.0.0",
:query-string nil,
:body
#object[org.eclipse.jetty.server.HttpInputOverHTTP 0x7b2512bb "HttpInputOverHTTP@7b2512bb[c=0,q=0,[0]=null,s=STREAM]"],
:scheme :http,
:request-method :get}
It’s a simple Clojure map, we’re going to work with it later, and leaving it here is perfect for debugging.
## Import the routing library
We have a very known routing library in clojure, it’s specially made for Ring. This library is Compojure.
To start using it, let’s update our project.clj file again, at the :dependencies key of the map, add:
[compojure "1.6.1"]
I recommend that later you take a deeper look at its documentation.
## Creating routes
Before we start creating routes, we have to update our namespace require definitions again; take a look:
(ns app.core
  (:require [ring.adapter.jetty :refer [run-jetty]]
            [clojure.pprint :refer [pprint]]
            [compojure.core :refer [routes GET]]
            [compojure.route :refer [not-found]]))
What is happening here? I've added two functions from compojure.core and one from compojure.route. We're going to see them in action now as I create our HTTP routes:
(def my-routes
  (routes
   (GET "/endpoint-a" [] "<h1>Hello endpoint A</h1>")
   (GET "/endpoint-b" [] "<h1>Hello endpoint B</h1>")
   (not-found "<h1>Page not found</h1>")))
Here’s what the code does:
Creates my-routes which is just a name reference to what our routes function returns.
The routes function, as the compojure API documentation says:
Create a Ring handler by combining several handlers into one.
So, basically, my-routes is just a Ring handler! We can pass 'several handlers' as arguments to the routes function, and Compojure has some macros ready for this; one of them is GET. As you can see, the syntax is self-explanatory. Just keep in mind that what those macros return under the hood is actually a response map. To prove this we are going to try something; bear with me, open up a REPL and do these steps:
• Be sure your REPL is in the app.core namespace
• If my-routes returns a Ring handler, then it's a function; let's see:
app.core> my-routes
#function[compojure.core/routes/fn--2512]
• Yup! it’s a function, and it’s a Ring handler, so we can pass an HTTP request map? sure we can, let’s use the one i paste here before:
app.core>
(my-routes ; I've omitted the header info to shorten the code
{:ssl-client-cert nil,
:protocol "HTTP/1.1",
:server-port 3000,
:content-length nil,
:content-type nil,
:character-encoding nil,
:uri "/endpoint-a", ; Notice i have changed the uri requested to /endpoint-a
:server-name "0.0.0.0",
:query-string nil,
:body nil,
:scheme :http,
:request-method :get})
app.core> {:status 200, :headers {"Content-Type" "text/html; charset=utf-8"}, :body "<h1>Hello endpoint A</h1>"}
There you go: the answer is a response map, just as expected. Compojure is smart enough to create the response map for us automatically!
The last part of our routes is the not-found function; again, let's look at what the Compojure documentation says:
Returns a route that always returns a 404 “Not Found” response with the supplied response body.
Well, this says pretty much everything. Why do we need this function? Because our routes are tried one by one when the HTTP request comes in, and if nothing matches, the handler raises an error; with this not-found route at the end, something is always returned whenever nothing else matches.
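One extra Compojure feature worth knowing about, even though this guide doesn't use it: routes can capture segments of the URI and bind them to names. A sketch:

(GET "/users/:id" [id] (str "<h1>User number " id "</h1>"))

A request to /users/42 would match this route with id bound to the string "42".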
Alright, the last thing we have to do is replace the handler used by run-jetty in our main function, simply because my-routes returns a handler, and this handler now takes care of routing, which is what we wanted in this section. In the end, our code now looks like this:
(ns app.core
  (:require [ring.adapter.jetty :refer [run-jetty]]
            [clojure.pprint :refer [pprint]]
            [compojure.core :refer [routes GET]]
            [compojure.route :refer [not-found]]))
(def my-routes
  (routes
   (GET "/endpoint-a" [] "<h1>Hello endpoint A</h1>")
   (GET "/endpoint-b" [] "<h1>Hello endpoint B</h1>")
   (not-found "<h1>Page not found</h1>")))
(defn -main
[]
(run-jetty my-routes {:port 3000}))
## JSON Requests and responses
We are setting up an API, so we need a data format to communicate with it, and on grounds of popularity I'll choose JSON to demonstrate here. You're probably already thinking about how we are going to parse the requests and responses into JSON; the Ring + Compojure way to do this is to represent JSON as Clojure hash-maps. To do the parsing from Clojure maps to JSON, we use ready-to-go functions that we call middlewares.
This raises something important: we have to understand what a middleware function is. In our context, middlewares are functions that add some functionality to handlers (in other words, they tweak the data), so they receive a handler, wrap it, and return it again with the new functionality added. Here's the basic flow:
(middleware-fn handler) -> tweaked-new-handler
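To make the pattern concrete, here is a minimal hand-written middleware (purely illustrative; the JSON middlewares we will actually use come ready-made from the Ring-JSON library):

(defn wrap-print-uri [handler]
  (fn [request]
    (println "Incoming request for" (:uri request)) ; do something extra with the request
    (handler request)))                             ; then delegate to the wrapped handler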
For JSON, the two critical middlewares come from the Ring-JSON library; they are:
• wrap-json-response
• wrap-json-body
But as always, before we use it, we have to update our project.clj again by importing this lib in our project, open the file and at the :dependencies key of the map, add:
[ring/ring-json "0.5.0"]
Cool. Because this iteration of our code makes many modifications, I'll paste it here and we are going to break down the changes. Here we go:
(ns app.core
  (:require [ring.adapter.jetty :refer [run-jetty]]
            [clojure.pprint :refer [pprint]]
            [compojure.core :refer [routes GET POST]]
            [compojure.route :refer [not-found]]
            [ring.middleware.json :refer [wrap-json-response wrap-json-body]]
            [ring.util.response :refer [response]]))
(def my-routes
  (routes
   (GET "/endpoint-a" [] (response {:foo "bar"}))
   (GET "/endpoint-b" [] (response {:baz "qux"}))
   (POST "/debug" request (response (with-out-str (clojure.pprint/pprint request))))
   (not-found "<h1>Page not found</h1>")))
(def app
(-> my-routes
wrap-json-body
wrap-json-response))
(defn -main
[]
(run-jetty app {:port 3000}))
At the namespace definitions, I've just added ring.middleware.json from the Ring-JSON library and its two critical functions. We also now have ring.util.response, referring the response function; you'll find out what it is in a sec.
P.S.: I've also added the POST macro from compojure.core!
Before we discuss the changes in our routes, let's look at this new definition I created, called app.
What is this doing? It is wrapping middlewares around our my-routes handler. And I'm using the threading macro -> because otherwise this code would start being ugly to read, as we will possibly need more middlewares in the future, and things would start looking like this:
(wrap-blablabla (wrap-json-body (wrap-json-response my-routes)))
(((((()))))) yeah, that's what I'm talking about, so using threading macros is useful for avoiding readability issues.
So in the end, app is just our good old handler from the beginning. That’s precisely why we are now using it as the first argument for the run-jetty function!!
Important to know now is what the response function I referred to does and why it is here. From the Ring docs:
Returns a skeletal Ring response with the given body, status of 200, and no headers.
Unfortunately, Compojure can't take a Clojure hash-map meant as JSON and wrap it inside a response map for us the way it did before with plain strings, so we need this function.
Ok, now the fun part: our routes. First let me talk about /debug, which is a POST route. I created this route for debugging purposes, so we can see what our HTTP request looks like inside Clojure. To do that I used the pprint function mentioned earlier, but it needs a small tweak to print to the browser/terminal or whatever client you're using to access the endpoint; that's why we wrap it in with-out-str. Note also that we are passing around a value I called request; the reason is explained in the Compojure wiki here.
The rest is pretty much self-explanatory: I used the response function mentioned above to create the response map, passing it the body. The body, being a Clojure map, is treated as JSON because the wrap-json-response middleware handles this for us, just as wrap-json-body handles JSON coming in on HTTP requests and transforms it into Clojure maps. Let's see all of this in action:
• Update your core.clj using the code above
• Open the terminal on the project folder and do: lein run
• Open another terminal window and, using the curl tool, check the debug endpoint:

$ curl -d '{"key1":"value1", "key2":"value2"}' -H "Content-Type: application/json" -X POST http://localhost:3000/debug

The output:

{:ssl-client-cert nil,
 :protocol "HTTP/1.1",
 :remote-addr "127.0.0.1",
 :params {},
 :route-params {},
 :headers {"user-agent" "curl/7.58.0", "host" "localhost:3000", "accept" "*/*", "content-length" "34", "content-type" "application/json"},
 :server-port 3000,
 :content-length 34,
 :compojure/route [:post "/debug"],
 :content-type "application/json",
 :character-encoding "UTF-8",
 :uri "/debug",
 :server-name "localhost",
 :query-string nil,
 :body {"key1" "value1", "key2" "value2"},
 :scheme :http,
 :request-method :post}

As you can see, our :body key now holds the JSON-encoded request, and we receive it as a Clojure hash-map. There is one thing to notice, though: the keys are strings, not keywords. If you want to receive them as keywords (which simplifies pulling values out of the request), read the documentation of the Ring-JSON library; there is a middleware option for doing exactly that!

Let's also test one of our endpoints, /endpoint-a:

$ curl http://localhost:3000/endpoint-a
The output:
{"foo":"bar"}
As expected, and if you open your browser, you’re going to see that you are indeed getting a JSON response.
## What we accomplished
Although we are still lacking lots of stuff for real API logic, you already have what I promised at the beginning: a very simple code set-up to start developing on your own. Don't worry, I'll link you some resources to keep going, and I will also create part II of this post!
## A few tips & advices
I have some tips for you before going ahead:
• Clojure projects have a "connect the pieces" way of building things: if you need something, there will be a library! Here's a gift: Clojure ToolBox
• As I said before, there's no single correct way to build your system/project/API; as long as your code is idiomatic and readable, you can't go wrong if you plan ahead and analyse your needs. Clojure style guide
• Remember, you WILL get stuck at some point, because we are using an approach where you reuse pre-built pieces for your system. You NEED to read library documentation; it is always on the library's GitHub repo, and Clojure libs' docs have never disappointed me.
• When everything seems to fail, search for answers in the Clojure communities; if there's no answer, then ask. You can google those communities, but here is the most active one (I think): Clojure Slack
## What’s next
The next logical step would be Validating input from the client, i’ll cover this on another post using this project, but if you want to go by yourself, to validate input in clojure we use mostly Specs, and i’ve already used this library too. | 2019-11-22 15:49:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17426104843616486, "perplexity": 3221.895032968152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671363.79/warc/CC-MAIN-20191122143547-20191122172547-00488.warc.gz"} |
https://mathematica.stackexchange.com/questions/213738/performing-operations-on-lhs-of-assignment?noredirect=1 | # Performing operations on LHS of assignment [duplicate]
Is it possible to first perform an operation on LHS of the = symbol, evaluate that, and then assign the RHS to the new LHS as usual? The following shows the most basic example I could think of.
ToExpression["list" <> ToString[4]] = {1,2,3,4,5};
> Set::write: Tag ToExpression in ToExpression[list4] is Protected.
It looks like Mathematica thinks we want to redefine the functions used on the LHS of =, not evaluate it first and then assign. I've tried surrounding the LHS with brackets and parentheses, but that doesn't work either. Is there a way to make this work? If yes, with what code?
The reason this arose is because I made a function that takes input n, and makes a list accordingly. Now I wanted to AUTOMATICALLY give that list a name that has n in it. So if I compute f[1209], I want to automatically store whatever list I computed to list1209, without having to type list1209 = computedlist.
• Welcome to MSE. Try Evaluate@ToExpression["list" <> ToString[4]] = {1, 2, 3, 4, 5} Jan 29 '20 at 4:20
• In this case, it's safer to use something like Symbol["list" <> IntegerString[4]]. Jan 29 '20 at 4:44 | 2021-09-20 10:42:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6631941199302673, "perplexity": 1030.1667808442176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057036.89/warc/CC-MAIN-20210920101029-20210920131029-00338.warc.gz"} |
https://xianblog.wordpress.com/tag/harmonic-mean/ | ## approximating evidence with missing data
Posted in Books, pictures, Statistics, University life with tags , , , , , , , , , , , , , , , on December 23, 2015 by xi'an
Panayiota Touloupou (Warwick), Naif Alzahrani, Peter Neal, Simon Spencer (Warwick) and Trevelyan McKinley arXived a paper yesterday on Model comparison with missing data using MCMC and importance sampling, where they proposed an importance sampling strategy based on an early MCMC run to approximate the marginal likelihood a.k.a. the evidence. Another instance of estimating a constant. It is thus similar to our Frontier paper with Jean-Michel, as well as to the recent Pima Indian survey of James and Nicolas. The authors give the difficulty to calibrate reversible jump MCMC as the starting point to their research. The importance sampler they use is the natural choice of a Gaussian or t distribution centred at some estimate of θ and with covariance matrix associated with Fisher’s information. Or derived from the warmup MCMC run. The comparison between the different approximations to the evidence are done first over longitudinal epidemiological models. Involving 11 parameters in the example processed therein. The competitors to the 9 versions of importance samplers investigated in the paper are the raw harmonic mean [rather than our HPD truncated version], Chib’s, path sampling and RJMCMC [which does not make much sense when comparing two models]. But neither bridge sampling, nor nested sampling. Without any surprise (!) harmonic means do not converge to the right value, but more surprisingly Chib’s method happens to be less accurate than most importance solutions studied therein. It may be due to the fact that Chib’s approximation requires three MCMC runs and hence is quite costly. The fact that the mixture (or defensive) importance sampling [with 5% weight on the prior] did best begs for a comparison with bridge sampling, no? The difficulty with such study is obviously that the results only apply in the setting of the simulation, hence that e.g. another mixture importance sampler or Chib’s solution would behave differently in another model. In particular, it is hard to judge of the impact of the dimensions of the parameter and of the missing data.
## rediscovering the harmonic mean estimator
Posted in Kids, Statistics, University life with tags , , , , , , , on November 10, 2015 by xi'an
When looking at unanswered questions on X validated, I came across a question where the author wanted to approximate a normalising constant
$N=\int g(x)\,\text{d}x\,,$
while simulating from the associated density, g. While seemingly unaware of the (huge) literature in the area, he re-derived [a version of] the harmonic mean estimate by considering the [inverted importance sampling] identity
$\int_\mathcal{X} \dfrac{\alpha(x)}{g(x)}p(x) \,\text{d}x=\int_\mathcal{X} \dfrac{\alpha(x)}{N} \,\text{d}x=\dfrac{1}{N}$
when α is a probability density and by using for α the uniform over the whole range of the simulations from g. This choice of α obviously leads to an estimator with infinite variance when the support of g is unbounded, but the idea can be easily salvaged by using instead another uniform distribution, for instance on an highest density region, as we studied in our papers with Darren Wraith and Jean-Michel Marin. (Unfortunately, the originator of the question does not seem any longer interested in the problem.)
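For readers who want to see the identity in action, here is a small self-contained sketch (my own illustration, not code from the post): g is an unnormalised Gaussian so the true constant is N = sqrt(2*pi), the draws come from the normalised density g/N, and α is taken as the uniform density on an interval well inside the support, which keeps the variance finite.

import numpy as np

rng = np.random.default_rng(0)

g = lambda x: np.exp(-0.5 * x**2)        # unnormalised density; true N = sqrt(2*pi)
x = rng.standard_normal(100_000)         # draws from the normalised density g/N

a = 1.0                                  # alpha: uniform density on [-a, a]
inside = np.abs(x) <= a
ratio = np.where(inside, 1.0 / (2 * a), 0.0) / g(x)

inv_N_hat = ratio.mean()                 # Monte Carlo estimate of 1/N
print(1.0 / inv_N_hat, np.sqrt(2 * np.pi))   # estimated N versus the true value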
## a remarkably simple and accurate method for computing the Bayes factor &tc.
Posted in Statistics with tags , , , , , , , , on February 13, 2013 by xi'an
This recent arXiv posting by Martin Weinberg and co-authors was pointed out to me by friends because of its title! It indeed sounded a bit inflated. And also reminded me of old style papers where the title was somehow the abstract. Like An Essay towards Solving a Problem in the Doctrine of Chances… So I had a look at it on my way to Gainesville. The paper starts from the earlier paper by Weinberg (2012) in Bayesian Analysis where he uses an HPD region to determine the Bayes factor by a safe harmonic mean estimator (an idea we already advocated earlier with Jean-Michel Marin in the San Antonio volume and with Darren Wraith in the MaxEnt volume). An extra idea is to try to optimise [against the variance of the resulting evidence] the region over which the integration is performed: “choose a domain that results in the most accurate integral with the smallest number of samples” (p.3). The authors proceed by volume peeling, using some quadrature formula for the posterior coverage of the region, either by Riemann or Lebesgue approximations (p.5). I was fairly lost at this stage and the third proposal based on adaptively managing hyperrectangles (p.7) went completely over my head! The sentence “the results are clearly worse with O() errors, but are still remarkably better for high dimensionality”(p.11) did not make sense either… The method may thus be remarkably simple, but the paper is not written in a way that conveys this impression!
## estimating a constant (not really)
Posted in Books, Statistics, University life with tags , , , , , , , , , , , , , on October 12, 2012 by xi'an
Larry Wasserman wrote a blog entry on the normalizing constant paradox, where he repeats that he does not understand my earlier point…Let me try to recap here this point and the various comments I made on StackExchange (while keeping in mind all this is for intellectual fun!)
The entry is somehow paradoxical in that Larry acknowledges (in that post) that the analysis in his book, All of Statistics, is wrong. The fact that “g(x)/c is a valid density only for one value of c” (and hence cannot lead to a notion of likelihood on c) is the very reason why I stated that there can be no statistical inference nor prior distribution about c: a sample from f does not bring statistical information about c and there can be no statistical estimate of c based on this sample. (In case you did not notice, I insist upon statistical!)
To me this problem is completely different from a statistical problem, at least in the modern sense: if I need to approximate the constant c—as I do in fact when computing Bayes factors—, I can produce an arbitrarily long sample from a certain importance distribution and derive a converging (and sometimes unbiased) approximation of c. Once again, this is Monte Carlo integration, a numerical technique based on the Law of Large Numbers and the stabilisation of frequencies. (Call it a frequentist method if you wish. I completely agree that MCMC methods are inherently frequentist in that sense, And see no problem with this because they are not statistical methods. Of course, this may be the core of the disagreement with Larry and others, that they call statistics the Law of Large Numbers, and I do not. This lack of separation between both notions also shows up in a recent general public talk on Poincaré’s mistakes by Cédric Villani! All this may just mean I am irremediably Bayesian, seeing anything motivated by frequencies as non-statistical!) But that process does not mean that c can take a range of values that would index a family of densities compatible with a given sample. In this Monte Carlo integration approach, the distribution of the sample is completely under control (modulo the errors induced by pseudo-random generation). This approach is therefore outside the realm of Bayesian analysis “that puts distributions on fixed but unknown constants”, because those unknown constants parameterise the distribution of an observed sample. Ergo, c is not a parameter of the sample and the sample Larry argues about (“we have data sampled from a distribution”) contains no information whatsoever about c that is not already in the function g. (It is not “data” in this respect, but a stochastic sequence that can be used for approximation purposes.) Which gets me back to my first argument, namely that c is known (and at the same time difficult or impossible to compute)!
Let me also answer here the comments on “why is this any different from estimating the speed of light c?” “why can’t you do this with the 100th digit of π?” on the earlier post or on StackExchange. Estimating the speed of light means for me (who repeatedly flunked Physics exams after leaving high school!) that we have a physical experiment that measures the speed of light (as the original one by Rœmer at the Observatoire de Paris I visited earlier last week) and that the statistical analysis infers about c by using those measurements and the impact of the imprecision of the measuring instruments (as we do when analysing astronomical data). If, now, there exists a physical formula of the kind
$c=\int_\Xi \psi(\xi) \varphi(\xi) \text{d}\xi$
where φ is a probability density, I can imagine stochastic approximations of c based on this formula, but I do not consider it a statistical problem any longer. The case is thus clearer for the 100th digit of π: it is also a fixed number, that I can approximate by a stochastic experiment but on which I cannot attach a statistical tag. (It is 9, by the way.) Throwing darts at random as I did during my Oz tour is not a statistical procedure, but simple Monte Carlo à la Buffon…
Overall, I still do not see this as a paradox for our field (and certainly not as a critique of Bayesian analysis), because there is no reason a statistical technique should be able to address any and every numerical problem. (Once again, Persi Diaconis would almost certainly differ, as he defended a Bayesian perspective on numerical analysis in the early days of MCMC…) There may be a “Bayesian” solution to this particular problem (and that would nice) and there may be none (and that would be OK too!), but I am not even convinced I would call this solution “Bayesian”! (Again, let us remember this is mostly for intellectual fun!)
## Harmonic means for reciprocal distributions
Posted in Statistics with tags , , on November 17, 2011 by xi'an
An interesting post on ExploringDataBlog on the properties of the distribution of 1/X. Hmm, maybe not the most enticing way of presenting it, since there does not seem anything special in a generic inversion! What attracted me to this post (via Rbloggers) is the fact that a picture shown there was one I had obtained about twenty years ago when looking for a particular conjugate prior in astronomy, a distribution I dubbed the inverse normal distribution (to distinguish it from the inverse Gaussian distribution). The author, Ron Pearson [who manages to mix the first name and the second name of two arch-enemies of 20th Century statistics!] points out that well-behaved distributions usually lead to heavy tailed reciprocal distributions. Of course, the arithmetic mean of a variable X is the inverse of the harmonic mean of the inverse variable 1/X, so looking at those distributions makes sense. The post shows that, for the inverse normal distribution, depending on the value of the normal mean, the harmonic mean has tails that vary between a Cauchy and a normal distributions…
## Bayesian ideas and data analysis
Posted in Books, R, Statistics, Travel, University life with tags , , , , , , , , , , , , , on October 31, 2011 by xi'an
Here is [yet!] another Bayesian textbook that appeared recently. I read it in the past few days and, despite my obvious biases and prejudices, I liked it very much! It has a lot in common (at least in spirit) with our Bayesian Core, which may explain why I feel so benevolent towards Bayesian ideas and data analysis. Just like ours, the book by Ron Christensen, Wes Johnson, Adam Branscum, and Timothy Hanson is indeed focused on explaining the Bayesian ideas through (real) examples and it covers a lot of regression models, all the way to non-parametrics. It contains a good proportion of WinBugs and R codes. It intermingles methodology and computational chapters in the first part, before moving to the serious business of analysing more and more complex regression models. Exercises appear throughout the text rather than at the end of the chapters. As the volume of their book is more important (over 500 pages), the authors spend more time on analysing various datasets for each chapter and, more importantly, provide a rather unique entry on prior assessment and construction. Especially in the regression chapters. The author index is rather original in that it links the authors with more than one entry to the topics they are connected with (Ron Christensen winning the game with the highest number of entries). Continue reading | 2017-03-23 08:16:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7698807120323181, "perplexity": 730.6404932329548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186841.66/warc/CC-MAIN-20170322212946-00554-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://gmatclub.com/forum/if-150-were-increased-by-60-and-then-decreased-by-y-percent-the-resu-267527.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
# If 150 were increased by 60% and then decreased by y percent, the resu
Math Expert
Joined: 02 Sep 2009
Posts: 50058
If 150 were increased by 60% and then decreased by y percent, the resu [#permalink]
08 Jun 2018, 23:50
Difficulty: 35% (medium). Question stats: 75% (01:17) correct, 25% (01:55) wrong, based on 86 sessions.
If 150 were increased by 60% and then decreased by y percent, the result would be 192. What is the value of y?
(A) 20
(B) 28
(C) 32
(D) 72
(E) 80
_________________
Senior PS Moderator
Joined: 26 Feb 2016
Posts: 3207
Location: India
GPA: 3.12
Re: If 150 were increased by 60% and then decreased by y percent, the resu [#permalink]
09 Jun 2018, 00:25
Bunuel wrote:
If 150 were increased by 60% and then decreased by y percent, the result would be 192. What is the value of y?
(A) 20
(B) 28
(C) 32
(D) 72
(E) 80
When 150 is increased by 60% or $$\frac{60}{100} * 150 = 90$$, the resulting number is $$240(150 + 90)$$
When the number is decreased by y percent, it becomes 192. The decrease is $$48(240 - 192)$$
y percent of 240 must be 48, $$\frac{y}{100} * 240 = 48$$
Solving for y, we get y = $$\frac{48 * 100}{240} = 20$$. Therefore, the value of y is 20(Option A)
_________________
You've got what it takes, but it will take everything you've got
Director
Status: Learning stage
Joined: 01 Oct 2017
Posts: 908
WE: Supply Chain Management (Energy and Utilities)
Re: If 150 were increased by 60% and then decreased by y percent, the resu [#permalink]
09 Jun 2018, 06:32
Bunuel wrote:
If 150 were increased by 60% and then decreased by y percent, the result would be 192. What is the value of y?
(A) 20
(B) 28
(C) 32
(D) 72
(E) 80
When 150 were increased by 60% , resultant value is 150(1+$$\frac{60}{100}$$)
This resultant value is decreased by y%, the new resultant value is:150(1+$$\frac{60}{100}$$)(1-$$\frac{y}{100}$$)
Given that 150(1+$$\frac{60}{100}$$)(1-$$\frac{y}{100}$$)=192
Or, 150*1.6(1-$$\frac{y}{100}$$)=192
Or, $$\frac{y}{100}$$=1- $$\frac{192}{150*1.6}$$=1-$$\frac{12}{15}$$
Or, y=$$\frac{3}{15}$$*100=20
Ans. option (A)
_________________
Regards,
PKN
Rise above the storm, you will find the sunshine
Senior SC Moderator
Joined: 22 May 2016
Posts: 2040
If 150 were increased by 60% and then decreased by y percent, the resu [#permalink]
11 Jun 2018, 18:28
Bunuel wrote:
If 150 were increased by 60% and then decreased by y percent, the result would be 192. What is the value of y?
(A) 20
(B) 28
(C) 32
(D) 72
(E) 80
150 increased by 60% = ?
$$(150*1.6) = (150*\frac{8}{5}) = 240$$
240 decreases by $$y$$ percent to 192
Percent change, y: $$\frac{New-Old}{Old}*100$$
Percent change, y: $$\frac{240-192}{240}*100=$$
$$(\frac{48}{240}*100)= (\frac{2}{10}*100)=$$
$$(.2*100)=20$$ percent
_________________
___________________________________________________________________
For what are we born if not to aid one another?
-- Ernest Hemingway
Intern
Joined: 08 Oct 2017
Posts: 24
Re: If 150 were increased by 60% and then decreased by y percent, the resu [#permalink]
14 Jul 2018, 00:22
60% increase over 150 = 150 + 150*0.6 = 240
Post Y% decrease = 192
Change = 240-192 = 48
Y% = ($$\frac{48}{240}$$)*100 = 20%
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4110
Location: India
GPA: 3.5
WE: Business Development (Commercial Banking)
Re: If 150 were increased by 60% and then decreased by y percent, the resu [#permalink]
14 Jul 2018, 10:46
Bunuel wrote:
If 150 were increased by 60% and then decreased by y percent, the result would be 192. What is the value of y?
(A) 20
(B) 28
(C) 32
(D) 72
(E) 80
150 were increased by 60% = $$\frac{160}{100}*150 = 240$$
Then reduced by y% to result in 192, so % decrease is $$\frac{(240 - 192)}{240}*100$$ = 20%, Answer must be (A)
_________________
Thanks and Regards
Abhishek....
PLEASE FOLLOW THE RULES FOR POSTING IN QA AND VA FORUM AND USE SEARCH FUNCTION BEFORE POSTING NEW QUESTIONS
How to use Search Function in GMAT Club | Rules for Posting in QA forum | Writing Mathematical Formulas |Rules for Posting in VA forum | Request Expert's Reply ( VA Forum Only )
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | 2018-10-24 01:22:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6560152769088745, "perplexity": 6035.242499938183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517628.91/warc/CC-MAIN-20181024001232-20181024022732-00309.warc.gz"} |
https://www.blogarama.com/arts-and-entertainment-blogs/1302867-notesformsc-blog/39191323-finding-determinant-cross-multiplication | Finding Determinant By Cross Multiplication
In the previous article, we saw how the determinant decides whether a system of equations (read: square matrix) has an inverse, or a solution, namely only when the determinant is not zero. The determinant is obtained from the equation given below.
$determinant = \sum \pm a_{1\alpha}a_{2\beta}\cdots a_{n\nu}$
To learn more about finding the determinant in this way, read the previous article. Here we will discuss finding the determinant by cross multiplication, but before that let us understand the different notations used to represent determinants.
Notation For Determinants
There are several notations for determinants given by earlier mathematicians. Suppose $A$ is the square coefficient matrix of a system of linear equations; then the determinant of $A$ can be written in the ways given below.
Let the matrix $A$ be a 2 x 2 matrix.
$A = \begin{bmatrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{bmatrix}$
Different ways to represent the determinant of matrix $A$:
$det(A)$ -- (1)
$|A|$ -- (2)
$\begin{vmatrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{vmatrix}$ -- (3)
Determinant as a Function
Imagine the determinant as a function that takes a square matrix as input and gives a single value as output. For example, $f(x) = x^3$ is a function where $x$ can be any real number. Similarly, $det(A)$ is a function that takes a matrix as input and gives a determinant value $d$. When all entries of the matrix are integers, the determinant value is also an integer, because it is a linear combination of products of those integers.
Determinant of $1 \times 1$ Matrix
If $A$ is a matrix with just one element, then its determinant is the same element.
Example #1
Let $A$ be a square matrix of order $1 \times 1$:
$A = \begin{bmatrix}2\end{bmatrix}$
Then the determinant of $A$ is
$|A| = |2| = 2$
Determinant of $2 \times 2$ Matrix
The determinant of a $2 \times 2$ matrix $A = \begin{bmatrix}a & b\\c & d\end{bmatrix}$ is obtained by cross multiplication: multiply the entries on the main diagonal and subtract the product of the entries on the other diagonal, so $|A| = a \times d - b \times c$. (The original post illustrates this with a figure.)
Example #2
Let $A$ be a $2 \times 2$ square matrix. Find the determinant of the matrix $A$.
Solution:
Let $A$ be a 2 x 2 square matrix.
$A = \begin{bmatrix}2 & 3\\1 & 5\end{bmatrix}$
$|A| = a \times d - b \times c$
$|A| = 2 \times 5 - 3 \times 1$
$|A| = 10 - 3 = 7$
Example #3
Let $B$ be a square matrix of order $2 \times 2$. Find the determinant of the matrix $B$.
Solution:
Let $B$ be a square matrix of order $2 \times 2$.
$B = \begin{bmatrix}5 & -1\\4 & -3\end{bmatrix}$
$|B| = a \times d - b \times c$
$|B| = 5 \times (-3) - (-1) \times 4$
$|B| = (-15) - (-4)$
$|B| = (-15) + 4 = -11$
Determinant Of $3 \times 3$ Matrix
The determinant of a $3 \times 3$ matrix can also be computed through cross multiplication; since we have a larger matrix, we first reduce it to smaller $2 \times 2$ determinants (expansion along the first row) and then apply the same cross-multiplication rule to each of them. (The original post illustrates this with a figure, which is not reproduced here.)
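As a quick illustration of both rules, here is a small Python sketch (my own addition, not from the original post; det2 and det3 are made-up helper names) that computes a $2 \times 2$ determinant by cross multiplication and a $3 \times 3$ determinant by expanding along the first row:

def det2(m):
    # cross multiplication: a*d - b*c
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    # expand along the first row; each entry multiplies the 2x2 determinant
    # of the rows/columns that remain, with alternating signs + - +
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * det2([[e, f], [h, i]])
            - b * det2([[d, f], [g, i]])
            + c * det2([[d, e], [g, h]]))

print(det2([[2, 3], [1, 5]]))                     # 7, as in Example #2
print(det2([[5, -1], [4, -3]]))                   # -11, as in Example #3
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3, a sample 3x3 case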
× | 2021-09-26 15:26:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 42, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998098015785217, "perplexity": 953.6205170248005}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00579.warc.gz"} |
http://physics.stackexchange.com/tags/resistance/new | # Tag Info
1
So what exactly happens to the potential inside the resistor ? Unlike the ideal conductors, for which an electric field cannot exist inside, there is an electric field through the resistor body when there is a current through. And, as you may know, the rate of change in electric potential is related to the value of the electric field. Thus, the ...
3
Let me first take a little detour away from this circuit to particle accelerators. If you have some electrons in vacuum and a potential set up between two points (exactly the same as saying you have an electric field set up) you can accelerate your electrons. If you move a single electron through $1V$ of potential the electron gains $1eV$ of energy where ...
0
I think this is really about which way you count current and voltage to be positive. For every element in a network you can define a a current and a voltage. If voltage and and current point the same direction, it's called "receptor". If they are in opposite direction its' called "generator". The most common convention is to use "receptor" for resistors, ...
1
The bleeder resistor is across the capacitor so they have identical voltage across.
2
I recently answered a similar question here. The ideal capacitor equation $$i_C = C\frac{dv_C}{dt}$$ assumes the passive sign convention which means that the reference direction for $i_C$ is into the positive labelled terminal. When you write $$iR = v_C$$ it is necessarily the case that $$i_C = - i$$ To see this, assume that both positive labelled ...
2
This is a common question. The issue is that the "Q" in $i = dQ/dt$ is not the same as the $Q$ that represents the charge on the capacitor. The variable $Q$ in use here is simply the charge on the capacitor. No problem. When the capacitor discharges the quantity of charge that is introduced into the circuit after a time $\delta t$ has elapsed is ...
0
As DanielSank said, I cannot say anything quantitative about the question without a diagram. That being said, your formula, in general, is correct if the topology of the circuit is symmetric. Without any details about how mechanical energy is coupled to the circuit, I can only guess the general reason for the apparent non-conservation of energy. This ...
0
Are you sure you mean the internal resistance "r"? The internal resistance typically can not be adjusted, often this question is phrased in terms of the load resistance "R". The power dissipated in the external load is: $$P=\frac{V^2 R}{(r+R)^2}\;,$$ which you need to maximize. If you maximize with respect to R, you find R=r... If you maximize with ...
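For completeness, the maximization step referred to above can be spelled out in one line (this derivation is added here for clarity and is not part of the original answer):
$$\frac{dP}{dR}=V^2\,\frac{(r+R)^2-2R(r+R)}{(r+R)^4}=V^2\,\frac{r-R}{(r+R)^3}=0\quad\Rightarrow\quad R=r.$$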
0
Edit: I had previously misread which node was a and which was b in the question. V_ab = 108.75 - 15(4.25) = +45V If a is on the right, and E = -108.75, then you should have $$V_{ab} = -108.75 + (15\Omega)(4.25\mathrm{V}) = -45\mathrm{V}$$
0
Technically the resistance of a wire is never 0. Typical wires of copper have electrical conductivity of $10^{7} S/m$ at room temperature. So there is indeed a very slight potential drop but highly negligible compared to the one that would be experience at a proper resistor. As for the problem of electrostatic, indeed from a single charged particle the ...
3
At equilibrium, the field inside an ideal conductor is zero. http://hyperphysics.phy-astr.gsu.edu/hbase/electric/gausur.html#c2 A charge moving through such a conductor neither gains nor loses energy. We can't attach an ideal conductor to an ideal voltage source. Something has to give. There will be a voltage drop along a real wire due to non-zero ...
2
I think the key thing missing in your thinking is that the energy drop across a resistor is not just determined by the properties of the resistor, but also by how much current flows. The cool thing is that no matter what resistors you put in, the current that flows is such that the potential will fall all the way back down. The reason for this is that ...
1
Conductance is the extrinsic property while conductivity is the intrinsic property. This means that conductance is the property of an object dependent of its amount/mass or physical shape and size, while conductivity is the inherent property of the material that makes up the object. No matter how the object changes in terms of shape/size/mass, as long as it ...
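To put the distinction into formulas (added here for illustration): for a uniform bar of length $l$ and cross-section $A$, the object-level and material-level quantities are related by
$$R=\rho\,\frac{l}{A},\qquad G=\frac{1}{R}=\sigma\,\frac{A}{l},$$
so $\sigma$ and $\rho$ characterise the material, while $G$ and $R$ also depend on the object's shape and size.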
-1
The fact that the conductivity $\sigma=\frac{1}{\rho}$ of a metal scales like $\propto\frac{1}{T}$ is due to elastic electron-phonon scattering, i.e. the Coulomb interaction between the charge density fluctuation induced by a phonon and an electron. In an incoherent transport theory of electrons in solids (i.e. avoiding corrections due to interferences ...
-1
In my point of view, it is independent thing. Dependence of resistance on temperature is determined by Nernst-Einstein equation. $$R=\frac{l k_B T}{S D Z^2 e^2 C}$$ where $T$ -is a temperature of resistance, $k_B$ is a Boltzmann constant, l- length, S - cross sectional area, $D$ - diffusion coefficient, $C$ is a charge carrier density, $Z$ is a amount of ...
0
The law of currents says that the sum of currents in any node/junction is zero. The law of voltages says that the sum of emf=sum(i*R) around any closed loop and equals the sum of voltage sources; sum(e) in the loop. According to this your equations on the right of the figure are correct. The rest is simple mathematical manipulations. For complex circuits all ...
0
In mathematics we have boundary value problems or initial value problems or a mix. For the first you need steady state and specification of the values on All the boundary points. Your case belong to this I guess. So the answer is no, you have to know all the boundary points. For initial values problems and this you need if you have storage elements like ...
0
The question was not very clear. If $V_2$ was not there, you simply combine $R_2$ and $R_3$ to $R_x$ as $1/R_x=1/R_2+1/R_3$ and sum $R_x$ with $R_1$ and you get simply $I_1$. Inserting $V_2$ creates another current competing with $I_1$. Concerning the boxed statement, I believe that it reflects the sums of voltages, not really it combines the resistors, ...
0
They aren't. Two resistors are in parallel if they have the same voltage drop across them; two resistors are in series if the same current flows through them. In your problem, because of the presence of $v_2$, it's entirely possible to have different currents in and voltage drops across all of $R_1,R_2,R_3$. I believe your boxed statement, \begin{align} ...
-1
Nichrome is used to connect the two terminals of the battery or any source that is connected to the circuit. It is not used for making filaments. Therefore, it offers low resistance to the flow of the electric current.
0
The $\frac{1}{I}$ term is the analogue of $y$ in the equation for a straight line, $y=mx+b$. I'll guess that $R$ is a variable resistance you control and $r$ is some internal or fixed, possibly unknown, resistance in a circuit. When you plot the controlled resistance $R$ on the horizontal axis, that is the analogue of $x$ in the linear equation pattern. ...
2
For your circuit, $V = I\cdot R$. You are plotting (unusually) R along the X axis and $\frac{1}{I}$ along the Y axis, so the slope is $\frac{1}{V}$. Now the fact that this slope is a straight line tells you that the voltage is constant. This means that (over the range of your experiment) your voltage source has a low internal resistance. Imagine for a ...
1
If it's a simple circuit where Ohm's law applies, then we should get $$V=IR$$ so we see that $$V/I = R$$ $$1/I=R/V$$ $$1/I = (1/V) \times R$$ The gradient should then be $1/V$. Seems like a slightly bizarre plot but if you got a straight line then that makes the maths simple at least!
2
A good reference was given in an answer to a related question: Cserti 2000 (arXiv preprint, whose numbers I'll be referring to) solved a number of generalizations of the 2D lattice problem. For a $d$-dimensional lattice, the resistance between the origin and the point $(l_1, \ldots, l_d)$ is given by eq. 18 in that paper: R(l_1, \ldots, l_d) = R_0 ...
0
Each branch containing resistors acts as a potential divider, dividing the voltage across the whole branch in the ratios 40:50 and 50:40 (top and bottom respectively), ie the ratios 4:5 and 5:4. This means that each resistor will have either 4/(4+5)=4/9 or 5/(4+5) = 5/9 of the total voltage (18V), which as you have correctly calculated is 8V for both the ...
2
If you have current flowing one way through a resistor, then the electrons flow through the other way. Since current flows from the high voltage end of a resistor to the low voltage end, then the electrons come in at the low voltage end and come out at the high voltage end. When electrons (which are negatively charged) go from low voltage to high voltage, ...
0
There may be a confusion between energy and power. While the current, which is the number of charges per second, and the energy of the electron don't change at the output of the resistor, the power does. Power is the amount of energy per unit time, and that does not affect the current which, again, is the amount of charge per unit time.
Top 50 recent answers are included | 2015-03-29 06:27:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7297564148902893, "perplexity": 269.02921539184393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298228.32/warc/CC-MAIN-20150323172138-00107-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://aitopics.org/mlt?cdid=arxivorg%3A300F1E3A&dimension=taxnodes | ### Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes
Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.
### IBM Watson aligns with 16 health systems and imaging firms to apply cognitive computing to battle cancer, diabetes, heart disease
IBM Watson Health has formed a medical imaging collaborative with more than 15 leading healthcare organizations. The goal: To take on some of the most deadly diseases. The collaborative, which includes health systems, academic medical centers, ambulatory radiology providers and imaging technology companies, aims to help doctors address breast, lung, and other cancers; diabetes; eye health; brain disease; and heart disease and related conditions, such as stroke. Watson will mine insights from what IBM calls previously invisible unstructured imaging data and combine it with a broad variety of data from other sources, such as data from electronic health records, radiology and pathology reports, lab results, doctors' progress notes, medical journals, clinical care guidelines and published outcomes studies. As the work of the collaborative evolves, Watson's rationale and insights will evolve, informed by the latest combined thinking of the participating organizations.
### How 3D Printing and IBM Watson Could Replace Doctors
Health care executives from IBM Watson and Athenahealth athn debated that question onstage at Fortune's inaugural Brainstorm Health conference Tuesday. In addition to partnering with Celgene celg to better track negative drug side effects, IBM ibm is applying its cognitive computing AI technology to recommend cancer treatment in rural areas in the U.S., India, and China, where there is a dearth of oncologists, said Deborah DiSanzo, general manager for IBM Watson Health. For example, IBM Watson could read a patient's electronic medical record, analyze imagery of the cancer, and even look at gene sequencing of the tumor to figure out the optimal treatment plan for a particular person, she said. "That is the promise of AI--not that we are going to replace people, not that we're going to replace doctors, but that we really augment the intelligence and help," DiSanzo said. Athenahealth CEO Jonathan Bush, however, disagreed.
### Identification of Predictive Sub-Phenotypes of Acute Kidney Injury using Structured and Unstructured Electronic Health Record Data with Memory Networks
Acute Kidney Injury (AKI) is a common clinical syndrome characterized by the rapid loss of kidney excretory function, which aggravates the clinical severity of other diseases in a large number of hospitalized patients. Accurate early prediction of AKI can enable in-time interventions and treatments. However, AKI is highly heterogeneous, thus identification of AKI sub-phenotypes can lead to an improved understanding of the disease pathophysiology and development of more targeted clinical interventions. This study used a memory network-based deep learning approach to discover predictive AKI sub-phenotypes using structured and unstructured electronic health record (EHR) data of patients before AKI diagnosis. We leveraged a real world critical care EHR corpus including 37,486 ICU stays. Our approach identified three distinct sub-phenotypes: sub-phenotype I is with an average age of 63.03$\pm 17.25$ years, and is characterized by mild loss of kidney excretory function (Serum Creatinne (SCr) $1.55\pm 0.34$ mg/dL, estimated Glomerular Filtration Rate Test (eGFR) $107.65\pm 54.98$ mL/min/1.73$m^2$). These patients are more likely to develop stage I AKI. Sub-phenotype II is with average age 66.81$\pm 10.43$ years, and was characterized by severe loss of kidney excretory function (SCr $1.96\pm 0.49$ mg/dL, eGFR $82.19\pm 55.92$ mL/min/1.73$m^2$). These patients are more likely to develop stage III AKI. Sub-phenotype III is with average age 65.07$\pm 11.32$ years, and was characterized moderate loss of kidney excretory function and thus more likely to develop stage II AKI (SCr $1.69\pm 0.32$ mg/dL, eGFR $93.97\pm 56.53$ mL/min/1.73$m^2$). Both SCr and eGFR are significantly different across the three sub-phenotypes with statistical testing plus postdoc analysis, and the conclusion still holds after age adjustment.
### How companies are Using AI in the Field of Patient Data Mining
One of the ways AI is and will continue t be helpful in the field of healthcare is allowing medical professionals the ability to create treatment plans as well as discovering the best suited methods for helping their patients; instead of having to battle the tread-wheel of bureaucracy, nurses and physicians can focus on doing their actual jobs. Since we are in the age of big data, patient information is becoming valuable as tech giants, such as IBM and Google, are becoming more involved in acquiring this information; therefore, companies are using AI in the field known as patient data mining in a variety of ways. The AI research branch of the company recently launched a project known as Google Deepmind Health which focuses on mining medical records with the goal of providing faster and better health services; the project can go through hundreds of thousands of medical data within minutes. Also, Google's life sciences are working on a data-collecting initiative that aims to apply some of the same algorithms used to power Goggle's search button to analyze what it is that makes a person healthy. Included in this is experimenting with technologies that monitor diseases such as a digital contact lens that might detect levels of blood sugar. | 2019-12-16 06:07:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18623268604278564, "perplexity": 4490.499269478056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541317967.94/warc/CC-MAIN-20191216041840-20191216065840-00149.warc.gz"} |
https://dsp.stackexchange.com/questions/62674/adding-white-gaussian-noise-to-a-voice-signal | # Adding White Gaussian noise to a voice signal
I'm trying to add White Gaussian Noise to an audio file. However, the energy of the noise should be 1/10th of that of the signal. My first attempt is as follows:
[y,Fs] = audioread('drum.wav');
%sound(y,Fs);
sound(noisy_sig,Fs);
function noisy_sig = addnoise( sig , SNRdb )
sig_power = norm(sig,2) / length(sig);
% noise power is equal to sigma^2
sigma2 = sig_power / 10^(SNRdb/10) ;
noisy_sig = sig + sqrt(sigma2)*randn(size(sig));
end
When I listen the resulting wav, there is no difference between the original and noisy ones. Am I doing something wrong? Any help would be appreciated.
• So, Is it correct to say x = y + 0.1*randn(length(y),1) ? The energy of the noise signal should be 1/10th of the original signal. – Jason Dec 18 '19 at 15:31
Your problem is how you calculate the signal power. You are calling norm which calculates $$\sqrt{\sum_i |x_i|^2}$$ but you want to calculate the sum of the squared values and then divide by the length as you do in your code. Try this instead: sig_power = norm(sig)^2 / length(sig);. This line will calculate $$\frac{1}{N}\sum_{i=1}^N |x_i|^2$$.
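For readers who prefer to see the whole corrected computation in one place, here is an equivalent sketch in Python/NumPy (my own illustration; the question and answer use MATLAB, and the test signal below is just a stand-in for the audio):

import numpy as np

def add_noise(sig, snr_db):
    # mean signal power: (1/N) * sum(|x_i|^2)
    sig_power = np.sum(sig**2) / len(sig)
    # noise variance chosen so that signal power / noise power = 10^(SNRdb/10)
    sigma2 = sig_power / 10**(snr_db / 10)
    return sig + np.sqrt(sigma2) * np.random.randn(*sig.shape)

# an SNR of 10 dB means the noise power is 1/10th of the signal power
sig = np.sin(2 * np.pi * 440 * np.arange(0, 1, 1 / 8000))  # stand-in audio signal
noisy = add_noise(sig, 10)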
• That adds noise with variance $0.1$. But from your comment you don't mention anything about y's power. SNR is signal-to-noise ratio so it is the signal and noise power relative to each other. – Engineer Dec 18 '19 at 17:07 | 2020-05-25 18:21:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7830792665481567, "perplexity": 666.7952449021569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389309.17/warc/CC-MAIN-20200525161346-20200525191346-00444.warc.gz"} |
https://courses.engr.illinois.edu/cs225/sp2018/mps/6/ | Partner MP
MP6 is a partner MP!
You should denote who you work with in the PARTNERS.txt file in mp6. If you worked alone, include only your NetID in PARTNERS.txt.
## Assignment Description
In Computer Science, it’s said that one thing brings us together more than anything else:
Segmentation fault
We’ve all been there. There are a few options to debug code:
• You can open the code up and see if you can spot the errors, but that's really difficult to do, especially with segmentation faults.
• You could try inserting cout statements, but that requires adding and removing lots of lines of code for each bug.
• You could try using valgrind (or ASAN), but that’s most helpful with memory errors and doesn’t give much information about logical bugs.
• You could try using a debugger, such as gdb.
This MP explores building a tricky data structure where you’ll find a seg fault around every corner. At the end, you’ll have a better understanding of some of the tools you can use to debug code.
## Getting Started with GDB
To launch your program using gdb, run the following command:
gdb [program name]
To run your program with optional command line arguments:
(gdb) run [arguments]
Alternatively, you can do this in one line with the following command:
gdb --args ./program_name [optional] [args] [here]
This allows you to simply type
(gdb) run
Note
Throughout the MP, we’ll use the notation
(gdb) command...
to indicate that the command should be run from within GDB.
Tip
GDB will provide several helpful features. First, it will output similar debugging information as Valgrind upon errors such as segmentation faults. Second, and more important, it allows you to stop the program execution, move around, and view the state of the running program at any point in time.
To do that, we will use the following common commands (see more details in the slides). We’ll also define the abbreviations of these commands, so you don’t have to type the full names of these commands when you want to use them.
• break [file:line number]
• Example usage: break skipList.cpp:40
• Create a breakpoint at the specified line. This will stop the program's execution when it is run. (See run).
• When your program is stopped (by a previous use of break) in a certain file, break n will create a breakpoint at line n in that same file.
• Note: There are other variations on how to use break here. One variation is breaking at a function belonging to a class. Example: break SkipList::insert.
• Abbreviation: b. Example usage: b skipList.cpp:40
• clear [file:line number]
• Removes a breakpoint designated by break.
• run (arguments)
• Runs the program, starting from the main function.
• Abbreviation: r.
• list
• Shows the next few lines where the program is stopped.
• layout src
• Shows an updating window with your source code and the current line of execution.
• Usually easier than typing list every line or referring back to your open code
• next
• Continues to the next line executed. This does not enter any functions. (See step for this).
• Abbreviation: n.
• step
• Continues to the next line executed. Unlike next, this will step into any functions called on that line
• Abbreviation: s.
• finish
• Steps out of a function.
• Abbreviation: fin.
• continue
• Continues the execution of the program after it’s already started to run. continue is usually used after you hit a breakpoint.
• Abbreviation: c.
• Viewing the state of your code.
• info args
• Shows the current arguments to the function.
• If you are stopped within a class’s function, the this variable will appear.
• info locals
• Shows the local variables in the current function.
• print [variable]
• Prints the value of a variable or expression. Example: print foo(5)
• The functionality of print is usually superseded by info locals if you are looking to print local variables. But if you want to view object member variables, print is the way to go.
• Example: print list->head. Or print *integer_ptr.
• Abbreviation: p.
• Other useful commands.
## Checking Out the Code
To check out your files for this MP, use the following command:
git fetch release
git merge release/mp6 -m "Merging initial mp6 files"
If you’re on your own machine, you may need to run:
git fetch release
git merge --allow-unrelated-histories release/mp6 -m "Merging initial mp6 files"
## Skip Lists: Intro
For this MP, we have provided a doubly linked Skip List for you to debug. Skip lists are ordered (sorted) data structures composed of layers of linked lists. Each layer is a normal linked list with only a subset of the nodes. Every node is guaranteed to appear on the first layer, and on each level above that, the probability it appears is $p^{\textrm{level}}$ (where $p$ is a probability you will choose). If the probability is too low, we end up with an ordinary linked list, and if it’s too high, with a stack of linked lists that all reach the maximum height. The ideal probability for a skip list is $p=\frac{1}{2}$; in our MP, that corresponds to passing a probability of 50.
Here is an illustration of a skip list (note that this drawing is only singly linked).
Since not all nodes appear on each level, we see on average faster search, insertion, and removal times by effectively “skipping” some nodes. Once you have successfully fixed this implementation of a Skip List, you will be able to play around with different probabilities and node counts. If you choose a probability of 50, you can see how a “good” list looks, whereas higher or lower probabilities will illustrate degenerate cases. Note that this Skip List is not ideal: a singly linked version would be faster. We have modified it so that we could introduce extra bugs, giving you the chance to sharpen your debugging skills.
We have also designed this skip list so that it uses sentinel nodes. That is, head and tail will always be pointing at nodes, and these nodes are not logically considered part of the data.
Note:
Due to our use of sentinel nodes, there should be no NULL pointers anywhere in our list, except for all of head’s previous pointers and all of tail’s next pointers.
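To make the pointer layout concrete, here is a minimal sketch of what a doubly linked skip node can look like. This is an illustration only; the real SkipNode in skipNode.h may use different names and stores an HSLAPixel rather than a double.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative only -- not the MP's skipNode.h. Each node stores a key and a value
// plus one (prev, next) pointer pair for every level it participates in.
struct DemoSkipNode {
    int key;
    double value;  // the MP stores an HSLAPixel here instead
    // pointers[i].first  = previous node on level i
    // pointers[i].second = next node on level i
    std::vector<std::pair<DemoSkipNode*, DemoSkipNode*>> pointers;

    DemoSkipNode(int k, double v, std::size_t height)
        : key(k), value(v), pointers(height, {nullptr, nullptr}) {}
};
```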
### Skip Lists: Search (Find)
To find in a Skip List, we need a key to search for. The general algorithm is as follows: start at the highest level of head. Look at the next key. If that key is before your key you want to find, go forward. Otherwise, if you’ve found the key, return that value. Else, you need to go down. Repeat. Let’s illustrate with an example:
Suppose we want to search for the value associated with key 5. The first step is to start at head’s highest level. We look ahead and see that it’s pointing at 1. 1 < 5, so we advance forwards. 1’s next is pointing at tail (whose key is also larger than 5), so we descend. The next pointer at this level is pointing at 4. 4 < 5, so we advance. 4’s next pointer points at 6, and 5 < 6, so we descend. Again, 4’s next is pointing at 6 so we descend. Now, 4’s next at this level points at 5. Since 5 == 5, we return the value stored with key 5.
If the key did not exist, we would return the default value when the next key was larger than the search key and we were on the lowest level.
Note
The search() function needs to return a default value on failure. Because we are returning HSLAPixels, an arbitrary decision needs to be made for what to return in case of failure. For the purposes of this MP, we will return a pixel with value HSLA(0, 0, 0, 0.5) when the key is not found in the list.
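The search just described fits in a short loop. The sketch below uses the illustrative DemoSkipNode from above rather than the MP's real classes (whose find()/findRHelper() are recursive and return an HSLAPixel), so treat it as a compilable sketch, not the intended solution.

```cpp
// Illustrative search over the DemoSkipNode structure sketched earlier.
// head and tail are sentinels; notFound plays the role of HSLA(0, 0, 0, 0.5).
double demoFind(DemoSkipNode* head, DemoSkipNode* tail, int key, double notFound) {
    DemoSkipNode* curr = head;
    int level = static_cast<int>(head->pointers.size()) - 1;   // start at the top level
    while (level >= 0) {
        DemoSkipNode* next = curr->pointers[level].second;
        if (next != tail && next->key < key) {
            curr = next;              // next key is still before ours: move forward
        } else if (next != tail && next->key == key) {
            return next->value;       // found the key
        } else {
            --level;                  // overshot (or reached tail): drop down a level
        }
    }
    return notFound;                  // bottom level exhausted: the key is not in the list
}
```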
### Skip Lists: Insert
To insert into a Skip List, we need a key and a value. For this MP, keys will be integers and values will be HSLAPixels (this could be templatized, but that’s not the focus of the MP). Because Skip Lists are sorted, we need to first find where we should be inserted. To do so, we start from the top of the list and scan forward. If the key we see is before the key we are inserting, we will go forward. If the key is after our key, we would go down a level and repeat. Once we find where we should be inserted, we will calculate a random height based on the probability defined when the program was run, and insert ourselves there. If our height is larger than the list’s current height, we are going to grow our sentinel nodes. Then, we simply loop over each pair of next/prev pointers in the new node and set it to the nodes after and before it.
Let’s consider an example using this diagram:
Suppose we want to insert a node with key 5.5 (assume keys are floats for this example). We begin at head’s top level. We see that the key of the next node is 1. 5.5 > 1 so we move forward. We see then that the next pointer is tail (whose key is also > 5.5), so we therefore go down. We see that the next key is 4. 4 < 5.5, so we move forward. The next node is 6, and 6 > 5.5, so we go down. We see that the next key is still 6, and 6 is still more than 5.5, so we go down again. We see that the next key is 5. 5 < 5.5, so we go forward. We look at the next key and see that it’s 6. 6 > 5.5. Because we are already at the lowest level, we’ve found out where to insert. So, we create a new node of random height. Let’s suppose first that that height was 2. So, we would not need to increase the height of the list. We would set both next pointers to 6, and the first (lowest) prev pointer to 5, and the second prev pointer to 4. At this point we are done. However, let’s also illustrate what would happen if we needed to expand the list. Let’s assume that the height of the new node was actually 6. We would need to expand our list’s height by 2, and we would set those next pointers of head to the new node, and the prev pointers of the tail node to the new node. We would then loop up the new node, setting the first three next pointers to 6, and the next three to tail. The first prev pointer would point at 5, the next two would point at 4, and the last three would point at head. After setting these pointers, we are done with the insertion.
Note:
For this MP, we choose to say that attempting to insert a duplicate key will update the existing value instead of inserting again. This means that a traversal should never show the same keys twice!
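The two pieces of insert that are easiest to get wrong are the random height and the per-level splice. The following is a hedged sketch of both; the names and details are mine, not the MP's.

```cpp
#include <cstddef>
#include <cstdlib>

// Illustrative coin-flip height: each extra level is kept with the given
// probability (e.g. 50 means 50%), capped at the list's maximum height.
std::size_t demoRandomHeight(int probability, std::size_t maxHeight) {
    std::size_t height = 1;
    while (height < maxHeight && (std::rand() % 100) < probability) {
        ++height;
    }
    return height;
}

// Splicing the new node in on level i is then the usual doubly linked insert,
// repeated once per level of the new node (using the DemoSkipNode sketch above):
//     node->pointers[i]          = { before, after };
//     before->pointers[i].second = node;
//     after->pointers[i].first   = node;
```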
### Skip Lists: Remove
Once we have find(), remove is not a long algorithm. First, we will run a search for the node with the given key. If it doesn’t exist, we stop. Otherwise, we will loop through all of the next and previous pointers and assign the prev’s next to our next, and our next’s prev to our prev. Let’s walk through an example:
Suppose we want to remove 4. First, we search for 4 and get a pointer to it. Then, we loop up its pairs of pointers. That is, first we’ll set 5’s prev pointer to 3, 3’s first next pointer to 5. Then we’ll set 6’s second prev pointer to 3, and 3’s second next pointer to 6. Then, we’re going to set 6’s last prev pointer to 1, and 1’s third next pointer to 6. Finally, we free the memory for 4 and finish.
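In code, the splice-out step can be as small as the sketch below, again written against the illustrative DemoSkipNode rather than the MP's real remove().

```cpp
// Illustrative unlinking: because of the sentinels, every level of a real node
// has a non-null neighbor on both sides, so we can splice blindly and then free.
void demoRemove(DemoSkipNode* node) {
    for (std::size_t level = 0; level < node->pointers.size(); ++level) {
        DemoSkipNode* before = node->pointers[level].first;
        DemoSkipNode* after  = node->pointers[level].second;
        before->pointers[level].second = after;   // forward pointer now skips the node
        after->pointers[level].first   = before;  // backward pointer now skips the node
    }
    delete node;                                  // finally release its memory
}
```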
These tasks exist to help you learn how to navigate through code execution with GDB. We are going to walk through two of them, although there are more examples to play around with.
These code files are located in the gdb-examples/ folder, but the executables will be generated in the main directory.
These examples will not be graded, but will prepare you for the task ahead of debugging code!
### Breakpoints
One of the most useful aspects of GDB is the ability to view variable values. In order to do that, you must stop code execution; here, we will use a breakpoint. The goal of this exercise is to find out why our swap program doesn’t work.
1. Compile the code using make.
make -j swap
2. Start gdb with the executable.
gdb ./swap
3. Insert a breakpoint in the file. We’ll put it at main so we can step through from the beginning.
(gdb) break main
4. Setup our source code view.
(gdb) layout src
5. Run the program. This program takes two arguments.
(gdb) run 5 7
6. Display both the value of a and the value of b. Until you get to where they are initialized, they will have garbage values.
(gdb) display a
(gdb) display b
7. Step through the code until you get to line 20. Notice how a and b have their correct values now
(gdb) next
8. Figure out why this swap program doesn’t work and fix it!
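If you get stuck, the pattern worth knowing is the classic pass-by-value mistake. The snippet below is a generic illustration of that pattern; it is not reproduced from gdb-examples/swap.cpp, so the actual bug there may differ.

```cpp
#include <iostream>

// Broken: a and b are copies of the caller's variables, so nothing visible changes.
void swapByValue(int a, int b) {
    int tmp = a;
    a = b;
    b = tmp;
}

// Fixed: references make the assignments act on the caller's variables.
void swapByReference(int& a, int& b) {
    int tmp = a;
    a = b;
    b = tmp;
}

int main() {
    int x = 5, y = 7;
    swapByValue(x, y);
    std::cout << x << " " << y << std::endl;   // still "5 7"
    swapByReference(x, y);
    std::cout << x << " " << y << std::endl;   // now "7 5"
    return 0;
}
```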
### Backtrace
One of the most useful commands in GDB is backtrace. This shows you the function stack at the current execution time. We will use this to find out why our recurse program segfaults.
1. Compile the code using make:
make -j recurse
2. Start gdb with the program.
gdb ./recurse
3. Run the program. This program takes one argument
(gdb) run 4
4. Your program should be printing several *’s to the screen. Press Ctrl+C to halt the program. This is the same as if the program had hit one of your breakpoints.
(gdb) <Ctrl+C>
5. Use backtrace to print your program’s function stack
(gdb) backtrace
6. You’ll see a different output depending on when you hit Ctrl+C. However, you may see some library functions at the top and several recursive iterations of the function recurse. You can see that the length parameter is unchanged, and index keeps increasing. To debug this, we will step through our program line by line. First, type q then enter to exit the stack trace. Then set a breakpoint on main.
(gdb) b main
(gdb) layout src
(gdb) r 4
7. Step through your program until you hit line 35
(gdb) next
8. Now, we want to enter the function. In gdb, we do that by using the step command. next will go to the next line of the current function (skipping over function calls), while step will also enter functions.
(gdb) step
9. Now, use next and step appropriately to find out why the program never stops.
## Debugging SkipList Code:
Before you start working on debugging, you should familiarize yourself with the code. Look in skipList.h and skipNode.h to see what you have to work with. Knowing which functions and objects you have will make debugging much smoother and more satisfying.
You’re only going to need to fix bugs in skipList.cpp, but not all the functions in the file are broken. You’re going to need to figure out which functions are broken and how to fix them using the debugging techniques you’ve learned already combined with gdb.
### A First Bug
As you can see, the code compiles but does not work! The first error is semantic: we are segfaulting on what looks like perfectly valid code for a list with sentinels.
1. Compile the code using make:
make -j
2. Start gdb with the executable.
gdb --args ./mp6 4 50
3. Run the program
(gdb) run
4. Print some values
(gdb) p head_
$1 = (SkipNode *) 0x0
(gdb) p tail_
$2 = (SkipNode *) 0x0
You should see something like the above. For some reason, our head and tail are NULL! But with sentinel nodes, that shouldn’t be possible! We need to look around for where the sentinel nodes are being initialized, and figure out why they aren’t working.
5. After taking a look through skipList.cpp, you should notice that the constructor sets up these values. So let's set a breakpoint, and rerun our program:
(gdb) b skipList.cpp:20
6. Rerun the program
(gdb) run
It will ask you if you want to restart the program from the beginning – choose yes, then press enter.
You should now see something like the following:
Breakpoint 1, SkipList::SkipList (this=0x688010) at skipList.cpp:20
20 SkipNode* head_ = new SkipNode();
Now, we need to step through and see what’s going on. The next command will be helpful here! Make sure to print values after every step to see where the problem is happening. Once you’ve fixed this constructor, move on to the next section to solve the second bug.
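A hint about what to look for: the line shown above declares a brand-new local variable named head_ instead of assigning to anything. The generic pattern, illustrated on a made-up class (not the MP's SkipList), looks like this:

```cpp
// Generic illustration of member shadowing -- not the MP's real classes.
class Widget {
  public:
    Widget() {
        // BUG: writing a type here declares a *local* variable that shadows the
        // member below. The member stays nullptr, and the allocation leaks as
        // soon as the constructor returns.
        int* data_ = new int(42);
        (void)data_;   // silence the unused-variable warning in this demo
    }
    // FIX: drop the type so the statement assigns to the member instead:
    //     data_ = new int(42);

  private:
    int* data_ = nullptr;
};
```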
### A Second Bug
As you can see, the code still does not work! The second error is logical: the findRHelper function does not seem to work properly.
1. Compile the code using make:
make -j
2. Start gdb with the executable.
gdb --args ./mp6 4 50
3. Run the program
(gdb) run
4. Use backtrace to print your program’s function stack
(gdb) backtrace
You should see something like the following before running backtrace:
Program received signal SIGSEGV, Segmentation fault.
0x000000000040ce28 in SkipList::findRHelper (
at skipList.cpp:205
205 {
and the following after:
#0 0x000000000040ce28 in SkipList::findRHelper (
at skipList.cpp:205
#1 0x000000000040cef8 in SkipList::findRHelper (this=0x673010, key=0, level=0,
curr=0x673040) at skipList.cpp:217
#2 0x000000000040cef8 in SkipList::findRHelper (this=0x673010, key=0, level=0,
curr=0x673040) at skipList.cpp:217
#3 0x000000000040cef8 in SkipList::findRHelper (this=0x673010, key=0, level=0,
curr=0x673040) at skipList.cpp:217
#4 0x000000000040cef8 in SkipList::findRHelper (this=0x673010, key=0, level=0,
curr=0x673040) at skipList.cpp:217
#5 0x000000000040cef8 in SkipList::findRHelper (this=0x673010, key=0, level=0,
curr=0x673040) at skipList.cpp:217
#6 0x000000000040cef8 in SkipList::findRHelper (this=0x673010, key=0, level=0,
curr=0x673040) at skipList.cpp:217
#7 0x000000000040cef8 in SkipList::findRHelper (this=0x673010, key=0, level=0,
--[snip]--
---Type <return> to continue, or q <return> to quit---
Note:
Your output may differ slightly, for example in the key or exact memory addresses. Because the find() function has some randomness involved, this is to be expected.
If you press enter for a while, you will see that we have over 20000 recursive calls! This, along with our first piece of information that we can’t read our parameters (because we don’t own the memory) and the fact that the parameters aren’t changing in any of the calls, indicates that we are dealing with a stack overflow - we’ve called so many functions that we run out of stack memory and crash. The cause of this is usually infinite recursion, although some very large problems may also overflow the stack if solved recursively (this won’t be an issue in this MP).
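As a generic sketch of the failure mode (this is not recurse.cpp or findRHelper verbatim): a recursive call whose stopping test can be stepped over will recurse until the stack runs out.

```cpp
#include <iostream>

// Fragile: if index can jump past length, the equality test never fires,
// index keeps growing, and the recursion eventually overflows the stack.
void printStars(int index, int length) {
    if (index == length) { return; }
    std::cout << "*";
    printStars(index + 2, length);   // steps by 2, so it can hop over length
}

// Robust: stop as soon as we reach *or pass* the target.
void printStarsFixed(int index, int length) {
    if (index >= length) { return; }
    std::cout << "*";
    printStarsFixed(index + 1, length);
}
```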
### Where to go from here
Once you’ve fixed the first two bugs, you’ll see that your program is still not working :(. As you fix bugs, you may approach the point where it appears to work correctly. However, this does not mean that the skip list is working! Try adding some test cases to main and see what happens!
Some useful test cases would be:
• Try to remove something
• Try to search for things
• Try increasing the scale (by increasing the image size for example)
• Try changing the probability and make sure your code works if many nodes are at the max height, or many nodes are not high at all
• Testing out all the public methods
• Making sure the constructors all work correctly
• Making sure implementation-specific features such as default return values work correctly
We’ve left a TODO comment in main.cpp which is a good place to start writing some of these test cases. Failing Catch tests will also suggest good test cases to add to main() for you to step through in gdb.
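For instance, a handful of smoke tests of the following shape give you something concrete to set breakpoints on. Treat it strictly as a sketch: the include paths, the HSLAPixel constructor arguments, and the remove() name are assumptions you should check against the real skipList.h and cs225 headers.

```cpp
// Sketch only -- verify every name and signature against skipList.h / cs225/HSLAPixel.h.
#include "skipList.h"
#include "cs225/HSLAPixel.h"
#include <iostream>

void smokeTest(SkipList& list) {
    cs225::HSLAPixel red(0, 1, 0.5);           // assumed (hue, saturation, luminance) ctor

    list.insert(42, red);
    std::cout << "after insert, hue = " << list.search(42).h << std::endl;    // expect 0

    list.insert(42, cs225::HSLAPixel(120, 1, 0.5));                           // duplicate key
    std::cout << "after update, hue = " << list.search(42).h << std::endl;    // expect 120, not a second node

    list.remove(42);                           // assumed name for the removal function
    std::cout << "after remove, alpha = " << list.search(42).a << std::endl;  // expect the 0.5 "not found" default
}
```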
Useful GDB Tips
While GDB is great, there are some aspects of it that aren’t fun to work with. For example, when using STL containers (such as the vectors in this assignment), gdb won’t be able to access them or their functions, giving an error instead.
For this reason, if you want to see what’s inside myVector[5], you need to save it into a local variable in your source code! To alleviate this annoyance, we’ve provided some functions that will return the contents of the vector at a position, as well as print a whole node and its surrounding keys.
They are:
• gdbGetPointer(SkipNode*, size_t) - returns the node’s i-th pair of pointers. Example usage:
(gdb) print gdbGetPointer(traverse, 3).next->key
• gdbGetNode(SkipNode*) - Will get a string representing the node’s neighbors. This isn’t super useful in gdb because it will print the std::string strangely
• gdbPrintNode(SkipNode*) - Will print out the string returned by gdbGetNode(). NOTE: This one won’t work (well) if you are using layout src. You’re going to have to not use the TUI to use this one well (the TUI covers it up). Example usage:
(gdb) call gdbPrintNode(traverse)
Fixing The Program
This section is very important! Once your executable runs without segfaulting or infinite looping, you should test it in Catch. You will almost certainly fail some test cases. Once you see which functions appear to be breaking, add some test cases to main which call those functions. This will allow you to see if they segfault and also step through them in gdb.
Don’t just change random lines! Use gdb to pinpoint in which specific line(s) they may occur and reason about why. You don’t want to introduce new bugs into the code!
Once you think everything is working, you can run Catch with the provided tests:
make test && ./test
Running The Tests
Note that the test cases won’t work well if your program is segfaulting or infinite looping! Make sure that if your test cases aren’t working, you try to isolate the test case that is failing to run and test it manually from main.cpp
## Part 1 Testing - SkipList constructor, SkipList::traverse, and SkipList::insert
Grading for Part 1 covers only the SkipList constructor, SkipList::traverse, and SkipList::insert (and the functions called by those functions). You can run test cases for Part 1 by running:
./test [part=1]
## Part 2 Testing - Finishing the Debugging
• printKeys()
• printKeysReverse()
• traverse()
• traverseReverse()
Once you think your code works, feel free to play around with the skip lists. The main function allows you to try different sizes of list, different probabilities, and even change the colors on the output images! The main function will generate two images:
• out-structure.png - Shows the structure of the skip list. You can see how changing the probability can radically alter the structure and efficiency of the list, and try to apply your intuition as to why.
• out-degenerate.png - Shows how “degenerate” the list is. With a probability of 50%, you will have a list that’s very close to the “perfect” skip list. As you move away from a probability of 50%, you will see the list become more and more degenerate, which means that we are losing more and more of the advantages of a skip list and simply becoming a really fat and memory-inefficient linked list, or an overly-complicated linked list depending on which direction you go. You can see this measure of degenerate-ness by how many black pixels there are – the more black pixels, the more the list diverges from the “perfect” list, and the less black pixels the closer to being “perfect” the list is.
While it may help you debug to run the executable with small image_size numbers, the images will not be very useful for such small lists. Try running with at least an image_size of 16 for out-structure.png and a size of 64 for out-degenerate.png to get the best results. Also note that for out-structure.png, it will be img_size * img_size + 2 pixels wide, which will quickly become unwieldy.
Here are some example images (all seeded with srand(225)):
out-structure for arguments img_size = 16 and probability = 50:
out-structure for arguments img_size = 16 and probability = 85:
out-degenerate for arguments img_size = 128 and probability = 50:
out-degenerate for arguments img_size = 128 and probability = 60:
The degenerate images are showing how far away from a perfect skip list you are. Notice how this very quickly diverges as you distance yourself from probability = 50. The structural images are actually showing the structure of your skip list. Notice how as we increase the probability you get many more nodes capping out at the maximum height of the skip list. If you want to see your own skip list to see if the images match (it is not guaranteed even if you seed it the same), just replace the srand(time(NULL)) in main.cpp with srand(225).
Commit your changes in the usual way:
git add -u
git commit -m "Finished mp6"
git push origin master
• skipList.cpp | 2020-07-04 05:55:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3488715887069702, "perplexity": 1605.004485842182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00595.warc.gz"} |
https://love2d.org/forums/viewtopic.php?t=89763 | ## Can someone help me with this piece of code?
Questions about the LÖVE API, installing LÖVE and other support related questions go here.
Forum rules
DjKniteX
Prole
Posts: 4
Joined: Wed Nov 11, 2020 9:35 pm
### Can someone help me with this piece of code?
Hello guys! New to Love2D; somewhat familiar with Lua (used Pico8 for last year's Game Off). I'm using Love2D this year for Game Off 2020 and I'm pretty excited to use a language/framework I'm somewhat familiar with.
I'm going through the documentation right now and I'm having some trouble. I followed the animation one and I got it work and I found the basic platformer one for basic movement and I kinda got that working.
The problem is that all the sprites from the spritesheet show when I try to just load it as is (also, when I reference the animation variable I get a draw error, so I have to use the static image, which gives me the whole sheet).
Is there a way I can just load the first pic from the sprite sheet? Also follow-up question; How do I make the animation work with movement?
Current code I have;
Code: Select all
function love.load()
platform = {}
player = {}
animation = newAnimation(love.graphics.newImage("assets/linus.png"), 32, 32, 0.8)
animation1 = newAnimation(love.graphics.newImage("assets/city.png"), 8, 8, 0.8)
platform.width = love.graphics.getWidth()
platform.height = love.graphics.getHeight()
platform.x = 0
platform.y = platform.height / 2
player.x = love.graphics.getWidth() / 2
player.y = love.graphics.getHeight() / 2
player.speed = 200
player.img = love.graphics.newImage("assets/linus.png")
player.ground = player.y
player.y_velocity = 0
player.jump_height = -300
player.gravity = -500
end
function love.update(dt)
keys(dt)
end
function love.draw()
-- local spriteNum = math.floor(animation.currentTime / animation.duration * #animation.quads) + 1
-- love.graphics.draw(animation.spriteSheet, animation.quads[spriteNum], 0, 10, 0, 4)
love.graphics.setColor(1, 1, 1)
love.graphics.rectangle('fill', platform.x, platform.y, platform.width, platform.height)
love.graphics.draw(player.img, player.x, player.y, 0, 1, 1, 0, 32)
end
function newAnimation(image, width, height, duration)
local animation = {}
animation.spriteSheet = image;
animation.quads = {};
for y = 0, image:getHeight() - height, height do
for x = 0, image:getWidth() - width, width do
table.insert(animation.quads, love.graphics.newQuad(x, y, width, height, image:getDimensions()))
end
end
animation.duration = duration or 1
animation.currentTime = 0
return animation
end
function keys(dt)
if love.keyboard.isDown('d') then
if player.x < (love.graphics.getWidth() - player.img:getWidth()) then
player.x = player.x + (player.speed * dt)
end
elseif love.keyboard.isDown('a') then
if player.x > 0 then
player.x = player.x - (player.speed * dt)
end
end
if love.keyboard.isDown('space') then
if player.y_velocity == 0 then
player.y_velocity = player.jump_height
end
end
if player.y_velocity ~= 0 then
player.y = player.y + player.y_velocity * dt
player.y_velocity = player.y_velocity - player.gravity * dt
end
if player.y > player.ground then
player.y_velocity = 0
player.y = player.ground
end
end
nikneym
Citizen
Posts: 56
Joined: Sat Mar 09, 2013 1:22 pm
Contact:
### Re: Can someone help me with this piece of code?
Hi, welcome to the forums! It seems you forgot to update your animation frame by frame. Try this:
Code: Select all
--load spritesheet.
local imgsheet = love.graphics.newImage("sample.png")
--frames variable to hold frames, currentFrame to hold our current frame.
local frames = {}
local currentFrame = 1
--crop frames from spritesheet and add them to "frames" table.
for y = 1, 4 do
for x = 1, 7 do
table.insert(frames, love.graphics.newQuad(76 * x - 76, 87 * y - 87, 73, 87, imgsheet:getDimensions()))
end
end
--create time variables.
local time = 0
local maxTime = 0.016
function love.update(dt)
--update our time event each frame.
time = time + dt
--when time becomes equal or bigger than maximum time;
if time >= maxTime then
--reset it
time = 0
--and update our current frame.
currentFrame = currentFrame + 1
--if current frame becomes bigger than length of the frames,
if currentFrame > #frames then
--reset it to first frame, and loop restarts.
currentFrame = 1
end
end
end
function love.draw()
--we have to define our current frame in the second parameter of love.graphics.draw().
love.graphics.draw(imgsheet, frames[currentFrame], 10, 10)
--if you want to see just 1 frame, you have to specify it like frames[9].
love.graphics.draw(imgsheet, frames[9], 100, 10)
end
In the second parameter of the love.graphics.draw(), we have to tell which quad we want. Give it a variable like frames[currentFrame] and update currentFrame each frame, you get an animation.
Attachments
sample.png (240.52 KiB) Viewed 2480 times
darkfrei
Party member
Posts: 109
Joined: Sat Feb 08, 2020 11:09 pm
### Re: Can someone help me with this piece of code?
Last edited by darkfrei on Thu Nov 12, 2020 11:55 am, edited 1 time in total.
darkfrei
Party member
Posts: 109
Joined: Sat Feb 08, 2020 11:09 pm
### Re: Can someone help me with this piece of code?
With this preset it is much better:
So, your spritesheet cannot be cropped evenly; it's too small. Also, you could lay it out as 9x3 tiles, not 7x4-1.
DjKniteX
Prole
Posts: 4
Joined: Wed Nov 11, 2020 9:35 pm
### Re: Can someone help me with this piece of code?
nikneym wrote:
Thu Nov 12, 2020 6:41 am
Hi, welcome to the forums! It seems you forgot to update your animation frame by frame. Try this:
--[snip: quoted code identical to the reply above]--
I'll have to mess around with the numbers; but it doesn't seem to show one sprite then animate like in this tutorial: https://www.love2d.org/wiki/Tutorial:Animation and it seems to be going really fast lol. I'll mess around with it more after work.
thanks!
### Who is online
Users browsing this forum: No registered users and 55 guests | 2021-03-02 14:39:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35132449865341187, "perplexity": 10033.55838525958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364008.55/warc/CC-MAIN-20210302125936-20210302155936-00487.warc.gz"} |
https://proofwiki.org/wiki/Book:I.M._Gelfand/Calculus_of_Variations | Book:I.M. Gelfand/Calculus of Variations
I.M. Gelfand and S.V. Fomin: Calculus of Variations
Published $1963$, Dover Publications
ISBN 0-486-41448-5 (translated by Richard A. Silverman).
Subject Matter
Calculus of Variations
Contents
Authors' Preface (I.M. Gelfand and S.V. Fomin)
Translator's Preface (Richard A. Silverman)
1 Elements of the Theory
2 Further Generalizations
3 The General Variation of a Functional
4 The Canonical Form of the Euler Equations and Related Topics
5 The Second Variation. Sufficient Conditions for a Weak Extremum
6 Fields. Sufficient Conditions for a Strong Extremum
7 Variational Problems Involving Multiple Integrals
8 Direct Methods in the Calculus of Variations
Appendix I: Propagation of Disturbances and the Canonical Equations
Appendix II: Variational Methods in Problems of Optimal Control
Bibliography
Index | 2019-07-17 16:58:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5523819327354431, "perplexity": 8589.922498516247}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525355.54/warc/CC-MAIN-20190717161703-20190717183703-00453.warc.gz"} |
https://forum.math.toronto.edu/index.php?PHPSESSID=6kf2vvqsai66nlmf1nmtmcof97&topic=428.0 | ### Author Topic: Variation of Parameters vs Undetermined Co-efficients (Read 1749 times)
#### Marcus Tutert
• Jr. Member
• Posts: 12
• Karma: 0
##### Variation of Parameters vs Undetermined Co-efficients
« on: October 05, 2014, 11:59:48 AM »
Do you care which method we use to solve a question? I prefer the former, as it is less guess work, but I wanted to make sure that if we just wanted to use one to solve questions that is fine
#### Victor Ivrii
• Administrator
• Elder Member
• Posts: 2563
• Karma: 0
##### Re: Variation of Parameters vs Undetermined Co-efficients
« Reply #1 on: October 05, 2014, 12:07:38 PM »
The method of variation of parameters is much more general. However, in some special situations the method of undetermined coefficients works faster, and you need to know what the solutions look like in those situations. Unless a method is indicated you can use either.
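For a quick illustration of the trade-off: for $y'' - y = e^{2t}$, undetermined coefficients says guess $Y = Ae^{2t}$, which gives $4A - A = 1$, so $A = \tfrac{1}{3}$ and $Y = \tfrac{1}{3}e^{2t}$ almost immediately. Variation of parameters reaches the same particular solution through $Y = -y_1\int \frac{y_2\, e^{2t}}{W}\,dt + y_2\int \frac{y_1\, e^{2t}}{W}\,dt$ with $y_1 = e^{t}$, $y_2 = e^{-t}$, $W = -2$, which is noticeably more work. But for a right-hand side such as $\tan t$ there is no finite guess to make, and variation of parameters is the method that still applies.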
« Last Edit: October 06, 2014, 02:48:47 PM by Victor Ivrii » | 2021-09-25 22:08:27 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8009791970252991, "perplexity": 3508.078819188326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057775.50/warc/CC-MAIN-20210925202717-20210925232717-00211.warc.gz"} |
https://tug.org/pipermail/xetex/2010-October/018793.html | # [XeTeX] Greek XeLaTeX
Apostolos Syropoulos asyropoulos at yahoo.com
Tue Oct 12 09:16:18 CEST 2010
> I am trying to "greekify" XeLaTex in order to make it easier for greek writers
>to use.
>
>I have translated almost all the commands (e.g., instead of \begin{document}
>I can use \αρχή{κειμένου}, etc.) that I could think of and made new article,
Please do not do this! This way you are going to force many people to create
imcompatible
documents! I do not think that this is the way to go. If you are not happy with
LaTeX, then create
a completely new format. For example, Hagen was not happy with LaTeX and so he
created ConTeXt.
> The idea behind the changes is that a writer of greek text (or arab, chinese
>etc.) should not be
>
>obliged to change his/her computer keyboard to latin for the commands and back
>again for the text.
>
>Also the names of the commands would be more transparent for non-english
>speakers.
Sure but then TeXLive should include zillions of XeLaTeX formats and
localization files which will
make the whole thing a nightmare! Not to mention that it would be almost
impossible to process
a "Greek" XeLaTeX file in a computer located in Italy!
A.S.
----------------------
Apostolos Syropoulos
Xanthi, Greece | 2022-08-20 05:09:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523190855979919, "perplexity": 3289.063535908955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00748.warc.gz"} |
https://economics.stackexchange.com/tags/united-states/hot | # Tag Info
44
There are several reasons for that, following Papanicolas et al (2018): High regulatory and administrative burden. US has one of the highest regulatory and administrative burdens. US healthcare market might be unregulated in terms of prices and range of services and procedures you can get but it is actually quite heavily regulated when it comes to licensing,...
33
Most of the US real earnings data which go only as far as mid 60s. According to the statista data presented in this article by world economic forum the evolution of real hourly earnings in the US for production and non-supervisory workers looked like this: If we extrapolate to 50s then the real earnings are now higher overall. However, this being said the ...
32
In Milton Friedman's view, the cost of health care in the USA is high because consumers don't pay for it directly, and the people paying don't directly care whether it's a good value for the money spent. Consumers pay premiums driven by average costs but pay little of the marginal cost of their health care. Insurers don't care much about reducing costs ...
23
Average standard of living is massively higher today compared to the 1950s, primarily due to technological progress. Even a cheap low end car today is much better than the big cars of the 1950s: it is orders of magnitude safer, much more comfortable and has a whole bunch of new features that didn't exist back then. Houses as such haven't changed that much, ...
22
The metric of median household income is also used by others to argue the presence of income inequality: https://en.wikipedia.org/wiki/Income_inequality_in_the_United_States#Causes However, it seems that it is not only the median but also the mean that stagnates: (I used family instead of household income because I could not find a time series for mean ...
22
Inflation is measured against a basket of goods. It's a symptom of what's going on in markets. Some products go up in price over time. Some go down in time. Some stay the same price, but change their specification. So it's looking down the wrong end of the microscope, to ask why inflation hasn't affected car prices. Car prices are part of inflation. Changes ...
15
The book “The Two Income Trap” (2003) by Elizabeth Warren (recent Democratic presidential candidate) and Amelia Warren Tyagi discussed this. They looked at spending breakdowns in the 1970s and when they were writing. One thing to keep in mind is that the mix of spending has changed, as well as the characteristics of products. E.g., modern cars appear to be ...
14
You also didn't look at car prices in general but rather just the Toyota Camry. For example a 2001 BMW M3 was ~\$46,000 while a 2018 BMW M3 is ~\$66,000. Most cars have increased in price over the last 20 years, but some manufacturers will always have a cheap car in their lineup .
14
It may not explain the entire difference, but GDP isn't a very good way to compare the amount of income that residents of different countries might spend on healthcare for a number of reasons. Usually this has to do with the structure of the economy, but GDP can also be distorted by high trade deficits, the effects of international tax avoidance, or specific ...
8
Specifically treating car prices, well, the prices are determined globally and not necessarily in dollars. In the last 20 years: car manufacturers have moved factories across borders to save costs, and China and India have become major market players, both as major manufacturers and as major consumers. As a result of these causes, an additional major impact was added,...
6
Three points - one which has already been raised much better by denesp: Are household sizes the same (we see the answer as no)? How about amount of earners per household? What about the amount of goods and services that these household incomes can buy? Should wages be increasing if a dollar can get more goods and services, thanks to technology? Many ...
4
There are several issues here: There are banks offering close to $1.5\%$, some well-known such as Goldman Sachs and American Express Most banks currently have excess reserves deposited in the Federal Reserve System which they could use for lending if they wanted to, so they are not missing profitable commercial lending opportunities due to lack of deposits ...
4
Question B in the link answers your question: most participants believe that taxable income would not rise enough to offset the tax cut, indicating that they do not believe we are on the wrong side of the Laffer Curve.
4
Looking at gross output (which includes using the outputs of other industries) and value-added (largely wages and profits) by industry you get numbers like this for 2017 in USD trillion. Adding up the value-added gives total GDP, and you can see that there is a lot more to the economy than manufacturing ...
4
Because Fed or any central bank cannot fund 100% of a budget without any adverse effect. I do not know where you heard such argument but it is blatantly false. First, it is virtually unanimously agreed by top policy economists that government cannot fund arbitrary amount of real spending (i.e. spending on real goods and services). This question was actually ...
4
I am not sure why West Virginia is the least vaccinated state* in the US, but if vaccination is progressing poorly in your state, the government is more likely to try and encourage it by all sorts of measures - like vaccine lotteries/cash giveaways. Thus it is not the giveaways that cause low vaccination rates, but rather the other way around. There has to ...
3
Being an economist, I'd say that carrying out a quantitative forecast ("the policy will create/cost x jobs") would require setting up a model, feeding appropriate data and applying appropriate estimations. You would require decent data on the firm level and a detailed model dealing with lots of production factors. In other words, lots of work and probably ...
3
The video has a transcript with the references. The \$0.25B figure is obtained from here (after adjusting for inflation). Unfortunately, the author does not provide a source for the \$1.2B figure. However, there are estimates of the value of land elsewhere. For example, here. Their estimates on a map: These estimates consider the value of land only, ...
3
To answer this I have to make some guesses because this is not an area of research for me, but having a spouse from there and having spent time there, I think I could make a somewhat educated guess. Especially because it uses a market system rather than a rate setting system for generation. First, Massachusetts has the third highest population density of ...
3
This was actually already indirectly answered by the IGM forum. In this question the forum asks economists the following: The US spends roughly 17% of GDP on healthcare, according to the OECD; most European countries spend less than 12% of GDP. Higher quality-adjusted US healthcare prices contribute relatively more to the extra US spending than does the ...
3
Yes there are other factors at play. Inflation is change in a price level. The price level, according to classic textbook monetary equation, is determined as follows: $$P = \frac{MV}{Y}$$ Where $M$ is the money supply, $V$ velocity of money (how much is one dollar used in the economy) and $Y$ is the real output. So beside money supply and real output ...
2
Bureaucratic history time! Yes, this agency (and its predecessors) have always been responsible for GDP; GDP was created a couple decades after the Department of Commerce and the Department of Labor split up. The full answer is that the Bureau of Economic Analysis (BEA) was created in 1972 as a bureau in the Social and Economic Statistics Administration (...
2
The fear is that higher interest rates would damage the economy. The problem with that worry is that the Federal Reserve could buy bonds itself, to cancel out the foreign selling. The hidden assumption is that this selling would have to be very rapid. If the foreign reserves managers liquidated their Treasury holdings over five years (for example), the ...
2
It wouldn't "ruin the US economy". The US (the country as a whole, not the government) bought goods and services from rest of the world by selling them IOUs. So far, the world has been content to let the US roll over these obligations. If the world decided that they wanted to trade these IOUs for goods and services, the US would have to start running current ...
2
Are there any estimates on how many US Dollars are lost or destroyed annually? By "lost or destroyed", I mean permanently removed from circulation because the currency is no longer usable. The link I offered as a comment covers this question nicely. When Currency Is Physically Destroyed Obviously, not all money is electronic. Just look at your ...
2
There are different arguments here, depending on the points of view of the government and the insurance providers. I'm trying to answer to the general question on the title. Insurance providers aim at minimizing their costs with healthcare and the probability of health issues. Free birth control not only prevents unwanted pregnancy, but might also have ...
2
To be the wrong side of the Laffer curve would require there to be another lower tax rate which produced the same or greater tax revenues That is not what was being said: Question A addressed the sign of the impact on GDP, not on tax revenues; Question B addressed the sign of the impact on tax revenues; nobody agreed and the large majority disagreed ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2021-12-09 11:36:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25791412591934204, "perplexity": 1667.323584958024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00453.warc.gz"} |
https://math.stackexchange.com/questions/438488/prove-that-a3b3c3-geq-a2bb2cc2a/442429 | # Prove that $a^3+b^3+c^3 \geq a^2b+b^2c+c^2a$
Let $a,b,c$ be positive real numbers. Prove that $a^3+b^3+c^3\geq a^2b+b^2c+c^2a$.
My (strange) proof:
\begin{align*} a^3+b^3+c^3 &\geq a^2b+b^2c+c^2a\\ \sum\limits_{a,b,c} a^3 &\geq \sum\limits_{a,b,c} a^2b\\ \sum\limits_{a,b,c} a^2 &\geq \sum\limits_{a,b,c} ab\\ a^2+b^2+c^2 &\geq ab+bc+ca\\ 2a^2+2b^2+2c^2-2ab-2bc-2ca &\geq 0\\ \left( a-b \right)^2 + \left( b-c \right)^2 + \left( c-a \right)^2 &\geq 0 \end{align*}
Which is obviously true.
However, this is not a valid proof, is it? Because I could just as well have divided by $a^2$ rather than $a$:
\begin{align*} \sum\limits_{a,b,c} a^3 &\geq \sum\limits_{a,b,c} a^2b\\ \sum\limits_{a,b,c} a &\geq \sum\limits_{a,b,c} b\\ a+b+c &\geq a+b+c \end{align*}
Which is true, but it would imply that equality always holds, which is obviously false. So why can't I just divide in a cycling sum?
Edit: Please don't help me with the original inequality, I'll figure it out.
• You can't assume what you want to prove. Jul 7 '13 at 23:26
• @user60887 I'm not doing that, I'm trying to reduce it to something that I can prove. Jul 7 '13 at 23:27
• @timvermeulen You cannot divide with $a$ the cyclic sum is an simpler way to write to expressiong $a^3+b^3+c^3$ since you cannot divide with $a$ in this expression you cannot divide in your other expression(with the cyclic sum symbol). Until you feel comfortable with another way of writing the same thing, first translate what an operation means in the expression where you are familiar with. Jul 7 '13 at 23:31
• for example write $\sum _{a,b,c} a^2 \geq \sum_ {a,b,c}a, \Rightarrow \sum _{a,b,c} a \geq \sum_ {a,b,c}1$ which is false for $a=b=c=0$ Jul 7 '13 at 23:34
• The inequality is obviously true if a=b=c so due to symmetry, why not consider a>b ? That is, write a = b + k with k>0 substitute for a and see if the inequality becomes easier to handle. (It is just a hunch, I am not sure if it works...) Jul 7 '13 at 23:41
Without making any assumption, just simple $AM\ge GM$ $$a^3+a^3+b^3\ge3a^2b$$ $$b^3+b^3+c^3\ge3b^2c$$ $$c^3+c^3+a^3\ge3c^2a$$ $$a^3+b^3+c^3\ge a^2b+b^2c+c^2a$$
Just assume, wlog $a\leq b\leq c$. Then this equation is all you need: $$a^3+b^3+c^3=a^2b+b^2c+c^2a+\underset{\geq 0}{\underbrace{(c^2-a^2)(b-a)}}+\underset{\geq 0}{\underbrace{(c^2-b^2)(c-b)}}\geq a^2b+b^2c+c^2a$$
• but if $b \le a \le c$, this method doesn't work. Jul 9 '13 at 8:30
• This is what the wlog is about. As the equation is somehow symmetrical, you can use $$a^3+b^3+c^3=a^2b+b^2c+c^2a+(c^2-b^2)(c-a)+(a^2-b^2)(a-b)$$ in this case. Jul 9 '13 at 8:59
• OK, that is nice. I simply swap $a$ and $b$ and get wrong result. Jul 9 '13 at 9:06
• This is because, it is not totally symmetrical in $a^2b+b^2c+c^2a$, you do not get the same expression here by arbitrarily swapping. Jul 9 '13 at 9:16
(@HaiDangel told me. https://diendantoanhoc.net/topic/182934-a3-b3-c3geqq-a2b-b2c-c2a/?p=731023)
A stronger version: Let $$a, b, c$$ be real numbers with $$a + b \ge 0, b + c \ge 0$$ and $$c+a\ge 0$$. Prove that $$a^3 + b^3 + c^3 \ge a^2b + b^2c + c^2a.$$
I have an SOS expression: \begin{align} &a^3 + b^3 + c^3 - a^2b - b^2c - c^2a \\ =\ & \frac{(a^2+b^2-2c^2)^2 + 3(a^2-b^2)^2 + \sum_{\mathrm{cyc}} 4(a+b)(c+a)(a-b)^2}{8(a+b+c)}. \end{align}
WLOG, let $c=\max\{a,b,c\}$; then there are 2 cases:
case I: $0<a \le b \le c$, we want to prove $c^2(c-a) \ge a^2(b-a)+b^2(c-b)$
we have $c^2\ge b^2, c^2\ge a^2 \to$,RHS $\le c^2(b-a)+c^2(c-b)=c^2(c-a)$
case II: $0<b \le a \le c$, we want to prove $a^2(a-b)+c^2(c-a) \ge b^2(c-b)$
we have $a^2\ge b^2,c^2 \ge b^2, \to$LHS $\ge b^2(a-b)+b^2(c-a)=b^2(c-b)$
Summarizing the 2 cases, we have $a^2(a-b)+b^2(b-c)+c^2(c-a) \ge 0$
QED | 2021-12-09 15:04:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9508152008056641, "perplexity": 804.5897123733915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00626.warc.gz"} |
https://math.stackexchange.com/questions/2046494/rigorous-definition-of-differentials-in-the-context-of-integrals | # Rigorous definition of differentials in the context of integrals. [duplicate]
This question already has an answer here:
When using the substitution rule in integration of an integral $\displaystyle \int f(x)\,\mathrm dx$, one turns the integral from the form $$\displaystyle \int f(x(t))\,\mathrm dx \quad(*)$$ into the form $$\int f(x(t))\,\frac{\mathrm dx}{\mathrm dt}\mathrm dt \quad(**)$$. This transform is usually accomplished by means of differentiating the substitution $x = x(t)$, such that $\dfrac{\mathrm dx}{\mathrm dt} = \dfrac{\mathrm d x(t)}{\mathrm dt}$. Now, at this point, one turns this into a differential form by means of magic, s.t. $\mathrm dx = \dfrac{\mathrm dx(t)}{\mathrm dt}\mathrm dt$. This now substitutes the differential term $\mathrm dx$ in the original expression $(*)$ with the one in the transformed expression $(**)$.
I'd like to learn that magic step a bit more rigorously – so that I can better understand it. It is often explained by "multiplication" of $\mathrm dt$, which does make sense, but it does not explain the nature of differentials; when is "multiplication" allowed? It seems there should be a more rigorous way of explaining it, perhaps by defining the "multiplication".
So, in what ways can differentials like $\mathrm dx$ and $\mathrm dt$ be formalized in this context? I've seen them being compared to small numbers, which often work, but can this analogy fail? (And what are the prerequisites needed to understand them?)
## marked as duplicate by suomynonA, Namaste (integration) Dec 7 '16 at 1:31
• Under the definition of integrals you can prove this, in particular it is easier to prove this for Darboux integral. – Masacroso Dec 6 '16 at 15:16
• Many related questions on this site, some of which may help. See math.stackexchange.com/questions/1991575/ – Ethan Bolker Dec 6 '16 at 19:52
Here's one way:
Consider $x$ and $t$ are coordinate systems of $\mathbb{R}$. If we wish to change coordinate systems, we have to look at how they transform into one another. If we consider $t$ to be a reference coordinate system and let the coordinate transformation be defined as $x(t) = 2t$ then for any $t$ element, $x$ is twice that (under $x$ view).
Now, since $(\mathbb{R}, + , \cdot)$ is a vector space, it has a dual $\mathbb{R}^*$. Using this space, we can start defining the elements $dx, dt$. Specifically, $dt$ will be a basis for $\mathbb{R}^*$ if $t$ is the basis vector for $\mathbb{R}$ . The elements of the dual space are called 1-forms. 1-forms of $\mathbb{R}^*$ "eat" vector elements of $\mathbb{R}$ and return a measure along that direction (only 1 dimension, so one direction). In this case you can consider elements of $\mathbb{R}^*$ as "row" vectors and multiply column vectors in $\mathbb{R}$ (which is the dot product of two vectors).
We can define a different basis for $\mathbb{R}$ and $\mathbb{R}^*$ with a coordinate change. For this example, if $dt$ eats a one dimensional vector $a$, it will return $a$. But when $dx$ eats $a$ it returns $2a$ in the $t$ coordinate system. That is $dx = 2dt$. For a general coordinate transform, a 1-form can be describe by $dx = \frac{dx}{dt} dt$.
This provides us with a way to talk about $dx$ and $dt$ meaningfully. Since $f: \mathbb{R} \to \mathbb{R}$ then $f(x)dx$ is $dx$ "eating" the vector $f(x)$ with regards to the $x$ coordinate system. Sometimes $f$ is easier to think of in a different coordinate system and so we wish to change it. $f(x)$ then becomes $f(x(t))$ and $dx$ becomes $\frac{dx}{dt}dt$. Now $dt$ is eating vectors $f(x(t))$ in its own coordinate system.
Consider how a uniformly subdivided interval $(a,b)$ looks in a new coordinate system. For example $\{(0,\frac{1}{2}), (\frac{1}{2},1), (1,\frac{3}{2})\}$ in $t$ looks like $\{(0,1), (1,2), (2,3)\}$ in $x$ under the example coordinate transform $x(t)=2t$. $\frac{dx}{dt}$ tells us precisely how the intervals change under our transformation.
This analogy doesn't fail. Actually $dx$ indicates a small, rather infinitesimal change in the value of $x$ with a change (however small) in the parameter, say $t$, on which $x$ depends. Looking at it in another way, we can say that the infinitesimal change in $x$, represented here by $dx$, can be considered equivalent to the rate of change of $x$ with $t$ multiplied by the change $dt$ in the value of the parameter $t$.
Mathematically it is written as: $$\mathrm dx = \dfrac{\mathrm dx(t)}{\mathrm dt}\mathrm dt$$
There are a lot of books on non-standard analysis I think that really goes into this, the one I have is Infinitesimal Calculus by James M. Henle and Eugene M Kleinberg. Basically, what I think the confusion is that this dx isnt actually going to be a constant, as it is dependent both on t and the infinitesimal change of t. However, what they show is that it doesn't matter the value of dx (or even if it is non-constant) but rather that it is always infinitesimal. If it is always infinitesimal then the standard part of the integral will always be the same no matter our choice of dx. | 2019-08-18 15:53:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8635308146476746, "perplexity": 282.3335268810258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313936.42/warc/CC-MAIN-20190818145013-20190818171013-00248.warc.gz"} |
https://www.math.sissa.it/publications?f%5Btg%5D=M&f%5Bauthor%5D=1300&s=year&o=asc |
## Publications
Export 2 results:
Filters: First Letter Of Title is M and Author is Francisco Chinesta
2016
. Model Order Reduction: a survey. In: Wiley Encyclopedia of Computational Mechanics, 2016. Wiley Encyclopedia of Computational Mechanics, 2016. Wiley; 2016. Available from: http://urania.sissa.it/xmlui/handle/1963/35194
2017
. Model Reduction Methods. In: Encyclopedia of Computational Mechanics Second Edition. Encyclopedia of Computational Mechanics Second Edition. John Wiley & Sons; 2017. pp. 1-36. | 2020-07-02 05:39:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8705775737762451, "perplexity": 7454.915451011577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878519.27/warc/CC-MAIN-20200702045758-20200702075758-00249.warc.gz"} |
http://notofcon.blogspot.com/2006/10/arch-logic-group.html?showComment=1160690940000 | ## Thursday, October 12, 2006
### Arché Logic Group
As I've mentioned earlier, this semester Arché has started a new logic group (boringly called 'Arché Logic Group', but unofficially named 'The Deviants'). There's now an Arché Twiki page for the seminar, although still quite provisional. The first part of the semester we're working through Graham Priest's "classic" An Introduction to Non-Classical Logic , preparing to go through a forthcoming publication by the same author. The usual (Scandinavian) suspects have been spending late office hours going through all of the exercises. True, there is quite a bit of tedious and repetitious work, but we have encountered a great deal of interesting material in some of the harder parts of the book.
However, this is only the warm-up: In early November we fly in the big guns. Then Priest himself will join in the fun when we proceed to uncharted terrain. By then there will hopefully be some comments on the first book on the Twiki page. I'll come back with more later.
Meanwhile: Today's session reminded me of a theorem that a friend of mine persistently has brought to my attention in another context. This is Glivenko's theorem (Glivenko 1929), which Priest has relegated to a footnote on p. 103.
Theorem. (a) If Gamma |- A in classical logic then ¬¬Gamma |- ¬¬A in intuitionistic logic, assuming that CL negations and implications are replaced by their IL counterparts. (b) If ¬Gamma,Sigma |- ¬A in CL then ¬Gamma,¬¬Sigma |- ¬A in IL.
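For a concrete propositional instance (a quick sketch of my own, not Priest's): classically |- p ∨ ¬p, so part (a) promises that ¬¬(p ∨ ¬p) is intuitionistically provable. And so it is: assume ¬(p ∨ ¬p); from the further assumption p we get p ∨ ¬p and a contradiction, so ¬p; but then p ∨ ¬p again, contradicting the assumption; discharging it yields ¬¬(p ∨ ¬p), and every step (∨-introduction, ¬-introduction, ¬-elimination) is intuitionistically acceptable.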
Priest suggests that this theorem says that although IL is a sublogic of CL, CL is "in a sense" contained within IL. Of course, it is this "in a sense" which makes the theorem philosophically loaded - how are we to understand the resurfacing of classical truths within the intuitionistic system? Has it any bearing on the philosophical projects associated with the logics? Etc. I will not get into that here. Rather, I just want to point to a corollary of Glivenko's result that I was unaware of. Priest directs our attention to what he calls the "unobvious" fact that if |-A in classical logic, and A contains no logical notation except negation and conjunction, then |-A in intuitionistic logic as well (with intuitionistic conj. and neg.). Somewhat surprising perhaps, since intuitionistic and classical negation usually are thought to behave quite differently.
I checked out Priest's reference, which is to the standard Introduction to Metamathematics (1952) by Kleene (see pp. 492-493). The proof is something like this: Consider A as a conj. of n formulae (n is 1 or greater), where each conjunct is itself not a conjunction. Then, since A contains no connectives other than negation and conjunction, each conjunct is either a propositional variable or a negated formula. Furthermore, if A is provable in CL, then all of its conjuncts are provable; but no propositional variable is provable. So, the conjuncts are all negated, and by Glivenko's theorem (b), they are provable in IL.
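A tiny instance of the corollary: ¬(p ∧ ¬p) is a classical tautology built from negation and conjunction alone, and it is indeed intuitionistically provable as well: assume p ∧ ¬p, extract p and ¬p by ∧-elimination, derive a contradiction, and discharge the assumption to conclude ¬(p ∧ ¬p).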
Any comments on this - philosophical or logical? (Paul Simon, if this isn't an invitation, nothing is.)
Hopefully, the ALG will prosper in the next few years, warranting the 'logic' part of the Arché research description. I'm already conspiring to make it into a proof-theory seminar next semester - one interesting suggestion was to read the manuscript for Greg Restall's forthcoming Proof-theory and Philosophy.
Update: Thanks to Aidan for this link.
Aidan said...
Stephen Read's course on algebraic logic two years ago helped me a lot in seeing the 'sense' in which CL is contained in IL. It's really worth talking to him about it.
Philosophically, CL is going to be acceptable to the intuitionist so long as we have some guarantee that none of the statements in question are undecidable. So it would be interesting to be clearer on the relationship between the fragment involved in the proof of the corollary of Glivenko's result and decidability. If restricting our attention to this fragment guarantees decidability, then this fact may be "unobvious", but it wouldn't be surprising. (For some discussion of this partial acceptance of CL, see here.)
Aidan said...
Ps. The maths project spent a huge amount of time on the status of HOL, the modality conference has spent countless hours on modal logic, and the vagueness project gave a vast amount of time to exploring non-classical logics. If you want a proper challenge, try justifying the 'mind' in Arche's description....
Ole Thomassen Hjortland said...
Aidan:
Thanks for the link. I've put it as an update on the post.
Some questions:
"Philosophically, CL is going to be acceptable to the intuitionist so long as we have some guarentee that none of the statements in question are undecidable."
I assume that the 'statements in question' refer to the conj.-neg. formulae in the corollary? Logically, of course, the corollary itself ensures the decidability of these formulae, since provability in CL gives provability in IL. But are you perhaps alluding to a non-formal concept of decidability?
"So it would be interesting to be clearer on the relationship between the fragment involved in the proof of the corollary of Glivenko's result and decidability."
Or, are you rather referring to the decidability of the statements in the post, i.e., the proofs? I agree that it is interesting whether the proof is itself intuitionistically acceptable. I haven't looked into that in detail, but as far as I can see, there is only intuitionistically valid reasoning in the proofs (that is, both in Glivenko's theorem and in the corollary). But I'm prepared to be corrected.
Ole Thomassen Hjortland said...
I completely agree about the 'mind' part. Actually, I think they've decided to skip 'mind' in the description when the AHRC funding runs out. 'Logic', however, will still be in there.
Btw, do you know an elegant way of including logical symbols in html?
Paul Simon Svanberg said...
Sorry to have waited so long to comment on a very interesting post. I cannot access the internet from home, so there goes.
As I'm working on an article (which you, Ole, have seen the first seed of) detailing the relationship IL-CL, I have too many thoughts on this subject. So I'll stick to a simple observation.
The reason why Glivenko's theorem works, algebraically, is that double pseudo-complement is a closure.
The "unobvious" fact you mention is interesting, but it is a corollary of the fact that all you need of intuitionistic logic to reconstruct classical logic is conjunction and intuitionistic negation.
So we could say that classical logic is the conjunction-negation-universal quantifier fragment of intuitionistic logic: a rather symmetric sublogic of intuitionistic logic.
In any case, we should dispense entirely with the idea that intuitionistic logic is a sublogic of classical logic. It is simply false. More on this next week...
Aidan said...
Yeah, sorry Ole, I wasn't being clear. When talking of decidability I meant the generalised notion that features heavily in Dummett's writings on anti-realism (and of course others like Wright and Tennant). By the 'statements in question' I meant some class of statements in dispute between the realist and the anti-realist (so mathematical statements, or statements about the past, etc).
When we have some domain of discourse where we are sure there aren't any potentially verification-transcendent (this is basically what I meant by undecidable) statements, the intuitionist can use classical logic and semantics. So this really was an answer to your request for philosophical comments rather than logic ones (sorry for being quite so inexplicit).
Sorry, no idea about how to put logical symbols in.
Pål said...
As a comment on Dummett on undecidability. Dummett has identified three kinds of undecidability that figure in the revisionary arguments: subjunctive conditionals, past tense and quantification over infinite totalities. Without undecidability, Dummett says, the debate between the intuitionist and the classicist is of no practical consequence, since both their meaning-theories will correspond to the linguistic behaviour of the masters of the discourse in question. So, undecidability is usually taken as a premise in a revisionary argument, even though, e.g., Cogburn takes the possibility of undecidability to be sufficient for revisionary purposes.
Ole Thomassen Hjortland said...
Paul Simon:
"In any case, we should dispense entirely with the idea that intuitionistic logic is a sublogic of classical logic. It is simply false."
I take it that you don't propose that we revise the current def. of sublogic at large. That, of course, would have dramatic consequences for other systems, e.g., the ordering of modal logics. However, I wonder if you can make a general definition of when L is a sublogic of L' s.t. CL is a sublogic of IL, but which avoids undesirable results for other systems. The problem, as I see it, is that such a definition must allow for the corresponding L'-theorem to be inside the scope of some L'-constant (e.g., IL negation) when it is *not* the case that the original L-theorem is inside the scope of the corresponding L-constant.
Aidan & Pål:
Ad undecidability. Don't know about you, but I find the idea that the *actual* absence of undecidable statements warrants CL for the intuitionist a bit unsettling. I'm more comfortable with what seems to be Cogburn's position, that possibility of undecidability is sufficient for revisionary purposes. There could be, say, mathematical discourses where we don't know of any undecidable statements, but if the intuitionist allows himself CL resources when exploring the field, he might end up proving something he lacks philosophical warrant for.
Paul Simon Svanberg said...
I haven't really thought about how to formulate a general definition of sublogic which puts CL inside IL. But hopefully one such already exists.
The two ways of answering this question proceed along the stony road of syntax or the serene path of semantics.
On the one hand, given a logic L as a pair (S, T), where S is its signature and T is its set of provable formulae, we can think of a sublogic L' of L (i.e. L' { L) as a pair (S', T'), such that S' [ S and T' [ T, where '[' denotes inclusion. However, this approach leaves much to be desired. For example, when are two signatures equivalent? (E.g. when they produce the same set T?) Furthermore, the exclusive focus on syntax is perhaps not very illuminating (endless inductions, term-rewriting etc), seeing that there is always one more x ∈ T to prove...
On the other hand, one could say that the logic L whose models subsume those of L' also contains L'. I personally think this is a good way to think of the ordering of logics. A sublogic L' { L is then the restriction of attention to only certain operations and elements of L. I don't think this will upset anything in e.g. the hierarchy of modal logics, as I think that ordering arose from investigating modal logics from the point of view of universal algebra. (I think Blackburn et al. write this in the first chapter of their "Modal Logic", but my memory has failed me before.)
So, if this semantical approach to sublogics is taken, CL turns out to be a sublogic of IL. The corresponding syntactical fact can of course also be proved. (See e.g. section 2.3 in Troelstra: Basic Proof Theory for a number of embeddings of CL into IL.) But since we almost never consider the syntax of CL as derived from the syntax of IL, the syntactic result is perhaps less convincing. And also, I think, more difficult to get a clear view of.
Still, I'm torn on this. I am hesitant to claim, for instance, that the higher up in the hierarchy L' { ... { L { ... one gets, the logics get more fundamental. In many instances, such orderings will not reflect anything but a notational convenience. On the other hand, I enjoy the idea of sublogics since it seems that given an ordering L' { ... { L, we can isolate a family, perhaps an entire species of logics.
Somebody has probably written a book about this stuff. Anybody know of any literature on this topic?
Ole Thomassen Hjortland said...
I assume we agree that there are at least two usual ways of ordering logics: one proof-theoretical (syntactical) and one semantical. The one which most frequently is used to define sublogicality is the proof-theoretic one: L' is a sublogic of L iff for every A s.t. |-A in L', |-A in L as well; and (if strict) there is an A s.t. |-A in L but not |-A in L'. This, for instance, is the definition by which the modal systems from K and upwards are ordinarily ordered (where K is, in a sense, the smallest logic). However, I perfectly agree that by looking at it semantically, the ordering is turned up-side-down: say that L' is a sublogic of L iff every L'-model is an L-model; and (if strict) there is an L-model which is not an L'-model. In other words, the ordering would now have the modal system K as the "largest logic", model-theoretically containing the other systems.
This duality, however, is not what my challenge consists in. Needless to say, I grant that on the semantic conception CL is a sublogic of IL (I guess this was your point as well). But, claiming that CL is a sublogic of IL proof-theoretically as well (say, because of some mapping like that provided by Glivenko) seems like a more interesting claim. And - it is for this claim that I want a new def. of sublogicality, i.e. a revised proof-theoretical definition.
Why is this a challenge? Because the mappings you refer to (Troelstra 2.3) all seem to use the so-called negative fragment of CL. I see no reason why this particular relation between CL and IL should give rise to a general definition of sublogicality.
Philosophically, it seems that it is this fact, that the negation is essential, which loads the dice.
Paul Simon Svanberg said...
This comment has been removed by a blog administrator.
Paul Simon Svanberg said...
This comment has been removed by a blog administrator.
Paul Simon Svanberg said...
I made a mess, alas. Busy weekend, also alas.
Long comment due on sunday.
Paul Simon Svanberg said...
Your challenge points directly to issues I have been obsessing over for a while. I will try to answer it, and make some additional comments in an attempt to present a perspective on the more general questions at stake.
"I assume we agree that there are at least to usual ways of ordering logics: one proof-theoretical (syntactical) and one semantical. The one which most frequently is used to define sublogicality is the proof-theoretic one: L' is a sublogic of L iff for every A s.t.
|-A in L', |-A in L as well... However, I perfectly agree that by looking at it semantically, the ordering is turned up-side-down"
The contravariance between syntax and semantics seems to me to be misleading in this case. That is, not in the isolated case of CL and IL, but in the question of how to order logics w.r.t. some sort of inclusion.
Now, why is this contravariance misleading? If the models M(L') of a logic L' are contained in the models M(L) of a logic L, then this fact should find a syntactic expression as well, if for no other reason than completeness, whenever that obtains.
I suggest we don't think of the ordering of logics as dependent on whether we induce an order from syntax or semantics. A logic should be considered as a pair, syntax and semantics. In doing this we should not be forced to accept dialogues like
a: "L' is a sublogic of L"
b: "... ah, I see, you mean from the syntactic perspective..."
because it seems that a and b have no clear notion of a logic. To put it bluntly, either IL is a sublogic of CL or CL is a sublogic of IL. It should depend at from which angle one is looking at the matter. Dialetism is not an option.
Our claim is that semantical sublogicality obtains iff syntactic sublogicality obtains.
Why bother with completeness in the first place? If I remember correctly, Priest's argument that IL is a sublogic of CL is based on semantical (Kripke-style) considerations. This is then a common way to approach the issue of sublogicality, and, depending on the choice of models, one which may yield different results. As an aside, one can note that Kripke-semantics for IL produces wrong metalogical results. This may indicate that something is amiss with Kripke-semantics as a semantical framework.
But in what sense are we justified in holding that IL cannot be a sublogic of CL? In order to answer this we should consider the question: what does it mean when a logic proves fewer formulae? The sole reason for claiming that IL is a sublogic of CL is that there is some formula which CL proves but not IL, e.g. A v ~A. When CL proves "A v ~A" it claims that this can never fail to hold. I.e. it disregards all instances, all possible situations, where it might fail. This is to say that CL considers a limited range of possible situations. On the other hand, IL does not claim that "A v ~A" holds all the time, but maybe sometimes. One can think of this as the logic's ability to make distinctions. This points to the duality between syntax and semantics you refer to, but we interpret it differently. If L proves some formula A which L' does not, other things being equal, then there is a syntactic distinction which L is not aware of, which L' respects. Think of e.g. ~~A = A in CL and the de Morgan dualities and so on. They are distinct to our eyes, notation-wise and so forth, but they are equivalent as far as provability goes. This is a part of the syntactic expression of the fact that the fewer formulae some logic proves, the more models does its semantics subsume.
Keeping this in mind, we can proceed to justifying our above claim. Though, perhaps this is only partially an answer to your challenge, which was a revised proof-theoretical definition of sublogicality, since semantics somehow jumped on the train too. But I think everything will be covered in the end.
"Why is this a challenge? Because the mappings you refer to (Troelstra 2.3) all seem to use the so-called negative fragment of CL. I see no reason why this particular relation between CL and IL should give rise to a general definition of sublogicality."
CL appears as a restriction on the negative fragment of IL. This relation is particular to CL and IL -- it is just the way things are between those logics. What makes the relation between them general and interesting is the fact that the embedding of CL in IL is faithful. All of CL can be reconstructed from IL. We are less concerned with the particulars of the embedding in the case of IL-CL (that negation is essential), so long as the embedding exists and is faithful.
So, what is the syntactic counterpart of model-inclusion? Embeddability! And it must be faithful too.
SUBLOGIC:
A logic L' is a sublogic of L iff there exists a map m() s.t.
L' |- A iff L |- m(A)
I mean, this is bound to have been covered in the literature on abstract model-theory and institutions in one way or another, so I have no delusions of originality here, but it is in my mind the best way to think of the ordering of logics. This way, a sublogic is actually a living fragment of some other logic, not just a subset of a set of wffs.
For example, we get that CL is a sublogic of IL, that both CL and IL are sublogics of S4. Also, we have that CL and IL are sublogics of linear logic (LL). Also, LL and S4 share the same modalities, but with different handling of contexts, or "resources", so I don't really know what happens there, but S4 is probably a sublogic of LL. etc..
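(To make the CL-into-IL direction concrete: one witnessing m is the familiar negative (Gödel-Gentzen) translation, sketched here for the propositional case only: m(p) = ~~p for atomic p, m(~A) = ~m(A), m(A & B) = m(A) & m(B), m(A -> B) = m(A) -> m(B), and m(A v B) = ~(~m(A) & ~m(B)). Then |- A in CL iff |- m(A) in IL, which is exactly the faithfulness asked for above.)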
To be continued...
Ole Thomassen Hjortland said...
Paul Simon:
"If the models M(L') of a logic L' is contained in the models M(L) of a logic L, then this fact should find a syntactic expression as well, if for no other reason than completeness, whenever that obtains."
Precisely how do you figure that the model-theoretic inclusion will manifest itself? Take for instance the modal systems K and T: M(T) is a (proper) subset of M(K). So, if a sentence A has a T-model (is T-satisfiable), then it has a K-model (is K-satisfiable), but not vice versa. The only immediate syntactic upshot of this is that if the formula A is T-consistent, then it is K-consistent. This follows by soundness.
"As an aside, one can note that Kripke-semantics for IL produces wrong metalogical results. This may indicate that something is amiss with Kripke-semantics as a semantical framework."
What do you mean by "wrong" metalogical results. Granted, they are different from algebraic semantics, but does that make them wrong?
"If L proves some formula A which L' does not, other things being equal, then there is a syntactic distinction which L is not aware of, which L' respects. Think of e.g. ~~A = A in Cl and the de Morgan dualities and so on. They are distinct to our eyes, notation-wise and so forth, but they are equivalent are far as provability goes. This is a part of the syntactic expression of the fact that the less formulae some logic proves, the more models does its semantics subsume."
I more or less agree with this part.
"So, what is the syntactic counterpart of model-inclusion? Embeddability! And it must be faithful too.
SUBLOGIC:
A logic L' is a sublogic of L iff there exists a map m() s.t.
L' |- A iff L |- m(A)"
But I thought that this was satisfied both for CL = L', IL = L and for CL = L, IL = L'. In other words, that there is a faithful embedding both ways. Maybe I'm getting something wrong here, but if this is the case then it seems that it contradicts your statement earlier in the comments:
"To put it bluntly, either IL is a sublogic of CL or CL is a sublogic of IL. It should depend at from which angle one is looking at the matter. Dialetism is not an option."
I'm prepared to be corrected on this.
Paul Simon Svanberg said...
Last things first.
"To put it bluntly, either IL is a sublogic of CL or CL is a sublogic of IL. It should depend at from which angle one is looking at the matter. Dialetism is not an option."
"To put it bluntly, either IL is a sublogic of CL or CL is a sublogic of IL. It should not depend at from which angle one is looking at the matter. Dialetism is not an option."
So, on to the meat.
""SUBLOGIC:
A logic L' is a sublogic of L iff there exists a map m() s.t.
L' |- A iff L |- m(A)"
But I thought that this was satisfied both for CL = L', IL = L and for CL = L, IL = L'. In other words, that there is a faithful embedding both ways"
There is only a faithful embedding of CL into IL. One can, however, embed IL into CL by some map m, but we cannot recover the distinctions collapsed in CL by this m. I think I gave a counterexample to show we lose faithfulness in my thesis, but I don't remember exactly how it got off the ground. The point of embeddings is to preserve the operational meaning of the logical constants. If the embedding is faithful, then we have identified a syntactic fragment L' of a logic L, s.t. all the logical operations (the rules!) are intact. This means that they are not weakened (in their "sense", so to speak). When we embed IL into CL, we are in effect weakening all the intuitionistic operations. Since CL is not fine-grained enough, i.e. proves too many equivalences, we cannot reverse this weakening. This can also be thought of in terms of the non-invertibility of Weakening in sequent systems, though this is only a metaphor.
""As an aside, one can note that Kripke-semantics for IL produces wrong metalogical results. This may indicate that something is amiss with Kripke-semantics as a semantical framework."
What do you mean by "wrong" metalogical results. Granted, they are different from algebraic semantics, but does that make them wrong?"
My claim that Kripke-semantics produces wrong metalogical results is not based on the fact that algebraic semantics gives a different metalogical result. However, the upshot of the differences between Kripke-semantics and algebraic semantics, is that I think algebraic semantics produces the right metalogical results. I see it this way. A logic is a language used to describe some structure. The more distinctions the language is able to make, the more differences the language is able to discern -- then, if such a language, a logic L, admits a faithful embedding of another logic L', then I would think that L' is a sublogic of L, since L is able to describe all that L' describes, but at the same time make more distinctions. It appears we agree more or less that on the syntactic level, IL makes more distinctions than CL in that IL does not collapse e.g. ~~A and A. Also, since CL is faithfully embeddable in IL, but not the other way around, I think the coast is clear to say that CL is a sublogic of IL. It is from this line of thought I would argue that Kripke-semantics produces wrong metalogical results.
"Precisely how do you figure that the model-theoretic inclusion will manifest itself? (...)"
I'm not sure. I haven't really thought about it. Maybe my understanding of sublogicality (a very strict one indeed) is applicable and limited only to the class of logics closely related to IL: CL, S4, LL. However, I do not believe this to be the case.
"The only immediate syntactic upshot of this is that if the formula A is T-consistent, then it is K-consistent. This follows by soundness."
I think I can account for the relationship between CL, IL, S4 and LL. I don't know about S5, nor the smaller modal logics like K, T, K4 etc. My feeble attempt at defining sublogicality is probably too strong, and thus also likely to give us wrong metalogical results every now and then. But I think that behind the hierarchy of modal logics, the intelligent way of defining sublogicality is probably already given. And I think semantics is indispensable in dealing with this question.
Paul Simon Svanberg said...
"My feeble attempt at defining sublogicality is probably too strong, and thus also likely to give us wrong metalogical results every now and then."
I withdraw this comment.
There is a certain amount of relativity in the talk about sublogics, e.g. as evidenced by the "less" tautologies, "more" models duality. It seems that we have some liberty in choosing along which dimensions we would like to measure the unit of sublogicality.
Should we start with the axiomatization, the semantics or perhaps with something else?
I think the right place to start when dealing with the notion of sublogicality is logical strength.
This is what it boils down to: If some logic L' is expressible by another L, then we should say L' is a sublogic of L. Thus we measure sublogicality according to what structures can be described/named by the logics, which seems to me to be fair to the nature of Logic.
Embeddings are the natural way of establishing the syntactic result about logical strength. "Model theoretic inclusion" is the natural way of establishing the semantic result about logical strength. Both of these have non-trivial "methods" of verification, except in the simplest cases.
One could argue that the notion of logical strength is at heart a semantical one. This is unproblematic to me, but if one so desires, one can counter this by pointing out that it is not an exclusively semantical notion. The whole point of ordering logics according to their expressive power is to capture both their language and their models. Otherwise the relation between language and models, syntax and semantics, would appear fragmentary and to a certain degree arbitrary. If one by preference would like to work exclusively in the one realm at the expense of the other, one should nonetheless use notions which preserve the "good" qualities which lie inherent in Logic, and respect the nexus between syntax and semantics.
Ole Thomassen Hjortland said...
I must admit that I'm still puzzled. I take it that you want to have a proof-theoretic and a model-theoretic notion of sublogicality, such that these coincide (I assume, only when there is completeness and soundness).
So, according to the two points you have been discussing, such a relation would mean something along the following lines.
Let L, L' be two logics, and M(L), M(L') be their models:
M(L') is a subset of M(L) iff
there is a faithful embedding of L' into L. Is this what you suggest?
A further question: What about the relation between classical propositional logic and classical first-order logic? What is the sublogicality relation between these on your account?
henri galinon said...
For logical symbols in HTML, add this keyboard to your toolbar. The following is a tentative application.
Humberstone writes (The Connectives, p.275):
"One reaction to this [ie : the 'unobvious" corollary of Glivenko you mentionned in your post] has been to suggest that far from being a subsytem of CL, IL is actually an extension, every classical tautology being rewritten in terms of the functionally complete connectives ¬ and ∧ (thus giving a formula by classical lights equivalent to the original, with intuitionistic → and ∨ regarded as additional connectives, like the ⇑ of modal logic [...]. However the temptation is best resisted since the result in question does not extend to the consequence relation concerned (e.g. it is not an IL truth that : ¬¬p l- p ) " (Humbestone's emphasis)
Note also that Glivenko's theorem does not hold for the predicate versions of IL and CL:
It is classically true that: |- ∀x (Fx ∨ ¬Fx)
but not intuitionistically true that: |- ¬¬∀x (Fx ∨ ¬Fx)
Finally, you have probably noticed that in the definition of the UNILOG 2007 (here) contest, reference is made to a result of Wojcicki and more generally to different notions of translation of one logic into another. I don't know what Wojcicki's result states, and I'd be interested in hearing from someone who has any light to shed on this.
Best
Ole Thomassen Hjortland said...
Thanks Henri,
This was very helpful. It is quite true that Glivenko's theorem is restricted, but although the provability relation (e.g., in your example ¬¬p |- p) is not within the scope of the corollary in question, if we extend Glivenko's theorem to a full translation (i.e., the Gödel-Gentzen translation), the above example becomes provable as ¬¬¬¬p |- ¬¬p, since all atomic formulae are doubly negated (and this does hold intuitionistically, since a triple negation is intuitionistically equivalent to a single negation).
Furthermore, the translation also works for the first-order case by devising a translation s.t. ∃x Ax := ¬∀x ¬Ax. I'll have to see what more Humberstone has to say about this; it's definitely an interesting passage.
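For what it's worth, here is a small Python sketch of my own of the propositional part of that translation. It is purely illustrative and only machine-checks the easy, classical half of the story, namely that a formula and its translation are classically equivalent by truth tables; the intuitionistic provability of the translated formula is of course not something a truth table can certify.

```python
from itertools import product

# Formulas as nested tuples:
#   ('atom', 'p'), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B)

def neg_translate(f):
    """Propositional Goedel-Gentzen (negative) translation."""
    op = f[0]
    if op == 'atom':
        return ('not', ('not', f))
    if op == 'not':
        return ('not', neg_translate(f[1]))
    if op == 'and':
        return ('and', neg_translate(f[1]), neg_translate(f[2]))
    if op == 'imp':
        return ('imp', neg_translate(f[1]), neg_translate(f[2]))
    if op == 'or':
        return ('not', ('and', ('not', neg_translate(f[1])),
                               ('not', neg_translate(f[2]))))
    raise ValueError(op)

def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*(atoms(g) for g in f[1:]))

def val(f, v):
    """Classical truth-table evaluation under a valuation v."""
    op = f[0]
    if op == 'atom': return v[f[1]]
    if op == 'not':  return not val(f[1], v)
    if op == 'and':  return val(f[1], v) and val(f[2], v)
    if op == 'or':   return val(f[1], v) or val(f[2], v)
    if op == 'imp':  return (not val(f[1], v)) or val(f[2], v)

def classically_equivalent(a, b):
    ps = sorted(atoms(a) | atoms(b))
    return all(val(a, dict(zip(ps, bits))) == val(b, dict(zip(ps, bits)))
               for bits in product([False, True], repeat=len(ps)))

p = ('atom', 'p')
lem = ('or', p, ('not', p))                               # p v ~p
print(classically_equivalent(lem, neg_translate(lem)))    # True
```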
I have noticed the references on the UNILOG page, but I've had a hard time finding some of them. I would especially like to look at the Prawitz & Malmnäs paper from 1968, but so far I haven't been able to get the anthology it's printed in.
Best, | 2013-05-21 11:15:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211714029312134, "perplexity": 1202.8077711714516}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00076-ip-10-60-113-184.ec2.internal.warc.gz"} |