http://www.gnu.org/software/auctex/manual/auctex/Environments.html
## 2.4 Inserting Environment Templates

A large apparatus is available that supports insertion of environments, that is, ‘\begin{}’ — ‘\end{}’ pairs. AUCTeX is aware of most of the environments actually available in a specific document. This is achieved by examining your ‘\documentclass’ command and consulting a precompiled list of environments available in a large number of styles. You insert an environment with C-c C-e and select an environment type. Depending on the environment, AUCTeX may ask further questions about the optional parts of the selected environment type. With C-u C-c C-e you will change the current environment.

Command: LaTeX-environment arg (C-c C-e)
AUCTeX will prompt you for an environment to insert. At this prompt, you may press <TAB> or <SPC> to complete a partially written name and/or to get a list of available environments. After selection of a specific environment, AUCTeX may prompt you for further specifications. If the optional argument arg is non-nil (i.e. you have given a prefix argument), the current environment is modified and no new environment is inserted. As a default selection, AUCTeX will suggest the environment last inserted or, as the first choice, the value of the variable LaTeX-default-environment.

User Option: LaTeX-default-environment
Default environment to insert when invoking ‘LaTeX-environment’ for the first time. If the document is empty, or the cursor is placed at the top of the document, AUCTeX will default to inserting a ‘document’ environment.

Most of these environments are described further in the following sections, and you may easily specify more. See Customizing Environments. You can close the current environment with C-c ], but we suggest that you use C-c C-e to insert complete environments instead.

Command: LaTeX-close-environment (C-c ])
Insert an ‘\end’ that matches the current environment.
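For instance, choosing the ‘equation’ environment at the C-c C-e prompt yields a matched pair along these lines (a hand-written illustration of the template shape, not verbatim AUCTeX output; exact indentation and optional parts depend on your style settings):

```latex
\begin{equation}
  % point is left here, ready for the equation body
\end{equation}
```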
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444924592971802, "perplexity": 4809.126161719554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119655159.46/warc/CC-MAIN-20141024030055-00004-ip-10-16-133-185.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/46928/constructing-the-space-of-quantum-states
# Constructing the space of quantum states

I want to learn how to construct spaces of quantum states of systems. As an exercise, I tried to build the space of states and to find the Hamiltonian spectrum of the quantum system whose Hamiltonian is that of the harmonic oscillator with an additional quadratic term: $\hat{H}=\hat{H}_{0}+\hat{H}_{1}$, where $\hat{H}_{0}=\hbar\omega\left(\hat{a}^{\dagger}\hat{a}+1/2\right)$ and $\hat{H}_{1}=i\gamma\left(\hat{a}^{\dagger}\right)^{2}-i\gamma\,\hat{a}^{2}$; here $\hat{a}$, $\hat{a}^{\dagger}$ are the ladder operators and $\gamma$ is a real parameter.

For this purpose, we should define a complete set of commuting observables (CSCO). As for the harmonic oscillator, we can define a "number" operator $N=\hat{a}^{\dagger}\hat{a}$. We can prove the following statement: Let $a$ and $a^{\dagger}$ be Hermitian-conjugate operators with $\left[a,a^{\dagger}\right]=1$, and define the operator $N=a^{\dagger}a$. Then $\left[N,a^{p}\right]=-pa^{p}$, $\left[N,a^{\dagger p}\right]=pa^{\dagger p}$, and the only algebraic functions of $a$ and $a^{\dagger}$ which commute with $N$ are the functions of $N$. (For example, see Messiah, Quantum Mechanics, exercises after chapter 12.)

Using this statement, we conclude (am I right?) that the operator $N$ forms a CSCO. So the sequence of eigenvectors of the operator $N$ forms a basis of the space of states, and I've come to the conclusion that the space of states of the described system is the same as the space of states of the harmonic oscillator. But the operators $a$, $a^{\dagger}$ can (as I think) always be defined, so this argument would remain valid for any system, and I would conclude that the spaces of states of all systems are the same. At that point I realized that I must be mistaken.

Would you be so kind as to explain where the mistake in the arguments above is? And can you give some references/articles/books where I can read additional information about constructing spaces of states for different systems?
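As an aside, the low-lying spectrum of this particular $\hat{H}$ is easy to check numerically by truncating the Fock basis. The sketch below is my own illustration, not from the question: it sets $\hbar=\omega=1$, picks an arbitrary $\gamma$, and compares against the closed form $E_n=(n+1/2)\sqrt{1-4\gamma^{2}}$ that a Bogoliubov transformation gives for $|\gamma|<1/2$ (treat that formula as an assumption to verify, not a quoted result):

```python
import numpy as np

n_max = 200   # Fock-space truncation; only the low levels are trustworthy
gamma = 0.1   # real parameter from the question, |gamma| < 1/2 (hypothetical value)

# Ladder operator in the truncated Fock basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
ad = a.T                               # a is real, so the adjoint is the transpose

H0 = ad @ a + 0.5 * np.eye(n_max)      # hbar*omega*(a^dag a + 1/2), hbar = omega = 1
H1 = 1j * gamma * (ad @ ad - a @ a)    # i*gamma*(a^dag^2 - a^2), Hermitian
H = H0 + H1

evals = np.linalg.eigvalsh(H)          # exact diagonalization of the truncation
print(evals[:4])                                                   # numerical levels
print([(n + 0.5) * np.sqrt(1 - 4 * gamma**2) for n in range(4)])   # closed form
```

The two printed lists agree to many digits for small $n$, which illustrates the point of the accepted picture: the squeezing term changes the spectrum but not the underlying Fock space.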
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833369851112366, "perplexity": 210.61063339133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921872.11/warc/CC-MAIN-20140909032216-00082-ip-10-180-136-8.ec2.internal.warc.gz"}
http://lists.gnu.org/archive/html/help-emacs-windows/2005-11/msg00042.html
help-emacs-windows [Top][All Lists]

## Re: [h-e-w] Unix utilities for Emacs on MS Windows

From: Lennart Borgman
Subject: Re: [h-e-w] Unix utilities for Emacs on MS Windows
Date: Sat, 26 Nov 2005 11:12:07 +0100
User-agent: Mozilla Thunderbird 1.0.7 (Windows/20050923)

Ismael Valladolid Torres wrote:
```
Not much feedback to give. Windows is a dangerous place to live in, and
in order to avoid viruses and spyware I usually prefer being a
non-administrator user. If Emacs comes in the form of a zipped or tarred
archive I can simply uncompress it and add the resulting bin to my
personal path. If I needed to run an installer, I would first need to
contact my IT department in order to become an administrator.
```
I tried to write the installer in such a way that you do not need admin privileges.
```
Moreover I am simply used to the UNIX way of life where all the
configuration goes into plain text files under /etc or under your home
in the form of dotfiles. Usually I am not aware of how much garbage an
installer puts in my registry, so better to simply uncompress things and
run them, being in full control of their configuration.
```
That reminds me that someone wanted a list of all changes to the registry. I forgot to make such a list (which would be very short).
```
My way: Getting a compiled emacs archive, uncompressing it to C:\emacs,
adding C:\emacs\bin to my path, setting HOME to C:\cygwin\home\ismaeval2
and putting there my .emacs, adding a shortcut to gnuclienw.exe to my
SendTo folder, and that's it!
```
You need to start gnuserv yourself too, I guess, when you do it this way. For me as a typical command line user it is also important to be able to start editing a new file from the command line, so I add that in the installer as well. (And I add a shortcut to the SendTo folder too ;-) Why do you add Emacs to your path? Is that needed outside of Emacs?
```
Hope this makes sense. :)
```
Yes, it surely does. When I wrote EmacsW32 I tried to target both new and old users, but that is not very easy ;-)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.93787682056427, "perplexity": 2207.9062059052476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929096.44/warc/CC-MAIN-20150521113209-00309-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/110557/about-the-bloch-conjecture-on-entire-curves
About the Bloch conjecture on entire curves

The Bloch conjecture states the following:

Bloch's conjecture. Let $X$ be a compact complex Kähler variety such that the irregularity $q = h^0(X,\Omega^1_X)$ is larger than the dimension $n = \dim X$. Then every entire curve drawn in $X$ is analytically degenerate.

Here $X$ may be singular and $\Omega^1_X$ can be defined in any reasonable way (direct image of the $\Omega^1_{\widetilde X}$ of a desingularization $\widetilde X$, or direct image of $\Omega^1_U$ where $U$ is the set of regular points in the normalization of $X$). By an entire curve I mean a non-constant holomorphic map $f\colon\mathbb C\to X$, and analytically degenerate means that there exists a closed analytic subset $Y\subsetneq X$ such that $f(\mathbb C)\subset Y$.

This conjecture has been proven, thanks to the works of Ochiai, by Kawamata and, independently, by Wong. A standard Albanese map argument reduces the conjecture to the following statement:

Let $A$ be an abelian variety and $f\colon\mathbb C\to A$ an entire curve. Then the Zariski closure $\overline{f(\mathbb C)}$ is a translate of a subtorus.

In particular, a subvariety of an abelian variety contains no entire curve (Brody hyperbolicity) if and only if it does not contain any translate of a subtorus. Thus, in a simple abelian variety every proper subvariety is hyperbolic. More generally, if a subvariety $X$ of an abelian variety is not a translate of a subtorus, then every entire curve in $X$ is analytically degenerate.

Question 1. Is there any geometric characterization of, or sufficient condition ensuring, that the Albanese variety of a projective algebraic manifold is simple?

Question 2. Is there any geometric characterization or sufficient condition, other than having big irregularity, ensuring that the image of a projective algebraic manifold under the Albanese map is a proper subvariety?

Notice that by the universal property of the Albanese map, if the image of a projective algebraic manifold under the Albanese map is a proper subvariety, then this image is necessarily not a translate of a subtorus.

N.B. I changed the last part of my post; Question 2 is no longer as in the previous, unedited version. In particular, the comment of ulrich refers to my previous Question 2.

What is an analytically degenerate curve? – aglearner Oct 24 '12 at 16:42
I define this at line 7 from above. – diverietti Oct 24 '12 at 16:42
A subvariety $X$ of an abelian variety $A$ is of general type unless there is an abelian subvariety $B$ of $A$, translation by which preserves $X$; this is proved in Ueno's Springer Lecture Notes on classification theory, but I do not have the precise reference right now. If the abelian variety is not simple, then using products you can construct subvarieties with arbitrarily large irregularity but not of general type. – ulrich Oct 24 '12 at 18:43
By "abelian variety", do you mean a complete, irreducible and reduced group scheme of finite type over $\mathbb{C}$? – Filippo Alberto Edoardo Oct 25 '12 at 8:37
Ahahahahahahahahahahahah. Exactly! – diverietti Oct 25 '12 at 8:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9361647367477417, "perplexity": 283.07993609480064}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646180.24/warc/CC-MAIN-20141024030046-00258-ip-10-16-133-185.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/133638/how-does-this-equation-to-find-the-radius-from-3-points-actually-work
# How does this equation to find the radius from 3 points actually work?

I searched online and found an equation that solves for the radius of a circle from 3 points located on the circumference of that circle. Where I found this formula, it did not state its derivation or anything of the like; however, it does find the radius. With the 3 points, you form a triangle. Let us call this triangle $ABC$. Using the distance formula, we can calculate the lengths of $AB$, $BC$, and $AC$. To simplify things, let us call these lengths $A$, $B$, and $C$ respectively. We also need the area of the triangle. After finding an altitude of the triangle, we can use the area formula to solve for it. Let us call the area of this triangle $K$. This is the formula:

$r = \dfrac{ABC}{4K}$

which is essentially saying

$\text{radius} = \dfrac{\text{product of the triangle side lengths}}{\text{the area of the triangle multiplied by } 4}$

This pseudo-equation was just to clarify the formula; I understand everyone here is an exceptional mathematician, and as I mature, I hope to become one as well. This, to my astonishment, finds the correct radius, but I am unable to comprehend how this method works. I hope one of the kind people on Math StackExchange is willing to help me understand this formula and its derivation. Many thanks.

The radius is the same as the distance from a vertex to the 'circumcentre' of the triangle, and the formula simplifies to the Law of Sines. en.wikipedia.org/wiki/Law_of_sines#Relation_to_the_circumcircle – Ronald Apr 18 '12 at 21:58
@Ronald yes, and normally, to find the circumcentre I would find the perpendicular bisectors of 2 sides, and the point of intersection of those 2 lines would be the circumcentre. However, in this method I can't understand how it works to find the radius. – Backslash Apr 18 '12 at 22:01
@Ronald if you could explain it in a little more depth I would be very grateful. – Backslash Apr 18 '12 at 22:06

Let $a$ be the angle opposite to side $A$. First show that if $R$ is the radius of the circle, then $\frac{A}{\sin a} = 2R$. This isn't hard (just drop a perpendicular from $O$ to $A$, and use the definition of $\sin$ on the similar triangles). Then the area of the triangle $ABC$ is $K = \frac{1}{2}BC \sin a$, and so $\frac{A}{\sin a} = \frac{A}{\frac{2K}{BC}} = \frac{ABC}{2K} = 2R$, and so $\frac{ABC}{4K} = R$.

I have only one question, however. Why is there a $\sin a$ after the area of a triangle equation? Is that supposed to be there? – Backslash Apr 19 '12 at 3:18
That is the general formula for the area of a triangle in terms of the side lengths and angles. If we have a right triangle, then $\sin a = 1$ and this reduces to base x height. en.wikipedia.org/wiki/Triangle#Computing_the_area_of_a_triangle – Michael Biro Apr 19 '12 at 4:59
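As a quick numerical sanity check (my own illustration with hypothetical point values, not from the thread), the following computes $R = ABC/(4K)$ for three concrete points; for the points below, the perpendicular-bisector construction puts the circumcentre at $(2, 1)$, so the radius should be $\sqrt{5} \approx 2.2360$:

```python
import math

# Three points on the circle (arbitrary example values)
P1, P2, P3 = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Side lengths of the triangle formed by the three points
A = dist(P1, P2)
B = dist(P2, P3)
C = dist(P1, P3)

# Area K via the shoelace formula (no altitude needed)
K = abs((P2[0] - P1[0]) * (P3[1] - P1[1])
        - (P3[0] - P1[0]) * (P2[1] - P1[1])) / 2.0

R = A * B * C / (4.0 * K)
print(R)  # 2.2360... = sqrt(5), matching the circumcentre construction
```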
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9002147316932678, "perplexity": 183.6363178538954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097710.59/warc/CC-MAIN-20150627031817-00199-ip-10-179-60-89.ec2.internal.warc.gz"}
https://rd.springer.com/article/10.1007%2Fs00158-017-1807-0
Structural and Multidisciplinary Optimization, Volume 57, Issue 3, pp 1233–1250

# An adaptive RBF-HDMR modeling approach under limited computational budget

Haitao Liu, Jaime-Rubio Hervas, Yew-Soon Ong, Jianfei Cai, Yi Wang

RESEARCH PAPER

## Abstract

The metamodel-based high-dimensional model representation (e.g., RBF-HDMR) has recently been proven to be very promising for modeling high-dimensional functions. A frequently encountered scenario in practical engineering problems is the need to build accurate models under a limited computational budget. In this context, the original RBF-HDMR approach may be intractable due to the independent and successive treatment of the component functions, which translates into a lack of knowledge of when the modeling process will stop and how many points (simulations) it will cost. This article proposes an adaptive and tractable RBF-HDMR (ARBF-HDMR) modeling framework. Given a total of $N_{max}$ points, it first uses $N_{ini}$ points to build an initial RBF-HDMR model for capturing the characteristics of the target function f, and then keeps adaptively identifying, sampling and modeling the potential cuts with the remaining $N_{max} - N_{ini}$ points. For the second-order ARBF-HDMR, $N_{ini} \in [2n+2,\, 2n^2+2]$ depends not only on the dimensionality n but also on the characteristics of f. Numerical results on nine cases with up to 30 dimensions reveal that the proposed approach provides more accurate predictions than the original RBF-HDMR with the same computational budget, and the version that uses the maximin sampling criterion and the best-model strategy is a recommended choice. Moreover, the second-order ARBF-HDMR model significantly outperforms the first-order model; however, if the computational budget is strictly limited (e.g., $2n+1 < N_{max} \ll 2n^2+2$), the first-order model becomes a better choice. Finally, it is noteworthy that the proposed modeling framework can work with other metamodeling techniques.

## Keywords

Metamodeling; Adaptive high dimensional model representation; Limited computational budget; Tractable process

## Notes

### Acknowledgements

The majority of this work was finished before joining the Lab. We appreciate the support from the National Research Foundation (NRF) Singapore under the Corp Lab@University Scheme for completing the research. It is also partially supported by the Data Science and Artificial Intelligence Research Center (DSAIR) and the School of Computer Science and Engineering at Nanyang Technological University.
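The "maximin sampling criterion" named in the abstract is a generic space-filling rule: place each new point so that its smallest distance to the points already sampled is as large as possible. Below is a minimal sketch of that greedy step only (my own illustration of the generic criterion with hypothetical dimensions and budgets; it is not the authors' code and omits the RBF-HDMR cut-selection logic):

```python
import numpy as np

rng = np.random.default_rng(0)

def maximin_next(existing, candidates):
    """Return the candidate whose nearest distance to existing points is largest."""
    d = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=-1)
    return candidates[np.argmax(d.min(axis=1))]

n = 3                                # input dimensionality (hypothetical)
X = rng.random((2 * n + 2, n))       # N_ini initial points in [0, 1]^n
pool = rng.random((2000, n))         # dense random candidate pool

for _ in range(10):                  # spend part of the remaining N_max - N_ini budget
    X = np.vstack([X, maximin_next(X, pool)])

print(X.shape)                       # initial points plus 10 maximin-selected points
```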
## Authors and Affiliations

Haitao Liu (1), Jaime-Rubio Hervas (2), Yew-Soon Ong (2, 3), Jianfei Cai (2), Yi Wang (4)

1. Rolls-Royce@NTU Corporate Laboratory, Nanyang Technological University, Singapore
2. School of Computer Science and Engineering, Nanyang Technological University, Singapore
3. Data Science and Artificial Intelligence Research Center, Nanyang Technological University, Singapore
4. Applied Technology Group, Rolls-Royce Singapore, Singapore
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8929196000099182, "perplexity": 17166.798116038135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946256.50/warc/CC-MAIN-20180423223408-20180424003408-00577.warc.gz"}
https://stats.stackexchange.com/questions/370147/the-universal-approximation-theorem-vs-the-no-free-lunch-theorem-whats-the-ca
# The Universal Approximation Theorem vs. The No Free Lunch Theorem: What's the caveat?

The universal approximation theorem: A neural network with 3 layers and suitably chosen activation functions can approximate any continuous function on compact subsets of $$R^n$$.

The no free lunch theorem: If a learning algorithm performs well on some data sets, it will perform poorly on some other data sets.

I sense a contradiction here: the first theorem implies that NNets are the "one learning approach to rule them all", while the second says that such a learning approach doesn't exist. I'm pretty certain the NFLT holds, so there must be a caveat, but I can't put my finger on it. What is the caveat in the universal approximation theorem that lets the NFLT hold?

• The first theorem does not imply that NNets are the "one learning approach to rule them all". There are other function classes that can approximate any continuous function on compact subsets of $R^n$, see the Stone-Weierstrass theorem for example, and then there's the issue of generalizability, which the UAT does not address. – jbowman Oct 4 '18 at 17:08
• Stone-Weierstrass theorem - means polynomial linear regression also satisfies the UAT – seanv507 Oct 4 '18 at 17:22
• While what you're saying is correct in terms of training/generalization, I don't think it correctly addresses the question. The NFL theorem has nothing to do with generalization. Rather it says that if you sample functions from a uniform distribution over the input/output space, then the overall expected cost of any algorithm will be the same at iteration step $m$. – Alex R. Oct 4 '18 at 17:09
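To make the Stone-Weierstrass point from the comments concrete, here is a small illustration (my own, with an arbitrarily chosen target function): on a compact interval, plain least-squares polynomial fits drive the approximation error down as the degree grows, just as the UAT promises for networks. None of this says anything about inputs the fit never saw, which is where the NFLT bites.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 400)
f = np.sin(3 * x) + np.abs(x)            # a continuous target on [-1, 1]

for degree in (3, 9, 15):
    coeffs = np.polyfit(x, f, degree)     # least-squares polynomial fit
    sup_err = np.max(np.abs(np.polyval(coeffs, x) - f))
    print(degree, sup_err)                # error on the grid shrinks with degree
```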
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8995171189308167, "perplexity": 436.46732334541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486480.6/warc/CC-MAIN-20190218114622-20190218140622-00051.warc.gz"}
https://www.hpmuseum.org/forum/thread-6135.html?highlight=Number+of+calculators
New ConKit (10077) eating up System resources!

04-24-2016, 09:53 PM (This post was last modified: 04-25-2016 07:20 PM by Spybot.)
Post: #1 - Spybot (Member, Posts: 173, Joined: Feb 2015)

Hello! I don't know if this is just happening to me... but I've noticed something strange on my laptop: the CPU fans go all the way up as soon as I open the ConKit program. You can also check this by opening Task Manager; under the Processes tab you'll find the percentage of CPU that ConKit is consuming. Is this normal? Not to mention the battery life of my laptop due to this situation, and that ConKit now crashes more frequently than it used to. On the other hand, this is not happening with the virtual calculator (emulator).
Spybot.

04-24-2016, 10:40 PM
Post: #2 - Dougggg (Member, Posts: 68, Joined: Dec 2013)

It does use a lot of CPU; mine is at 25-28%. I wonder what it is doing.

04-25-2016, 01:38 AM
Post: #3 - mandresve (Member, Posts: 92, Joined: Mar 2015)

It happened in previous releases too. At least for me...
Success is the ability to go from one failure to the next without any loss of enthusiasm.

04-25-2016, 04:01 AM
Post: #4 - Carlos295pz (Senior Member, Posts: 352, Joined: Sep 2015)

I have spent some time with the virtual calculator, but never with the ConKit, which is now inevitable when programming.
Viga C | TD | FB

04-25-2016, 05:20 AM
Post: #5 - eried (Senior Member, Posts: 741, Joined: Dec 2013)

It is probably updating the calculator screen projection... in a quite inefficient way?
My website: erwin.ried.cl

04-25-2016, 05:32 AM
Post: #6 - Carlos295pz (Senior Member, Posts: 352, Joined: Sep 2015)

(04-25-2016 05:20 AM) eried Wrote: It is probably updating the calculator screen projection... in a quite inefficient way?

Apparently it has this problem even with no calculators connected.
Viga C | TD | FB

04-25-2016, 06:43 AM
Post: #7 - Thomas_Sch (Senior Member, Posts: 310, Joined: Dec 2013)

I also noted this behavior. If I remember correctly, the last version didn't eat up CPU resources. It should be an error.

04-25-2016, 02:20 PM
Post: #8 - eried (Senior Member, Posts: 741, Joined: Dec 2013)

OK, after checking it I can confirm the issue. In my system it uses 15% to 20% constantly too. Strangely, it appears to be doing several things constantly: first, something related to HID USB polling; it is also polling the filesystem (the ConnKit content folder) constantly, 12 times per minute, every 5 seconds; and the registry, constantly, 30 times per minute, every 2 seconds. Someone will have to tell the recent CS graduate coding this part that there are better ways to be notified about USB device insertions and filesystem changes than timers.
My website: erwin.ried.cl

04-25-2016, 03:59 PM (This post was last modified: 04-25-2016 04:12 PM by Tim Wessman.)
Post: #9 - Tim Wessman (Senior Member, Posts: 2,231, Joined: Dec 2013)

(04-25-2016 02:20 PM) eried Wrote: Someone will have to tell the recent CS graduate coding this part that there are better ways to be notified about USB device insertions and filesystem changes than timers.

Not in a cross-platform way, unfortunately, for USB...
Also not with Qt for the filesystem stuff, unfortunately (once you have network drives involved, especially)... Thanks for the pointers, I guess... :-P (Which program are you using for analysis there? Not familiar with that one.) (We're working on the resource thing - it isn't actually any of the things you pointed at, however. Those have already been eliminated as the cause. Also, remember that pretty much everyone here looks at the ConnKit as "I plug in my single calculator and do stuff with it" - we have to take into account much larger things like having 60+ calculators doing wireless, upcoming network connections to other computers/apps over TCP/IP, etc. Some pieces don't make sense until you look at that side of things as well.)
TW
Although I work for the HP calculator group, the views and opinions I post here are my own.

04-25-2016, 04:57 PM
Post: #10 - eried (Senior Member, Posts: 741, Joined: Dec 2013)

(04-25-2016 03:59 PM) Tim Wessman Wrote: Not in a cross-platform way, unfortunately, for USB... Also not with Qt for the filesystem stuff, unfortunately (once you have network drives involved, especially)...

For USB it might be possible to add some little platform-specific code to listen for device changes and trigger the other routine, something like: https://ic3man5.wordpress.com/2012/04/29...ng-qt-4-8/ And for the filesystem there might be something in Qt: http://doc.qt.io/qt-5/qfilesystemwatcher.html ?

(04-25-2016 03:59 PM) Tim Wessman Wrote: Thanks for the pointers, I guess... :-P (Which program are you using for analysis there? Not familiar with that one.)

The diagnostics apps are Process Monitor and Process Explorer.

(04-25-2016 03:59 PM) Tim Wessman Wrote: (We're working on the resource thing - it isn't actually any of the things you pointed at, however. [...] Some pieces don't make sense until you look at that side of things as well.)

I am not sure how these majestic scenarios should impact a humble single-calculator user constantly wasting a quarter of his CPU showing an empty window in the background. It is as if Excel worked sluggishly when the user wanted to do his monthly accounts just because Excel can handle multi-billion-digit accounts.
My website: erwin.ried.cl

04-25-2016, 05:15 PM (This post was last modified: 04-25-2016 05:15 PM by Tim Wessman.)
Post: #11 - Tim Wessman (Senior Member, Posts: 2,231, Joined: Dec 2013)

(04-25-2016 04:57 PM) eried Wrote: For USB it might be possible to add some little platform-specific code to listen for device changes and trigger the other routine, something like: https://ic3man5.wordpress.com/2012/04/29...ng-qt-4-8/

Unfortunately, it doesn't always work...

Quote: And for the filesystem there might be something in Qt: http://doc.qt.io/qt-5/qfilesystemwatcher.html ?

Also doesn't work - especially with network drives.

Quote: ...calculator user constantly wasting a quarter of his CPU showing an empty window in the background.

It doesn't for everyone. That has made it difficult to track down, as it isn't on every system. :-(
TW
Although I work for the HP calculator group, the views and opinions I post here are my own.
04-25-2016, 07:39 PM
Post: #12 - Marcus von Cube (Senior Member, Posts: 760, Joined: Dec 2013)

I'm pretty sure that polling at intervals of seconds or even fractions of a second is harmless and is definitely not the cause of the CPU hogging. This must be some thread which does not release its resources, something like a busy wait or some other (unintended) loop.
Marcus von Cube, Wehrheim, Germany
http://www.mvcsys.de http://wp34s.sf.net http://mvcsys.de/doc/basic-compare.html

04-25-2016, 10:16 PM
Post: #13 - jte (Member, Posts: 69, Joined: Feb 2014)

(04-25-2016 04:57 PM) eried Wrote: It is as if Excel worked sluggishly when the user wanted to do his monthly accounts just because Excel can handle multi-billion-digit accounts.

Algorithms that scale better (e.g., have better asymptotic characteristics) often do have larger fixed costs (more precisely: larger constants involved in their resource-consumption bounds). They're not just more work to implement correctly (which is a real limitation), but also require more computational resources for smaller / simpler problem instances. (This doesn't take into account the additional requirements placed on / features expected of software over time. What a word processor does on an 8-bit home computer of the 70s / 80s is quite different from what is done by a word processor on a 64-bit home computer of the 10s.)

04-26-2016, 05:35 AM
Post: #14 - cyrille de brébisson (Senior Member, Posts: 973, Joined: Dec 2013)

Hello,
USB-HID scanning is done once every 2 s, so this is most likely not the CPU hog. As Tim pointed out, on Windows the notification method does not always work well, which is why we had to switch to polling. For the Qt file-system threads, as Tim pointed out, the method does not work on networked drives (and teachers often have network-mounted drives from servers)... However, we are aware of the issue and will attempt to fix it. I hate wasting CPU cycles!
Cyrille
Although I work for the HP calculator group, the views and opinions I post here are my own. I do not speak for HP.

04-26-2016, 11:44 AM
Post: #15 - jebem (Senior Member, Posts: 1,322, Joined: Feb 2014)

I had a look into this issue as well. I have just installed the Connectivity Kit on a Windows 8.1 laptop with SSD drives and an Intel i7 dual-core processor with hyper-threading enabled (a total of 4 "cores" in Windows). I have no virtual calc or physical calc connected. As soon as I run the Connectivity Kit exe, the Connectivityit.exe process is consuming 100% of the available CPU core resources, running in a very tight program loop. This is a single-threaded program, so if you have a multi-core processor you will not see 100% total processor usage, because Windows will schedule the process threads, one at a time, to one of the CPU cores in a round-robin fashion. To see this 100% scenario, just set CPU affinity on this Connectivityit.exe process (in Task Manager or in Process Explorer): then you will be able to see that the process consumes 100% of that core. I have to say that this is old news. I remember seeing this behavior in older program versions as well; I just don't remember which ones. While this behavior is annoying, it is nearly harmless as long as your processor has adequate temperature monitoring and adequate cooling/ventilation.
Of course the electrical power consumption goes up as well, and this is important if you are running the program on a battery-powered laptop. Basically the process is looping doing some kind of I/O control functions (not regular data read/write I/Os), checking the file system and the registry at a very high rate, depending on how fast your PC is. There are plenty of tools to monitor this, starting with the MS Sysinternals suite. (I remember Mark Russinovich's presentation at one of the Microsoft TechEds in Barcelona a long time ago, just after he joined Microsoft. I really enjoyed and learned from his Windows debugging presentation. What an expert he is!)

This very busy looping activity generates the following metrics:
- CPU average of 57% user time (non-privileged, application level)
- CPU average of 43% privileged time
- 63 other I/O operations/sec (I/O control functions)
- 1350 interrupts/sec
- 118 DPC queues/sec

All the other metrics show close to zero activity. So no I/O data access to disk, no physical or virtual memory issues, no network activity, just a very busy program loop consuming as much CPU as possible from a single core. As I see it, scalability doesn't require this behavior in order to cope with all the extra work of managing the maximum number of calculators connected. As others have remarked, polling should not cause this heavy CPU usage IF the poll rate is low enough. What we see here is that the Windows kernel returns from the system calls but the program calls right back in all the time; hence the 43% usage in the kernel and 57% usage inside the program itself (user time).

This tight-loop CPU usage behavior is easy to simulate in Windows. One just needs a text editor and types something like:

```
@echo off
:loop
goto loop
```

Then save the file as test.bat, run it, and set the CPU affinity for the process. We will see about 80% or more CPU usage. This reminds me of the good old days of MS-DOS, when Command.com consumed 100% of the CPU only because it was idling in a really tight loop (this was an MS-DOS design feature, not a bug). Because some VMware customers used to run MS-DOS legacy applications, we consolidated them as VMs; then we noticed this performance issue caused by the Command.com behavior. For those interested, I used a well-known 3rd-party TSR, but really, one just needs to set a CPU usage limit on those legacy VMs to fix it.
Jose Mesquita

04-26-2016, 02:43 PM
Post: #16 - eried (Senior Member, Posts: 741, Joined: Dec 2013)

(04-25-2016 10:16 PM) jte Wrote: Algorithms that scale better (e.g., have better asymptotic characteristics) often do have larger fixed costs [...]

I don't see the point of your comment.
Are we talking about sending things to a calculator, or processing the LHC data from a particle collision? It is just not really acceptable to waste a whole CPU core on an empty window, no matter how it is explained. Anyhow, the issue is within just one thread of the app; maybe the timer is not properly implemented to go idle? Here is how it looks: https://cloudup.com/iC9aB0IpWX7 And after suspending/killing the thread, it all goes back to normal: https://cloudup.com/it8MvDOQle0 Via advanced decompilation methods I can see the code generating this issue:

Code:
```
while(true){
  int n; // TODO: Change to long to increase the delay
  for(;;){ try{ n++; } catch(...){ break; } }
  checkUSB(); checkFiles();
}
```

Hahaha, just kidding... that code actually catches the memory leaks.
My website: erwin.ried.cl

04-26-2016, 05:11 PM
Post: #17 - jrozsas (Member, Posts: 158, Joined: Nov 2014)

I performed a test on my notebook (Samsung i5) and there is also large CPU consumption from simply opening the program. [Screenshots: CPU usage before and after opening the program.] The program raises CPU usage from 4-6% to 32%, without opening the emulator or connecting the calculator.
Leo

04-26-2016, 10:42 PM (This post was last modified: 04-26-2016 10:44 PM by jte.)
Post: #18 - jte (Member, Posts: 69, Joined: Feb 2014)

(04-26-2016 02:43 PM) eried Wrote: I don't see the point of your comment.

The point of my comment was to state that I wouldn't assume that later versions of Excel, designed to better handle larger and more complex data sets than earlier versions of Excel, are as efficient at handling basic data sets as the older, simpler versions. This effect would be masked by the steady increase in computational power available to the typical end user (which happens when they upgrade their hardware); much of this happens within the noise floor nowadays (from the perspective of a typical user - I'm not talking about running precise timers / diagnostics etc.). It doesn't invariably lead to observed sluggishness, but observed sluggishness certainly was the case as programs became more sophisticated decades back.

(04-26-2016 02:43 PM) eried Wrote: It is just not really acceptable to waste a whole CPU core on an empty window, no matter how it is explained.

I wasn't trying to explain this as being acceptable. I didn't write this part of the code and haven't looked at it. I wouldn't, however, immediately assume that it is "wasting a whole core" - or at least wouldn't phrase it that way, as that tends to be an oversimplification.
It may, instead, be consuming a whole core because the core is not needed (being requested) for anything else; its use of the core may drop dramatically if another process starts consuming more cycles. Tight polling loops, e.g., can keep a core busy and prevent a reduction in energy consumption / heat production on modern CPUs - tight polling loops may thus waste energy but nevertheless step out of the limelight when another process starts to use more cycles. To be excessively pedantic: I'm not saying that this is ideal (or acceptable, or whatever else): I'm just saying what I said…
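The distinction the thread keeps circling, a busy wait versus polling at a sane interval, is easy to demonstrate outside ConnKit. Here is a hypothetical Python sketch (my own, unrelated to the actual ConnKit code): both loops "poll", but only the first pins a core, because it never yields the CPU between checks.

```python
import time

def poll_busy(check, duration):
    """Busy wait: re-run the check back-to-back; pins one core at ~100%."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        check()                 # called as fast as the CPU allows

def poll_interval(check, duration, interval=2.0):
    """Interval polling: sleep between checks; CPU usage stays near zero."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        check()                 # e.g. every 2 s, like the USB-HID scan
        time.sleep(interval)    # yields the CPU to the OS scheduler

if __name__ == "__main__":
    noop = lambda: None
    poll_interval(noop, duration=6.0)   # harmless, as Marcus argues
    poll_busy(noop, duration=6.0)       # watch this one in Task Manager
```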
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2148308902978897, "perplexity": 4690.435012056105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399830.24/warc/CC-MAIN-20200528170840-20200528200840-00526.warc.gz"}
https://scipost.org/submissions/1704.06614v2/
Flux-closure domains in high aspect ratio electroless-deposited CoNiB nanotubes

Submission summary
As Contributors: Olivier Fruchart
Arxiv Link: https://arxiv.org/abs/1704.06614v2
Date accepted: 2018-10-21
Date submitted: 2018-05-21
Submitted by: Fruchart, Olivier
Submitted to: SciPost Physics
Domain(s): Experimental
Subject area: Condensed Matter Physics - Experiment

Abstract
We report the imaging of magnetic domains in ferromagnetic CoNiB nanotubes with very long aspect ratio, fabricated by electroless plating. While axial magnetization is expected for long tubes made of soft magnetic materials, we evidence series of azimuthal domains. We tentatively explain these by the interplay of anisotropic strain and/or grain size with magneto-elasticity and/or anisotropic interfacial magnetic anisotropy. This material could be interesting for dense data storage, as well as for curvature-induced magnetic phenomena such as the non-reciprocity of spin-wave propagation.

Submission & Refereeing History
Submission 1704.06614v2 on 21 May 2018

Reports on this Submission

Anonymous Report 1 on 2018-8-3 (Invited Report)
Cite as: Anonymous, Report on arXiv:1704.06614v2, delivered 2018-08-03, doi: 10.21468/SciPost.Report.548

Strengths
1. Complete study of magnetic nanotubes, from fabrication to magnetic imaging.
2. Combination of two X-ray microscopy techniques and comparison with MOKE.

Weaknesses
1. A comparison between simulations and experiments would give a more detailed insight into the domain and domain-wall structure.
2. The domain-wall structure is not shown.

Report
The authors describe the fabrication, characterization and magnetic study of ferromagnetic nanotubes. They introduce the material systems and the imaging techniques used. They present the results clearly and draw the correct conclusions.

Requested changes: -

• validity: high • significance: good • originality: high • clarity: high • formatting: excellent • grammar: excellent

Author Olivier Fruchart on 2018-08-23 (in reply to Report 1 on 2018-08-03)
In the present manuscript we wished to report on a new material, displaying spontaneously azimuthal magnetization. Comment 3, and partly comment 1, are directly relevant to this concern. We feel that this comment is addressed in the manuscript by the following statement (page 7): "it is difficult to extract quantitatively the direction of magnetization in this series, because of the exponential decay of photon intensity inside matter, uncertainties in the dichroic coefficient, and the existence of a background intensity in the image. We can only provide an estimate of $H_\mathrm{K}$ from the field for which all contrast vanishes in the corresponding images". While we could always attempt to reproduce the experimental STXM contrast, we fear that the several sources of uncertainty would prevent us from gaining more information than the fact that magnetization is azimuthal.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.730427622795105, "perplexity": 5488.060721028988}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573258.74/warc/CC-MAIN-20190918065330-20190918091330-00204.warc.gz"}
https://motls.blogspot.com/2012/06/expense-cuts-or-pro-growth-measures.html
Sunday, June 03, 2012

Expense cuts or pro-growth measures?

By Dr Václav Klaus, published in Hospodářské noviny (a Czech WSJ), June 1st, 2012

During a recent regular lunch with 27 ambassadors of the EU countries, one Western European ambassador asked me what I thought about the present European dilemma that he saw in the Merkel-Hollande dichotomy and that – in his terminology – was described as "austerity vs pro-growth measures". The word "austerity" – according to the Great English-Czech Dictionary – may mean "cost-saving measures" as well as "hardship" or "suffering".

I have mentioned the ambiguous meaning of the word "austerity" for a good reason. The reason is that many European politicians populistically suck up to various critics by defending the thesis that we may choose between "hardship and economic growth". However, we don't have such options. The problem can't be defined in this way at all. "Hardship" isn't what the current situation in Europe is about. The word "hardship" should be reserved for other, genuinely serious situations.

We are solving the problem of what to do with the fact that Europe has long lived beyond its means and, even today, is living on a growing debt. We can't allow the efforts to stop this life beyond our means to be interpreted as an invitation to hardship and starvation. Because Europe has been living on borrowed money for a long time, we face a serious question of when and how the totally inevitable cost-saving measures should be adjusted (and distributed in time). It makes no sense to discuss pro-growth measures in the sense of anti-cost-saving measures. There is no room for such measures in Europe today, more precisely in most of the European countries. These countries haven't built up surpluses in the past, so they can't draw on such surpluses today. However, Europe has to work on the reduction of its debt, and it must do so at a rate that is economically (and socially) tolerable. To do the opposite is not possible. To create new debts through "pro-growth-oriented" government expenses which require new domestic or foreign loans is an utter irrationality. Despite this fact, this is exactly what many unreasonable politicians (and economists) are repeatedly recommending.

In 2012, the Czech Republic will have a budget deficit of roughly CZK 100 billion, i.e. USD 5 billion (i.e. we're still living beyond our means), which means that each person (including babies) will see their debt increase by CZK 10,000 or USD 500 (and that's just in this single, allegedly "already cost-saving", year as planned by our reticent finance minister). At this point, I will avoid speculations that our 2012 deficit will be – due to the likely complete halt of economic growth – even higher than that. Does this increase of the debt mean that we have already started the "hardship", as the officials from our labor unions claim, or does it mean that we are only performing very shy expense cuts? But cuts relative to what? Are they cuts relative to non-hardship? Or should they be compared with a continuing growth of the public debt, which is sometimes inaccurately interpreted as the government's debt even though it is the total shared debt of the citizens?

In the contemporary Europe, there is no "austerity vs growth" dilemma.
Instead, the responsible politicians are only working to avoid the increase of the debt, which has been growing for quite some time because of the uninterrupted growth of diverse demands and expectations of the inhabitants of Europe (and the Czech Republic). The real problem is the evolution of these demands, the culture of entitlement, and not an "organized" hardship, because the growing requirements are something that people take for granted. The reason for this evolution is undoubtedly the European economic-social model, which promises the people of Europe – independently of the events in the real economy – increasing incomes and many other payments, pensions, and subsidies coming from the state. We must admit that the true culprit of the contemporary European economic and social situation is the model of the so-called social market economy itself. This model – and the modes of reasoning that follow from it – had been accepted under pressure from various European politicians of a certain ideological flavor, and these people must be held accountable for the present situation. They must confess that they have been cheating the people and promising the impossible. And we must acknowledge that this scheme of thinking has to be abandoned as soon as possible. We must also abandon another deception, which is the suggestion that a reservoir of certain pro-growth measures is at the government's disposal. The government doesn't possess any tools of this kind. By its wise policies, it may perhaps encourage an environment of economic freedom; it may deregulate and desubsidize the economy; it may open the markets; it may gradually eliminate various nonsensical burdens from the economy (expenses to fight allegedly man-made global warming are an example). Policies of this kind – whether they are very successful, less successful, or unsuccessful – will not lead to results tomorrow or the day after tomorrow. Despite the fact that their effects are not immediate, we should try to introduce such policies today so that we have a chance to see the effects during our lifetimes. To summarize: • no one is prescribing any hardship to anyone else in Europe; • cost-saving measures are necessary, but expense cuts are not enough. What is needed is a transformation of the whole economic-social system, because this system fails to be cost-saving by itself; • the governments don't possess any immediately effective pro-growth measures. If they want to contribute to economic growth in the long run, they have to create a rational economic system instead of new debts. Václav Klaus, Hospodářské noviny, June 1st, 2012 Fast translation by L.M. The author is the president of the Czech Republic and a professor of economics.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26871463656425476, "perplexity": 1809.4374607374834}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696653.69/warc/CC-MAIN-20170926160416-20170926180416-00605.warc.gz"}
https://learn.careers360.com/ncert/question-the-slant-height-of-a-frustum-of-a-cone-is-4-cm-and-the-perimeters-circumference-of-its-circular-ends-are-18-cm-and-6-cm-find-the-curved-surface-area-of-the-frustum/
# 2. The slant height of a frustum of a cone is 4 cm and the perimeters (circumference) of its circular ends are 18 cm and 6 cm. Find the curved surface area of the frustum.

We are given the perimeters of the upper and lower ends, so we can find $r_1$ and $r_2$:

$2\pi r_1\ =\ 18$

$r_1\ =\ \frac{9}{\pi}\ cm$

And,

$2\pi r_2\ =\ 6$

$r_2\ =\ \frac{3}{\pi}\ cm$

Thus the curved surface area of the frustum is:

$=\ \pi \left ( r_1\ +\ r_2 \right )l$

$=\ \pi \left ( \frac{9}{\pi}\ +\ \frac{3}{\pi} \right )4$

$=\ 48\ cm^2$
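As a quick numerical sanity check of the arithmetic (a throwaway Python sketch, not part of the original solution):

```python
import math

l = 4                     # slant height, cm
r1 = 18 / (2 * math.pi)   # from 2*pi*r1 = 18 cm
r2 = 6 / (2 * math.pi)    # from 2*pi*r2 = 6 cm

csa = math.pi * (r1 + r2) * l   # curved surface area of a frustum
print(csa)                      # ~48.0 (cm^2)
```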
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.956436276435852, "perplexity": 541.526318994339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425148.64/warc/CC-MAIN-20200602130925-20200602160925-00527.warc.gz"}
https://solvedlib.com/n/discrete-mathematics/4
# Discrete Mathematics

All new and solved questions in the Discrete Mathematics category.

##### What is the cardinality of each of these sets? $\begin{array}{ll}{\text { a) } \emptyset} & {\text { b) }\{\emptyset\}} \\ {\text { c) }\{\emptyset,\{\emptyset\}\}} & {\text { d) }\{\emptyset,\{\emptyset\},\{\emptyset,\{\emptyset\}\}\}}\end{array}$

##### Express each of these statements using quantifiers. Then form the negation of the statement so that no negation is to the left of a quantifier. Next, express the negation in simple English. (Do not simply use the phrase "It is not the case that.") a) Every student in this class has taken exactly two mathematics classes at this school. b) Someone has visited every country in the world except Libya. c) No one has climbed every mountain in the Himalayas. d) Every movie actor has either been in a movie...

##### What is the bit string corresponding to the symmetric difference of two sets?

##### A drawer contains a dozen brown socks and a dozen black socks, all unmatched. A man takes socks out at random in the dark. a) How many socks must he take out to be sure that he has at least two socks of the same color? b) How many socks must he take out to be sure that he has at least two black socks?
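For a quick illustration, the first question above works out as follows (this worked answer is mine, not part of the extracted page): the empty set has no elements, and each successive set adds one more element, so

$|\emptyset| = 0$, $|\{\emptyset\}| = 1$, $|\{\emptyset,\{\emptyset\}\}| = 2$, $|\{\emptyset,\{\emptyset\},\{\emptyset,\{\emptyset\}\}\}| = 3$.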
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6530242562294006, "perplexity": 556.9098664593726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00084.warc.gz"}
https://quant.stackexchange.com/questions/33520/data-issue-observations-in-portfolio-construction
# Question

• With 60 data observations, how do I construct a time series analysis properly?
• How do I do certain calculations, such as covariances, on data with gaps and inconsistencies?

# Background of Question

• I'm currently setting out on an assignment for a portfolio theory class.

# Dataset Characteristics

• 15 stocks with their price-adjusted monthly returns from 1986-2016 (roughly 400 monthly observations) listed on the ISEQ (Irish Stock Exchange)

# What I think are Data Issues

Allocated stocks do not have like-for-like observations – stocks listed at different times have different numbers of observations for each stock (a non-uniform time series).

Only have 60 observations where all stocks have data from the same time period/across the panel. (Do you mean columns? Do you mean same dates?) (Insert screenshot of data points)

• One stock in particular only has 60 observations and is extremely 'blocky' in its returns characteristics. (Insert screenshot of data point)

# Data may cause me problems when I:

Calculate covariances

• Should I use the full array (~400 observations) of my oldest stock (for variance calculations) against the 60 observations of this problematic stock when calculating the variance-covariance matrix?

Compare like with like and cut my observations across my portfolio to 60 observations

• Am I sacrificing descriptive power in my outputs if I do this?

My humblest thanks and best wishes, CM.

• Can you share the dataset? – rbm Apr 5 '17 at 19:27
• This is common. You will have to adjust your analysis as it moves forward through your observation period. You only had 15 stocks to analyze for the last 5 years; before that it was 14. There is nothing you can do about data that didn't exist before a certain point in time. If you are insistent upon using all 15 stocks then you are stuck with just 60 months. – amdopt Apr 5 '17 at 19:35
• @amdopt You are mistaken. What would quants do when there is an IPO? Throw out all historical data except the past day? There is a whole category of statistics for handling missing data. Now, that may be beyond the scope of the class and he could probably just use 60 months and get an A. But in practice, it must be handled. – John Apr 6 '17 at 13:52
• @amdopt There are techniques that go beyond just filling in a data point here and there. Perhaps one of the simplest is Stambaugh 1997 nber.org/papers/w5918 – John Apr 6 '17 at 15:38
• @CormacMurphy I'm not going to look at your data. – John Apr 9 '17 at 23:27

Your question shows that you are a beginner in time series analysis. Welcome!

A common approach to analyzing unevenly spaced time series is to transform the data into equally spaced observations using some form of interpolation, most often linear, and then to apply existing methods for equally spaced data. However, transforming data in such a way can introduce a number of significant and hard-to-quantify biases, especially if the spacing of observations is highly irregular.

It depends. First start here:

Then read these papers as well as what others have shared.

# Please make sure you understand what you are asking, otherwise others will not be so nice.

• This means googling and putting in some effort. Effort is not easy, but part of the struggle is important and is called 'learning.' Do not be discouraged; ask questions, but make sure you google first.

Welcome to QuantFinance Stack Exchange! For your assignment, use only the returns that you have available, even if they are not complete for the entire period.
You will be able to run all your analysis. Notice that this is not a good solution in real-world cases: if you want to use your covariance/correlation matrix for optimization or Monte Carlo simulation, using pairwise correlations may lead to non-positive-semidefinite matrices.

• Why is "this is not a good solution in real world cases"? – Ted Taylor of Life May 6 '17 at 16:15

Something fairly standard to do is to work with the returns of portfolios constructed on individual firm characteristics rather than the firms themselves. Some basic problems working directly with firms:

• As you discussed, firms come and go from the sample.
• As several have mentioned in the comments, firms can significantly change. Apple in 2005 was a computer hardware company. In 2015, Apple was a mobile phone company, its revenues dominated by the iPhone. Shouldn't we expect the covariance properties to be significantly different!?
• If you only use firms where you have data for all years, you are conditioning inclusion in your sample on not delisting, and you may render your estimate of expected returns upward biased and inconsistent!
• If someone wrote, "My sample is constructed of all firms which did not go bankrupt or get acquired" or "My sample is constructed of all firms which eventually made it into the S&P 500," do you think those firms had above average returns? Of course they did!
• In general in finance, you can make huge mistakes by using $t+1$ information at time $t$.

If we're willing to do simple, 1980s-style finance, a sensible method is to construct yearly rebalanced portfolios based upon firm characteristics known at the time (or several months previously, to be safe). The idea is that the portfolio returns will be more stable over time in terms of their statistical properties than individual companies.

As @Alex27629 mentions, you probably can do most of your analysis using only the data you have for each company. I'd expect you get defensible results for the purposes of your project.

1. It is unclear what the 'portfolio assignment' is and what kind of results you are expected to deliver.
2. 60 monthly data points is more than enough when it comes to stock returns; whereas conclusions based on 20 years of data do not seem reliable, as a company in its first 5 years of existence will be completely different from the same company 15 years later, given that the company still exists.

• Your "60 monthly data points" argument feels more like a heuristic that the industry has adopted, rather than an argument based on any kind of theory or rigorous evaluation. It's probably sufficient to give the OP an A, but I'm not sure it should be the recommended approach for practitioners. – John Apr 6 '17 at 14:03
• I think your answer would be better if the first part was a comment. – Bob Jansen Apr 6 '17 at 15:22
• @cykor21 My assignment comprises using the data to compute an alpha and beta for each stock, standard residuals from each regression, the correlation coefficient, and the covariance between each possible pair using the single index model (SIM), and to compute and compare/contrast the mean return, variance, and covariance for each stock using SIM and historical data. This would be pretty easy for me to do if I had a complete sample of observations across each stock between 1984 and 2016. – Cormac Murphy Apr 9 '17 at 14:50
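To make the pairwise-observations point concrete, here is a small pandas sketch (the returns and the missing-data pattern are invented; pandas' DataFrame.cov computes pairwise covariances, excluding missing values pair by pair):

```python
import numpy as np
import pandas as pd

# Hypothetical monthly returns; stock C only has data for the last 5 months
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(0.0, 0.05, size=(12, 3)),
                       columns=["A", "B", "C"])
returns.loc[:6, "C"] = np.nan   # pre-listing months are missing

cov = returns.cov()  # pairwise-complete covariance matrix
print(cov)

# Caveat from the answers: a pairwise-built matrix need not be positive
# semidefinite, which matters for optimization and Monte Carlo work.
print("min eigenvalue:", np.linalg.eigvalsh(cov.values).min())
```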
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3169097602367401, "perplexity": 1166.5142109738874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00517.warc.gz"}
https://swmath.org/software/6468
# FeynRules

Beyond the minimal supersymmetric standard model: from theory to phenomenology. Thanks to the latest developments in the field of Monte Carlo event generators and satellite programs allowing for a straightforward implementation of any beyond-the-standard-model theory in those tools, studying the properties of any softly-broken supersymmetric theory has become an easy task. We illustrate this statement in the context of two nonminimal supersymmetric theories, namely the minimal supersymmetric standard model with $R$-parity violation and the minimal $R$-symmetric supersymmetric standard model, and choose to probe interaction vertices involving a nonstandard color structure and the sector of the top quark. We show how to efficiently implement these theories in the Mathematica package FeynRules and use its interfaces to Monte Carlo tools for phenomenological studies. For the latter, we employ the latest version of the MadGraph program.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7882883548736572, "perplexity": 1263.014672221431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038916163.70/warc/CC-MAIN-20210419173508-20210419203508-00587.warc.gz"}
https://www.math.columbia.edu/~woit/wordpress/?author=2
## Notes on Current Affairs Blogging has been light recently, partly due to quite a bit of traveling. This included a brief trip the week before last to Los Angeles, where I met up with, among others, Sabine Hossenfelder. This past week I was in … Continue reading Posted in Uncategorized | 17 Comments ## The End of (one type of) Physics, and the Rise of the Machines Way back in 1996 science writer John Horgan published The End of Science, in which he made the argument that various fields of science were running up against obstacles to any further progress of the magnitude they had previously experienced. … Continue reading Posted in Multiverse Mania | 43 Comments ## Langlands/Frenkel and Some Other Things The Canadian publication The Walrus today has a wonderful article about Robert Langlands, focusing on his attitude towards the geometric Langlands program and its talented proponent Edward Frenkel. I watched Frenkel’s talk at the ongoing Minnesota conference via streaming video … Continue reading Posted in Langlands, Uncategorized | 14 Comments Based on this preprint from Banks and Fischler, I added an update to the FAQ entry about why the ever-popular “string theory makes predictions, but only at high energies where they can’t be tested” argument is not true. This preprint … Continue reading ## Langlands News Various Langlands program related news, starting with the man himself: For the latest from Langlands about the geometric theory, best if you read both Russian and Turkish. In that case you can read this and this. For the rest of … Continue reading Posted in Langlands | 11 Comments Some experimental HEP news items: Since 2015 the LHC experiments have been taking data from proton-proton collisions at 13 TeV. This is “Run 2” of the LHC, “Run 1” was at the lower energy of 8 TeV. The proton-proton Run … Continue reading Posted in Experimental HEP News | 34 Comments ## Last Night’s Hype If you’re a Friend of the IAS (\$1750/year and up), you were invited to a talk last night, at which IAS member Thomas Rudelius promised to explain to you How to Test String Theory. The video of the talk is … Continue reading Posted in Swampland, This Week's Hype | 3 Comments ## Breaking News Two midday breaking news items: The ACME II experiment is reporting today a new, nearly order of magnitude better, limit on the electric dipole moment of the electron: $$|d_e|\leq 1.1 \times 10^{-29} e\ cm$$ The previous best bound was from … Continue reading Posted in Uncategorized | 9 Comments ## This Week’s Hype The story of string theory as a theory of everything has settled into a rather bizarre steady-state, with these three recent links providing a look at where we are now: At his podcast site, Sean Carroll has an interview with … Continue reading Posted in Swampland, This Week's Hype | 14 Comments ## High Life I spent yesterday night at the New York Film Festival, watching Claire Denis’s new film High Life. For a detailed and accurate review of the film, see the one at Variety. This film is about a voyage to a black … Continue reading Posted in Uncategorized | 15 Comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21543630957603455, "perplexity": 4175.401957968938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826800.31/warc/CC-MAIN-20181215061532-20181215083532-00597.warc.gz"}
https://lazyprogrammer.me/deep-learning-tutorial-part-13-logistic/
# Deep Learning Tutorial part 1/3: Logistic Regression

April 22, 2015

This is part 1/3 of a series on deep learning and deep belief networks. I've wanted to do this for a long time, because learning about neural networks introduces a lot of useful topics and algorithms that are useful in machine learning in general. Unfortunately, while the material I've read focusing on logistic regression and the multiple layer perceptron (the building blocks of the deep belief network) is great and accessible to a wide audience, I've found most of the material I've encountered about deep learning to be highly technical and hard to follow.

So, I've decided to create this series in order to teach the most practical aspects of deep learning and neural networks: enough so that you can implement one yourself, but not so much that you'll get bogged down by all the theory.

Part 1 will focus on logistic regression. Part 2 will focus on the multilayer perceptron (a.k.a. artificial neural network) and backpropagation. Part 3 will focus on restricted Boltzmann machines and deep networks. Each is designed to be a stepping stone to the next.

The topic of this post (logistic regression) is covered in-depth in my online course, Deep Learning Prerequisites: Logistic Regression in Python. We derive all the equations step-by-step, and fully implement all the code in Python and Numpy. To solidify the concepts, we apply the method to real world datasets, including an e-commerce dataset and facial expression recognition.

Let us begin.

## Logistic Regression doesn't do Regression

Despite its name, logistic regression is actually a classification algorithm. This means the output gives us a label, not a real number.

HOWEVER: the methods you read about in this series can be applied to both regression and classification. Just the equations for the outputs and the error function differ. I will note these differences where appropriate, but the tutorials will focus on classification.

## Diagram of how Logistic Regression works

I've included a few pictures here so you get used to looking at how we visualize a neural network.

[Figure: a network where X (input) is 3-dimensional and Y (output) is 2-dimensional.]

[Figure: a network where the weights use the symbol theta, and the summation operation and sigmoid function are shown explicitly.]

[Figure: a network where the weights use the variable "w" and the bias is explicitly shown as "b"; here the sigmoid function uses the Greek letter "phi", but more often you see the letter "sigma".]

## A Little Math

So what do these diagrams mean about how we calculate the output from a set of inputs? Notice first that we can have more than one output Y. For K classes/labels, as in the digit recognition problem, we would have K outputs, and Y(k) = 1 if the label is the kth digit, otherwise it is 0.

The only exception is the 2-class case. In this situation, we only need 1 output, because Y = 1 is the first class and Y = 0 is the second class. We'll focus on this scenario first.

The equation in its compact form is this:

$$y = \sigma(w^T x)$$

The inside part is the dot product of the weights and the input:

$$w^T x = \sum_{j=0}^{D} w_j x_j$$

As in linear regression, we assume there is an $x_0$ and that it is 1. The "sigma" part is the sigmoid function:

$$\sigma(a) = \frac{1}{1 + e^{-a}}$$

If we graph the sigmoid, it looks like an S-shaped curve rising from 0 to 1, crossing 0.5 at $a = 0$.

There are 2 things we can tell from the above equation:

1) For logistic regression to work, the classes must be linearly separable. This is because the dot product between "w" and "x" is a line/plane (i.e. $ax + by + c = 0$): here $w_0 + w_1 x_1 + w_2 x_2 + \cdots = 0$ is the plane (more correctly, hyperplane).
So here is a situation where logistic regression would work well:

[Figure: two classes of points separable by a straight line.]

Here is a situation where it wouldn't work well (but we will cover that more in parts 2 and 3):

[Figure: two classes of points that are not linearly separable.]

2) The sigmoid means the output Y is between 0 and 1. So if $w \cdot x = 0$, we land right on the hyperplane, and Y = 0.5. If $w \cdot x > 0$, we get Y > 0.5, and vice versa for $w \cdot x < 0$. As $w \cdot x$ approaches infinity, Y approaches 1, and vice versa.

## Probabilistic Interpretation

Because Y is between 0 and 1, we can interpret it as a probability. This makes more sense if you consider the following: if we fall right on the barrier/plane between the two classes, our probability of being in either class is 0.5; the further we are from that barrier, the higher the probability of being in the class on that side.

We usually denote Y as P(Y=1|X) and P(Y=0|X). Note that while we use some probabilistic concepts here, the way in which we use them is different than for, say, a Bayesian classifier. Also note that P(Y=0|X) = 1 – P(Y=1|X).

## Maximizing the Likelihood

We have seen squared error used as an error function before, as with linear regression. In fact, if we were doing regression, we could use the same thing here. For classification, we take a different approach. You may have seen this error function before:

$$E = -\sum_{i=1}^{N} \left[ t_i \log y_i + (1 - t_i) \log (1 - y_i) \right]$$

Here t is the target and y is the output of the network/model. (This introduces some ambiguity, because we usually write p(y=1|x) as the output and y as the target.) This is called the cross-entropy error. Where does this come from?

Let us go back to first principles. Instead of minimizing error, we maximize likelihood. This seems like a logical place to start: maximizing the probability that our model parameters are correct.

Consider N IID (independent and identically distributed) training samples and corresponding labels (we'll call them "t" here).

$$L = \prod_{i=1}^{N} y_i^{t_i} (1 - y_i)^{1 - t_i}$$

The likelihood of the model given the entire dataset can be represented by this equation. We can use the product rule because each sample is independent.

(Sidenote 1: This is the same thing we do when we want to, say, find the maximum likelihood estimate for the mean. We calculate the joint probability, a.k.a. likelihood, P(data|mean), and find the "argmax" mean that gives us the highest likelihood; hence the term "maximum likelihood".)

(Sidenote 2: This is the same likelihood you see when we do Bayesian inference: posterior ~ likelihood x prior, or P(param | data) ~ P(data | param) P(param).)

(Sidenote 3: If you wanted to do regression, you would simply not have a sigmoid at the end, and you would use the squared error. The exponential of the squared error is a Gaussian, because in regression we often assume the error is Gaussian distributed. By making these 2 changes, we would just be doing linear regression.)

Recall y = P(y=1|x). The target t can be 1 or 0. When t is 1, only the left part of the product matters (the right side evaluates to 1). All the y's here are the probability that the output of the network is 1; given that the target is 1, we want to maximize this probability. When t is 0, only the right part of the product matters. Recall that 1-y is the probability that the output of the network is 0, so when t = 0, we want to maximize this probability. Since each sample is independent, we can get the joint probability by multiplying all the individual probabilities together.

2 key points:

1) There is no analytic solution; we must use iterative methods.
In this tutorial we will cover gradient descent, but there are others (such as conjugate gradient, and L-BFGS). The added advantage of learning gradient descent now is that it is also used to train neural networks.

2) As is usual with these ML problems, we will work with the log likelihood instead of the likelihood. Just try taking the derivative of both, and you will see why.

If you take the log of the above expression, notice you'll get the same error function we started with! We take the negative because we want something to minimize. We call this the "error" or "cost" function. Maximizing the likelihood is equivalent to minimizing the negative likelihood. It is also equivalent to minimizing the negative log-likelihood, because log() is a monotonically increasing function.

How do we actually minimize the negative log-likelihood if we can't simply set the derivative = 0 and solve for the weights? This is where gradient descent enters the picture. Note that gradient descent is just a numerical method; it can be applied whenever you want to solve for the minima of a function, not just for machine learning.

Here is a picture of what we're trying to do:

[Figure: an error surface with an arrow descending toward the minimum.]

We start at some random weight, w = random(). Then we update the weight by going in the direction of the derivative of the error function (slope), which we have previously stated is the negative log-likelihood.

With squared error it is easy to see that the error function is quadratic, and so we are descending down a parabola in that case. The minimum is global. With log-likelihood the extremum is also global. It may help to plot the function $E(y,t) = -[t \log(y) + (1-t) \log(1-y)]$ to see why.

The equation for updating the weights is:

$$w_j^{(t+1)} = w_j^{(t)} - \eta \frac{\partial E}{\partial w_j}$$

Here j indexes the dimension, so j = 1…D; t indexes the iteration number (not to be confused with the other t, which was the target). "Eta" ($\eta$) is called the "learning rate". This hyperparameter determines how far along the error surface we travel on each iteration. Bigger values mean we go further, which means our weights might converge to the final solution faster, but it also means we may "overshoot" that solution.

Since w is a vector, we can usually speed up our code by doing vector operations (i.e. in MATLAB or Python). In this case, we can use this equation:

$$w^{(t+1)} = w^{(t)} - \eta \nabla_w E$$

(For the cross-entropy error above, the gradient works out to $\nabla_w E = X^T (y - t)$.)

The full training algorithm is:

for i = 1…number of epochs:
    error = negative log-likelihood ( -log L(Y|X,w) )
    w = w – learning rate * error gradient

(A complete, runnable numpy sketch of this loop appears at the end of this post.)

The number of epochs is yet another hyperparameter. There are many ways to determine when to stop the gradient descent process. Some other methods you may want to look into:

• Stopping when the gradient is small enough
• Stopping when the training error is no longer decreasing or approaching 0
• Stopping when the error on a held-out test set starts to increase (overfitting)

We call things like learning rate and epochs "hyperparameters". These are parameters that are not part of the model itself, but can still be optimized, perhaps via cross-validation.

## Biological Inspiration

In computational neuroscience, a logistic regression unit is sometimes referred to as a "neuron". How are the two related?

[Figure: diagram of a typical neuron.]

Some notable components:

• Dendrites: These are the "inputs" into the neuron; they take electrical signals from other neurons' axons.
• Cell body / Nucleus: This part of the neuron "sums up" all the inputs and propagates this summed signal to the axon.
• Axon: This is the "output" of the neuron. It sends the signal from this neuron to other neurons' dendrites.
So dendrites are our logistic unit's X, and axons are the Y. The brain is essentially a network of neurons, or rather, a neural network. An artificial neural network, which is the topic discussed in Part 2 of this tutorial, is a network of connected logistic regression units.

Another notable feature of neurons is the behavior of the "action potential".

[Figure: a typical amplitude/potential (voltage) vs. time signal, rising gradually and then spiking.]

Notice how the potential rises gradually and then spikes. We call this the "all-or-nothing" principle. If the sum of the inputs to the neuron is high enough, a spike is generated; otherwise, the voltage stays relatively low. This is reflected in the logistic unit's binary output. The output of a sigmoid is interpreted as P(Y=1|X): the probability of being "on", or in other words, the probability that a spike is generated.

Inhibitory vs. excitatory neurons: It is well-known that the signal a neuron sends can either "excite" or "inhibit" the receiving neuron. These are reflected in the logistic model by the weights. A positive weight is excitatory; a negative weight is inhibitory.

Researchers have tried to create models with "spiking" neurons; however, it has been difficult to get them to actually learn anything.

The topic of this post (logistic regression) is covered in-depth in my online course, Deep Learning Prerequisites: Logistic Regression in Python. We derive all the equations step-by-step, and fully implement all the code in Python and Numpy. To solidify the concepts, we apply the method to real world datasets, including an e-commerce dataset and facial expression recognition.
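To tie everything together, here is a minimal end-to-end numpy sketch of the procedure described in this post. The synthetic data, learning rate, and epoch count are all invented for illustration; this is a sketch of the standard algorithm, not the code from the course.

```python
import numpy as np

def sigmoid(a):
    # sigma(a) = 1 / (1 + exp(-a)); the output is interpreted as P(Y=1|X)
    return 1.0 / (1.0 + np.exp(-a))

# --- synthetic 2-class data, made up purely for illustration ---
np.random.seed(0)
N, D = 100, 2
X = np.vstack([np.random.randn(N // 2, D) - 2,   # class 0 cloud
               np.random.randn(N // 2, D) + 2])  # class 1 cloud
t = np.concatenate([np.zeros(N // 2), np.ones(N // 2)])  # targets

Xb = np.column_stack([np.ones(N), X])  # prepend x0 = 1 (the bias term)
w = np.random.randn(D + 1) * 0.01      # start at some small random weight

eta = 0.01     # learning rate (hyperparameter)
epochs = 100   # number of epochs (hyperparameter)

for epoch in range(epochs):
    y = sigmoid(Xb @ w)                   # forward pass
    y = np.clip(y, 1e-12, 1 - 1e-12)      # guard against log(0)
    E = -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))  # cross-entropy
    w -= eta * Xb.T @ (y - t)             # gradient of E w.r.t. w is X^T (y - t)

print("final error:", E)
print("classification rate:", np.mean((sigmoid(Xb @ w) > 0.5) == t))
```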
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8458930850028992, "perplexity": 740.6874281912541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00096.warc.gz"}
https://codereview.stackexchange.com/questions/189504/finding-common-elements-in-two-arrays/189533
# Finding common elements in two arrays

I just had this question in an interview. I had to write some code to find all the common elements in two arrays. This is the code I wrote. I could only think of a 2-loop solution, but something tells me there must be a way to accomplish this with only 1 loop. Any ideas?

```java
public List<Integer> findCommonElements(int[] arr1, int[] arr2) {
    List<Integer> commonElements = new ArrayList<>();
    for (int i = 0; i < arr1.length; i++) {
        for (int j = 0; j < arr2.length; j++) {
            if (arr1[i] == arr2[j]) {
                commonElements.add(arr1[i]); // record the match, then stop scanning arr2
                break;
            }
        }
    }
    return commonElements;
}
```

You use a hashset; this means you have 2 loops, but not nested. In this case you have a boolean "hashset" (really a boolean array indexed by value), where all values start at false, of size k, where k (this is what is typically used in big-O notation) is the number of possible integer values. You loop over your first array, and for each value you set hashset[firstArray[i]] = true; once you have done this, you loop over your second array, going if (hashset[secondArray[i]]) commonElements.add(secondArray[i]);. This is O(2n), which then becomes simply O(n) after getting rid of the constants; your solution was O(n^2). Although it should be noted that the storage required for using such a hashset is considerably more.

• In this case, the size of the boolean array should be the maximum element of both arrays. You should have mentioned in your post that, in order to determine the k value, you would have to make additional loops. But something that really concerns me about this solution is that the maximum value could be 2^31, which results in wasting lots of memory. – nullbyte Mar 14 '18 at 1:07
• @nullbyte You wouldn't need a second loop to determine k, since you can simply set k to the max integer size in this case (2.147E9, 4 s.f.). Your description of k is misleading, since you wouldn't set it to the max value but simply to the number of different possible values in the input arrays. To properly calculate k in any circumstance you would need to use a hashset. The alternative solution that uses less memory is a linear search combined with a binary search, which is O(n log(n)). For the specific task I gave the best solution; there is no universally best applicable solution. – Jonathan Woollett-light Mar 15 '18 at 16:58

There is a reason why "Data Structures and Algorithms" has the data structures part added in, especially first. The reason is that data structures should be the first thing you think about, even before writing any kind of algorithm. Let's take the specification of the code that you are given:

Write some code to find all the common elements in two arrays.

Now, first things first, this isn't specific enough. What do you mean by "common elements"? Does this include repeating elements? For example, with [1,1,2] and [1,1,3], would that be [1,1] or just [1]? For this I'm going to assume you mean non-repeating elements.

Now we have established the specification, which is: "Given two arrays of numbers, find the common unique elements." I'd say these arrays are sounding a hell of a lot like a set data structure, because we want to find the intersection between these two sets.
In Java this would be:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

public class Intersection {
    public static void main(String[] args) {
        List<Integer> alist = Arrays.asList(new Integer[] {1, 1, 2});
        List<Integer> blist = Arrays.asList(new Integer[] {1, 1, 3});
        HashSet<Integer> aset = new HashSet<>(alist); // O(A) to build the set
        aset.retainAll(blist);                        // keeps only elements also in blist
        System.out.println(aset);                     // prints [1]
    }
}
```

It's very important to consider two things when someone gives you a spec: first, think through whether it is clear enough, and second, after you have got an idea of what they want, strongly consider the data structure, as these are structures that have been tried and tested to be efficient at specific tasks.

In the program above, let A be the size of alist and B be the size of blist; the time complexity is O(A + B), as it's O(1) to add an element to a hashset, which we do for each element in A, and then we loop through all elements in B because the retainAll function needs to do a contains operation on each element, and that is O(1) for a HashSet. This is much more efficient than O(AB), i.e. O(n^2).

edit: Thanks to nullbyte for pointing out that it's more efficient to make one hashset and do retainAll on just that single set. The code above has been adjusted.

• Where do you see the benefit of creating another hash set? You can just traverse the second list without allocating additional memory (as in the solution below). In your case, you allocate memory for the second hash set, traverse the second list in order to populate the second hash set, and then you call the retainAll() method, which also requires O(n) time. – nullbyte Mar 14 '18 at 1:27
• It depends on what you are optimising for; space and time have a give-and-take relationship. I'm sacrificing space for time. – James Mar 14 '18 at 1:29
• How can you get more time? In order to create a set, it takes O(n). Then you call the retainAll() method, which takes O(n) as well. How do you get more time? From what I can see, it takes O(3 x n), doesn't it? But you can easily optimize it to O(2 x n) without creating the second hash set. – nullbyte Mar 14 '18 at 1:34
• Let's pretend your list is now [1,1,1,1,1,1,...x10000000,2]; you need to loop through all of the values in that list to check whether each is in the set. Wasted computation. – James Mar 14 '18 at 1:35
• Sorry, it's 1:40am here; I'm a bit potato when it comes to my brain right now. So yes, I think it would make sense to not make the second set and just feed it into the retainAll. – James Mar 14 '18 at 1:39

So, the point of this solution is to put the first array into a hash set, and then traverse the second array checking if an element from the second array is present in the set. This solution requires O(n) time and O(n) space if the arrays have the same length.

```java
public static List<Integer> findCommon(int[] a, int[] b) {
    final Set<Integer> set = new HashSet<>(Arrays.stream(a).boxed().collect(Collectors.toList()));
    final List<Integer> result = new LinkedList<>();
    for (int element : b) {
        if (set.contains(element)) {
            result.add(element); // collect each element of b that also appears in a
        }
    }
    return result;
}
```

Or a bit more concise solution:

```java
public static List<Integer> findCommon(int[] a, int[] b) {
    final Set<Integer> set = new HashSet<>(Arrays.stream(a).boxed().collect(Collectors.toList()));
    set.retainAll(Arrays.stream(b).boxed().collect(Collectors.toList()));
    return new ArrayList<>(set);
}
```
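The thread is all Java, but purely as an illustration of the same set-based idea (and of the de-duplication question raised above), here is the logic in a few lines of Python; the sample arrays are the ones from the answer:

```python
def find_common(a, b):
    seen = set(a)                       # O(A) to build the hash set
    return {x for x in b if x in seen}  # O(B) membership tests; result is de-duplicated

print(find_common([1, 1, 2], [1, 1, 3]))  # {1}
```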
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40566158294677734, "perplexity": 820.2723351711659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317130.77/warc/CC-MAIN-20190822130553-20190822152553-00387.warc.gz"}
https://homerreid.github.io/scuff-em-documentation/examples/PaulTrap/PaulTrap/
# Electrostatic fields of an electrode array

In this example we use scuff-static to compute the electrostatic fields in the vicinity of a complicated electrode array with the various electrodes held at various external potentials. More specifically, the calculation will proceed in two stages:

1. First, for each of the N electrodes in the device we will compute the fields produced by maintaining that electrode at a potential of 1 volt, with all other electrodes grounded. This will produce N separate datasets, each reporting the electrostatic potential and E-field components at our desired evaluation points. The structure of the boundary-element-method (BEM) solver implemented by scuff-static ensures that this calculation is fast, even for large N: once we have assembled and factorized the BEM matrix for a given geometry, we can solve any number of electrostatic problems involving different excitations of that geometry essentially "for free."

2. Then we will run a second calculation in which all electrodes are maintained at specific voltages and---in addition---an externally-sourced electrostatic field is present. For this case we will generate graphic visualization files illustrating the fields in the vicinity of the device.

The geometry considered in this example is a model of a Paul trap; I am grateful to Anton Grounds for suggesting this application and for providing the sophisticated parameterized gmsh file describing the geometry.

The files for this example may be found in the share/scuff-em/examples/PaulTrap subdirectory of your scuff-em installation.

## gmsh geometry and mesh files

The gmsh geometry file Trap.geo describes a collection of conductor surfaces constituting a Paul trap. This file contains a user-specifiable parameter ELCNT that may be used to set the number of electrodes; to create a mesh for an 8-electrode geometry, we say

```bash
% gmsh -2 -setnumber ELCNT 4 Trap.geo -o Trap_4.msh
```

(Note that the total number of electrodes is twice the value specified for ELCNT.) This produces the gmsh mesh file Trap_4.msh, which we can open in gmsh to visualize:

```bash
% gmsh Trap_4.msh
```

## Simple scuff-em geometry file

The gmsh file Trap.geo is designed to ensure that each separate metallic strip in the geometry---including each of the 8 identically-shaped electrodes, plus each of the 7 strips of varying thicknesses running down the center of the structure---is meshed as a separate entity and assigned a unique (integer) identifier. Thus, one way to write a scuff-em geometry file for this geometry would be simply to include each of the 15 distinct surfaces in OBJECT...ENDOBJECT clauses, each clause referencing a unique entity in the mesh.
This strategy is pursued by the file Trap_4.scuffgeo, which looks like this:

```
OBJECT GND      MESHFILE Trap_4.msh MESHTAG 1  ENDOBJECT
OBJECT Rot2     MESHFILE Trap_4.msh MESHTAG 2  ENDOBJECT
OBJECT Rot3     MESHFILE Trap_4.msh MESHTAG 3  ENDOBJECT
OBJECT RF       MESHFILE Trap_4.msh MESHTAG 4  ENDOBJECT
OBJECT Rot1     MESHFILE Trap_4.msh MESHTAG 5  ENDOBJECT
OBJECT Rot4     MESHFILE Trap_4.msh MESHTAG 6  ENDOBJECT
OBJECT UpperDC1 MESHFILE Trap_4.msh MESHTAG 7  ENDOBJECT
OBJECT LowerDC1 MESHFILE Trap_4.msh MESHTAG 8  ENDOBJECT
OBJECT UpperDC2 MESHFILE Trap_4.msh MESHTAG 9  ENDOBJECT
OBJECT LowerDC2 MESHFILE Trap_4.msh MESHTAG 10 ENDOBJECT
OBJECT UpperDC3 MESHFILE Trap_4.msh MESHTAG 11 ENDOBJECT
OBJECT LowerDC3 MESHFILE Trap_4.msh MESHTAG 12 ENDOBJECT
OBJECT UpperDC4 MESHFILE Trap_4.msh MESHTAG 13 ENDOBJECT
OBJECT LowerDC4 MESHFILE Trap_4.msh MESHTAG 14 ENDOBJECT
```

Note that, although each of the OBJECT clauses references the same mesh file, the different values of the MESHTAG field select distinct entities within that file, so that each of the OBJECTs is treated by scuff-em as a distinct entity. (The values of the MESHTAG identifiers are defined in .geo files by gmsh's Physical Surface construct; see Trap.geo for an example.)

## Improved scuff-em geometry file

The file Trap_4.scuffgeo above defines a perfectly workable scuff-em geometry, and running calculations with this file will yield results identical to those obtained below. However, the strategy pursued by Trap_4.scuffgeo is not the optimal way to define this geometry to scuff-em, because it ignores significant potential for computational cost savings afforded by the structure of the geometry. Indeed, as we see from the image above, the geometry here contains many copies of identical shapes that are simply rotated and/or translated with respect to one another in space. For geometries of this sort, it is best not to define separate mesh entities for each of the various identical copies of structures, but rather to inform scuff-em of the redundancies that are present so that the code can make maximal reuse of computations carried out for identical structures. More specifically, we will modify the above file as follows:

• Instead of defining each of the 8 electrodes to be a separate entity in the mesh, we will reference just one of the electrode rectangles in the mesh file, together with DISPLACED statements indicating how identical copies of that entity are to be translated in space to define the 8 electrodes in the positions shown above.

• Similarly, instead of defining separate meshed entities for each of the long runners in the center of the geometry, we will take advantage of the 180$^\circ$ rotational symmetry by referencing only one copy of each distinct shape together with ROTATED statements indicating how identical copies of that shape are to be rotated in space to define the desired configuration of the runners.

As a result, scuff-em will need to read and store only 5 distinct entities from the mesh file, together with instructions for displacements and rotations. This is a major reduction in complexity from the 15 distinct mesh structures involved in the simple .scuffgeo file above. (The primary computational efficiency here is that identical mesh structures---independent of displacement or rotation---contribute identical diagonal blocks to the BEM system matrix; if scuff-em knows that an object in a geometry has 7 identical mates, then it need only compute the corresponding matrix block once instead of 8 times, yielding huge cost reductions.
scuff-static also detects and exploits redundancies in off-diagonal matrix blocks.)

The file that implements this improved strategy is Trap_4_Improved.scuffgeo, and it looks like this:

```
OBJECT UpperDC1 MESHFILE Trap_4.msh MESHTAG 7 ENDOBJECT

OBJECT LowerDC1 MESHFILE Trap_4.msh MESHTAG 7
       DISPLACED 0 -1656 0
ENDOBJECT

OBJECT UpperDC2 MESHFILE Trap_4.msh MESHTAG 7
       DISPLACED 220 0 0
ENDOBJECT

OBJECT LowerDC2 MESHFILE Trap_4.msh MESHTAG 7
       DISPLACED 0 -1656 0
       DISPLACED 220 0 0
ENDOBJECT

OBJECT UpperDC3 MESHFILE Trap_4.msh MESHTAG 7
       DISPLACED 440 0 0
ENDOBJECT

OBJECT LowerDC3 MESHFILE Trap_4.msh MESHTAG 7
       DISPLACED 0 -1656 0
       DISPLACED 440 0 0
ENDOBJECT

OBJECT UpperDC4 MESHFILE Trap_4.msh MESHTAG 7
       DISPLACED 660 0 0
ENDOBJECT

OBJECT LowerDC4 MESHFILE Trap_4.msh MESHTAG 7
       DISPLACED 0 -1656 0
       DISPLACED 660 0 0
ENDOBJECT

OBJECT GND  MESHFILE Trap_4.msh MESHTAG 1 ENDOBJECT
OBJECT Rot1 MESHFILE Trap_4.msh MESHTAG 5 ENDOBJECT
OBJECT Rot2 MESHFILE Trap_4.msh MESHTAG 2 ENDOBJECT

OBJECT Rot3 MESHFILE Trap_4.msh MESHTAG 2
       ROTATED 180 ABOUT 0 0 1
ENDOBJECT

OBJECT Rot4 MESHFILE Trap_4.msh MESHTAG 5
       ROTATED 180 ABOUT 0 0 1
ENDOBJECT

OBJECT RF MESHFILE Trap_4.msh MESHTAG 4 ENDOBJECT
```

As anticipated above, note that this file references only 5 distinct MESHTAG values instead of the 15 distinct values referenced by the original Trap_4.scuffgeo file.

### Visually confirming the geometry description

Before proceeding, we should certainly pause to check that the geometry defined by the improved geometry file does indeed look like what we want. We do this by running the scuff-analyze utility with the --WriteGMSHFiles command-line option:

```bash
% scuff-analyze --geometry Trap_4_Improved.scuffgeo --WriteGMSHFiles
```

This produces a file named Trap_4_Improved.pp, which we open in gmsh for visual confirmation:

```bash
% gmsh Trap_4_Improved.pp
```

## Phase 1 calculation: Computing fields of individual conductors

The first phase of our calculation will be to determine the electrostatic field configurations produced by holding each of the individual electrodes at a potential of 1 V with all other electrodes grounded. This will yield 8 distinct field configurations, which we can sample at an arbitrary set of evaluation points or visualize in graphical form; the electrostatic field obtained by driving all conductors with arbitrary specified voltages will be a weighted linear combination of these 8 configurations, so we can use the elemental fields to optimize a set of electrode voltages to yield a given field profile (phase 2, below).

### Running multiple calculations at once: The excitation file

One obvious way to do this calculation would be to run scuff-static 8 times, each time using the --PotFile command-line option to define a different set of conductor potentials. However, such an approach would be inefficient given the structure of the boundary-element method (BEM) implemented by scuff-static. In BEM solvers, almost all of the computational cost goes into assembling and factorizing the system matrix, which knows only about the geometry itself and is independent of any excitation that may furnish the stimulus in an electrostatics problem (such as externally-sourced fields or sets of prescribed conductor potentials). Thus, in cases where we wish to consider the response of a geometry to multiple stimuli, it is efficient to do the calculations all at once; having paid the cost of forming and factorizing the system matrix, we can solve electrostatics problems for any number of distinct stimuli essentially for free.
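To make the "factorize once, solve many times" point concrete, here is a generic dense-linear-algebra sketch in Python/scipy. It illustrates only the principle; the matrix and right-hand sides are random stand-ins, and this is not scuff-em's actual code:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 2000
A = np.random.rand(n, n) + n * np.eye(n)  # stand-in for a BEM system matrix

lu, piv = lu_factor(A)   # the expensive O(n^3) step, done exactly once

# Each new excitation is just a new right-hand side; every additional
# solve costs only O(n^2), which is why 8 excitations are "almost free".
for k in range(8):
    rhs = np.random.rand(n)          # stand-in for one excitation vector
    x = lu_solve((lu, piv), rhs)
```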
To allow this efficiency to be exploited in command-line calculations, scuff-static allows users to specify an excitation file describing one or more stimuli to be applied to the geometry sequentially. For the purposes of our first calculation, the excitation file will specify 8 separate stimuli, each consisting of a choice of one conductor to be held at a potential of 1.0 V (by default, any conductors not specified are maintained at 0 V). This file is called Phase1.Excitations:

```
EXCITATION UpperDC1
    UpperDC1 1.0
ENDEXCITATION

EXCITATION LowerDC1
    LowerDC1 1.0
ENDEXCITATION

EXCITATION UpperDC2
    UpperDC2 1.0
ENDEXCITATION

EXCITATION LowerDC2
    LowerDC2 1.0
ENDEXCITATION

EXCITATION UpperDC3
    UpperDC3 1.0
ENDEXCITATION

EXCITATION LowerDC3
    LowerDC3 1.0
ENDEXCITATION

EXCITATION UpperDC4
    UpperDC4 1.0
ENDEXCITATION

EXCITATION LowerDC4
    LowerDC4 1.0
ENDEXCITATION
```

This file is passed to scuff-em via the --ExcitationFile command-line option. Notice that each EXCITATION is labeled by an arbitrary user-defined tag, which will be used to identify the output produced under that excitation.

Speaking of outputs, we will want to tell scuff-static what we'd like it to compute for each excitation. In this case I'll ask for two types of output:

• numerical values of the electrostatic potential and field at a set of evaluation points I choose; I will choose a line of points lying slightly above the structure and running down the center conductor. I put the coordinates of these points into a text file called MyEPFile and say --EPFile MyEPFile on the scuff-static command line.

• graphical visualization files showing the distribution of surface charge induced on the geometry by each exciting stimulus. (See below for a different type of graphical visualization output.) To request this I use the --PlotFile option to specify the visualization output file name (here I call it MyPlotFile.pp).

Here's a script (Phase1.RunScript) that runs the phase-1 calculation:

```bash
#!/bin/bash
ARGS=""
ARGS="${ARGS} --geometry Trap_4_Improved.scuffgeo"
ARGS="${ARGS} --ExcitationFile Phase1.Excitations"
ARGS="${ARGS} --EPFile MyEPFile"
ARGS="${ARGS} --PlotFile MyPlotFile.pp"
scuff-static ${ARGS}
```

This script takes about 3 seconds to run on my laptop. When it's finished, you have two new output files:

• Trap_4_Improved.MyEPFile.out is a text data file reporting values of the electrostatic potential and field components at each evaluation point in MyEPFile for each excitation.

• MyPlotFile.pp is a gmsh visualization file plotting the induced charge density for each of the 8 excitations. For example, here's what it looks like when the electrode named LowerDC2 is driven:

[Figure: induced surface-charge density with LowerDC2 driven at 1 V.]

## Phase 2 calculation: External sources and field visualization

Having determined the fields produced by each electrode in isolation, in practice we would now presumably do some sort of design calculation to identify the optimal voltages at which to drive each electrode for our desired application. As a follow-up calculation, we'll now do a run in which (a) each conductor is set to a nonzero voltage, (b) additional external field sources are present, and (c) we wish to visualize the electrostatic fields over a region of space.

Items (a) and (b) are handled by writing an excitation file (Phase2.Excitations) that specifies, in addition to prescribed conductor potentials, several external field sources that are also present in the geometry: a point monopole, a point dipole, a constant electric field, and an arbitrary user-specified function.
## Phase 2 calculation: External sources and field visualization

Having determined the fields produced by each electrode in isolation, in practice we would now presumably do some sort of design calculation to identify the optimal voltages at which to drive each electrode for our desired application. As a follow-up calculation, we'll now do a run in which (a) each conductor is set to a nonzero voltage, (b) additional external field sources are present, (c) we wish to visualize the electrostatic fields over a region of space.

Items (a) and (b) are handled by writing an excitation file (Phase2.Excitations) that specifies, in addition to prescribed conductor potentials, several external field sources that are also present in the geometry: a point monopole, a point dipole, a constant electric field, and an arbitrary user-specified function. (Needless to say, this contrived assortment of sources is intended primarily to illustrate the types of external-field sources that may be specified in excitation files.)

```
EXCITATION KitchenSink

# conductor potentials
	UpperDC1  0.5
	LowerDC1 -0.7
	UpperDC2 -0.3
	LowerDC2  0.5
	UpperDC3  0.2
	LowerDC3 -0.4
	UpperDC4 -0.6
	LowerDC4  1.0

# point charge at X=(-400,1000,250) with charge -300
	monopole -400.0 1000.0 250.0 -300

# z-directed point dipole at X=(-300,-1000,-400)
	dipole -300.0 -1000.0 -400.0 0.0 0.0 10000.0

# small constant background field in Z-direction
	constant_field 0 0 1.0e-4

# arbitrary user-specified function of x, y, z, r, Rho, Theta, Phi
	phi 1.0e-8*Rho*Rho*cos(2.0*Phi)

ENDEXCITATION
```

Item (c) is handled by using gmsh to define a field-visualization mesh---in essence, a screen on which we want an image of the electrostatic field configuration, although it need not be planar---together with a set of geometrical transformations specifying how the screen is to be replicated throughout space to yield quasi-3D visual information on the field configuration. In this case, the mesh is described by the simple gmsh geometry file Screen.geo, which we turn into Screen.msh by running gmsh -2 Screen.geo. Then, the transformation file Screen.trans specifies three geometrical transformations in which the screen is rotated and displaced to define the three walls of the diorama shown in the figure below.

The script that runs the calculation is Phase2.RunScript:

```bash
#!/bin/bash
ARGS=""
ARGS="${ARGS} --geometry Trap_4_Improved.scuffgeo"
ARGS="${ARGS} --ExcitationFile Phase2.Excitations"
ARGS="${ARGS} --FVMesh Screen.msh"
ARGS="${ARGS} --FVMeshTransFile Screen.trans"
scuff-static ${ARGS}
```

This produces several files with extension .pp; we open them all simultaneously in gmsh together with the original geometry mesh to get some graphical insight into the spatial variation of the fields in our problem.

```
% gmsh Trap_4*.pp Trap_4.msh
```

(Figure: the electrostatic field configuration visualized on the three diorama walls surrounding the trap.)
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.535478949546814, "perplexity": 2790.1244523252453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301670.75/warc/CC-MAIN-20220120005715-20220120035715-00175.warc.gz"}
https://www.scribd.com/document/133569162/Funky-Math-Physics
Funky Mathematical Physics Concepts
The Anti-Textbook*
A Work In Progress. See physics.ucsd.edu/~emichels for the latest versions of the Funky Series.
By Eric L. Michelsen

(Cover figures: a tensor component sum, T_ijx v_x + T_ijy v_y + T_ijz v_z, a small vector change dR, and a complex-plane diagram marking i and -i on the real and imaginary axes.)

"I study mathematics to learn how to think. I study physics to have something to think about."
"Perhaps the greatest irony of all is not that the square root of two is irrational, but that Pythagoras himself was irrational."

* Physical, conceptual, geometric, and pictorial physics that didn't fit in your textbook.

Please cite as: Michelsen, Eric L., Funky Mathematical Physics Concepts, physics.ucsd.edu/~emichels, 8/1/2012.

2006 values from NIST. For more physical constants, see http://physics.nist.gov/cuu/Constants/ .

Speed of light in vacuum: c = 299 792 458 m s⁻¹ (exact)
Boltzmann constant: k = 1.380 6504(24) × 10⁻²³ J K⁻¹
Stefan-Boltzmann constant: σ = 5.670 400(40) × 10⁻⁸ W m⁻² K⁻⁴ (relative standard uncertainty 7.0 × 10⁻⁶)
Avogadro constant: N_A, L = 6.022 141 79(30) × 10²³ mol⁻¹ (relative standard uncertainty 5.0 × 10⁻⁸)
Molar gas constant: R = 8.314 472(15) J mol⁻¹ K⁻¹
Electron mass: m_e = 9.109 382 15(45) × 10⁻³¹ kg
Proton mass: m_p = 1.672 621 637(83) × 10⁻²⁷ kg
Proton/electron mass ratio: m_p/m_e = 1836.152 672 47(80)
Elementary charge: e = 1.602 176 487(40) × 10⁻¹⁹ C
Electron g-factor: g_e = −2.002 319 304 3622(15)
Proton g-factor: g_p = 5.585 694 713(46)
Neutron g-factor: g_N = −3.826 085 45(90)
Muon mass: m_μ = 1.883 531 30(11) × 10⁻²⁸ kg
Inverse fine structure constant: α⁻¹ = 137.035 999 679(94)
Planck constant: h = 6.626 068 96(33) × 10⁻³⁴ J s
Planck constant over 2π: ħ = 1.054 571 628(53) × 10⁻³⁴ J s
Bohr radius: a₀ = 0.529 177 208 59(36) × 10⁻¹⁰ m
Bohr magneton: μ_B = 927.400 915(23) × 10⁻²⁶ J T⁻¹

Reviews

"... most excellent tensor paper.... I feel I have come to a deep and abiding understanding of relativistic tensors...."
"The best explanation of tensors seen anywhere!" -- physics graduate student

Contents

Introduction ..... 7
  Why Funky? ..... 7
  How to Use This Document ..... 7
  Why Physicists and Mathematicians Dislike Each Other ..... 7
  Thank You ..... 7
  Scope ..... 7
  Notation ..... 8
Random Topics ..... 11
  What's Hyperbolic About Hyperbolic Sine? ..... 11
  Basic Calculus You May Not Know ..... 12
  The Product Rule ..... 14
    Integration By Pictures ..... 14
    Theoretical Importance of IBP ..... 15
  Delta Function Surprise ..... 16
  Spherical Harmonics Are Not Harmonics ..... 18
  The Binomial Theorem for Negative and Fractional Exponents ..... 19
  When Does a Divergent Series Converge? ..... 20
  Algebra Family Tree ..... 21
  Convoluted Thinking ..... 22
Vectors ..... 23
  Small Changes to Vectors ..... 23
  Why (r, θ, φ) Are Not the Components of a Vector ..... 23
  Laplacian's Place ..... 24
  Vector Dot Grad Vector ..... 32
Green's Functions ..... 34
Complex Analytic Functions ..... 46
  Residues ..... 47
  Contour Integrals ..... 48
  Evaluating Integrals ..... 48
    Choosing the Right Path: Which Contour? ..... 50
  Evaluating Infinite Sums ..... 56
  Multi-valued Functions ..... 58
Conceptual Linear Algebra ..... 60
  Matrix Multiplication ..... 60
  Determinants ..... 61
    Cramer's Rule ..... 62
    Area and Volume as a Determinant ..... 63
    The Jacobian Determinant and Change of Variables ..... 64
    Expansion by Cofactors ..... 66
    Proof That the Determinant Is Unique ..... 68
    Getting Determined ..... 69
    Getting to Home Basis ..... 70
    Contraction of Matrices ..... 73
    Trace of a Product of Matrices ..... 73
  Linear Algebra Briefs ..... 74
Probability, Statistics, and Data Analysis ..... 75
  Probability and Random Variables ..... 75
    Precise Statement of the Question Is Critical ..... 76
  How to Lie With Statistics ..... 77
  Choosing Wisely: An Informative Puzzle ..... 77
  Multiple Events ..... 78
    Combining Probabilities ..... 79
    To B, or To Not B? ..... 81
  Continuous Random Variables and Distributions ..... 82
    Population and Samples ..... 83
    Variance ..... 83
    Standard Deviation ..... 84
    New Random Variables From Old Ones ..... 84
  Some Distributions Have Infinite Variance, or Infinite Average ..... 86
  Samples and Parameter Estimation ..... 87
    Combining Estimates of Varying Uncertainty ..... 90
  Statistically Speaking: What Is The Significance of This? ..... 90
    Predictive Power: Another Way to Be Significant, but Not Important ..... 93
  Bias and the 'Hood (Unbiased vs. Maximum-Likelihood Estimators) ..... 94
  Correlation and Dependence ..... 96
  Data Fitting (Curve Fitting) ..... 97
    Goodness of Fit ..... 99
  Fitting To Histograms ..... 103
  Guidance Counselor: Practical Considerations for Computer Code to Fit Data ..... 107
Numerical Analysis ..... 110
  Round-Off Error, And How to Reduce It ..... 110
    How To Extend Precision In Sums Without Using Higher Precision Variables ..... 111
    Numerical Integration ..... 112
    Sequences of Real Numbers ..... 112
  Root Finding ..... 112
    Simple Iteration Equation ..... 112
    Newton-Raphson Iteration ..... 114
  Pseudo-Random Numbers ..... 116
    Generating Gaussian Random Numbers ..... 117
    Generating Poisson Random Numbers ..... 118
    Generating Weirder Random Numbers ..... 119
  Exact Polynomial Fits ..... 119
  Two's Complement Arithmetic ..... 121
  How Many Digits Do I Get, 6 or 9? ..... 122
    How many digits do I need? ..... 123
    How Far Can I Go? ..... 123
  Software Engineering ..... 124
    Object Oriented Programming ..... 124
    The Best of Times, the Worst of Times ..... 125
    Cache Withdrawal: Matrix Multiplication ..... 130
    Cache Summary ..... 132
  IEEE Floating Point Formats And Concepts ..... 132
    Precision in Decimal Representation ..... 140
    Underflow ..... 141
Fourier Transforms and Digital Signal Processing ..... 147
    Model of Digitization ..... 148
    Complex Sequences and Complex Fourier Transform ..... 148
    Sampling ..... 148
    Basis Functions and Orthogonality ..... 150
    Real Sequences ..... 151
    Normalization and Parseval's Theorem ..... 152
    Continuous and Discrete, Finite and Infinite ..... 153
    White Noise and Correlation ..... 153
    Why Oversampling Does Not Improve Signal-to-Noise Ratio ..... 153
    Filters TBS?? ..... 154
  What Happens to a Sine Wave Deferred? ..... 154
  Nonuniform Sampling and Arbitrary Basis Functions ..... 156
  Two Dimensional Fourier Transforms ..... 158
  Note on Continuous Fourier Series and Uniform Convergence ..... 158
Tensors, Without the Tension ..... 160
    Approach ..... 160
  Two Physical Examples ..... 160
    Magnetic Susceptibility ..... 160
    Mechanical Strain ..... 164
    When Is a Matrix Not a Tensor? ..... 166
    Heading In the Right Direction ..... 166
  Some Definitions and Review ..... 166
    Vector Space Summary ..... 167
    When Vectors Collide ..... 168
    "Tensors" vs. "Symbols" ..... 169
    Notational Nightmare ..... 169
  Tensors? What Good Are They? ..... 169
    A Short, Complicated Definition ..... 169
  Building a Tensor ..... 170
  Tensors in Action ..... 171
    Tensor Fields ..... 172
    Dot Products and Cross Products as Tensors ..... 172
    The Danger of Matrices ..... 174
    Reading Tensor Component Equations ..... 174
    Adding, Subtracting, Differentiating Tensors ..... 175
  Higher Rank Tensors ..... 175
    Tensors In General ..... 177
  Change of Basis: Transformations ..... 177
    Matrix View of Basis Transformation ..... 179
  Non-Orthonormal Systems: Contravariance and Covariance ..... 179
    What Goes Up Can Go Down: Duality of Contravariant and Covariant Vectors ..... 182
    The Real Summation Convention ..... 183
    Transformation of Covariant Indexes ..... 183
  Indefinite Metrics: Relativity ..... 183
  Is a Transformation Matrix a Tensor? ..... 184
  How About the Pauli Vector? ..... 184
  Cartesian Tensors ..... 185
  The Real Reason Why the Kronecker Delta Is Symmetric ..... 186
  Tensor Appendices ..... 186
    Pythagorean Relation for 1-forms ..... 186
    Geometric Construction Of The Sum Of Two 1-Forms ..... 187
    "Fully Anti-symmetric" Symbols Expanded ..... 188
  Metric? We Don't Need No Stinking Metric! ..... 189
  References ..... 191
Differential Geometry ..... 192
  Manifolds ..... 192
    Coordinate Bases ..... 192
  Covariant Derivatives ..... 194
  Christoffel Symbols ..... 196
  Visualization of n-Forms ..... 197
  Review of Wedge Products and Exterior Derivative ..... 197
    1-D ..... 197
    2-D ..... 197
    3-D ..... 198
Math Tricks ..... 199
  Math Tricks That Come Up A Lot ..... 199
    The Gaussian Integral ..... 199
  Math Tricks That Are Fun and Interesting ..... 199
  Phasors ..... 200
  Future Funky Mathematical Physics Topics ..... 200
Appendices ..... 201
  References ..... 201
  Glossary ..... 201
  Formulas ..... 205
  Index ..... 205

(Figure: a unit-circle diagram with points O, A, B, C, D, showing the line segments that represent sin a, cos a, tan a, cot a, sec a, and csc a.)

From triangle OAD: sin a = opp/hyp, and sin²a + cos²a = 1.
From OAB: tan a = opp/adj, tan²a + 1 = sec²a, and (with OAD) tan a = sin a / cos a, sec a = hyp/adj = 1/cos a.
From OAC: cot a = adj/opp, cot²a + 1 = csc²a, and (with OAD) cot a = cos a / sin a, csc a = hyp/opp = 1/sin a.

Introduction

Why Funky?

The purpose of the Funky series of documents is to help develop an accurate physical, conceptual, geometric, and pictorial understanding of important physics topics. We focus on areas that don't seem to be covered well in most texts. The Funky series attempts to clarify those neglected concepts, and others that seem likely to be challenging and unexpected (funky?). The Funky documents are intended for serious students of physics; they are not popularizations or oversimplifications. Physics includes math, and we're not shy about it, but we also don't hide behind it. Without a conceptual understanding, math is gibberish.

See physics.ucsd.edu/~emichels for the latest versions of the Funky Series, and for contact information. We're looking for feedback, so please let us know what you think.

How to Use This Document

This work is not a text book. There are plenty of those, and they cover most of the topics quite well. This work is meant to be used with a standard text, to help emphasize those things that are most confusing for new students. When standard presentations don't make sense, come here. You should read all of this introduction to familiarize yourself with the notation and contents. After that, this work is meant to be read in the order that most suits you. Each section stands largely alone, though the sections are ordered logically. Simpler material generally appears before more advanced topics. You may read it from beginning to end, or skip around to whatever topic is most interesting. The Shorts chapter is a diverse set of very short topics, meant for quick reading.

If you don't understand something, read it again once, then keep reading. Don't get stuck on one thing. Often, the following discussion will clarify things. The index is not yet developed, so go to the web page on the front cover, and text-search in this document.

Why Physicists and Mathematicians Dislike Each Other

Physics goals and mathematics goals are antithetical. Physics seeks to ascribe meaning to mathematics that describe the world, to understand it, physically. Mathematics seeks to strip the equations of all physical meaning, and view them in purely abstract terms. These divergent goals set up a natural conflict between the two camps. Each goal has its merits: the value of physics is (or should be) self-evident; the value of mathematical abstraction, separate from any single application, is generality: the results can be applied to a wide range of applications.

Thank You

I owe a big thank you to many professors at both SDSU and UCSD, for their generosity even when I wasn't a real student: Dr. Herbert Shore, Dr. Peter Salamon, Dr. Arlette Baljon, Dr.
Andrew Cooksy, Dr. George Fuller, Dr. Tom O'Neil, Dr. Terry Hwa, and others.

Scope

What This Text Covers

This text covers some of the unusual or challenging concepts in graduate mathematical physics. It is also very suitable for upper-division undergraduate level, as well. We expect that you are taking or have taken such a course, and have a good text book. Funky Mathematical Physics Concepts supplements those other sources.

What This Text Doesn't Cover

This text is not a mathematical physics course in itself, nor a review of such a course. We do not cover all basic mathematical concepts; only those that are very important, unusual, or especially challenging (funky?). This text assumes you understand basic integral and differential calculus, and partial differential equations. Further, it assumes you have a mathematical physics text for the bulk of your studies, and are using Funky Mathematical Physics Concepts to supplement it.

Notation

Sometimes the variables are inadvertently not written in italics, but I hope the meanings are clear.

?? refers to places that need more work.
TBS: To be supplied (one hopes) in the future.

Interesting points that you may skip are "asides," shown in smaller font and narrowed margins. Notes to myself may also be included as asides.

Common misconceptions are sometimes written in dark red dashed-line boxes.

Formulas: We write the integral over the entire domain as a subscript ∞, for any number of dimensions:

$$\text{1-D:}\quad \int_\infty dx \qquad\qquad \text{3-D:}\quad \int_\infty d^3x$$

Evaluation between limits: we use the notation [function]_a^b to denote the evaluation of the function between a and b, i.e., [f(x)]_a^b ≡ f(b) − f(a). For example,

$$\int_0^1 3x^2\,dx = \left[x^3\right]_0^1 = 1^3 - 0^3 = 1$$

We write the probability of an event as Pr(event).

Column vectors: Since it takes a lot of room to write column vectors, but it is often important to distinguish between column and row vectors, I sometimes save vertical space by using the fact that a column vector is the transpose of a row vector:

$$(a, b, c, d)^T = \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}$$

For Greek letters, pronunciations, and use, see Funky Quantum Concepts.

Other math symbols:

Symbol   Definition
∀        for all
∃        there exists
∋        such that
iff      if and only if
∝        proportional to.
         (E.g., a ∝ b means a is proportional to b.)
⊥        perpendicular to
∴        therefore
~        of the order of (sometimes used imprecisely as "approximately equals")
≡        is defined as; identically equal to (i.e., equal in all cases)
⇒        implies
⊗        tensor product, aka outer product
⊕        direct sum

In mostly older texts, German type (font: Fraktur) is used to provide still more variable names:

Latin   Fraktur capital   Fraktur lowercase   Notes
A       𝔄                 𝔞                   Distinguish capital from U, V
B       𝔅                 𝔟
C       ℭ                 𝔠                   Distinguish capital from E, G
D       𝔇                 𝔡                   Distinguish capital from O, Q
E       𝔈                 𝔢                   Distinguish capital from C, G
F       𝔉                 𝔣
G       𝔊                 𝔤                   Distinguish capital from C, E
H       ℌ                 𝔥
I       ℑ                 𝔦                   Capital almost identical to J
J       𝔍                 𝔧                   Capital almost identical to I
K       𝔎                 𝔨
L       𝔏                 𝔩
M       𝔐                 𝔪                   Distinguish capital from W
N       𝔑                 𝔫
O       𝔒                 𝔬                   Distinguish capital from D, Q
P       𝔓                 𝔭
Q       𝔔                 𝔮                   Distinguish capital from D, O
R       ℜ                 𝔯                   Distinguish lowercase from x
S       𝔖                 𝔰                   Distinguish capital from C, G, E
T       𝔗                 𝔱                   Distinguish capital from I
U       𝔘                 𝔲                   Distinguish capital from A, V
V       𝔙                 𝔳                   Distinguish capital from A, U
W       𝔚                 𝔴                   Distinguish capital from M
X       𝔛                 𝔵                   Distinguish lowercase from r
Y       𝔜                 𝔶
Z       ℨ                 𝔷

Random Topics

What's Hyperbolic About Hyperbolic Sine?

(Figure: left, the unit circle x² + y² = 1, where the angle a subtends the shaded area a/2; right, the unit hyperbola x² − y² = 1 and the line y = x, where the shaded area a/2 is bounded by the x-axis, the ray from the origin, and the hyperbola, with x = cosh a and y = sinh a.)

From where do the hyperbolic trigonometric functions get their names? By analogy with the circular functions. We usually think of the argument of circular functions as an angle, a. But in a unit circle, the area covered by the angle a is a/2 (above left):

$$\text{area} = \frac{a}{2\pi}\,\pi r^2 = \frac{a}{2} \qquad (r = 1)$$

Instead of the unit circle, x² + y² = 1, we can consider the area bounded by the x-axis, the ray from the origin, and the unit hyperbola, x² − y² = 1 (above right). Then the x and y coordinates on the curve are called the hyperbolic cosine and hyperbolic sine, respectively. Notice that the hyperbola equation implies the well-known hyperbolic identity:

$$x = \cosh a,\quad y = \sinh a,\quad x^2 - y^2 = 1 \quad\Rightarrow\quad \cosh^2 a - \sinh^2 a = 1$$

Proving that the area bounded by the x-axis, ray, and hyperbola satisfies the standard definition of the hyperbolic functions requires evaluating an elementary, but tedious, integral (?? is the following right?). The bounded area is the triangle minus the area under the hyperbola:

$$\frac{a}{2} = \text{area} = \frac{xy}{2} - \int_1^x \sqrt{x'^2 - 1}\;dx'$$

For the integral, one can substitute x' = sec θ (so dx' = sec θ tan θ dθ), integrate by parts, and eventually revert to sines and cosines; the result is the antiderivative

$$\int \sqrt{x'^2 - 1}\;dx' = \frac{1}{2}\left[x'\sqrt{x'^2 - 1} - \ln\left(x' + \sqrt{x'^2 - 1}\right)\right]$$
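As a numerical sanity check on the area relation (a sketch of mine using SciPy, not part of the original text): with x = cosh a and y = sinh a, the shaded area xy/2 − ∫₁ˣ √(t² − 1) dt indeed comes out to a/2.

```python
import numpy as np
from scipy.integrate import quad

a = 1.7                        # any sample value of the hyperbolic "angle"
x, y = np.cosh(a), np.sinh(a)

# area between the x-axis, the ray from the origin to (x, y), and the hyperbola
integral, _ = quad(lambda t: np.sqrt(t*t - 1.0), 1.0, x)
area = 0.5*x*y - integral

print(area, a/2)               # both print 0.85, i.e. a/2
```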
Evaluating this between the limits 1 and x, and using y = √(x² − 1), the xy/2 terms cancel:

$$\frac{a}{2} = \frac{xy}{2} - \left[\frac{xy}{2} - \frac{1}{2}\ln\left(x + \sqrt{x^2 - 1}\right)\right] = \frac{1}{2}\ln(x + y)
\qquad\Rightarrow\qquad e^a = x + y = x + \sqrt{x^2 - 1}$$

Solve for x in terms of a, by squaring both sides:

$$\left(e^a - x\right)^2 = x^2 - 1 \;\Rightarrow\; e^{2a} - 2xe^a + x^2 = x^2 - 1 \;\Rightarrow\; e^{2a} + 1 = 2xe^a
\;\Rightarrow\; x = \frac{e^a + e^{-a}}{2} \equiv \cosh a$$

The definition for sinh follows immediately from cosh² a − sinh² a = 1:

$$y^2 = x^2 - 1 = \frac{e^{2a} + 2 + e^{-2a}}{4} - 1 = \frac{e^{2a} - 2 + e^{-2a}}{4}
\qquad\Rightarrow\qquad y = \frac{e^a - e^{-a}}{2} \equiv \sinh a$$

Basic Calculus You May Not Know

Amazingly, many calculus courses never provide a precise definition of a "limit," despite the fact that both of the fundamental concepts of calculus, derivatives and integrals, are defined as limits! So here we go:

Basic calculus relies on 4 major concepts:

1. Functions
2. Limits
3. Derivatives
4. Integrals

1. Functions: Briefly, (in real analysis) a function takes one or more real values as inputs, and produces one or more real values as outputs. The inputs to a function are called the arguments. The simplest case is a real-valued function of a real-valued argument, e.g., f(x) = sin x. Mathematicians would write f: ℝ¹ → ℝ¹, read "f is a map (or function) from the real numbers to the real numbers." A function which produces more than one output may be considered a vector-valued function.

2. Limits: Definition of "limit" (for a real-valued function of a single argument, f: ℝ¹ → ℝ¹):

L is the limit of f(x) as x approaches a, iff for every ε > 0, there exists a δ (> 0) such that |f(x) − L| < ε whenever 0 < |x − a| < δ. In symbols:

$$L = \lim_{x\to a} f(x) \quad\text{iff}\quad \forall\,\varepsilon > 0,\ \exists\,\delta > 0 \ \text{such that}\ |f(x) - L| < \varepsilon \ \text{whenever}\ 0 < |x - a| < \delta$$

This says that the value of the function at a doesn't matter; in fact, most often the function is not defined at a. However, the behavior of the function near a is important. If you can make the function arbitrarily close to some number, L, by restricting the function's argument to a small neighborhood around a, then L is the limit of f as x approaches a. Surprisingly, this definition also applies to complex functions of complex variables, where the absolute value is the usual complex magnitude.

Example: Show that $\lim_{x\to 1} \dfrac{2x^2 - 2}{x - 1} = 4$.

Solution: We prove the existence of δ given any ε by computing the necessary δ from ε. Note that for $x \ne 1$, $\dfrac{2x^2 - 2}{x - 1} = 2(x + 1)$. Since we don't care what the function is at x = 1, we can use the simplified form, 2(x + 1). When x = 1, this is 4, so we suspect the limit = 4. Proof:

$$|2(x+1) - 4| < \varepsilon \quad\Leftrightarrow\quad 2\,|x - 1| < \varepsilon \quad\Leftrightarrow\quad 1 - \frac{\varepsilon}{2} < x < 1 + \frac{\varepsilon}{2}$$

So by setting δ = ε/2, we construct the required δ for any given ε. Hence, for every ε, there exists a δ satisfying the definition of a limit.

3. Derivatives: Only now that we have defined a limit, can we define a derivative:

$$f'(x) \equiv \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$$

4. Integrals: A simplified definition of an integral is an infinite sum of areas under a function divided into equal subintervals:

$$\int_a^b f(x)\,dx \equiv \lim_{N\to\infty} \sum_{i=1}^{N} f\!\left(a + i\,\frac{b-a}{N}\right)\frac{b-a}{N} \qquad \text{(simplified definition)}$$

For practical physics, this definition would be fine.
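Here is a small numerical sketch (mine, not the original's) of the simplified definition at work, reusing the ∫₀¹ 3x² dx = 1 example from the Notation section: the equal-subinterval sums approach the exact value as N grows.

```python
import numpy as np

def riemann(f, a, b, N):
    """Simplified definition: N equal slices, sampled at the right endpoints."""
    i = np.arange(1, N + 1)
    dx = (b - a) / N
    return np.sum(f(a + i*dx)) * dx

for N in (10, 100, 1000, 10000):
    print(N, riemann(lambda x: 3*x**2, 0.0, 1.0, N))
# 1.155, 1.01505, 1.0015..., 1.00015...: approaching [x^3] from 0 to 1 = 1
```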
For mathematical preciseness, the actual definition of an integral is the limit over any possible set of subintervals, so long as the maximum of the subinterval size goes to zero. This is called "the norm of the subdivision," written as ||Δx_i||:

$$\int_a^b f(x)\,dx \equiv \lim_{\|\Delta x_i\| \to 0} \sum_i f(x_i)\,\Delta x_i \qquad \text{(precise definition)}$$

(Figures: left, the simplified definition of an integral as the limit of a sum of equally spaced samples; right, the precise definition, which requires convergence for arbitrary, but small, subdivisions.)

Why do mathematicians require this more precise definition? It's to avoid bizarre functions, such as: f(x) is 1 if x is rational, and zero if irrational. This means f(x) toggles wildly between 1 and 0 an infinite number of times over any interval. However, with the simplified definition of an integral, the following is well defined:

$$\int_0^{3.14} f(x)\,dx = 3.14 \qquad\text{but}\qquad \int_0^{\pi} f(x)\,dx = 0 \qquad \text{(with simplified definition of integral)}$$

But properly, and with the precise definition of an integral, both integrals are undefined and do not exist.

The Product Rule

Given functions U(x) and V(x), the product rule (aka the Leibniz rule) says that for differentials,

$$d(UV) = U\,dV + V\,dU$$

This leads to integration by parts, which is mostly known as an integration tool, but it is also an important theoretical (analytic) tool, and the essence of Legendre transformations.

Integration By Pictures

We assume you are familiar with integration by parts (IBP) as a tool for performing indefinite integrals, usually written as:

$$\int U\,dV = UV - \int V\,dU \qquad\text{which really means}\qquad \int U(x)\,V'(x)\,dx = U(x)V(x) - \int V(x)\,U'(x)\,dx$$

This comes directly from the product rule above: U dV = d(UV) − V dU, and integrate both sides. Note that x is the integration variable (not U or V), and x is also the parameter to the functions U(x) and V(x).
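As a quick symbolic check of this formula (my sketch, with the arbitrary choices U = x and V = sin x on [0, π]), both sides of the IBP identity agree:

```python
import sympy as sp

x = sp.symbols('x')
U, V = x, sp.sin(x)            # so dV = cos(x) dx, dU = dx

lhs = sp.integrate(U * sp.diff(V, x), (x, 0, sp.pi))           # integral of U dV
rhs = (U*V).subs(x, sp.pi) - (U*V).subs(x, 0) \
      - sp.integrate(V * sp.diff(U, x), (x, 0, sp.pi))         # [UV] minus integral of V dU
print(lhs, rhs)                # both print -2
```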
(Figures: three cases of integration by parts, plotted in the (U, V) plane with x as the parameter along the integration path: left, U(x) and V(x) both increasing; middle, V(x) decreasing to 0; right, V(x) progressing from zero, to a finite maximum, and back to zero.)

The diagram above illustrates IBP in three cases. The left is the simplest case, where U(x) and V(x) are monotonically increasing functions of x (note that x is not an axis; U and V are the axes, but x is the integration parameter). IBP says

$$\int_{x=a}^{b} U\,dV = \underbrace{\Big[U(x)\,V(x)\Big]_{x=a}^{x=b}}_{\text{boundary term}} - \int_{x=a}^{b} V\,dU = U(b)V(b) - U(a)V(a) - \int_{x=a}^{b} V\,dU$$

The LHS (left hand side) of the equation is the red shaded area; the term in brackets on the right is the big rectangle minus the white rectangle; the last term is the blue shaded area. The left diagram illustrates IBP visually as areas. The term in brackets is called the boundary term (or surface term), because in some applications, it represents the part of the integral corresponding to the boundary (or surface) of the region of integration.

The middle diagram illustrates another common case: that in which the surface term UV is zero. In this case, UV = 0 at x = a and x = b, because U(a) = 0 and V(b) = 0. The shaded area is the integral, but the path of integration means that dU > 0, but dV < 0. Therefore ∫V dU > 0, but ∫U dV < 0.

The right diagram shows the case where one of U(x) or V(x) starts and ends at 0. For illustration, we chose V(a) = V(b) = 0. Then the surface term is zero, and we have:

$$\Big[U(x)\,V(x)\Big]_{x=a}^{x=b} = 0 \qquad\Rightarrow\qquad \int_{x=a}^{b} U\,dV = -\int_{x=a}^{b} V\,dU$$

For V(x) to start and end at zero, V(x) must grow with x to some maximum, V_max, and then decrease back to 0. For simplicity, we assume U(x) is always increasing. The ∫V dU integral is the blue striped area below the curve; the ∫U dV integral is the area to the left of the curves. We break the dV integral into two parts: path 1, leading up to V_max, and path 2, going back down from V_max to zero. The integral from 0 to V_max (path 1) is the red striped area; the integral from V_max back down to 0 (path 2) is the negative of the entire (blue + red) striped area. Then the blue shaded region is the difference: (1) the (red) area to the left of path 1 (where dV is positive, because V(x) is increasing), minus (2) the (blue + red) area to the left of path 2, because dV is negative when V(x) is decreasing:

$$\int U\,dV = \int_{\text{path 1}} U\,dV + \int_{\text{path 2}} U\,dV = \int_{V=0}^{V_{\max}} U\,dV\bigg|_{\text{path 1}} + \int_{V=V_{\max}}^{0} U\,dV\bigg|_{\text{path 2}} = -\int_{x=a}^{b} V\,dU$$

Theoretical Importance of IBP

Besides being an integration tool, an important theoretical consequence of IBP is that the variable of integration is changed, from dV to dU. Many times, one differential is unknown, but the other is known:

Under an integral, integration by parts allows one to exchange a derivative that cannot be directly evaluated, even in principle, in favor of one that can.

The classic example of this is deriving the Euler-Lagrange equations of motion from the principle of stationary action. The action of a dynamic system is defined by

$$S \equiv \int L\big(q(t), \dot q(t)\big)\,dt$$

where the lagrangian is a given function of the trajectory q(t). Stationary action means that the action does not change (to first order) for small changes in the trajectory. I.e., given a small variation in the trajectory, δq(t):

$$\delta S = 0 = \int \left[\frac{\partial L}{\partial q}\,\delta q + \frac{\partial L}{\partial \dot q}\,\delta \dot q\right] dt, \qquad\text{using}\quad \delta \dot q = \frac{d}{dt}\,\delta q$$

The quantity in brackets involves both δq(t) and its time derivative, δq-dot. We are free to vary δq(t) arbitrarily, but that fully determines δq-dot. We cannot vary both δq and δq-dot separately. We also know that δq(t) = 0 at its endpoints, but δq-dot is unconstrained at its endpoints. Therefore, it would be simpler if the quantity in brackets were written entirely in terms of δq(t), and not in terms of δq-dot. IBP allows us to eliminate the time derivative of δq(t) in favor of the time derivative of ∂L/∂q-dot. Since L(q, q-dot) is given, we can easily determine ∂L/∂q-dot. Therefore, this is a good trade. Integrating the 2nd term in brackets by parts gives:

$$\text{Let}\quad U = \frac{\partial L}{\partial \dot q},\quad dU = \left(\frac{d}{dt}\frac{\partial L}{\partial \dot q}\right)dt; \qquad dV = \left(\frac{d}{dt}\,\delta q\right)dt,\quad V = \delta q$$

$$\int \frac{\partial L}{\partial \dot q}\,\frac{d}{dt}\delta q\;dt = \underbrace{\left[\frac{\partial L}{\partial \dot q}\,\delta q\right]}_{UV} - \int \left(\frac{d}{dt}\frac{\partial L}{\partial \dot q}\right)\delta q\;dt$$

The boundary term is zero because δq(t) is zero at both endpoints. The variation in action δS is now:

$$\delta S = 0 = \int \left[\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q}\right]\delta q\;dt$$

The only way δS = 0 can be satisfied for any δq(t) is if the quantity in brackets is identically 0. Thus IBP has led us to an important theoretical conclusion: the Euler-Lagrange equation of motion. This fundamental result has nothing to do with evaluating a specific difficult integral. IBP: it's not just for doing hard integrals any more.
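To see the same bookkeeping carried out mechanically, here is a short SymPy sketch (my illustration, not from the original) applying the Euler-Lagrange recipe to the harmonic-oscillator Lagrangian L = ½m q̇² − ½k q²:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qdot = sp.diff(q, t)

L = sp.Rational(1, 2)*m*qdot**2 - sp.Rational(1, 2)*k*q**2

# Euler-Lagrange: d/dt (dL/d qdot) - dL/dq = 0
eom = sp.diff(sp.diff(L, qdot), t) - sp.diff(L, q)
print(sp.Eq(eom, 0))       # k*q(t) + m*Derivative(q(t), (t, 2)) = 0, i.e. m qddot = -k q
```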
Delta Function Surprise

Rarely, one needs to consider the 3D δ-function in coordinates other than rectangular. The 3D δ-function is written δ³(r − r′). For example, in 3D Green's functions, whose definition depends on a δ³-function, it may be convenient to use cylindrical or spherical coordinates. In these cases, there are some unexpected consequences [Wyl p280]. This section assumes you understand the basic principle of a 1D and 3D δ-function. (See the introduction to the delta function in Funky Quantum Concepts.) Recall the defining property of δ³(r − r′):

$$\int_\infty d^3r\;\delta^3(\mathbf r - \mathbf r') = 1 \quad \forall\,\mathbf r' \qquad\Rightarrow\qquad \int_\infty d^3r\; f(\mathbf r)\,\delta^3(\mathbf r - \mathbf r') = f(\mathbf r')$$

The above definition is coordinate-free, i.e. it makes no reference to any choice of coordinates, and is true in every coordinate system. As with Green's functions, it is often helpful to think of the δ-function as a function of r, which is zero everywhere except for an impulse located at r′. As we will see, this means that it is properly a function of r and r′ separately, and should be written as δ³(r, r′) (like Green's functions are).

Rectangular coordinates: In rectangular coordinates, however, we now show that we can simply break up δ³ into 3 components. By writing δ³(r − r′) in rectangular coordinates, and using the defining integral above, we get:

$$\int dx\,dy\,dz\;\delta^3(x - x',\, y - y',\, z - z') = 1 \quad\Rightarrow\quad \delta^3(x - x',\, y - y',\, z - z') = \delta(x - x')\,\delta(y - y')\,\delta(z - z')$$

In rectangular coordinates, the above shows that we do have translation invariance, so we can simply write:

$$\delta^3(x, y, z) = \delta(x)\,\delta(y)\,\delta(z)$$

In other coordinates, we do not have translation invariance. Recall the 3D infinitesimal volume element in 4 different systems: coordinate-free, rectangular, cylindrical, and spherical coordinates:

$$d^3\mathbf r = dx\,dy\,dz = r\,dr\,d\phi\,dz = r^2\sin\theta\,dr\,d\theta\,d\phi$$

The presence of r and θ imply that when writing the 3D δ-function in non-rectangular coordinates, we must include a pre-factor to maintain the defining integral = 1. We now show this explicitly.

Cylindrical coordinates: In cylindrical coordinates, for r′ > 0, we have (using the imprecise notation of [Wyl p280]):

$$\int_0^\infty dr \int_0^{2\pi} d\phi \int_{-\infty}^{\infty} dz\; r\;\delta^3(r - r',\, \phi - \phi',\, z - z') = 1
\;\Rightarrow\; \delta^3(r - r',\, \phi - \phi',\, z - z') = \frac{1}{r'}\,\delta(r - r')\,\delta(\phi - \phi')\,\delta(z - z'), \quad r' > 0$$

Note the 1/r′ pre-factor on the RHS. This may seem unexpected, because the pre-factor depends on the location of δ³( ) in space (hence, no radial translation invariance). The rectangular coordinate version of δ³( ) has no such pre-factor. Properly speaking, δ³( ) isn't a function of r − r′; it is a function of r and r′ separately.

In non-rectangular coordinates, δ³( ) does not have translation invariance, and includes a pre-factor which depends on the position of δ³( ) in space, i.e. depends on r′.

At r′ = 0, the pre-factor blows up, so we need a different pre-factor. We'd like the defining integral to be 1, regardless of φ′, since all values of φ are equivalent at the origin. This means we must drop the δ(φ − φ′), and replace the pre-factor to cancel the constant we get when we integrate out φ:

$$\int_0^\infty dr \int_0^{2\pi} d\phi \int_{-\infty}^{\infty} dz\; r\;\delta^3(r - 0,\, z - z') = 1
\;\Rightarrow\; \delta^3(r - 0,\, z - z') = \frac{1}{2\pi r}\,\delta(r - 0)\,\delta(z - z'), \quad r' = 0, \qquad \text{assuming}\ \int_0^\infty dr\,\delta(r) = 1$$

This last assumption is somewhat unusual, because the δ-function is usually thought of as symmetric about 0, where the above radial integral would only be 1/2. The assumption implies a right-sided δ-function, whose entire non-zero part is located at 0⁺. Furthermore, notice the factor of 1/r in δ³(r − 0, z − z′). This factor blows up at r = 0, and has no effect when r ≠ 0. Nonetheless, it is needed because the volume element r dr dφ dz goes to zero as r → 0, and the 1/r in δ³(r − 0, z − z′) compensates for that.
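Here is a quick numerical illustration (mine, not the original's) of why the 1/r′ pre-factor is required: modeling δ(r − r′) as a narrow normalized Gaussian, the cylindrical radial integral ∫ r dr picks up a factor of r′ that the pre-factor must cancel.

```python
import numpy as np

eps = 0.01                                  # width of the nascent delta function
rp = 2.5                                    # r', the impulse location (away from the origin)
r = np.linspace(0.0, 10.0, 200001)          # fine radial grid
dr = r[1] - r[0]
delta = np.exp(-(r - rp)**2/(2*eps**2)) / (eps*np.sqrt(2*np.pi))   # ~ delta(r - r')

print(np.sum(r*delta/rp)*dr)   # ~1.0  with the 1/r' pre-factor
print(np.sum(r*delta)*dr)      # ~2.5  without it, the integral is r', not 1
```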
Spherical coordinates: In spherical coordinates, we have similar considerations. First, away from the origin, r′ > 0:

$$\int_0^\infty dr \int_0^{\pi} d\theta \int_0^{2\pi} d\phi\; r^2\sin\theta\;\delta^3(r - r',\, \theta - \theta',\, \phi - \phi') = 1$$

$$\Rightarrow\quad \delta^3(r - r',\, \theta - \theta',\, \phi - \phi') = \frac{1}{r'^2\sin\theta'}\,\delta(r - r')\,\delta(\theta - \theta')\,\delta(\phi - \phi'), \quad r' > 0 \qquad \text{[Wyl 8.9.2 p280]}$$

Again, the pre-factor depends on the position in space, and properly speaking, δ³( ) is a function of r, r′, θ, and θ′ separately, not simply a function of r − r′ and θ − θ′. At the origin, we'd like the defining integral to be 1, regardless of φ′ or θ′. So we drop the δ(φ − φ′) δ(θ − θ′), and replace the pre-factor to cancel the constant we get when we integrate out φ and θ:

$$\int_0^\infty dr \int_0^{\pi} d\theta \int_0^{2\pi} d\phi\; r^2\sin\theta\;\delta^3(r - 0) = 1
\quad\Rightarrow\quad \delta^3(r - 0) = \frac{1}{4\pi r^2}\,\delta(r - 0), \quad r' = 0, \qquad \text{assuming}\ \int_0^\infty dr\,\delta(r) = 1$$

Again, this definition uses the modified δ(r), whose entire non-zero part is located at 0⁺. And similar to the cylindrical case, this includes the 1/r² factor to preserve the integral at r′ = 0.

2D angular coordinates: For 2D angular coordinates θ and φ, we have:

$$\int_0^{\pi} d\theta \int_0^{2\pi} d\phi\; \sin\theta\;\delta^2(\theta - \theta',\, \phi - \phi') = 1
\quad\Rightarrow\quad \delta^2(\theta - \theta',\, \phi - \phi') = \frac{1}{\sin\theta'}\,\delta(\theta - \theta')\,\delta(\phi - \phi'), \quad \theta' > 0$$

Once again, we have a special case when θ′ = 0: we must have the defining integral be 1 for any value of φ. Hence, we again compensate for the 2π from the φ integral:

$$\delta^2(\theta - 0) = \frac{1}{2\pi\sin\theta}\,\delta(\theta - 0), \quad \theta' = 0$$

Similar to the cylindrical and spherical cases, this includes a 1/(sin θ) factor to preserve the integral at θ′ = 0.

Spherical Harmonics Are Not Harmonics

See Funky Electromagnetic Concepts for a full discussion of harmonics, Laplace's equation, and its solutions in 1, 2, and 3 dimensions. Here is a brief overview.

Spherical harmonics are the angular parts of solid harmonics, but we will show that they are not truly harmonics. A harmonic is a function which satisfies Laplace's equation:

$$\nabla^2 \Phi(\mathbf r) = 0$$

with r typically in 2 or 3 dimensions. Solid harmonics are 3D harmonics: they solve Laplace's equation in 3 dimensions. For example, one form of solid harmonics separates into a product of 3 functions in spherical coordinates:

$$\Phi(r, \theta, \phi) = R_l(r)\,P_l^m(\cos\theta)\,Q_m(\phi)$$

where R_l(r) = A r^l + B r^{−(l+1)} is the radial part, P_l^m(cos θ) is the polar angle part (the associated Legendre functions), and Q_m(φ) = C sin mφ + D cos mφ is the azimuthal part.

The spherical harmonics are just the angular (θ, φ) parts of these solid harmonics. But notice that the angular part alone does not satisfy the 2D Laplace equation (i.e., on a sphere of fixed radius):

$$\nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2},
\quad\text{but for fixed } r = 1:\quad
\nabla^2 = \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}$$

But direct substitution of spherical harmonics into the above Laplace operator shows that the result is not 0 (we let r = 1). We proceed in small steps:

$$Q_m(\phi) = C\sin m\phi + D\cos m\phi \qquad\Rightarrow\qquad \frac{\partial^2}{\partial\phi^2}\,Q_m(\phi) = -m^2\,Q_m(\phi)$$

For integer m, the associated Legendre functions, P_l^m(cos θ), satisfy, for given l and m:

$$\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}P_l^m(\cos\theta)\right) = \left(-l(l+1) + \frac{m^2}{\sin^2\theta}\right)P_l^m(\cos\theta)$$

Combining these 2 results (r = 1):

$$\nabla^2\!\left(P_l^m\,Q_m\right) = \left(-l(l+1) + \frac{m^2}{\sin^2\theta}\right)P_l^m\,Q_m - \frac{m^2}{\sin^2\theta}\,P_l^m\,Q_m = -l(l+1)\,P_l^m\,Q_m$$

Hence, the spherical harmonics are not solutions of Laplace's equation, i.e. they are not harmonics.
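This eigenvalue can be confirmed symbolically; the following SymPy sketch (my illustration) applies the fixed-radius angular Laplacian to the spherical harmonic Y₂¹ and recovers −l(l + 1) = −6, not 0:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
l, m = 2, 1                                    # any sample l, m
Y = sp.Ynm(l, m, theta, phi).expand(func=True) # explicit form of the spherical harmonic

# angular part of the Laplacian (r = 1)
lap = (sp.diff(sp.sin(theta)*sp.diff(Y, theta), theta)/sp.sin(theta)
       + sp.diff(Y, phi, 2)/sp.sin(theta)**2)

print(sp.simplify(lap/Y))                      # -6, i.e. -l(l+1)
```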
The Binomial Theorem for Negative and Fractional Exponents

You may be familiar with the binomial theorem for positive integer exponents, but it is very useful to know that the binomial theorem also works for negative and fractional exponents. We can use this fact to easily find series expansions for things like $\frac{1}{1-x}$ and $\sqrt{1+x} = (1+x)^{1/2}$. First, let's review the simple case of positive integer exponents:

$$(a+b)^n = a^n b^0 + \frac{n}{1}\,a^{n-1}b^1 + \frac{n(n-1)}{1\cdot 2}\,a^{n-2}b^2 + \frac{n(n-1)(n-2)}{1\cdot 2\cdot 3}\,a^{n-3}b^3 + \cdots + \frac{n!}{n!}\,a^0 b^n$$

[For completeness, we note that we can write the general form of the m-th term:

$$m^{\text{th}}\ \text{term} = \frac{n!}{(n-m)!\,m!}\,a^{n-m}b^m, \qquad n\ \text{integer} \ge 0;\quad m\ \text{integer},\ 0 \le m \le n.]$$

But we're much more interested in the iterative procedure (recursion relation) for finding the (m + 1)-th term from the m-th term, because we use that to generate a power series expansion. The process is this:

1. The first term (m = 0) is always aⁿb⁰ = aⁿ, with an implicit coefficient C₀ = 1.
2. To find C_{m+1}, multiply C_m by the power of a in the m-th term, (n − m),
3. divide it by (m + 1), [the number of the new term we're finding]:

$$C_{m+1} = C_m\,\frac{(n-m)}{(m+1)}$$

4. lower the power of a by 1 (to n − m − 1), and
5. raise the power of b by 1 (to m + 1).

This procedure is valid for all n, even negative and fractional n. A simple way to remember this is:

For any real n, we generate the (m+1)-th term from the m-th term by differentiating with respect to a, and integrating with respect to b.

The general expansion, for any n, is then:

$$m^{\text{th}}\ \text{term} = \frac{n(n-1)\cdots(n-m+1)}{m!}\,a^{n-m}b^m, \qquad n\ \text{real};\quad m\ \text{integer} \ge 0$$

Notice that for integer n > 0, there are n + 1 terms. For fractional or negative n, we get an infinite series.

Example 1: Find the Taylor series expansion of $\frac{1}{1-x}$. Since the Taylor series is unique, any method we use to find a power series expansion will give us the Taylor series. So we can use the binomial theorem, and apply the rules above, with a = 1, b = (−x):

$$\frac{1}{1-x} = (1-x)^{-1} = 1 + \frac{-1}{1}(-x) + \frac{(-1)(-2)}{1\cdot 2}(-x)^2 + \frac{(-1)(-2)(-3)}{1\cdot 2\cdot 3}(-x)^3 + \cdots = 1 + x + x^2 + x^3 + \cdots + x^m + \cdots$$

Notice that all the fractions, all the powers of 1, and all the minus signs cancel.

Example 2: Find the Taylor series expansion of $\sqrt{1+x} = (1+x)^{1/2}$. The first term is $a^{1/2} = 1^{1/2}$:

$$(1+x)^{1/2} = 1 + \frac{1/2}{1}\,x + \frac{(1/2)(-1/2)}{1\cdot 2}\,x^2 + \frac{(1/2)(-1/2)(-3/2)}{1\cdot 2\cdot 3}\,x^3 + \cdots$$

$$= 1 + \frac{x}{2} - \frac{x^2}{8} + \frac{x^3}{16} - \cdots + (-1)^{m+1}\,\frac{(2m-3)!!}{2^m\,m!}\,x^m + \cdots \qquad\text{where}\quad p!! \equiv p(p-2)(p-4)\cdots(2\ \text{or}\ 1)$$
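The recursion relation above is easy to turn into code; here is a small sketch (mine) that generates the series coefficients of (1 + x)ⁿ for any real n, reproducing both examples:

```python
from fractions import Fraction

def binomial_series_coeffs(n, num_terms):
    """Coefficients of (1 + x)^n via the recursion C[m+1] = C[m]*(n - m)/(m + 1),
    valid for negative and fractional n as well as positive integers."""
    C = Fraction(1)
    coeffs = []
    for m in range(num_terms):
        coeffs.append(C)
        C = C * (Fraction(n) - m) / (m + 1)
    return coeffs

print(binomial_series_coeffs(Fraction(1, 2), 5))
# [1, 1/2, -1/8, 1/16, -5/128]  -- the sqrt(1+x) series above
print(binomial_series_coeffs(-1, 5))
# [1, -1, 1, -1, 1]             -- gives 1/(1+x); substitute x -> -x for 1/(1-x)
```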
When Does a Divergent Series Converge?

Consider the infinite series

    1 + x + x^2 + \cdots + x^n + \cdots

When is it convergent? Apparently, when |x| < 1. What is the value of the series when x = 2? "Undefined!" you say. But there is a very important sense in which the series converges for x = 2, and its value is −1! How so? Recall the Taylor expansion (you can use the binomial theorem, see above):

    \frac{1}{1-x} = (1-x)^{-1} = 1 + x + x^2 + \cdots + x^n + \cdots

It is exactly the original infinite series above. So the series sums to 1/(1−x). This is defined for all x ≠ 1. And its value for x = 2 is −1.

Why is this important? There are cases in physics when we use perturbation theory to find an expansion of a number in an infinite series. Sometimes, the series appears to diverge. But by finding the analytic expression corresponding to the series, we can evaluate the analytic expression at values of x that make the series diverge. In many cases, the analytic expression provides an important and meaningful answer to a perturbation problem. This happens in quantum mechanics, and quantum field theory. This is an example of analytic continuation.

A Taylor series is a special case of a Laurent series, and any function with a Laurent expansion is analytic. If we know the Laurent series (or if we know the values of an analytic function and all its derivatives at any one point), then we know the function everywhere, even for complex values of x. The original series is analytic around x = 0, therefore it is analytic everywhere it converges (everywhere it is defined). The process of extending a function which is defined in some small region to be defined in a much larger (even complex) region is called analytic continuation (see Complex Analysis, discussed elsewhere in this document).

TBS: show that the sum of the integers 1 + 2 + 3 + ... = −1/12. ??
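A tiny numerical illustration of the point (my own, not from the text): at x = 2 the partial sums blow up, while the analytic expression the series came from remains perfectly well defined:

    # Partial sums of 1 + x + x^2 + ... at x = 2 diverge; 1/(1-x) assigns -1.
    x = 2
    partial, term = 0, 1
    for n in range(10):
        partial += term
        term *= x
    print(partial)          # 1023: partial sums grow without bound
    print(1 / (1 - x))      # -1.0: the analytic continuation of the series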
Algebra Family Tree

group
    Properties: Finite or infinite set of elements and an operator (·), with closure, associativity, an identity element, and inverses. Possibly commutative: ab = c, with a, b, c group elements.
    Examples: rotations of a square by n × 90°; continuous rotations of an object.

ring
    Properties: Set of elements and 2 binary operators (+ and *), with: commutative group under +; left and right distributivity: a(b + c) = ab + ac, (a + b)c = ac + bc; usually multiplicative associativity.
    Examples: integers mod m; polynomials p(x) mod m(x).

integral domain, or domain
    Properties: A ring, with: commutative multiplication; a multiplicative identity (but no inverses); no zero divisors (⇒ cancellation is valid): ab = 0 only if a = 0 or b = 0.
    Examples: integers; polynomials, even abstract polynomials, with abstract variable x, and coefficients from a field.

field
    Properties: A ring with multiplicative inverses (& identity): a commutative group (excluding 0) under multiplication; distributivity; multiplicative inverses. Allows solving simultaneous linear equations. A field can be finite or infinite.
    Examples: integers with arithmetic modulo 3 (or any prime); real numbers; complex numbers.

vector space
    Properties: A field of scalars, plus a group of vectors under +. Allows solving simultaneous vector equations for unknown scalars or vectors. Finite or infinite dimensional.
    Examples: physical vectors; real or complex functions of space: f(x, y, z); kets (and bras).

Hilbert space
    Properties: A vector space over the field of complex numbers, with a conjugate-bilinear inner product: <av|bw> = (a*)b<v|w>, <v|w> = <w|v>*, where a, b are scalars, and v, w are vectors. Mathematicians require it to be infinite dimensional; physicists don't.
    Examples: real or complex functions of space: f(x, y, z); quantum mechanical wave functions.

Convoluted Thinking

Convolution arises in many physics, engineering, statistics, and other mathematical areas.

[Figure: two functions, f(t) and g(t). (Left) (f*g)(t0), t0 < 0. (Middle) (f*g)(t1), t1 > 0. (Right) (f*g)(t2), t2 > t1. The convolution is the magenta shaded area.]

Given two functions, f(t) and g(t), the convolution of f(t) and g(t) is a function of a time-displacement, defined by (see diagram above):

    (f * g)(t) \equiv \int_{\Delta} d\tau\; f(\tau)\, g(t - \tau), \qquad \text{where the integral covers some domain of interest } \Delta .

When t < 0, the two functions are "backing into each other" (above left). When t > 0, the two functions are "backing away from each other" (above middle and right).

Of course, we don't require functions of time. Convolution is useful with a variety of independent variables. E.g., for functions of space, f(x) and g(x), (f*g)(x) is a function of the spatial displacement x.

Notice that convolution is (1) commutative:

    f * g = g * f

and (2) linear in each of the two functions:

    f * (kg) = k(f * g) = (kf) * g
    \qquad\text{and}\qquad
    f * (g + h) = f * g + f * h .

The verb "to convolve" means "to form the convolution of." We convolve f and g to form the convolution f*g.
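A quick numerical check of these properties on sampled functions (my own sketch, not from the text), using numpy's discrete convolution as a stand-in for the integral:

    import numpy as np

    dt = 0.01
    t = np.arange(0, 2, dt)
    f = np.exp(-t)                        # sample f(t) = e^(-t)
    g = np.where(t < 0.5, 1.0, 0.0)       # sample g(t) = rectangular pulse

    fg = np.convolve(f, g) * dt           # (f*g)(t); dt converts the sum to an integral
    gf = np.convolve(g, f) * dt           # (g*f)(t)
    print(np.allclose(fg, gf))            # True: convolution is commutative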
Vectors

Small Changes to Vectors

Projection of a Small Change to a Vector Onto the Vector

[Figure: (Left) A small change to a vector, and its projection onto the vector. (Right) Approximate magnitude of the difference between a big and small vector.]

It is sometimes useful (in orbital mechanics, for example) to relate the change in a vector to the change in the vector's magnitude. The diagram above (left) leads to a somewhat unexpected result:

    d|\mathbf{r}| = d\mathbf{r} \cdot \hat{\mathbf{r}}
    \qquad\text{or, multiplying both sides by } |\mathbf{r}| \text{ and using } \mathbf{r} = |\mathbf{r}|\,\hat{\mathbf{r}}:\qquad
    \mathbf{r} \cdot d\mathbf{r} = |\mathbf{r}|\; d|\mathbf{r}| .

And since this is true for any small change, it is also true for any rate of change (just divide by dt):

    \mathbf{r} \cdot \dot{\mathbf{r}} = |\mathbf{r}|\, \frac{d|\mathbf{r}|}{dt} .

Vector Difference Approximation

It is sometimes useful to approximate the magnitude of a large vector minus a small one. (In electromagnetics, for example, this is used to compute the far-field from a small charge or current distribution.) The diagram above (right) shows that:

    |\mathbf{r} - \mathbf{r}'| \approx |\mathbf{r}| - \hat{\mathbf{r}} \cdot \mathbf{r}', \qquad |\mathbf{r}| \gg |\mathbf{r}'| .
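Both identities are easy to sanity-check numerically (my own sketch, not from the text):

    import numpy as np

    r  = np.array([3.0, 4.0, 0.0])          # |r| = 5
    dr = np.array([1e-6, -2e-6, 3e-6])      # a tiny change
    r_hat = r / np.linalg.norm(r)

    lhs = np.linalg.norm(r + dr) - np.linalg.norm(r)   # d|r|
    rhs = np.dot(dr, r_hat)                            # dr . r_hat
    print(lhs, rhs)                          # agree to first order in |dr|

    rp = np.array([0.01, 0.02, -0.01])       # small vector r'
    print(np.linalg.norm(r - rp),             # exact |r - r'|
          np.linalg.norm(r) - np.dot(r_hat, rp))   # far-field approximation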
Why (r, θ, φ) Are Not the Components of a Vector

(r, θ, φ) are parameters of a vector, but not components. That is, the parameters (r, θ, φ) uniquely define the vector, but they are not components, because you can't add them. This is important in much physics, e.g. involving magnetic dipoles (ref Jac problem on mag dipole field). Components of a vector are defined as coefficients of basis vectors. For example, the components v = (x, y, z) can multiply the basis vectors to construct v:

    \mathbf{v} = x\,\hat{\mathbf{x}} + y\,\hat{\mathbf{y}} + z\,\hat{\mathbf{z}} .

There is no similar equation we can write to construct v from its spherical parameters (r, θ, φ). Position vectors are displacements from the origin, and there are no \hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}} defined at the origin.

Put another way, you can always add the components of two vectors to get the vector sum:

    \text{Let } \mathbf{w} = (a, b, c) \text{ in rectangular components. Then }
    \mathbf{v} + \mathbf{w} = (x+a)\,\hat{\mathbf{x}} + (y+b)\,\hat{\mathbf{y}} + (z+c)\,\hat{\mathbf{z}} .

We can't do this in spherical coordinates:

    \text{Let } \mathbf{w} = (w_r, w_\theta, w_\phi) \text{ in spherical components. Then }
    \mathbf{v} + \mathbf{w} \ne (v_r + w_r)\,\hat{\mathbf{r}} + (v_\theta + w_\theta)\,\hat{\boldsymbol{\theta}} + (v_\phi + w_\phi)\,\hat{\boldsymbol{\phi}} .

However, at a point off the origin, the basis vectors r̂, θ̂, φ̂ are well defined, and can be used as a basis for general vectors. [In differential geometry, vectors referenced to a point in space are called tangent vectors, because they are tangent to the space, in a higher dimensional sense. See Differential Geometry elsewhere in this document.]

Laplacian's Place

What is the physical meaning of the Laplacian operator? And how can I remember the Laplacian operator in any coordinates? These questions are related, because understanding the physical meaning allows you to quickly derive in your head the Laplacian operator in any of the common coordinates. Let's take a step-by-step look at the action of the Laplacian, first in 1D, then on a 3D differential volume element, with physical examples at each step. After rectangular, we go to spherical coordinates, because they illustrate all the principles involved. Finally, we apply the concepts to cylindrical coordinates, as well. We follow this outline:

1. Overview of the Laplacian operator
2. 1D examples of heat flow
3. 3D heat flow in rectangular coordinates
4. Examples of physical scalar fields [temperature, pressure, electric potential (2 ways)]
5. 3D differential volume elements in other coordinates
6. Description of the physical meaning of Laplacian operator terms, such as

    \frac{\partial^2 T}{\partial r^2}, \qquad \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right), \qquad \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial T}{\partial r}\right), \qquad \ldots

Overview of the Laplacian operator: Let the Laplacian act on a scalar field T(r), a physical function of space, e.g. temperature. Usually, the Laplacian represents the net outflow per unit volume of some physical quantity: something/volume, e.g., something/m³. The Laplacian operator itself involves spatial second-derivatives, and so carries units of inverse area, say m⁻².

1D Example: Heat Flow: Consider a temperature gradient along a line. It could be a perpendicular wire through the wall of a refrigerator (below left). It is a 1D system, i.e. only the gradient along the wire matters.

[Figure: (Left) A passive metal wire through a refrigerator wall, with temperature vs. x. (Right) A current-carrying wire through the same wall, with heat flow into the refrigerator and into the room.]

Let the left and right sides of the wire be in thermal equilibrium with the refrigerator and room, at 2 °C and 27 °C, respectively. The wire is passive, and can neither generate nor dissipate heat; it can only conduct it. Let the 1D thermal conductivity be k = 100 mW·cm/°C. Consider the part of the wire inside the insulated wall, 4 cm thick. How much heat (power, J/s or W) flows through the wire?

    P = k\,\frac{dT}{dx} = \left(100\ \text{mW·cm/°C}\right)\frac{25\ \text{°C}}{4\ \text{cm}} = 625\ \text{mW} .

There is no heat generated or dissipated in the wire, so the heat that flows into the right side of any segment of the wire (differential or finite) must later flow out the left side. Thus, the heat flow must be constant along the wire. Since heat flow is proportional to dT/dx, dT/dx must be constant, and the temperature profile is linear. In other words: (1) since no heat is created or lost in the wire, heat-in = heat-out; (2) but heat flow ∝ dT/dx; so (3) the change in the temperature gradient is zero:

    \frac{d}{dx}\left(\frac{dT}{dx}\right) = \frac{d^2 T}{dx^2} = 0 .

(At the edges of the wall, the 1D approximation breaks down, and the inevitable nonlinearity of the temperature profile in the x direction is offset by heat flow out the sides of the wire.)

Now consider a current-carrying wire which generates heat all along its length from its resistance (diagram above, right). The heat that flows into the wire from the room is added to the heat generated in the wire, and the sum of the two flows into the refrigerator. The heat generated in a length dx of wire is

    P_{gen} = I^2 \rho\, dx, \qquad \text{where } \rho \equiv \text{resistance per unit length, and } I = const .

In steady state, the net outflow of heat from a segment of wire must equal the heat generated in that segment. In an infinitesimal segment of length dx, we have heat-out = heat-in + heat-generated:

    k\left.\frac{dT}{dx}\right|_{a} = k\left.\frac{dT}{dx}\right|_{a+dx} + I^2\rho\, dx
    \quad\Rightarrow\quad
    -k\,\frac{d}{dx}\left(\frac{dT}{dx}\right) dx = I^2\rho\, dx
    \quad\Rightarrow\quad
    \frac{d^2 T}{dx^2} = -\frac{I^2\rho}{k} .

The negative sign means that when the temperature gradient is positive (increasing to the right), the heat flow is negative (to the left), i.e. the heat flow is opposite the gradient. Many physical systems have a similar negative sign. Thus the 2nd derivative of the temperature is the negative of heat outflow (net inflow) from a segment, per unit length of the segment. Longer segments have more net outflow (generate more heat).
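A short finite-difference sketch (mine, with a made-up source value) of the two 1D cases: the passive wire (T″ = 0) gives a linear profile, while the heated wire (T″ = −I²ρ/k) gives a parabola:

    import numpy as np

    N, L = 101, 4.0                      # grid points; wall thickness in cm
    x = np.linspace(0, L, N)
    dx = x[1] - x[0]
    k, source = 100.0, 50.0              # k in mW*cm/degC; source = I^2*rho (assumed value)

    # Solve -k T'' = source with T(0) = 2, T(L) = 27 by a direct linear solve.
    A = np.zeros((N, N)); b = np.zeros(N)
    A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = 2.0, 27.0
    for i in range(1, N - 1):
        A[i, i-1:i+2] = [-k/dx**2, 2*k/dx**2, -k/dx**2]
        b[i] = source
    T = np.linalg.solve(A, b)
    print(np.polyfit(x, T, 2)[0])        # -0.25 = -source/(2k): parabolic profile
    # With source = 0, the fitted curvature is ~0, i.e. a linear profile.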
3D Rectangular Volume Element

Now consider a 3D bulk resistive material, carrying some current. The current generates heat in each volume element of material. Consider the heat flow in the x direction, with this volume element:

[Figure: a differential box with sides dx, dy, dz; the outflow surface area is the same as the inflow area.]

The temperature gradient normal to the y-z face drives a heat flow per unit area, in W/m². For a net flow to the right, the temperature gradient must be increasing in magnitude (becoming more negative) as we move to the right. The change in gradient is proportional to dx, and the heat outflow is proportional to the area, and the change in gradient:

    P_{out} - P_{in} = -k\,\frac{d}{dx}\left(\frac{dT}{dx}\right) dx\; dy\, dz
    \quad\Rightarrow\quad
    \frac{P_{out} - P_{in}}{dx\, dy\, dz} = -k\,\frac{d^2 T}{dx^2} .

Thus the net heat outflow per unit volume, due to the x contribution, goes like the 2nd derivative of T. Clearly, a similar argument applies to the y and z directions, each also contributing net heat outflow per unit volume. Therefore, the total heat outflow per unit volume from all 3 directions is simply the sum of the heat flows in each direction:

    \frac{P_{out} - P_{in}}{dx\, dy\, dz} = -k\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} + \frac{\partial^2 T}{\partial z^2}\right) .

We see that in all cases,

    the net outflow of flux per unit volume = change in (flux per unit area), per unit distance.

We will use this fact to derive the Laplacian operator in spherical and cylindrical coordinates.

General Laplacian

We now generalize. For the Laplacian to mean anything, it must act on a scalar field whose gradient drives a flow of some physical thing.

Example 1: My favorite is T(r) = temperature. Then ∇T(r) drives heat (energy) flow, heat per unit time, per unit area:

    \mathbf{q} = -k\,\nabla T \qquad \text{where } k \equiv \text{thermal conductivity}, \quad \mathbf{q} \equiv \text{heat flow vector, } \frac{\text{heat}/t}{\text{area}} .
    \qquad\text{Then } q_r \sim \frac{\partial T}{\partial r} \equiv \text{radial component of heat flow.}

Example 2: T(r) = pressure of an incompressible viscous fluid (e.g. honey). Then ∇T(r) drives fluid mass (or volume) flow, mass per unit time, per unit area:

    \mathbf{j} = -k\,\nabla T \qquad \text{where } k \equiv \text{some viscous friction coefficient}, \quad \mathbf{j} \equiv \text{mass flow density vector, } \frac{\text{mass}/t}{\text{area}} .
    \qquad\text{Then } j_r \sim \frac{\partial T}{\partial r} \equiv \text{radial component of mass flow.}

Example 3: T(r) = electric potential in a resistive material. Then ∇T(r) drives charge flow, charge per unit time, per unit area:

    \mathbf{j} = -\sigma\,\nabla T \qquad \text{where } \sigma \equiv \text{electrical conductivity}, \quad \mathbf{j} \equiv \text{current density vector, } \frac{\text{charge}/t}{\text{area}} .
    \qquad\text{Then } j_r \sim \frac{\partial T}{\partial r} \equiv \text{radial component of current density.}

Example 4: Here we abstract a little more, to add meaning to the common equations of electromagnetics. Let T(r) = electric potential in a vacuum. Then ∇T(r) measures the energy per unit distance, per unit area, required to push a fixed charge density ρ through a surface, by a distance of dn, normal to the surface:

    \rho\,\nabla T(\mathbf{r}) \qquad \text{where } \rho \equiv \text{electric charge volume density}, \quad \text{units } \frac{\text{energy}/\text{distance}}{\text{area}} .

Then ρ ∂T/∂r ~ net energy per unit radius, per unit area, to push charges of density ρ out the same distance through both surfaces.

In the first 3 examples, we use the word "flow" to mean the flow in time of some physical quantity, per unit area. In the last example, the "flow" is energy expenditure per unit distance, per unit area. The requirement of "per unit area" is essential, as we soon show.

Laplacian In Spherical Coordinates

To understand the Laplacian operator terms in other coordinates, we need to take into account two effects:

1. The outflow surface area may be different than the inflow surface area.
2. The derivatives with respect to angles (θ or φ) need to be converted to rate-of-change per unit distance.

We'll see how these two effects come into play as we develop the spherical terms of the Laplacian operator. The cylindrical terms are simplifications of the spherical terms.

Spherical radial contribution: We first consider the radial contribution to the spherical Laplacian operator, from this volume element:

[Figure: spherical volume element of radial thickness dr subtending solid angle dΩ = sinθ dθ dφ; the outflow surface area is differentially larger than the inflow area.]

The differential volume element has thickness dr, which can be made arbitrarily small compared to the lengths of the sides. The inner surface of the element has area r² dΩ. The outer surface has infinitesimally more area. Thus the radial contribution includes the surface-area effect, but not the converting-derivatives effect.

The increased area of the outflow surface means that for the same flux-density (flow) on inner and outer surfaces, there would be a net outflow of flux, since flux = (flux-density)(area). Therefore, we must take the derivative of the flux itself, not the flux density, and then convert the result back to per-unit-volume. We do this in 3 steps:

    \text{flux} = (\text{flux-density})(\text{area}) = \left(\frac{\partial T}{\partial r}\right) r^2\, d\Omega

    d(\text{flux}) = \frac{\partial}{\partial r}\left(r^2\, d\Omega\; \frac{\partial T}{\partial r}\right) dr

    \frac{\text{outflow}}{\text{volume}} = \frac{d(\text{flux})}{(r^2\, d\Omega)\, dr} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\,\frac{\partial T}{\partial r}\right) .

The constant dΩ factor from the area cancels when converting to flux, and back to flux-density. In other words, we can think of the fluxes as per-steradian.
We summarize the stages of the spherical radial Laplacian operator as follows:

    \nabla_r^2\, T(\mathbf{r}) = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\,\frac{\partial T}{\partial r}\right)

where, reading from the inside out:
    ∂T/∂r = flow per unit area;
    r² ∂T/∂r = radial flux per unit solid-angle;
    ∂/∂r (r² ∂T/∂r) = change in radial flux per unit length, per unit solid-angle (positive is increasing flux);
    (1/r²) ∂/∂r (r² ∂T/∂r) = change in radial flux per unit length, per unit area = net outflow of flux per unit volume.

Following the steps in the example of heat flow, let T(r) = temperature. Then:
    −k ∂T/∂r = radial heat flow per unit area, W/m²;
    −k r² ∂T/∂r = radial heat flux per unit solid-angle;
    −k ∂/∂r (r² ∂T/∂r) = change in radial heat flux per unit length, per unit solid-angle;
    −k (1/r²) ∂/∂r (r² ∂T/∂r) = net outflow of heat flux per unit volume.

Spherical azimuthal contribution: The spherical φ contribution to the Laplacian has no area-change, but does require converting derivatives. Consider the volume element:

[Figure: spherical volume element of angular width dφ; the outflow surface area is identical to the inflow area.]

The inflow and outflow surface areas are the same, and therefore area-change contributes nothing to the derivatives. However, we must convert the derivatives with respect to φ into rates-of-change with respect to distance, because physically, the flow is driven by a derivative with respect to distance. In the spherical φ case, the effective radius for the arc-length along the flow is r sin θ, because we must project the position vector into the plane of rotation. Thus, (∂/∂φ) is the rate-of-change per (r sin θ) meters. Therefore,

    \frac{1}{r\sin\theta}\frac{\partial}{\partial \phi} = \text{rate-of-change per meter} .

Performing the two derivative conversions, we get

    \nabla_\phi^2\, T(\mathbf{r}) = \frac{1}{r\sin\theta}\frac{\partial}{\partial \phi}\left(\frac{1}{r\sin\theta}\frac{\partial T}{\partial \phi}\right) = \frac{1}{r^2\sin^2\theta}\frac{\partial^2 T}{\partial \phi^2} ,

where, reading from the inside out: (1/(r sin θ)) ∂T/∂φ is the azimuthal flux per unit area; ∂/∂φ of that is the change in (azimuthal flux per unit area) per radian; and the outer 1/(r sin θ) converts that to a change per unit distance, i.e. the net azimuthal outflow of flux per unit volume. Notice that the r² sin²θ in the denominator is not a physical area; it comes from two derivative conversions.

Spherical polar angle contribution:

[Figure: spherical volume element of angular width dθ; the outflow surface area is differentially larger than the inflow area.]

The volume element is like a wedge of an orange: it gets wider (in the northern hemisphere) as θ increases. Therefore the outflow area is differentially larger than the inflow area (in the northern hemisphere). In particular, the area carries a factor of sin θ, but we only need to keep the θ dependence, because the factors of r cancel, just like dΩ did in the spherical radial contribution. So we have

    \text{area} \propto \sin\theta .

In addition, we must convert the ∂/∂θ to a rate-of-change with distance. Thus the spherical polar angle contribution has both area-change and derivative-conversion.
Following the steps of converting to flux, taking the derivative, then converting back to flux-density, we get

    \nabla_\theta^2\, T(\mathbf{r}) = \frac{1}{r\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\;\frac{1}{r}\frac{\partial T}{\partial \theta}\right) = \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\,\frac{\partial T}{\partial \theta}\right) ,

where, reading from the inside out: (1/r) ∂T/∂θ is the θ-flux per unit area; multiplying by the area factor sin θ converts it to a flux; ∂/∂θ gives the change in θ-flux per radian; and the outer 1/(r sin θ) converts back to per-unit-distance and per-unit-area, giving the net outflow of flux per unit volume. Notice that the r² in the denominator is not a physical area; it comes from two derivative conversions.

Cylindrical Coordinates

The cylindrical terms are simplifications of the spherical terms.

[Figure: cylindrical volume elements. For the dr element, the outflow surface area is differentially larger than the inflow area; for the r dφ and dz elements, the outflow surface areas are identical to the inflow areas.]

Cylindrical radial contribution: The picture of the cylindrical radial contribution is essentially the same as the spherical, but the height of the slab is exactly constant. We still face the issues of varying inflow and outflow surface areas, and converting derivatives to rate of change per unit distance. The change in area is due only to the arc length r dφ, with the z (height) fixed. Thus we write the radial result directly:

    \nabla_r^2\, T(\mathbf{r}) = \frac{1}{r}\frac{\partial}{\partial r}\left(r\,\frac{\partial T}{\partial r}\right) \qquad \text{(cylindrical coordinates)}

where, reading from the inside out: ∂T/∂r is the flow per unit area; r ∂T/∂r is the radial flux per unit (dφ dz); ∂/∂r (r ∂T/∂r) is the change in radial flux per unit length; and the outer 1/r converts back to flux-density, giving the net outflow of flux per unit volume.

Cylindrical azimuthal contribution: Like the spherical case, the inflow and outflow surfaces have identical areas. Therefore, the φ contribution is similar to the spherical case, except there is no sin θ factor; r contributes directly to the arc-length and rate-of-change per unit distance:

    \nabla_\phi^2\, T(\mathbf{r}) = \frac{1}{r}\frac{\partial}{\partial \phi}\left(\frac{1}{r}\frac{\partial T}{\partial \phi}\right) = \frac{1}{r^2}\frac{\partial^2 T}{\partial \phi^2} ,

where (1/r) ∂T/∂φ is the azimuthal flux per unit area; ∂/∂φ of that is the change in (azimuthal flux per unit area) per radian; and the outer 1/r converts that to a change per unit distance, i.e. the net azimuthal outflow of flux per unit volume.

Cylindrical z contribution: This is identical to the rectangular case: the inflow and outflow areas are the same, and the derivative is already per unit distance, ergo: (add cylindrical volume element picture??)

    \nabla_z^2\, T(\mathbf{r}) = \frac{\partial}{\partial z}\left(\frac{\partial T}{\partial z}\right) = \frac{\partial^2 T}{\partial z^2} ,

where ∂T/∂z is the vertical flux per unit area, and ∂²T/∂z² is the change in (vertical flux per unit area) per unit distance, i.e. the net outflow of flux per unit volume.
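As a check that the spherical terms assembled above really are the Laplacian, here is a small symbolic sketch (my own, not from the text). It applies the three derived terms to a solid harmonic r^l Y_lm, which, per the earlier section, should be annihilated by the full Laplacian:

    from sympy import symbols, sin, diff, simplify, Ynm

    r, th, ph = symbols('r theta phi', positive=True)
    T = r**2 * Ynm(2, 1, th, ph).expand(func=True)    # solid harmonic, l = 2, m = 1

    lap = (diff(r**2 * diff(T, r), r) / r**2                       # radial term
           + diff(sin(th) * diff(T, th), th) / (r**2 * sin(th))    # polar term
           + diff(T, ph, 2) / (r**2 * sin(th)**2))                 # azimuthal term

    print(simplify(lap))   # 0: the full spherical Laplacian annihilates solid harmonics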
Vector Dot Grad Vector

In electromagnetic propagation, and elsewhere, one encounters the dot-product of a vector field with the gradient operator, acting on a vector field. What is this (v·∇) operator? Here, v(r) is a given vector field. The simple view is that v(r)·∇ is just a notational shorthand for

    \mathbf{v}(\mathbf{r}) \cdot \nabla \equiv (v_x, v_y, v_z) \cdot \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right)
    = v_x \frac{\partial}{\partial x} + v_y \frac{\partial}{\partial y} + v_z \frac{\partial}{\partial z} ,

by the usual rules for a dot-product in rectangular coordinates.

There is a deeper meaning, though, which is an important bridge to the topics of tensors and differential geometry. We can view the v·∇ operator as simply the dot-product of the vector field v(r) with the gradient of a vector field. You may think of the gradient operator as acting on a scalar field, to produce a vector field. But the gradient operator can also act on a vector field, to produce a tensor field. Here's how it works: You are probably familiar with derivatives of a vector field:

    \text{Let } \mathbf{A}(x, y, z) \text{ be a vector field. Then }
    \frac{\partial \mathbf{A}}{\partial x} = \frac{\partial A_x}{\partial x}\hat{\mathbf{x}} + \frac{\partial A_y}{\partial x}\hat{\mathbf{y}} + \frac{\partial A_z}{\partial x}\hat{\mathbf{z}} \text{ is a vector field.}

Writing spatial vectors as column vectors,

    \frac{\partial \mathbf{A}}{\partial x} = \begin{pmatrix} \partial A_x/\partial x \\ \partial A_y/\partial x \\ \partial A_z/\partial x \end{pmatrix} .

Similarly, ∂A/∂y and ∂A/∂z are also vector fields. By the rule for total derivatives, for a small displacement (dx, dy, dz),

    d\mathbf{A} = \begin{pmatrix} dA_x \\ dA_y \\ dA_z \end{pmatrix}
    = \frac{\partial \mathbf{A}}{\partial x}\,dx + \frac{\partial \mathbf{A}}{\partial y}\,dy + \frac{\partial \mathbf{A}}{\partial z}\,dz
    = \begin{pmatrix}
      \partial A_x/\partial x & \partial A_x/\partial y & \partial A_x/\partial z \\
      \partial A_y/\partial x & \partial A_y/\partial y & \partial A_y/\partial z \\
      \partial A_z/\partial x & \partial A_z/\partial y & \partial A_z/\partial z
      \end{pmatrix}
      \begin{pmatrix} dx \\ dy \\ dz \end{pmatrix} .

This says that the vector dA is a linear combination of 3 column vectors ∂A/∂x, ∂A/∂y, and ∂A/∂z, weighted respectively by the displacements dx, dy, and dz. The 3 x 3 matrix above is the gradient of the vector field A(r). It is the natural extension of the gradient (of a scalar field) to a vector field. It is a rank-2 tensor, which means that given a vector (dx, dy, dz), it produces a vector (dA) which is a linear combination of 3 (column) vectors (∇A), each weighted by the components of the given vector (dx, dy, dz).

Note that ∇A and ∇·A are very different: the former is a rank-2 tensor field, the latter is a scalar field. This concept extends further to derivatives of rank-2 tensors, which are rank-3 tensors: 3 x 3 x 3 cubes of numbers, producing a linear combination of 3 x 3 arrays, weighted by the components of a given vector (dx, dy, dz). And so on.

Note that in other coordinates (e.g., cylindrical or spherical), ∇A is not given by the derivative of its components with respect to the 3 coordinates. The components interact, because the basis vectors also change through space. That leads to the subject of differential geometry, discussed elsewhere in this document.
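A numerical sketch of this picture (my own, not from the text): build the 3 x 3 gradient of a vector field as a Jacobian matrix, and check that it maps a small displacement to the change in A:

    import numpy as np

    def A(p):                                   # an arbitrary vector field A(x, y, z)
        x, y, z = p
        return np.array([x*y, y*z, z*x])

    def grad_A(p, h=1e-6):                      # Jacobian by central differences
        J = np.zeros((3, 3))
        for j in range(3):
            dp = np.zeros(3); dp[j] = h
            J[:, j] = (A(p + dp) - A(p - dp)) / (2*h)
        return J

    p  = np.array([1.0, 2.0, 3.0])
    dr = np.array([1e-4, -2e-4, 5e-5])
    print(A(p + dr) - A(p))                     # actual dA
    print(grad_A(p) @ dr)                       # (grad A) . dr: matches to first order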
Green's Functions

Green's functions are a method of solving inhomogeneous linear differential equations (or other linear operator equations):

    \mathcal{L}\{f(x)\} = s(x), \qquad \text{where } \mathcal{L}\{\,\} \text{ is a linear operator.}

We use them when other methods are hard, or to make a useful approximation (the Born approximation). Sometimes, the Green's function itself can be given physical meaning, as in Quantum Field Theory. Green's functions can generate particular (i.e. inhomogeneous) solutions, and solutions matching boundary conditions. They don't generate homogeneous solutions (i.e., where the right hand side is zero). We explore Green's functions through the following steps:

1. Extremely brief review of the δ-function.
2. The tired, but inevitable, electromagnetic example.
3. Linear differential equations of one variable (1-dimensional), with sources.
4. Delta function expansions.
5. Green's functions of two variables (but 1 dimension).
6. When you can collapse a Green's function to one variable ("portable" Green's functions: translational invariance).
7. Dealing with boundary conditions: at least 5 (6??) kinds of BC.
8. Green-like methods: the Born approximation.

You will find no references to "Green's theorem" or "self-adjoint" until we get to non-homogeneous boundary conditions, because those topics are unnecessary and confusing before then. We will see that:

    The biggest hurdle in understanding Green's functions is the boundary conditions.

Dirac Delta Function

Recall that the Dirac δ-function is an "impulse," an infinitely narrow, tall spike function, defined as

    \delta(x) = 0 \;\text{ for } x \ne 0, \qquad \int_{-a}^{a} \delta(x)\, dx = 1, \;\; \forall a > 0 \quad \text{(the area under the δ-function is 1).}

The linearity of integration implies the delta function can be offset, and weighted, so that

    \int_{b-a}^{b+a} w\,\delta(x - b)\, dx = w, \qquad a > 0 .

Since the δ-function is infinitely narrow, it can pick out a single value from a function:

    \int_{b-a}^{b+a} \delta(x - b)\, f(x)\, dx = f(b), \qquad a > 0 .

[It also implies δ(0) → ∞, but we don't focus on that here.] (See Funky Quantum Concepts for more on the delta function.)

The Tired, But Inevitable, Electromagnetic Example

You probably have seen Poisson's equation relating the electrostatic potential at a point to a charge distribution creating the potential (in gaussian units):

    (1) \qquad \nabla^2 \phi(\mathbf{r}) = -4\pi \rho(\mathbf{r}) \qquad \text{where } \phi \equiv \text{electrostatic potential}, \;\; \rho \equiv \text{charge density.}

We solved this by noting three things: (1a) electrostatic potential, φ, obeys superposition: the potential due to multiple charges is the sum of the potentials of the individual charges; (1b) the potential is proportional to the source charge; and (2) the potential due to a point charge is

    \phi(\mathbf{r}) = \frac{q}{r} \qquad \text{(point charge at origin).}

The properties (1a) and (1b) above, taken together, define a linear relationship:

    \text{Given} \quad \rho_1(\mathbf{r}) \to \phi_1(\mathbf{r}) \quad\text{and}\quad \rho_2(\mathbf{r}) \to \phi_2(\mathbf{r}),
    \quad\text{then}\quad
    \rho_{total}(\mathbf{r}) = \rho_1(\mathbf{r}) + a\,\rho_2(\mathbf{r}) \;\to\; \phi_{total}(\mathbf{r}) = \phi_1(\mathbf{r}) + a\,\phi_2(\mathbf{r}) .

To solve Eq (1), we break up the source charge distribution into an infinite number of little point charges spread out over space, each of charge ρ d³r. The solution for φ is the sum of potential from all the point charges, and the infinite sum is an integral, so we find φ as

    \phi(\mathbf{r}) = \int d^3r'\; \frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} .

Note that the charge distribution for a point charge is a δ-function: infinite charge density, but finite total charge. [We have also implicitly used the fact that the potential is translationally invariant, and depends only on the distance from the source. We will remove this restriction later.] But all of this followed from simple mathematical properties of Eq (1) that have nothing to do with electromagnetics.
All we used to solve for φ was that the left-hand side is a linear operator on φ (so superposition applies), and we have a known solution when the right-hand side is a delta function:

    \underbrace{\nabla^2}_{\text{linear operator}} \;\underbrace{\phi(\mathbf{r})}_{\text{unknown function}} = \underbrace{-4\pi\,\rho(\mathbf{r})}_{\text{given source}}
    \qquad\text{and}\qquad
    \underbrace{\nabla^2}_{\text{linear operator}} \;\underbrace{\frac{1}{|\mathbf{r} - \mathbf{r}'|}}_{\text{known solution}} = \underbrace{-4\pi\,\delta(\mathbf{r} - \mathbf{r}')}_{\text{point source at } \mathbf{r}'} .

The solution for a given ρ is a sum of delta-function solutions. Now we generalize all this to arbitrary (for now, 1D) linear operator equations by letting r → x, φ → f, ∇² → L, ρ → s, and call the known δ-function solution G(x):

    \text{Given}\quad \mathcal{L}\{f(x)\} = s(x) \quad\text{and}\quad \mathcal{L}\{G(x)\} = \delta(x),
    \quad\text{then}\quad
    f(x) = \int s(x')\, G(x - x')\, dx' ,

assuming, as above, that our linear operator, and any boundary conditions, are translationally invariant.

A Fresh, New Signal Processing Example

If this example doesn't make sense to you, just skip it. Signal processing folk have long used a Green's function concept, but with different words. A time-invariant linear system (TILS) produces an output which is a linear operation on its input:

    o(t) = \mathcal{L}\{i(t)\} \qquad \text{where } \mathcal{L}\{\,\} \text{ is a linear operation taking input to output.}

In this case, we aren't given L{}, and we don't solve for it. However, we are given a measurement (or computation) of the system's impulse response, called h(t) (not to be confused with a homogeneous solution to anything). If you poke the system with a very short spike (i.e., if you feed an impulse into the system), it responds with h(t):

    h(t) = \mathcal{L}\{\delta(t)\} \qquad \text{where } h(t) \text{ is the system's impulse response.}

Note that the impulse response is spread out over time, and usually of (theoretically) infinite duration. h(t) fully characterizes the system, because we can approximate any input function as a series of impulses, and sum up all the responses. Therefore, we find the output for any input, i(t), with:

    o(t) = \int i(t')\, h(t - t')\, dt' .

h(t) acts like a Green's function, giving the system response at time t to a delta function at t = 0.

Linear differential equations of one variable, with sources

We wish to solve for f(x), given s(x):

    \mathcal{L}\{f(x)\} = s(x), \qquad \text{where } \mathcal{L} \text{ is a linear operator; } s(x) \text{ is called the "source," or forcing function.}
    \qquad\text{E.g.,}\qquad \left(\frac{d^2}{dx^2} + \omega^2\right) f(x) = s(x) .

We ignore boundary conditions for now (to be dealt with later). The differential equations often have 3D space as their domain. Note that we are not differentiating s(x), which will be important when we get to the delta-function expansion of s(x).

Green's functions solve the above equation by first solving a related equation: if we can find a function (i.e., a Green's function) such that

    \mathcal{L}\{G(x)\} = \delta(x), \qquad \text{where } \delta(x) \text{ is the Dirac delta function,}
    \qquad\text{e.g.}\qquad \left(\frac{d^2}{dx^2} + \omega^2\right) G(x) = \delta(x) ,

then we can use that Green's function to solve our original equation. This might seem weird, because δ(0) → ∞, but it just means that Green's functions often have discontinuities in them or their derivatives. For example, suppose G(x) is a step function:

    G(x) = \begin{cases} 0, & x < 0 \\ 1, & x > 0 \end{cases}
    \qquad\text{Then}\qquad
    \frac{d}{dx} G(x) = \delta(x) .

Now suppose our source isn't centered at the origin, i.e., s(x) = δ(x − a). If L{} is translation invariant [along with any boundary conditions], then G(·) can still solve the equation by translation:

    \mathcal{L}\{f(x)\} = s(x) = \delta(x - a) \quad\Rightarrow\quad f(x) = G(x - a) \text{ is a solution.}
If s(x) is a weighted sum of delta functions at different places, then because L{} is linear, the solution is immediate; we just add up the solutions from all the δ-functions:

    \mathcal{L}\{f(x)\} = s(x) = \sum_i w_i\, \delta(x - x_i) \quad\Rightarrow\quad f(x) = \sum_i w_i\, G(x - x_i) .

Usually the source s(x) is continuous. Then we can use δ-functions as a basis to expand s(x) as an infinite sum of delta functions (described in a moment). The summation goes over to an integral, and a solution is

    s(x) = \int dx'\, s(x')\, \delta(x - x') \qquad\Rightarrow\qquad f(x) = \int dx'\, s(x')\, G(x - x') .

We can show directly that f(x) is a solution of the original equation by plugging it in, and noting that L{} acts in the x domain, and goes through (i.e., commutes with) any operation in x′:

    \mathcal{L}\{f(x)\} = \mathcal{L}\left\{\int dx'\, s(x')\, G(x - x')\right\}
    = \int dx'\, s(x')\, \mathcal{L}\{G(x - x')\} \quad \text{(moving } \mathcal{L} \text{ inside the integral)}
    = \int dx'\, s(x')\, \delta(x - x') = s(x) \quad \text{(the δ picks out the value of } s(x)\text{).} \quad QED.

We now digress for a moment to understand the δ-function expansion.

Delta Function Expansion

As in the EM example, it is frequently quite useful to expand a given function s(x) as a sum of δ-functions:

    s(x) \approx \sum_{i=1}^{N} w_i\, \delta(x - x_i), \qquad \text{where } w_i \text{ are the weights of the basis delta functions.}

[This same expansion is used to characterize the impulse-response of linear systems.]

[Figure: approximating a function s(x) with N = 8 and then N = 16 delta functions; each delta function's weight is the area of the strip it covers, w_i ≈ s(x_i) Δx.]

On the left, we approximate s(x) first with N = 8 δ-functions (green), then with N = 16 δ-functions (red). As we double N, the weight of each δ-function is roughly cut in half, but there are twice as many of them. Hence, the integral of the δ-function approximation remains about the same. Of course, the approximation gets better as N increases. As usual, we let the number of δ-functions go to infinity: N → ∞.

On the right above, we show how to choose the weight of each δ-function: its weight is such that its integral approximates the integral of the given function, s(x), over the interval "covered" by the δ-function. In the limit of N → ∞, the approximation becomes arbitrarily good.

In what sense is the δ-function series an approximation to s(x)? Certainly, if we need the derivative s′(x), the delta-function expansion is terrible. However, if we want the integral of s(x), or any integral operator, such as an inner product or a convolution, then the delta-function series is a good approximation:

    \text{For}\quad \int s(x)\, dx, \quad\text{or}\quad \int f^*(x)\, s(x)\, dx, \quad\text{or}\quad \int f(x' - x)\, s(x)\, dx,
    \quad\text{then}\quad
    s(x) \approx \sum_{i=1}^{N} w_i\, \delta(x - x_i) \quad\text{where}\quad w_i = s(x_i)\,\Delta x .

As N → ∞, we expand s(x) in an infinite sum (an integral) of δ-functions:

    s(x) = \sum_i w_i\, \delta(x - x_i) \;\longrightarrow\; \int dx'\, s(x')\, \delta(x - x') \qquad (w_i \to s(x')\, dx') ,

which if you think about it, follows directly from the definition of δ(x).

[Aside: Delta-functions are a continuous set of orthonormal basis functions, much like sinusoids from quantum mechanics and Fourier transforms. They satisfy all the usual orthonormal conditions for a continuous basis, i.e. they are orthogonal and normalized:

    \int dx\; \delta(x - a)\, \delta(x - b) = \delta(a - b) . ]

Note that in the final solution of the prior section, we integrate s(x):

    f(x) = \int dx'\, s(x')\, G(x - x') ,

and integrating s(x) is what makes the δ-function expansion of s(x) valid.
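A small numerical sketch of the claim that the δ-expansion is good "under an integral sign" (my own, with made-up sample functions):

    # Approximate integral(f(x) s(x) dx) by sum_i w_i f(x_i), with w_i = s(x_i)*dx.
    import numpy as np

    def s(x): return np.exp(-x**2)       # sample source function
    def f(x): return np.sin(x) + 2.0     # sample test function

    a, b = -4.0, 4.0
    for N in (8, 16, 1000):
        x = np.linspace(a, b, N, endpoint=False) + (b - a)/(2*N)   # strip centers
        w = s(x) * (b - a)/N                                       # delta weights
        print(N, np.sum(w * f(x)))       # converges to integral of f*s as N grows

    # For comparison, the exact value is ~2*sqrt(pi) ~ 3.5449 (the sin part is odd).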
Introduction to Boundary Conditions

We now incorporate a simple boundary condition. Consider a 2D problem in the plane:

    \mathcal{L}\{f(x, y)\} = s(x, y) \;\text{ inside the boundary,} \qquad f(boundary) = 0, \qquad \text{where the boundary is given.}

We define the vector r ≡ (x, y), and recall that

    \delta(\mathbf{r}) \equiv \delta(x)\,\delta(y), \qquad\text{so that}\qquad \delta(\mathbf{r} - \mathbf{r}') \equiv \delta(x - x')\,\delta(y - y') .

[Some references use the notation δ⁽²⁾(r) for a 2D δ-function.]

[Figure: (Left) The domain of f(x, y), and its boundary, with f(boundary) = 0. (Right) A solution meeting the BC for a source at (0, 0) does not translate to another point and still meet the BC: δ(r − r′) translates with r′, but the boundary condition remains fixed.]

The boundary condition removes the translation invariance of the problem. The delta-function response of L{G(r)} translates, but the boundary condition does not. I.e., a solution of

    \mathcal{L}\{G(\mathbf{r})\} = \delta(\mathbf{r}) \quad\text{with}\quad G(boundary) = 0 \quad \text{exists,}
    \qquad\text{BUT}\qquad
    \mathcal{L}\{G(\mathbf{r} - \mathbf{r}')\} = \delta(\mathbf{r} - \mathbf{r}') \quad \text{does NOT satisfy} \quad G(boundary) = 0 .

With boundary conditions, for each source point r′, we need a different Green's function! The Green's function for a source point r′, call it G_{r′}(r), must satisfy both:

    \mathcal{L}\{G_{\mathbf{r}'}(\mathbf{r})\} = \delta(\mathbf{r} - \mathbf{r}') \qquad\text{and}\qquad G_{\mathbf{r}'}(boundary) = 0 .

We can think of this as a Green's function of two arguments, r and r′, but really, r is the argument, and r′ is a parameter. In other words, we have a family of Green's functions, G_{r′}(r), labeled by the location of the delta-function, r′.

Example: Returning to a 1D example in r: Find the Green's function for the equation

    \frac{d^2 f}{dr^2} = s(r), \quad\text{on the interval } [0, 1], \quad\text{subject to}\quad f(0) = f(1) = 0 .

Solution: The Green's function equation replaces the source s(r) with δ(r − r′):

    \frac{d^2}{dr^2}\, G_{r'}(r) = \delta(r - r') .

Note that G_{r′}(r) satisfies the homogeneous equation on either side of r′:

    \frac{d^2}{dr^2}\, G_{r'}(r) = 0, \qquad r \ne r' .

The full Green's function simply matches two homogeneous solutions, one to the left of r′, and another to the right of r′, such that the discontinuity at r′ creates the required δ-function there. First we find the homogeneous solutions:

    \frac{d^2}{dr^2} h(r) = 0
    \quad\Rightarrow\quad \frac{d}{dr} h(r) = C \quad \text{(integrate both sides; C is an integration constant)}
    \quad\Rightarrow\quad h(r) = Cr + D \quad \text{(integrate again; C, D are arbitrary constants).}

There are now 2 cases: (left) r < r′, and (right) r > r′. Each solution requires its own set of integration constants.

    \text{Left case, } r < r': \quad G_{r'}(r) = Cr + D. \quad \text{Only the left boundary condition applies to } r < r': \quad G_{r'}(0) = 0 \Rightarrow D = 0 .
    \text{Right case, } r > r': \quad G_{r'}(r) = Er + F. \quad \text{Only the right boundary condition applies to } r > r': \quad G_{r'}(1) = 0 \Rightarrow E + F = 0, \;\; F = -E .

So far, we have:

    \text{Left case: } G_{r'}(r < r') = Cr, \qquad \text{Right case: } G_{r'}(r > r') = Er - E .

The integration constants C and E are as-yet unknown. Now we must match the two solutions at r = r′, and introduce a delta function there. The δ-function must come from the highest derivative in L{}, in this case the 2nd derivative, because if G′(r) had a delta function, then the 2nd derivative G″(r) would have the derivative of a δ-function, which cannot be canceled by any other term in L{}.
Since the derivative of a step (discontinuity) is a δ-function, G′(r) must have a discontinuity, so that G″(r) has a δ-function. And finally, if G′(r) has a discontinuity, then G(r) has a cusp (aka "kink" or "sharp point"). We can find G(r) to satisfy all this by matching G(r) and G′(r) of the left and right Green's functions, at the point where they meet, r = r′:

    \text{Left: } \frac{d}{dr} G(r < r') = C, \qquad \text{Right: } \frac{d}{dr} G(r > r') = E .

There must be a unit step in the derivative across r = r′:

    C + 1 = E .

So we eliminate E in favor of C. Also, G(r) must be continuous (or else G′(r) would have a δ-function), which means

    G(r'^-) = G(r'^+): \quad Cr' = Er' - E = (C+1)\,r' - (C+1) \quad\Rightarrow\quad C = r' - 1 ,

yielding the final Green's function for the given differential equation:

    G_{r'}(r < r') = r\,(r' - 1), \qquad G_{r'}(r > r') = r'\,(r - 1) .

Here's a plot of these Green's functions for different values of r′:

[Figure: G_{r′}(r) on [0, 1] for r′ = 0.3, 0.5, and 0.8; each is two line segments, zero at both ends, with a downward cusp at r = r′.]

To find the solution f(x), we need to integrate over r′; therefore, it is convenient to write the Green's function as a true function of two variables:

    G(r; r') \equiv G_{r'}(r), \qquad \mathcal{L}\{G(r; r')\} = \delta(r - r'), \qquad G(boundary; r') = 0 ,

where the ";" between r and r′ emphasizes that G(r; r′) is a function of r, parameterized by r′. I.e., we can still think of G(r; r′) as a family of functions of r, where each family member is labeled by r′, and each family member satisfies the homogeneous boundary condition. It is important here that the boundary condition is zero, so that any sum of Green's functions still satisfies the boundary condition. Our particular solution to the original equation, which now satisfies the homogeneous boundary condition, is

    f(r) = \int_0^1 dr'\, s(r')\, G(r; r')
    = \int_0^r dr'\, s(r')\, \underbrace{r'(r - 1)}_{G(r;\, r' < r)} + \int_r^1 dr'\, s(r')\, \underbrace{r(r' - 1)}_{G(r;\, r' > r)} ,

which satisfies f(boundary) = 0.

Summary: To solve L{G_{x′}(x)} = δ(x − x′), we break G(x) into left- and right- sides of x′. Each side satisfies the homogeneous equation, L{G_{x′}(x)} = 0, with arbitrary constants. We use the matching conditions to achieve the δ-function at x′, which generates a set of simultaneous equations for the unknown constants in the homogeneous solutions. We solve for the constants, yielding the left-of-x′ and right-of-x′ pieces of the complete Green's function, G_{x′}(x).

Aside: It is amusing to notice that we use solutions to the homogeneous equation to construct the Green's function. We then use the Green's function to construct the particular solution to the given equation. So we are ultimately constructing a particular solution from a homogeneous solution. That's not like anything we learned in elementary differential equations.
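The whole construction is easy to verify numerically. A sketch (mine, not from the text) that integrates the Green's function against a simple source and compares with the exact solution:

    # Verify f(r) = integral of s(r') G(r; r') dr' solves f'' = s, f(0) = f(1) = 0.
    import numpy as np

    def G(r, rp):                      # Green's function derived above
        return np.where(r < rp, r*(rp - 1.0), rp*(r - 1.0))

    s = lambda r: np.ones_like(r)      # source s(r) = 1; exact f(r) = r(r-1)/2

    r  = np.linspace(0, 1, 201)
    N  = 2000
    rp = (np.arange(N) + 0.5) / N      # midpoint samples of r' on (0, 1)
    f  = np.array([(s(rp) * G(ri, rp)).mean() for ri in r])   # the integral (interval length 1)

    print(np.max(np.abs(f - r*(r - 1)/2)))   # ~0 (small discretization error)
    print(f[0], f[-1])                        # both ~0: boundary conditions met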
When Can You Collapse a Green's Function to One Variable?

"Portable" Green's functions: When we first introduced the Green's function, we ignored boundary conditions, and our Green's function was a function of one variable, r. If our source wasn't at the origin, we just shifted our Green's function, and it was a function of just (r − r′). Then we saw that with (certain) boundary conditions, shifting doesn't work, and the Green's function is a function of two variables, r and r′. In general, then, under what conditions can we write a Green's function in the simpler form, as a function of just (r − r′)?

    When both the differential operator and the boundary conditions are translation-invariant, the Green's function is also translation-invariant. We can say it's "portable."

This is fairly common: differential operators are translation-invariant (i.e., they do not explicitly depend on position), and BCs at infinity are translation-invariant. For example, in E&M it is common to have equations such as

    \nabla^2 \phi(\mathbf{r}) = \rho(\mathbf{r}), \qquad \text{with boundary condition} \quad \phi(\infty) = 0 .

Because both the operator ∇² and the boundary conditions are translation invariant, we don't need to introduce r′ explicitly as a parameter in G(r). As we did when introducing Green's functions, we can take the origin as the location of the delta-function to find G(r), and use translation invariance to "move around" the delta function:

    G(\mathbf{r}; \mathbf{r}') = G_{\mathbf{r}'}(\mathbf{r}) = G(\mathbf{r} - \mathbf{r}'), \qquad \mathcal{L}\{G(\mathbf{r} - \mathbf{r}')\} = \delta(\mathbf{r} - \mathbf{r}'), \qquad \text{with BC } G(\infty) = 0 .

Non-homogeneous Boundary Conditions

So far, we've dealt with homogeneous boundary conditions by requiring G_{r′}(r) ≡ G(r; r′) to be zero on the boundary. There are different kinds of boundary conditions, and different ways of dealing with each kind. Note that in general, "constraint conditions" don't have to be specified at the boundary of anything. They are really just "constraints" or "conditions." For example, one constraint is often that the solution be a normalized function, which is not a statement about any boundaries. But in most physical problems, at least one condition does occur at a boundary, so we defer to this, and limit ourselves to boundary conditions.

Boundary Conditions Specifying Only Values of the Solution

In one common case, we are given a general (inhomogeneous) boundary condition, m(r), along the boundary of the region of interest. Our problem is now to find the complete solution c(r) such that

    \mathcal{L}\{c(\mathbf{r})\} = s(\mathbf{r}), \qquad\text{and}\qquad c(boundary) = m(boundary) .

One approach to find c(r) is from elementary differential equations: we find a particular solution f(x) to the given equation, that doesn't necessarily meet the boundary conditions. Then we add a linear combination of homogeneous solutions to achieve the boundary conditions, while preserving the solution of the non-homogeneous equation. Therefore, we (1) first solve for f(r), as above, such that

    \mathcal{L}\{f(\mathbf{r})\} = s(\mathbf{r}), \quad f(boundary) = 0, \qquad \text{using a Green's function satisfying} \quad \mathcal{L}\{G(\mathbf{r}; \mathbf{r}')\} = \delta(\mathbf{r} - \mathbf{r}'), \quad G(boundary; \mathbf{r}') = 0 .

(2) We then find homogeneous solutions hᵢ(r) which are non-zero on the boundary, using ordinary methods (see any differential equations text):

    \mathcal{L}\{h_i(\mathbf{r})\} = 0, \qquad\text{and}\qquad h_i(boundary) \ne 0 .

Recall that in finding the Green's function, we already had to find homogeneous solutions, since every Green's function is a homogeneous solution everywhere except at the δ-function position, r′.
(3) Finally, we add a linear combination of homogeneous solutions to the particular solution to yield a complete solution which satisfies both the differential equation and the boundary conditions: we choose the Aᵢ such that

    A_1 h_1(boundary) + A_2 h_2(boundary) + \cdots = m(boundary),
    \qquad\text{and let}\qquad
    c(\mathbf{r}) = f(\mathbf{r}) + A_1 h_1(\mathbf{r}) + A_2 h_2(\mathbf{r}) + \cdots .

Then, by superposition,

    \mathcal{L}\{c(\mathbf{r})\} = \mathcal{L}\{f(\mathbf{r})\} + \underbrace{\mathcal{L}\{A_1 h_1(\mathbf{r}) + A_2 h_2(\mathbf{r}) + \cdots\}}_{=\;0} = s(\mathbf{r}),
    \qquad\text{and}\qquad c(boundary) = m(boundary) .

Continuing Example: In our 1D example above, we have

    \mathcal{L} = \frac{d^2}{dr^2}, \qquad G(r < r'; r') = r\,(r' - 1), \quad G(r > r'; r') = r'\,(r - 1),
    \qquad \text{satisfying BC: } G(0; r') = G(1; r') = 0, \;\; f(0) = f(1) = 0 .

We now add boundary conditions to the original problem. We must satisfy c(0) = 2, and c(1) = 3, in addition to the original equation. Our linearly independent homogeneous solutions are:

    h_1(r) = A_1\, r, \qquad h_0(r) = A_0 \quad \text{(a constant).}

To satisfy the BC, we need

    h_1(0) + h_0(0) = 2 \;\Rightarrow\; A_0 = 2, \qquad h_1(1) + h_0(1) = 3 \;\Rightarrow\; A_1 = 1 ,

and our complete solution is

    c(r) = \left[\int_0^1 dr'\, s(r')\, G(r; r')\right] + r + 2 .

Boundary Conditions Specifying a Value and a Derivative

Another common kind of boundary conditions specifies a value and a derivative for our complete solution. For example, in 1D:

    c(0) = 1 \qquad\text{and}\qquad c'(0) = 5 .

But recall that our Green's function does not have any particular derivative at zero. When we find the particular solution, f(x), we have no idea what its derivative at zero, f′(0), will be. And in particular, different source functions, s(r), will produce different f(r), with different values of f′(0). This is bad. In the previous case of BC, f(r) was zero at the boundaries for any s(r). What we need with our new BC is f(0) = 0 and f′(0) = 0 for any s(r). We can easily achieve this by using a different Green's function! We subjected our first Green's function to the boundary conditions G(0; r′) = 0 and G(1; r′) = 0 specifically to give the same BC to f(r), so we could add our homogeneous solutions independently of s(r). Therefore, we now choose our Green's function BC to be:

    G(0; r') = 0 \quad\text{and}\quad G'(0; r') = 0, \qquad\text{with}\qquad \mathcal{L}\{G(r; r')\} = \delta(r - r') .

We can see by inspection that this leads to a new Green's function:

    G(r; r') = 0, \;\; r < r', \qquad\text{and}\qquad G(r; r') = r - r', \;\; r > r' .

[Figure: the new G(r; r′) for r′ = 0.3, 0.5, and 0.8: zero to the left of r′, then a unit-slope ramp.]

The 2nd derivative of G(r; r′) is everywhere 0, and the first derivative changes from 0 to 1 at r′. Therefore, our new particular solution f(r) also satisfies:

    f(r) = \int_0^1 dr'\, s(r')\, G(r; r'), \qquad f(0) = 0, \quad f'(0) = 0, \quad \text{for any } s(r) .

We now construct the complete solution using our homogeneous solutions to meet the BC:

    h_1(r) = A_1\, r, \qquad h_0(r) = A_0 \quad \text{(a constant);}
    \qquad h_1(0) + h_0(0) = 1 \;\Rightarrow\; A_0 = 1, \qquad h_1'(0) + h_0'(0) = 5 \;\Rightarrow\; A_1 = 5 .

Then

    c(r) = \left[\int_0^1 dr'\, s(r')\, G(r; r')\right] + 5r + 1 .

In general, the Green's function depends not only on the particular operator, but also on the kind of boundary conditions specified.

Boundary Conditions Specifying Ratios of Derivatives and Values

Another kind of boundary conditions specifies a ratio of the solution to its derivative, or equivalently, specifies that a linear combination of the solution and its derivative be zero. This is equivalent to a homogeneous boundary condition:

    \frac{c'(0)}{c(0)} = \alpha, \qquad\text{or equivalently}\qquad c'(0) - \alpha\, c(0) = 0 \quad \text{if } c(0) \ne 0 .

This BC arises, for example, in some quantum mechanics problems where the normalization of the wave-function is not yet known; the ratio cancels any normalization factor, so the solution can proceed without knowing the ultimate normalization. Note that this is only a single BC.
If our differential operator is 2nd order, there is one more degree of freedom that can be used to achieve normalization, or some other condition. (This BC is sometimes written with both terms scaled by an overall constant, but that simply multiplies both sides by a constant, and fundamentally changes nothing.) Also, this condition is homogeneous: a linear combination of functions which satisfy the BC also satisfies the BC. This is most easily seen from the second form given above:

    \text{If}\quad d'(0) - \alpha\, d(0) = 0 \quad\text{and}\quad e'(0) - \alpha\, e(0) = 0,
    \quad\text{then}\quad c(r) \equiv A\, d(r) + B\, e(r) \quad\text{satisfies}\quad c'(0) - \alpha\, c(0) = 0,
    \quad\text{because}\quad c'(0) - \alpha\, c(0) = A\left(d'(0) - \alpha\, d(0)\right) + B\left(e'(0) - \alpha\, e(0)\right) = 0 .

Therefore, if we choose a Green's function which satisfies the given BC, our particular solution f(r) will also satisfy the BC. There is no need to add any homogeneous solutions.

Continuing Example: In our 1D example above, with L = d²/dr², we now specify BC:

    c'(0) - 2\,c(0) = 0 .

Since our Green's functions for this operator are always two connected line segments (because their 2nd derivatives are zero), we have

    r < r': \quad G(r; r') = Cr + D, \qquad \text{BC at 0:} \quad C - 2D = 0 .

With this BC, we have an unused degree of freedom, so we choose D = 1, implying C = 2. For r > r′: G(r; r′) = Er + F. We must find E and F so that G(r; r′) is continuous, and G′(r; r′) has a unit step at r′. The latter condition requires E = C + 1 = 3, and then continuity requires

    Cr' + D = Er' + F: \quad 2r' + 1 = 3r' + F \quad\Rightarrow\quad F = 1 - r' .

So

    r < r': \quad G(r; r') = 2r + 1, \qquad\text{and}\qquad r > r': \quad G(r; r') = 3r - r' + 1 .

[Figure: this G(r; r′) for r′ = 0.3, 0.5, and 0.8: a line of slope 2 starting at G(0; r′) = 1, kinking up to slope 3 at r = r′.]

and our complete solution is just

    c(r) = f(r) = \int_0^1 dr'\, s(r')\, G(r; r') .

Boundary Conditions Specifying Only Derivatives (Neumann BC)

Another common kind of BC specifies derivatives at points of the solution. For example, we might have

    c'(0) = 0 \qquad\text{and}\qquad c'(1) = 1 .

Then, analogous to the BC specifying two values for c(·), we choose a Green's function which has zeros for its derivatives at 0 and 1:

    \frac{d}{dr} G(r = 0; r') = 0 \qquad\text{and}\qquad \frac{d}{dr} G(r = 1; r') = 0 .

Then the sum (or integral) of any number of such Green's functions also satisfies the zero-derivative BC:

    f(r) = \int_0^1 dr'\, s(r')\, G(r; r') \qquad \text{satisfies} \qquad f'(0) = 0, \;\; f'(1) = 0 .

We can now form the complete solution, by adding homogeneous solutions that satisfy the given BC:

    c(r) = f(r) + A_1 h_1(r) + A_2 h_2(r), \qquad\text{where}\qquad A_1 h_1'(0) + A_2 h_2'(0) = 0 \quad\text{and}\quad A_1 h_1'(1) + A_2 h_2'(1) = 1 .

Example: We cannot use our previous example where L{} = d²/dr², because there is no solution to

    \frac{d^2}{dr^2} G(r; r') = \delta(r - r') \qquad\text{with}\qquad \frac{d}{dr}G(r = 0; r') = \frac{d}{dr}G(r = 1; r') = 0 .

This is because the homogeneous solutions are straight line segments; therefore, any solution with a zero derivative at any point must be a flat line, which cannot acquire the required unit step in its derivative at r′ while keeping both end slopes zero. So we choose another operator as our example:

3D Boundary Conditions: Yet Another Method

TBS: Using Green's theorem.

Green-Like Methods: The Born Approximation

In the Born approximation, and similar problems, we have our unknown function, now called ψ(x), on both sides of the equation:

    (1) \qquad \mathcal{L}\{\psi(x)\} = \psi(x) .

The theory of Green's functions still works, so that

    \psi(x) = \int \psi(x')\, G(x; x')\, dx' ,

but this doesn't solve the equation, because we still have ψ on both sides of the equation.
We could try rearranging Eq (1):

    \mathcal{L}'\{\psi(x)\} = 0, \qquad\text{which is the same as Eq (1), with}\qquad \mathcal{L}'\{\psi(x)\} \equiv \mathcal{L}\{\psi(x)\} - \psi(x) .

But recall that Green's functions require a source function, s(x), on the right-hand side. The method of Green's functions can't solve homogeneous equations, because it yields

    \mathcal{L}\{\psi(x)\} = s(x) = 0 \quad\Rightarrow\quad \psi(x) = \int s(x')\, G(x; x')\, dx' = \int 0\; dx' = 0 ,

which is a solution, but not very useful. So Green's functions don't work when ψ(x) appears on both sides. However, under the right conditions, we can make a useful approximation. If we have an approximate solution,

    \mathcal{L}\{\psi_0(x)\} \approx \psi_0(x) ,

we can use ψ₀(x) as the source term, and use the method of Green's functions, to get a better approximation to ψ(x):

    \psi_1(x) = \int \psi_0(x')\, G(x; x')\, dx', \qquad \text{where } G(x; x') \text{ is the Green's function for } \mathcal{L}, \text{ i.e. } \mathcal{L}\{G(x; x')\} = \delta(x - x') .

ψ₁(x) is called the first Born approximation of ψ(x). Of course, this process can be repeated to arbitrarily high accuracy:

    \psi_2(x) = \int \psi_1(x')\, G(x; x')\, dx' \qquad \ldots \qquad \psi_{n+1}(x) = \int \psi_n(x')\, G(x; x')\, dx' .

TBS: a real QM example.
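A discrete sketch of the iteration idea (my own illustration, not the text's method), applied to the standard integral-equation form ψ = ψ₀ + λKψ, with the integral replaced by a matrix product on a grid:

    # Born-style iteration: psi_{n+1} = psi0 + lam * K @ psi_n converges to the
    # exact solution of (I - lam*K) psi = psi0 when the correction is small.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50
    K = rng.standard_normal((N, N)) / N      # stand-in for a sampled kernel G(x; x')
    psi0 = rng.standard_normal(N)            # zeroth approximation (the "source")
    lam = 0.5

    psi = psi0.copy()
    for n in range(30):                      # iterate the Born series
        psi = psi0 + lam * (K @ psi)

    exact = np.linalg.solve(np.eye(N) - lam*K, psi0)
    print(np.max(np.abs(psi - exact)))       # tiny: the series has converged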
Complex Analytic Functions

For a review of complex numbers and arithmetic, see Funky Quantum Concepts.

Notation: In this chapter, z, w are always complex variables; x, y, r, θ are always real variables. Other variables are defined as used.

A complex function of a complex variable f(z) is analytic over some domain if it has an infinite number of continuous derivatives in that domain. It turns out, if f(z) is once differentiable on a domain, then it is infinitely differentiable, and therefore analytic on that domain.

A necessary condition for analyticity of f(z) = u(x, y) + iv(x, y) near z₀ is that the Cauchy-Riemann equations hold, to wit: the derivative taken along the x direction must equal the derivative taken along the iy direction,

$\frac{df}{dz} = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = \frac{1}{i}\left(\frac{\partial u}{\partial y} + i\frac{\partial v}{\partial y}\right) \quad\Rightarrow\quad \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \quad\text{and}\quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.$

A sufficient condition for analyticity of f(z) = u(x, y) + iv(x, y) near z₀ is that the Cauchy-Riemann equations hold, and the first partial derivatives of f exist and are continuous in a neighborhood of z₀. Note that if the first derivative of a complex function is continuous, then all derivatives are continuous, and the function is analytic. This condition implies

$\nabla^2 u = \nabla^2 v = 0; \qquad \nabla u \cdot \nabla v = 0 \;\;(\text{level lines of } u \text{ and } v \text{ are perpendicular}); \qquad \int_{z_1}^{z_2} f(z)\,dz \text{ is contour independent if } f(z) \text{ is single-valued}.$

Note that a function can be analytic in some regions, but not others. Singular points, or singularities, are not in the domain of analyticity of the function, but border the domain [Det def 4.5.2 p156]. E.g., √z is singular at 0, because it is not differentiable, but it is continuous at 0. Poles are singularities near which the function is unbounded (infinite), but can be made finite by multiplication by (z − z₀)^k for some finite k [Det p165]. This implies f(z) can be written as

$f(z) = a_{-k}(z - z_0)^{-k} + a_{-k+1}(z - z_0)^{-k+1} + \dots + a_{-1}(z - z_0)^{-1} + a_0 + a_1(z - z_0) + \dots$

The value k is called the order of the pole. All poles are singularities. Some singularities are like poles of infinite order, because the function is unbounded near the singularity, but it is not a pole because it cannot be made finite by multiplication by any (z − z₀)^k, for example e^{1/z}. Such a singularity is called an essential singularity.

A Laurent series expansion of a function is similar to a Taylor series expansion, but the sum runs from −∞ to +∞, instead of from 1 to ∞. In both cases, an expansion is about some point, z₀:

Taylor series: $f(z) = f(z_0) + \sum_{n=1}^{\infty} b_n (z - z_0)^n, \quad\text{where}\quad b_n = \frac{f^{(n)}(z_0)}{n!}.$

Laurent series: $f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n, \quad\text{where}\quad a_n = \frac{1}{2\pi i}\oint_{\text{around } z_0} dz\, \frac{f(z)}{(z - z_0)^{n+1}}.$

[Det thm 4.6.1 p163] Analytic functions have Taylor series expansions about every point in the domain. Taylor series can be thought of as special cases of Laurent series. But analytic functions also have Laurent expansions about isolated singular points, i.e. the expansion point is not even in the domain of analyticity! The Laurent series is valid in some annulus around the singularity, but not across branch cuts. Note that in general, the a_k and b_k could be complex, but in practice, they are often real.
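The Laurent coefficient formula can be checked numerically by evaluating the contour integral on a small circle around z₀. Here is a sketch for the test function f(z) = e^z/z², whose expansion about 0 is known (a₋₂ = 1, a₋₁ = 1, a₀ = 1/2, a₁ = 1/6); the circle radius and sample count are arbitrary choices:

```python
import numpy as np

# a_n = (1/2πi) ∮ f(z)/(z - z0)^{n+1} dz, evaluated by the trapezoid rule on a circle
def laurent_coeff(f, n, z0=0.0, r=0.5, M=2000):
    t = np.linspace(0.0, 2*np.pi, M, endpoint=False)
    z = z0 + r*np.exp(1j*t)
    dz = 1j*r*np.exp(1j*t) * (2*np.pi/M)
    return np.sum(f(z) / (z - z0)**(n+1) * dz) / (2j*np.pi)

f = lambda z: np.exp(z)/z**2
for n in (-2, -1, 0, 1):
    print(n, laurent_coeff(f, n))   # ≈ 1, 1, 0.5, 0.1667
```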
Properties of analytic functions:

1. If it is differentiable once, it is infinitely differentiable.
2. The Taylor and Laurent expansions are unique. This means you may use any of several methods to find them for a given function.
3. If you know a function and all its derivatives at any point, then you know the function everywhere in its domain of analyticity. This follows from the fact that every analytic function has a Laurent power series expansion. It implies that the value throughout a region is completely determined by its values at a boundary.
4. An analytic function cannot have a local extremum of absolute value. (Why not??)

Residues

Mostly, we use complex contour integrals to evaluate difficult real integrals, and to sum infinite series. To evaluate contour integrals, we need to evaluate residues. Here, we introduce residues. The residue of a complex function at a complex point z₀ is the a₋₁ coefficient of the Laurent expansion about the point z₀. Residues of singular points are the only ones that interest us. (In fact, residues of branch points are not defined [Sea sec 13.1].)

Common ways to evaluate residues:

1. The residue of a removable singularity is zero. This is because the function is bounded near the singularity, and thus a₋₁ must be zero (or else the function would blow up at z₀): for $a_{-1} \ne 0$, as $z \to z_0$, $a_{-1}(z - z_0)^{-1} \to \infty$.

2. The residue of a simple pole at z₀ (i.e., a pole of order 1) is

$a_{-1} = \lim_{z \to z_0} (z - z_0)\, f(z).$

3. Extending the previous method: the residue of a pole at z₀ of order k is

$a_{-1} = \lim_{z \to z_0} \frac{1}{(k-1)!} \frac{d^{k-1}}{dz^{k-1}} \left[(z - z_0)^k f(z)\right],$

which follows by substitution of the Laurent series for f(z), and direct differentiation. Noting that a pole of order k implies a_n = 0 for n < −k, we get:

$f(z) = a_{-k}(z - z_0)^{-k} + \dots + a_{-1}(z - z_0)^{-1} + a_0 + a_1(z - z_0) + \dots$

$(z - z_0)^k f(z) = a_{-k} + a_{-k+1}(z - z_0) + \dots + a_{-1}(z - z_0)^{k-1} + a_0(z - z_0)^k + \dots$

$\frac{d^{k-1}}{dz^{k-1}}\left[(z - z_0)^k f(z)\right] = (k-1)!\, a_{-1} + k!\, a_0 (z - z_0) + \dots \quad\Rightarrow\quad \lim_{z \to z_0} \frac{d^{k-1}}{dz^{k-1}}\left[(z - z_0)^k f(z)\right] = (k-1)!\, a_{-1}.$

4. If f(z) can be written as f(z) = P(z)/Q(z), where P is continuous at z₀, and Q(z₀) = 0 (with Q' continuous and nonzero at z₀), then f(z) has a simple pole at z₀, and

$\operatorname{Res}_{z_0} f(z) = \frac{P(z_0)}{Q'(z_0)}.$

Why? Near z₀, $Q(z) \approx (z - z_0)\,Q'(z_0)$. Then

$\operatorname{Res}_{z_0} f(z) = \lim_{z \to z_0}\,(z - z_0)\, f(z) = \lim_{z \to z_0}\,(z - z_0)\,\frac{P(z)}{(z - z_0)\,Q'(z_0)} = \frac{P(z_0)}{Q'(z_0)}.$

5. Find the Laurent series, and hence its coefficient of (z − z₀)^{−1}. This is sometimes easy if f(z) is given in terms of functions with well-known power series expansions. See the sum of series example later.

We will include real-life examples of most of these as we go.
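Methods 2 and 4 are easy to cross-check symbolically. A sketch with an arbitrary test function f = e^z/(z² + 1), which has a simple pole at z = i:

```python
import sympy as sp

z = sp.symbols('z')
P, Q = sp.exp(z), z**2 + 1          # f = P/Q, simple poles where Q = 0
z0 = sp.I

res_rule  = P.subs(z, z0) / sp.diff(Q, z).subs(z, z0)   # method 4: P(z0)/Q'(z0)
res_limit = sp.limit((z - z0) * P/Q, z, z0)             # method 2: simple-pole limit
print(sp.simplify(res_rule - res_limit))                # 0: the two methods agree
```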
Contour Integrals

Contour integration is an invaluable tool for evaluating both real and complex-valued integrals. Contour integrals are used all over advanced physics, and we could not do physics as we know it today without them. Contour integrals are mostly useful for evaluating difficult ordinary (real-valued) integrals, and sums of series.

In many cases, a function is analytic except at a set of distinct points. In this case, a contour integral may enclose, or pass near, some points of non-analyticity, i.e. singular points. It is these singular points that allow us to evaluate the integral. You often let the radius of the contour integral go to ∞ for some part of the contour:

[Figure: a semicircular contour of radius R in the upper half-plane, R → ∞.]

Any arc where

$\lim_{R \to \infty} f(z) \sim \frac{1}{z^{1+\epsilon}}, \quad \epsilon > 0,$

has an integral of 0 over the arc. Beware that this is often stated incorrectly as "any function which goes to zero faster than 1/|z| has a contour integral of 0." The problem is that it has to beat 1/|z| by a power; it is not sufficient to be simply smaller than 1/|z|. E.g. $\frac{1}{|z| + 1} < \frac{1}{|z|}$, but the contour integral of 1/(z + 1) over the arc still does not vanish.

Jordan's lemma: ??.

Evaluating Integrals

Surprisingly, we can use complex contour integrals to evaluate difficult real integrals. The main point is to find a contour which (a) includes some known (possibly complex) multiple of the desired (real) integral, (b) includes other segments whose values are zero, and (c) includes a known set of poles whose residues can be found. Then you simply plug into the residue theorem:

$\oint_C f(z)\,dz = 2\pi i \sum_n \operatorname{Res}_{z_n} f(z), \quad\text{where the } z_n \text{ are the finite set of isolated singularities enclosed by } C.$

We can see this by considering the contour integral around the unit circle for each term in the Laurent series expanded about 0. First, consider the z⁰ term (the constant term). We seek the value of ∮ dz. dz is a small complex number, representable as a vector in the complex plane. The diagram below (left) shows the geometric meaning of dz. Below (right) shows the geometric approximation to the desired integral.

[Figure: (Left) Geometric description of dz = e^{i(θ+π/2)} dθ on the unit circle. (Right) Approximation of ∮ dz as a sum of 32 small complex terms (vectors).]

We see that all the tiny dz elements add up to zero: the vectors add head-to-tail, and circle back to the starting point. The sum vector (displacement from start) is zero. This is true for any large number of dz, so we have

$\oint dz = 0.$

Next, consider the z^{−1} term, ∮ (1/z) dz, and a change of integration variable to θ:

$\text{Let } z = e^{i\theta},\; dz = i e^{i\theta}\, d\theta: \qquad \oint \frac{1}{z}\,dz = \int_0^{2\pi} e^{-i\theta}\, i e^{i\theta}\, d\theta = \int_0^{2\pi} i\, d\theta = 2\pi i.$

The change of variable maps the complex contour and z into an ordinary integral of a real variable. Geometrically, as z goes positively (counter-clockwise) around the unit circle (below left), z^{−1} goes around the unit circle in the negative (clockwise) direction (below middle). Its complex angle, arg(1/z) = −θ, where z = e^{iθ}. As z goes around the unit circle, dz has infinitesimal magnitude ε = dθ, and argument θ + π/2. Hence, the product of (1/z) dz always has argument of −θ + θ + π/2 = π/2; it is always purely imaginary.

[Figure: paths of z, 1/z, and dz in the complex plane.]

The magnitude of (1/z) dz is dθ; thus the integral around the circle is 2πi. Multiplying the integrand by some constant, a₋₁ (the residue), just multiplies the integral by that constant. And any contour integral that encloses the pole 1/z and no other singularity has the same value. Hence, for any contour around the origin,

$\oint a_{-1} z^{-1}\,dz = a_{-1}\oint z^{-1}\,dz = 2\pi i\, a_{-1}.$

Now consider the other terms of the Laurent expansion of f(z). We already showed that the a₀z⁰ term, which on integration gives the product a₀ dz, rotates uniformly about all directions, in the positive (counter-clockwise) sense, and sums to zero. Hence the a₀ term contributes nothing to the contour integral. The a₁z¹ dz product rotates uniformly twice around all directions in the positive sense, and of course, still sums to zero. Higher powers of z simply rotate more times, but always an integer number of times around the circle, and hence always sum to zero. Similarly, a₋₂z^{−2}, and all more negative powers, rotate uniformly about all directions, but in the negative (clockwise) sense. Hence, all these terms contribute nothing to the contour integral. So in the end:

The only term of the Laurent expansion about 0 that contributes to the contour integral is the residue term, a₋₁ z^{−1}.

The simplest contour integral: Evaluate

$I = \int_0^{\infty} \frac{1}{1 + x^2}\,dx.$

We know from elementary calculus (let x = tan θ) that I = π/2. We can find this easily from the residue theorem, using the following contour:

[Figure: contour along the real axis, closed by a semicircular arc C_R in the upper half-plane; poles at ±i.]

C denotes a contour, and I_C denotes the integral over that contour. We let the radius of the arc go to ∞, and we see that the closed contour integral is I_C = I + I + I_R (the integrand is even, so the real axis contributes twice the desired integral). But I_R = 0, because f(R → ∞) < 1/R². Then I = I_C / 2. f(z) has poles at ±i. The contour encloses one pole, at i. Its residue is

$\operatorname{Res}_i f(z) = \lim_{z \to i}\,(z - i)\,\frac{1}{z^2 + 1} = \frac{1}{\left.\frac{d}{dz}(z^2+1)\right|_{z=i}} = \frac{1}{2i}, \qquad I_C = 2\pi i \sum_n \operatorname{Res}_{z_n} f(z) = \frac{2\pi i}{2i} = \pi, \qquad I = \frac{I_C}{2} = \frac{\pi}{2}.$

Note that when evaluating a real integral with complex functions and contour integrals, the i's always cancel, and you get a real result, as you must. It's a good check to make sure this happens.
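Both claims above, the single-term integrals ∮ zⁿ dz and the real integral just evaluated, are easy to verify numerically; a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# The real integral: ∫_0^∞ dx/(1+x²) should equal π/2
val, _ = quad(lambda x: 1/(1 + x*x), 0, np.inf)
print(val, np.pi/2)

# The building block: ∮ zⁿ dz around the unit circle is 2πi only for n = -1
M = 200000
t = np.linspace(0, 2*np.pi, M, endpoint=False)
z, dz = np.exp(1j*t), 1j*np.exp(1j*t)*(2*np.pi/M)
for n in (-2, -1, 0, 1):
    print(n, np.round(np.sum(z**n * dz), 6))   # only n = -1 gives ≈ 6.283185j
```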
Choosing the Right Path: Which Contour?

The path of integration is fraught with perils. How will I know which path to choose? There is no universal answer. Often, many paths lead to the same truth. Still, many paths lead nowhere. All we can do is use wisdom as our guide, and take one step in a new direction. If we end up where we started, we are grateful for what we learned, and we start anew.

We here examine several useful and general, but oft neglected, methods of contour integration. We use some sample problems to illustrate these tools. This section assumes a familiarity with contour integration, and its use in evaluating definite integrals, including the residue theorem.

Example: Evaluate

$I = \int_{-\infty}^{\infty} \frac{\sin^2 x}{x^2}\,dx.$

The integrand is everywhere nonnegative, and somewhere positive, and the integration is in the positive direction, so I must be positive. We observe that the given integrand has no poles; it has only a removable singularity at x = 0. If we are to use contour integrals, we must somehow create a pole (or a few), to use the residue theorem. Simple poles (i.e. 1st-order) are sometimes best, because then we can also use the indented contour theorem.

[Figure: contours for the two exponential integrals: (left) e^{+2iz}, closed counter-clockwise above; (right) e^{−2iz}, closed clockwise below; both with I_R = 0 and a small indentation of radius r under the origin.]

To use a contour integral (which, a priori, may or may not be a good idea), we must do two things: (1) create a pole; and (2) close the contour. The same method does both: expand the sin(·) in terms of exponentials:

$\sin^2 z = \left(\frac{e^{iz} - e^{-iz}}{2i}\right)^2 = -\frac{1}{4}\left(e^{2iz} - 2 + e^{-2iz}\right),$

$I = \int \frac{\sin^2 x}{x^2}\,dx = -\frac{1}{4}\left[\oint \frac{e^{2iz}}{z^2}\,dz - \oint \frac{2}{z^2}\,dz + \oint \frac{e^{-2iz}}{z^2}\,dz\right].$

All three integrals have poles at z = 0. If we indent the contour underneath the origin, then since the function is bounded near there, the limit as r → 0 leaves the original integral unchanged (above left). The first integral must be closed in the upper half-plane, to keep the exponential small. The second integral can be closed in either half-plane, since it ~ 1/z². The third integral must be closed in the lower half-plane, again to keep the exponential small (above right). Note that all three contours must use an indentation that preserves the value of the original integral. An easy way to insure this is to use the same indentation on all three.

Now the third integral encloses no poles, so is zero. The 2nd integral, by inspection of its Laurent series, has a residue of zero, so is also zero. Only the first integral contributes. By expanding the exponential in a Taylor series, and dividing by z², we find its residue is 2i. Using the residue theorem, we have:

$I = \int_{-\infty}^{\infty} \frac{\sin^2 x}{x^2}\,dx = -\frac{1}{4}\,(2\pi i)(2i) = \pi.$
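A rough numerical check of this result (np.sinc(t) is sin(πt)/(πt), so np.sinc(x/π) gives sin(x)/x):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: np.sinc(x/np.pi)**2      # sin^2(x)/x^2
val, err = quad(integrand, 0, np.inf, limit=500)
print(2*val, np.pi)                            # both ≈ 3.14159
```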
Example: Evaluate

$I = \int_0^{\infty} \frac{\cos(ax) - \cos(bx)}{x^2}\,dx. \quad\text{[B\&C p?? Q1]}$

This innocent looking problem has a number of funky aspects:

- The integrand is two terms. Separately, each term diverges. Together, they converge.
- The integrand is even, so if we choose a contour that includes the whole real line, the contour integral includes twice the integral we seek (twice I).
- The integrand has no poles. How can we use any residue theorems if there are no poles? Amazingly, we can create a useful pole.
- A typical contour includes an arc at infinity, but cos(z) is ill-behaved for z far off the real-axis. How can we tame it?
- We will see that this integral leads to the indented contour theorem, which can only be applied to simple poles, i.e., first order poles (unlike the residue theorem, which applies to all poles).

Each of these funky features is important, and each arises in practical real-world integrals. Let us consider each funkiness in turn.

1. The integrand is two terms. Separately, each term diverges. Together, they converge.

Near zero, cos(x) ≈ 1. Therefore, the zero endpoint of either term of the integral looks like

$\int_0^{\text{anywhere}} \frac{\cos ax}{x^2}\,dx \;\approx\; \int_0^{\text{anywhere}} \frac{1}{x^2}\,dx = -\frac{1}{x}\bigg|_0^{\text{anywhere}} \to \infty.$

Thus each term, separately, diverges. However, the difference is finite. We see this by power series expanding cos(x):

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots \quad\Rightarrow\quad \cos(ax) - \cos(bx) = \frac{(b^2 - a^2)\,x^2}{2} + O(x^4),$

$\frac{\cos(ax) - \cos(bx)}{x^2} = \frac{b^2 - a^2}{2} + O(x^2),$

which is to say, $\int_0^{\text{anywhere}} \frac{\cos(ax) - \cos(bx)}{x^2}\,dx$ is finite.

2. The integrand is even, so if we choose a contour that includes the whole real line, the contour integral includes twice the integral we seek (twice I).

Perhaps the most common integration contour (below left) covers the real line, and an infinitely distant arc from +∞ back to −∞. When our real integral (I in this case) is only from 0 to ∞, the contour integral includes more than we want on the real axis. If our integrand is even, the contour integral includes twice the integral we seek (twice I). This may seem trivial, but the point to notice is that when integrating from −∞ to 0, dx is still positive (below middle).

[Figure: (Left) A common contour. (Right) An even function has integral over the real line twice that of 0 to ∞.]

Note that if the integrand is odd (below left), choosing this contour cancels out the original (real) integral from our contour integral, and the contour is of no use. Or if the integrand has no even/odd symmetry (below middle), then this contour tells us nothing about our desired integral. In these cases, a different contour may work, for example, one which only includes the positive real axis (below right).

[Figure: (Left) An odd function has zero integral over the real line. (Middle) An asymmetric function has unknown integral over the real line. (Right) A contour containing only the desired real integral.]

3. The integrand has no poles. How can we use any residue theorems if there are no poles? Amazingly, we can create a useful pole.

This is the funkiest aspect of this problem, but illustrates a standard tool. We are given a real-valued integral with no poles. Contour integration is usually useless without a pole, and a residue, to help us evaluate the contour integral. Our integrand contains cos(x), and that is related to exp(ix). We could try replacing cosines with exponentials,

$\cos z = \frac{e^{iz} + e^{-iz}}{2} \quad\text{(does no good)},$

but this only rearranges the algebra; fundamentally, it buys us nothing. The trick here is to notice that we can often add a made-up imaginary term to our original integrand, perform a contour integration, and then simply take the real part of our result:

$\text{Given } I = \int_a^b g(x)\,dx, \quad\text{let } f(z) \equiv g(z) + i\,h(z). \quad\text{Then } I = \operatorname{Re}\left\{\int_a^b f(z)\,dz\right\}.$

For this trick to work, ih(z) must have no real-valued contribution over the contour we choose, so it doesn't mess up the integral we seek. Often, we satisfy this requirement by choosing ih(z) to be purely imaginary on the real axis, and having zero contribution elsewhere on the contour.
Given an integrand containing cos(x), as in our example, a natural choice for ih(z) is i sin(z), because then we can write the new integrand as a simple exponential:

$\cos x \;\to\; f(z) = \cos z + i\sin z = e^{iz}.$

In our example, the corresponding substitution yields

$I = \int_0^{\infty} \frac{\cos ax - \cos bx}{x^2}\,dx \quad\to\quad I = \operatorname{Re}\left\{\int_0^{\infty} \frac{e^{iax} - e^{ibx}}{x^2}\,dx\right\}.$

Examining this substitution more closely, we find a wonderful consequence: this substitution introduced a pole! Recall that

$\sin z = z - \frac{z^3}{3!} + \dots \quad\Rightarrow\quad \frac{i\sin z}{z^2} = \frac{i}{z} - \frac{i z}{3!} + \dots$

We now have a simple pole at z = 0, with residue i. By choosing to add an imaginary term to the integrand, we now have a pole that we can work with to evaluate a contour integral! It's like magic. In our example integral, our residue is:

$\frac{i\sin(az) - i\sin(bz)}{z^2} = \frac{i(a - b)}{z} + \dots, \quad\text{and the residue is } i(a - b).$

Note that if our original integrand contained sin(x) instead of cos(x), we would have made a similar substitution, but taken the imaginary part of the result:

$\text{Given } I = \int_a^b \sin(x)\,dx, \quad\text{let } f(z) \equiv \cos(z) + i\sin(z). \quad\text{Then } I = \operatorname{Im}\left\{\int_a^b f(z)\,dz\right\}.$

4. A typical contour includes an arc at infinity, but cos(z) is ill-behaved for z far off the real-axis. How can we tame it?

This is related to the previous funkiness. We're used to thinking of cos(x) as a nice, bounded, well-behaved function, but this is only true when x is real. When integrating cos(z) over a contour, we must remember that cos(z) blows up rapidly off the real axis. In fact, cos(z) ~ exp(|Im{z}|), so it blows up extremely quickly off the real axis. If we're going to evaluate a contour integral with cos(z) in it, we must cancel its divergence off the real axis. There is only one function which can exactly cancel the divergence of cos(z), and that is ±i sin(z). The plus sign cancels the divergence above the real axis; the minus sign cancels it below. There is nothing that cancels it everywhere. We show this cancellation simply:

$\text{Let } z = x + iy. \quad \cos z + i\sin z = e^{iz} = e^{i(x + iy)} = e^{ix}\,e^{-y}, \quad\text{and}\quad \left|e^{ix}\,e^{-y}\right| = e^{-y}.$

For z above the real axis, this shrinks rapidly. Recall that in the previous step, we added i sin(x) to our integrand to give us a pole to work with. We see now that we also need the same additional term to tame the divergence of cos(z) off the real axis. For the contour we've chosen, no other term will work.

5. We will see that this integral leads to the indented contour theorem, which can only be applied to simple poles, i.e., first order poles (unlike the residue theorem, which applies to all poles).

We're now at the final step. We have a pole at z = 0, but it is right on our contour, not inside it. If the pole were inside the contour, we would use the residue theorem to evaluate the contour integral, and from there, we'd find the integral on the real axis, cut it in half, and take the real part. That is the integral we seek. But the pole is not inside the contour; it is on the contour. The indented contour theorem allows us to work with poles on the contour. We explain the theorem geometrically in the next section, but state it briefly here:

Indented contour theorem: For a simple pole, the integral of an arc of tiny radius around the pole, of angle θ, equals (iθ)(residue):

$\lim_{\rho \to 0} \int_{\text{arc}} f(z)\,dz = i\theta \times (\text{residue}).$

[Figure: (Left) A tiny arc around a simple pole. (Right) A magnified view; we let ρ → 0.]
Note that if we encircle the pole completely, θ = 2π, and we have the special case of the residue theorem for a simple pole:

$\oint f(z)\,dz = 2\pi i \times (\text{residue}).$

However, the residue theorem is true for all poles, not just simple ones (see The Residue Theorem earlier).

Putting it all together: We now solve the original integral using all of the above methods. First, we add i sin(z) to the integrand, which is equivalent to replacing cos(z) with exp(iz):

$I = \int_0^{\infty} \frac{\cos ax - \cos bx}{x^2}\,dx \quad\to\quad I = \operatorname{Re}\{J\}, \quad\text{where}\quad J \equiv \int_0^{\infty} \frac{e^{iax} - e^{ibx}}{x^2}\,dx.$

We choose the contour shown below left, with R → ∞ and ρ → 0.

[Figure: (Left) Contour along the real axis, indented over the origin by a small arc C_ρ, closed by a large arc C_R. (Right) An alternate contour: the positive real axis, a quarter arc C_R, down the imaginary axis C₂, and a small quarter arc C_ρ.]

There are no poles enclosed, so the contour integral is zero. The contour includes twice the desired integral, so

(1)  $\text{Define } f(z) \equiv \frac{e^{iaz} - e^{ibz}}{z^2}; \quad\text{then}\quad \int_{C_R} f(z)\,dz + \int_{C_\rho} f(z)\,dz + 2J = 0.$

For C_R, |f(z)| < 1/R², so as R → ∞, the integral goes to 0. For C_ρ, the residue is i(a − b), and the arc is π radians in the negative direction, so the indented contour theorem says:

$\lim_{\rho \to 0} \int_{C_\rho} f(z)\,dz = (-i\pi)\,i(a - b) = \pi(a - b).$

Plugging into (1), we finally get

$2J + \pi(a - b) = 0 \quad\Rightarrow\quad I = \operatorname{Re}\{J\} = \frac{\pi(b - a)}{2}.$

In this example, the contour integral J happened to be real, so taking I = Re{J} is trivial, but in general, there's no reason why J must be real. It could well be complex, and we would need to take the real part of it. To illustrate this and more, we evaluate the integral again, now with the alternate contour shown above right. Again, there are no poles enclosed, so the contour integral is zero. Again, the integral over C_R = 0. We then have:

$J + \int_{C_2} f(z)\,dz + \int_{C_\rho} f(z)\,dz = 0, \quad\text{and}\quad \lim_{\rho \to 0} \int_{C_\rho} f(z)\,dz = \left(-\frac{i\pi}{2}\right) i(a - b) = \frac{\pi(a - b)}{2}.$

The integral over C₂ is down the imaginary axis:

$\text{Let } z = iy,\; dz = i\,dy; \quad e^{iaz} = e^{-ay},\; e^{ibz} = e^{-by};$

$\int_{C_2} f(z)\,dz = \int_{\infty}^{0} \frac{e^{-ay} - e^{-by}}{(iy)^2}\, i\,dy = i\int_0^{\infty} \frac{e^{-ay} - e^{-by}}{y^2}\,dy.$

We don't know what this integral is, but we don't care! In fact, it is divergent, but we see that it is purely imaginary, so will contribute only to the imaginary part of J. But we seek I = Re{J}, and therefore
For example, for the sum above, 2 1 1 n S n = = , we simply define ( ) 2 2 1 1 ( ) cot , and its residues are Res ( ) , 0 n f z z f z n z n t t = = = . [In general, to find 1 ( ) n s n , define ( ) ( ) ( ) cot , and its residues are Res ( ) ( ) z n f z s z z f z s n t t = = = ( . However, now you may have to deal with the residues for n s 0.] Continuing our example, now we need the residue at n = 0. Since cot(z) has a simple pole at zero, cot(z)/z 2 has a 3 rd order pole at zero. We optimistically try tedious brute force for an m th order pole with m = 3, only to find that it fails: physics.ucsd.edu/~emichels Funky Mathematical Physics Concepts emichels at physics.ucsd.edu 2 2 3 2 2 2 2 0 0 0 2 2 2 0 0 0 2 cot 1 cot 1 Res lim lim cot 2! 2! 1 sin 2 cos sin 2 lim cot csc lim lim 2 2 2 sin sin : l 2 z z z z z z z d z d z z z z dz z dz z z d d z z z d z z z dz dz dz z z U VdU UdV Use d V V t t t t t t t t t t t t t t t t t t t t = ( ( = = ( ( ( ( ( ( = = = ( ( ( = = ( ) ( ) ( ) ( ) ( ) 2 4 0 3 0 0 1 sin cos 2 sin 2 2 sin cos 2 im sin 1 sin cos 2 sin 2 2 cos 2 lim 2 sin ' ' : 1 cos cos2 sin 2 sin 2 1 cos 2 2 cos sin 2 2 lim 2 z z z z z z z z z z z z z z z z Use L hopital s rule z z z z z z z z t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t | | | \ . | | | \ . = | + = ( ) ( ) 2 2 2 2 2 0 2 sin 3 sin cos 1 cos cos 2 1 sin 2 sin 2 1 2 sin 2 sin 2 lim 2 3 sin cos z z z z z z z z z z z z z t t t t t t t t t t t t t t t t t t t | | \ . | | + | \ . = At this point, we give up on brute force, because we see from the denominator that well have to use LHopitals rule twice more to eliminate the zero there, and the derivatives wil l get untenably complicated. But in 2 lines, we can find the a 1 term of the Laurent series from the series expansions of sin and cos. The z 1 coefficient of cot(z) becomes the z -1 coefficient of f(z) = cot(z)/z 2 : ( ) ( ) ( ) 2 2 2 2 2 3 2 2 2 0 cos 1 / 2 ... 1 1 / 2 1 1 1 cot 1 / 2 1 / 6 1 / 3 sin 3 / 6 ... 1 / 6 1 cot cot Res 3 3 z z z z z z z z z z z z z z z z z z z z z z t t t t t t = + | | | | | | = ~ = ~ + ~ = | | | + \ . \ . \ . ~ = Now we take a contour integral over a circle centered at the origin: (no good, because cot(tz) blows up every integer ! ??) physics.ucsd.edu/~emichels Funky Mathematical Physics Concepts emichels at physics.ucsd.edu real imaginary I C As R , I C 0. Hence 2 0 0 0 2 2 2 2 1 1 1 1 1 1 1 1 0 2 2 0, 2 6 C n n n n K I i K K n n n n t t = = = = | | = = + + + = = = | | \ . Multi-valued Functions Many functions are multi-valued (despite the apparent oxymoron), i.e. for a single point in the domain, the function can have multiple values. An obvious example is a square-root function: given a complex number, there are two complex square roots of it. Thus, the square root function is two-valued. Another example is arc-tangent: given any complex number, there are an infinite number of complex numbers whose tangent is the given complex number. [picture??] We refer now to nice functions, which are locally (i.e., within any small finite region) analytic, but multi-valued. If youre not careful, such multi-valuedness can violate the assumptions of analyticity, by introducing discontinuities in the function. Without analyticity, all our developments break down: no contour integrals, no sums of series. But, you can avoid such a breakdown, and preserve the tools weve developed, by treating multi-valued functions in a slightly special way to insure continuity, and therefore analyticity. 
Multi-valued Functions

Many functions are multi-valued (despite the apparent oxymoron), i.e. for a single point in the domain, the function can have multiple values. An obvious example is a square-root function: given a complex number, there are two complex square roots of it. Thus, the square root function is two-valued. Another example is arc-tangent: given any complex number, there are an infinite number of complex numbers whose tangent is the given complex number. [picture??]

We refer now to "nice" functions, which are locally (i.e., within any small finite region) analytic, but multi-valued. If you're not careful, such multi-valuedness can violate the assumptions of analyticity, by introducing discontinuities in the function. Without analyticity, all our developments break down: no contour integrals, no sums of series. But, you can avoid such a breakdown, and preserve the tools we've developed, by treating multi-valued functions in a slightly special way to insure continuity, and therefore analyticity.

A regular function, or region, is analytic and single valued. (You can get a regular function from a multi-valued one by choosing a Riemann sheet. More below.)

A branch point is a point in the domain of a function f(z) with this property: when you traverse a closed path around the branch point, following continuous values of f(z), f(z) has a different value at the end point of the path than at the beginning point, even though the beginning and end point are the same point in the domain. Example TBS: square root around the origin. Sometimes branch points are also singularities.

A branch cut is an arbitrary (possibly curved) path connecting branch points, or running from a branch point to infinity (connecting the branch point to infinity). If you now evaluate integrals of contours that never cross the branch cuts, you insure that the function remains continuous (and thus analytic) over the domain of the integral. When the contour of integration is entirely in the domain of analyticity of the integrand, ordinary contour integration, and the residue theorem, are valid. This solves the problem of integrating across discontinuities. Branch cuts are like fences in the domain of the function: your contour integral can't cross them. Note that you're free to choose your branch cuts wherever you like, so long as the function remains continuous when you don't cross the branch cuts. Connecting branch points is one way to insure this.

A Riemann sheet is the complex plane plus a choice of branch cuts, and a choice of branch. This defines a domain on which a function is regular.

A Riemann surface is a continuous joining of Riemann sheets, gluing the edges together. This looks like sheets layered on top of each other, and each sheet represents one of the multiple values a multi-valued analytic function may have. TBS: consider $\sqrt{(z - a)(z - b)}$.

[Figures: examples of branch cuts in the complex plane.]
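A concrete way to see a branch cut: numpy's complex square root places its cut along the negative real axis, so the function jumps discontinuously as you cross it. This is just an illustration of the idea, using numpy's particular (conventional) choice of cut:

```python
import numpy as np

# sqrt is two-valued; crossing the cut on the negative real axis flips the branch
eps = 1e-12
print(np.sqrt(complex(-4, +eps)))   # ≈ +2j  (just above the cut)
print(np.sqrt(complex(-4, -eps)))   # ≈ -2j  (just below the cut)
```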
Conceptual Linear Algebra

Instead of lots of summation signs, we describe linear algebra concepts, visualizations, and ways to think about linear operations as algebraic operations. This allows fast understanding of linear algebra methods that is extremely helpful in almost all areas of physics. Tensors rely heavily on linear algebra methods, so this section is a good warm-up for tensors. Matrices and linear algebra are also critical for quantum mechanics.

Caution: In this section, "vector" means a column or row of numbers. In other sections, "vector" has a more general meaning. In this section, we use bold capitals for matrices (A), and bold lower-case for vectors (a).

Matrix Multiplication

It is often helpful to view a matrix as a horizontal concatenation of column-vectors. You can think of it as a row-vector, where each element of the row-vector is itself a column vector. Equally valid, you can think of a matrix as a vertical concatenation of row-vectors, like a column-vector where each element is itself a row-vector:

$\mathbf{A} = \begin{pmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{pmatrix} \qquad\text{or}\qquad \mathbf{A} = \begin{pmatrix} \mathbf{d} \\ \mathbf{e} \\ \mathbf{f} \end{pmatrix}.$

Matrix multiplication is defined to be the operation of linear transformation, e.g., from one set of coordinates to another. The following properties follow from the standard definition of matrix multiplication:

Matrix times a vector: A matrix B times a column vector v is a weighted sum of the columns of B:

$\mathbf{B}\mathbf{v} = \begin{pmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33} \end{pmatrix}\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} = v_x\begin{pmatrix} B_{11} \\ B_{21} \\ B_{31} \end{pmatrix} + v_y\begin{pmatrix} B_{12} \\ B_{22} \\ B_{32} \end{pmatrix} + v_z\begin{pmatrix} B_{13} \\ B_{23} \\ B_{33} \end{pmatrix}.$

We can visualize this by laying the vector on its side above the columns of the matrix, multiplying each matrix-column by the vector component, and summing the resulting vectors. The columns of B are the vectors which are weighted by each of the input vector components, v_j.

Another important way of conceptualizing a matrix times a vector: the resultant vector is a column of dot-products. The i-th element of the result is the dot-product of the given vector, v, with the i-th row of B. Writing B as a column of row-vectors:

$\mathbf{B} = \begin{pmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \mathbf{r}_3 \end{pmatrix} \quad\Rightarrow\quad \mathbf{B}\mathbf{v} = \begin{pmatrix} \mathbf{r}_1 \cdot \mathbf{v} \\ \mathbf{r}_2 \cdot \mathbf{v} \\ \mathbf{r}_3 \cdot \mathbf{v} \end{pmatrix}.$

This view derives from the one above, where we lay the vector on its side above the matrix, but now consider the effect on each row separately: it is exactly that of a dot-product. In linear algebra, even if the matrices are complex, we do not conjugate the left vector in these dot-products. If they need conjugation, the application must conjugate them separately from the matrix multiplication, i.e. during the construction of the matrix. We use this dot-product concept later when we consider a change of basis.

Matrix times a matrix: Multiplying a matrix B times another matrix C is defined as multiplying each column of C by the matrix B. Therefore, by definition, matrix multiplication distributes to the right across the columns:

$\text{Let } \mathbf{C} = \begin{pmatrix} \mathbf{x} & \mathbf{y} & \mathbf{z} \end{pmatrix}; \quad\text{then}\quad \mathbf{BC} = \mathbf{B}\begin{pmatrix} \mathbf{x} & \mathbf{y} & \mathbf{z} \end{pmatrix} = \begin{pmatrix} \mathbf{Bx} & \mathbf{By} & \mathbf{Bz} \end{pmatrix}.$

[Matrix multiplication also distributes to the left across the rows, but we don't use that much.]
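Both views of matrix-times-vector are easy to verify directly; a minimal sketch with arbitrary numbers:

```python
import numpy as np

B = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
v = np.array([2., -1., 3.])

# View 1: B @ v as a weighted sum of the columns of B
cols = v[0]*B[:, 0] + v[1]*B[:, 1] + v[2]*B[:, 2]
# View 2: B @ v as a stack of dot-products with the rows of B
rows = np.array([B[0] @ v, B[1] @ v, B[2] @ v])
print(B @ v, cols, rows)    # all three agree
```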
Determinants

This section assumes you've seen matrices and determinants, but probably didn't understand the reasons why they work. The determinant operation on a matrix produces a scalar. It is the only operation (up to a constant factor) which is (1) linear in each row and each column of the matrix; and (2) antisymmetric under exchange of any two rows or any two columns. The above two rules, linearity and antisymmetry, allow determinants to help solve simultaneous linear equations, as we show later under Cramer's Rule. In more detail:

1. The determinant is linear in each column-vector (and row-vector). This means that multiplying any column (or row) by a scalar multiplies the determinant by that scalar. E.g.,

$\det\begin{vmatrix} k\mathbf{a} & \mathbf{b} & \mathbf{c} \end{vmatrix} = k\det\begin{vmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{vmatrix}; \qquad \det\begin{vmatrix} \mathbf{a} + \mathbf{d} & \mathbf{b} & \mathbf{c} \end{vmatrix} = \det\begin{vmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{vmatrix} + \det\begin{vmatrix} \mathbf{d} & \mathbf{b} & \mathbf{c} \end{vmatrix}.$

2. The determinant is anti-symmetric with respect to any two column-vectors (or row-vectors). This means swapping any two columns (or rows) of the matrix negates its determinant.

The above properties of determinants imply some others:

3. Expansion by minors/cofactors (see below), whose derivation proves the determinant operator is unique (up to a constant factor).

4. The determinant of a matrix with any two columns equal (or proportional) is zero. (From antisymmetry: swap the two equal columns; the determinant must negate, but its negative now equals itself. Hence, the determinant must be zero.)

$\det\begin{vmatrix} \mathbf{b} & \mathbf{b} & \mathbf{c} \end{vmatrix} = -\det\begin{vmatrix} \mathbf{b} & \mathbf{b} & \mathbf{c} \end{vmatrix} \quad\Rightarrow\quad \det\begin{vmatrix} \mathbf{b} & \mathbf{b} & \mathbf{c} \end{vmatrix} = 0.$

5. det|AB| = det|A| det|B|. This is crucially important. It also fixes the overall constant factor of the determinant, so that the determinant (with this property) is a completely unique operator.

6. Adding a multiple of any column (row) to any other column (row) does not change the determinant:

$\det\begin{vmatrix} \mathbf{a} + k\mathbf{b} & \mathbf{b} & \mathbf{c} \end{vmatrix} = \det\begin{vmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{vmatrix} + k\det\begin{vmatrix} \mathbf{b} & \mathbf{b} & \mathbf{c} \end{vmatrix} = \det\begin{vmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{vmatrix}.$

7. det|A + B| ≠ det|A| + det|B|. The determinant operator is not distributive over matrix addition.

8. det|kA| = kⁿ det|A|.

The ij-th minor, M_ij, of an n×n matrix (A ≡ A_ab) is the product A_ij times the determinant of the (n−1)×(n−1) matrix formed by crossing out the i-th row and j-th column. A cofactor is just a minor with a plus or minus sign affixed:

$C_{ij} = (-1)^{i+j} M_{ij} = (-1)^{i+j} A_{ij} \det\left[\mathbf{A}\ \text{without the } i\text{-th row and } j\text{-th column}\right].$

Cramer's Rule

It's amazing how many textbooks describe Cramer's rule, and how few explain or derive it. I spent years looking for this, and finally found it in [Arf ch 3]. Cramer's rule is a turnkey method for solving simultaneous linear equations. It is horribly inefficient, and virtually worthless above 3×3; however, it does have important theoretical implications. Cramer's rule solves for n equations in n unknowns:

$\text{Given } \mathbf{Ax} = \mathbf{b}, \quad\text{where } \mathbf{A} \text{ is a coefficient matrix, } \mathbf{x} \text{ a vector of unknowns } x_i \text{, and } \mathbf{b} \text{ a vector of constants } b_i.$

To solve for the i-th unknown x_i, we replace the i-th column of A with the constant vector b, take the determinant, and divide by the determinant of A. Mathematically:

$\text{Let } \mathbf{A} = \begin{pmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \dots & \mathbf{a}_n \end{pmatrix}, \text{ where } \mathbf{a}_i \text{ is the } i\text{-th column of } \mathbf{A}. \quad\text{Then}\quad x_i = \frac{\det\begin{vmatrix} \mathbf{a}_1 & \dots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \dots & \mathbf{a}_n \end{vmatrix}}{\det|\mathbf{A}|}.$

This seems pretty bizarre, and one has to ask, why does this work? It's quite simple, if we recall the properties of determinants. Let's solve for x₁, noting that all other unknowns can be solved analogously. Start by simply multiplying x₁ by det|A|:

$x_1\det|\mathbf{A}| = \det\begin{vmatrix} x_1\mathbf{a}_1 & \mathbf{a}_2 & \dots & \mathbf{a}_n \end{vmatrix}$  (from linearity of det)

$= \det\begin{vmatrix} x_1\mathbf{a}_1 + x_2\mathbf{a}_2 & \mathbf{a}_2 & \dots & \mathbf{a}_n \end{vmatrix}$  (adding a multiple of any column to another doesn't change the determinant)

$= \det\begin{vmatrix} x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \dots + x_n\mathbf{a}_n & \mathbf{a}_2 & \dots & \mathbf{a}_n \end{vmatrix}$  (ditto, n − 2 more times)

$= \det\begin{vmatrix} \mathbf{b} & \mathbf{a}_2 & \dots & \mathbf{a}_n \end{vmatrix}$  (rewriting the first column, since Ax = b)

$\Rightarrow\quad x_1 = \frac{\det\begin{vmatrix} \mathbf{b} & \mathbf{a}_2 & \dots & \mathbf{a}_n \end{vmatrix}}{\det|\mathbf{A}|}.$
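A direct sketch of Cramer's rule, cross-checked against a standard solver (the matrix and constants are arbitrary illustrative numbers):

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 3., 2.],
              [1., 0., 0.]])
b = np.array([4., 5., 6.])

x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                 # replace the i-th column with b
    x[i] = np.linalg.det(Ai) / np.linalg.det(A)
print(x, np.linalg.solve(A, b))  # same solution
```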
Area and Volume as a Determinant

[Figure: parallelograms spanned by (a, 0), (c, d), and by (a, b), (c, d).]

Determining areas of regions defined by vectors is crucial to geometric physics in many areas. It is the essence of the Jacobian matrix used in variable transformations of multiple integrals. What is the area of the parallelogram defined by two vectors? This is the archetypal area for generalized (oblique, non-normal) coordinates. We will proceed in a series of steps, gradually becoming more general.

First, consider that the first vector is horizontal (above left). The area is simply base × height: A = ad. We can obviously write this as a determinant of the matrix of column vectors, though it is as-yet contrived:

$A = \det\begin{vmatrix} a & c \\ 0 & d \end{vmatrix} = ad - (0)c = ad.$

For a general parallelogram (above right), we can take the big rectangle and subtract the smaller rectangles and triangles, by brute force:

$A = (a + c)(b + d) - 2bc - cd - ab = ab + ad + cb + cd - 2bc - cd - ab = ad - bc = \det\begin{vmatrix} a & c \\ b & d \end{vmatrix}.$

This is simple enough in 2-D, but is incomprehensibly complicated in higher dimensions. We can achieve the same result more generally, in a way that allows for extension to higher dimensions by induction. Start again with the diagram above left, where the first vector is horizontal. We can rotate that to arrive at any arbitrary pair of vectors, thus removing the horizontal restriction:

$\text{Let } \mathbf{R} \text{ be the rotation matrix. Then the rotated vectors are } \mathbf{R}\begin{pmatrix} a \\ 0 \end{pmatrix} \text{ and } \mathbf{R}\begin{pmatrix} c \\ d \end{pmatrix}, \text{ and}$

$\det\left|\begin{pmatrix} \mathbf{R}\begin{pmatrix} a \\ 0 \end{pmatrix} & \mathbf{R}\begin{pmatrix} c \\ d \end{pmatrix}\end{pmatrix}\right| = \det\left|\mathbf{R}\begin{pmatrix} a & c \\ 0 & d \end{pmatrix}\right| = \det|\mathbf{R}|\det\begin{vmatrix} a & c \\ 0 & d \end{vmatrix} = \det\begin{vmatrix} a & c \\ 0 & d \end{vmatrix}.$

The final equality is because rotation matrices are orthogonal, with det = 1. Thus the determinant of arbitrary vectors defining arbitrary parallelograms equals the determinant of the vectors spanning the parallelogram rotated to have one side horizontal, which equals the area of the parallelogram.

What about the sign? If we reverse the two vectors, the area comes out negative! That's ok, because in differential geometry, 2-D areas are signed: positive if we travel counter-clockwise from the first vector to the 2nd, and negative if we travel clockwise. The above areas are positive.

In 3-D, the signed volume of the parallelepiped defined by 3 vectors a, b, and c is the determinant of the matrix formed by the vectors as columns (positive if abc form a right-handed set, negative if abc are a left-handed set). We show this with rotation matrices, similar to the 2-D case. First, assume that the parallelogram defined by b and c lies in the x-y plane (b_z = c_z = 0). Then the volume is simply (area of the base) × height:

$V = (\text{area of base})(\text{height}) = \det\begin{vmatrix} b_x & c_x \\ b_y & c_y \end{vmatrix}\,a_z = \det\begin{vmatrix} a_x & b_x & c_x \\ a_y & b_y & c_y \\ a_z & 0 & 0 \end{vmatrix},$

where the last equality is from expansion by cofactors along the bottom row. But now, as before, we can rotate such a parallelepiped in 3 dimensions to get any arbitrary parallelepiped. As before, the rotation matrix is orthogonal (det = 1), and does not change the determinant of the matrix of column vectors.

This procedure generalizes to arbitrary dimensions: the signed hyper-volume of a parallelepiped defined by n vectors in n-D space is the determinant of the matrix of column vectors. The sign is positive if the 3-D submanifold spanned by each contiguous subset of 3 vectors (v₁v₂v₃, v₂v₃v₄, v₃v₄v₅, ...) is right-handed, and negated for each subset of 3 vectors that is left-handed.
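A two-line check of the signed area and volume formulas, with arbitrary example vectors:

```python
import numpy as np

# Signed area of the parallelogram spanned by (3, 1) and (1, 2): ad - bc = 5
print(np.linalg.det(np.column_stack([[3., 1.], [1., 2.]])))   # 5.0

# Signed volume of the parallelepiped spanned by three 3-D vectors (as columns)
M = np.column_stack([[1., 0., 0.], [1., 1., 0.], [0., 1., 2.]])
print(np.linalg.det(M))                                       # 2.0
```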
The Jacobian Determinant and Change of Variables

How do we change multiple variables in a multiple integral? Given

$\iiint f(a, b, c)\,da\,db\,dc \quad\text{and the change of variables to } u, v, w: \quad a = a(u,v,w),\; b = b(u,v,w),\; c = c(u,v,w),$

the simplistic

$\iiint f(a, b, c)\,da\,db\,dc \;\to\; \iiint f\big(a(u,v,w),\,b(u,v,w),\,c(u,v,w)\big)\,du\,dv\,dw \quad(\text{wrong!})$

fails, because the volume du dv dw associated with each point of f(·) is different than the volume da db dc in the original integral.

[Figure: a new-coordinate volume element (du dv dw), and its corresponding old-coordinate volume element (da db dc).]

The new volume element is a rectangular parallelepiped. The old-coordinate parallelepiped has sides straight to first order in the original integration variables. In the diagram above, we see that the volume (du dv dw) is smaller than the old-coordinate volume (da db dc). Note that "volume" is a relative measure of volume in coordinate space; it has nothing to do with a metric on the space, and "distance" need not even be defined. There is a concept of relative volume in any space, even if there is no definition of distance. Relative volume is defined as products of coordinate differentials.

The integrand is constant (to first order in the integration variables) over the whole volume element. Without some correction, the weighting of f(·) throughout the new-coordinate domain is different than the original integral, and so the integrated sum (i.e., the integral) is different. We correct this by putting in the original-coordinate differential volume (da db dc) as a function of the new differential coordinates, du, dv, dw. Of course, this function varies throughout the domain, so we can write

$\iiint f(a, b, c)\,da\,db\,dc \;\to\; \iiint f\big(a(u,v,w),\,b(u,v,w),\,c(u,v,w)\big)\,V(u,v,w)\,du\,dv\,dw,$

where V(u, v, w) takes du dv dw → da db dc. To find V(·), consider how the a-b-c space displacement da·$\hat{\mathbf{a}}$ is created from the new u-v-w space. It has contributions from displacements in all 3 new dimensions, u, v, and w:

$da = \frac{\partial a}{\partial u}\,du + \frac{\partial a}{\partial v}\,dv + \frac{\partial a}{\partial w}\,dw; \quad\text{similarly}\quad db = \frac{\partial b}{\partial u}\,du + \frac{\partial b}{\partial v}\,dv + \frac{\partial b}{\partial w}\,dw, \quad dc = \frac{\partial c}{\partial u}\,du + \frac{\partial c}{\partial v}\,dv + \frac{\partial c}{\partial w}\,dw.$

The volume defined by the 3 vectors $du\,\hat{\mathbf{u}},\; dv\,\hat{\mathbf{v}},\; dw\,\hat{\mathbf{w}}$ maps to the volume spanned by the corresponding 3 vectors in the original a-b-c space. The a-b-c space volume is given by the determinant of the components of the vectors da, db, and dc (written as rows below, to match the equations above):

$\text{volume} = \det\begin{vmatrix} \frac{\partial a}{\partial u}\,du & \frac{\partial a}{\partial v}\,dv & \frac{\partial a}{\partial w}\,dw \\ \frac{\partial b}{\partial u}\,du & \frac{\partial b}{\partial v}\,dv & \frac{\partial b}{\partial w}\,dw \\ \frac{\partial c}{\partial u}\,du & \frac{\partial c}{\partial v}\,dv & \frac{\partial c}{\partial w}\,dw \end{vmatrix} = \det\begin{vmatrix} \frac{\partial a}{\partial u} & \frac{\partial a}{\partial v} & \frac{\partial a}{\partial w} \\ \frac{\partial b}{\partial u} & \frac{\partial b}{\partial v} & \frac{\partial b}{\partial w} \\ \frac{\partial c}{\partial u} & \frac{\partial c}{\partial v} & \frac{\partial c}{\partial w} \end{vmatrix}\,du\,dv\,dw,$

where the last equality follows from linearity of the determinant. Note that all the partial derivatives are functions of u, v, and w. Hence,

$V(u, v, w) = \det\begin{vmatrix} \partial a/\partial u & \partial a/\partial v & \partial a/\partial w \\ \partial b/\partial u & \partial b/\partial v & \partial b/\partial w \\ \partial c/\partial u & \partial c/\partial v & \partial c/\partial w \end{vmatrix} \equiv J(u, v, w) \quad(\text{the Jacobian}),$

$\iiint f(a, b, c)\,da\,db\,dc \;\to\; \iiint f\big(a(u,v,w),\,b(u,v,w),\,c(u,v,w)\big)\,J(u,v,w)\,du\,dv\,dw. \quad\text{QED.}$
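As a familiar 2-D instance of this machinery, the Jacobian for polar coordinates can be computed symbolically, recovering the well-known da db = r dr dθ:

```python
import sympy as sp

# Change of variables a = r cosθ, b = r sinθ (the standard polar example)
r, th = sp.symbols('r theta', positive=True)
a, b = r*sp.cos(th), r*sp.sin(th)
J = sp.Matrix([[sp.diff(a, r), sp.diff(a, th)],
               [sp.diff(b, r), sp.diff(b, th)]])
print(sp.simplify(J.det()))   # r
```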
Expansion by Cofactors

Let us construct the determinant operator from its two defining properties: linearity, and antisymmetry. First, we'll define a linear operator; then we'll make it antisymmetric. [This section is optional, though instructive.]

We first construct an operator which is linear in the first column. For the determinant to be linear in the first column, it must be a sum of terms each containing exactly one factor from the first column:

$\mathbf{A} = \begin{pmatrix} A_{11} & A_{12} & \dots & A_{1n} \\ A_{21} & A_{22} & \dots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & A_{n2} & \dots & A_{nn} \end{pmatrix}. \quad\text{Let}\quad \det\mathbf{A} = A_{11}(\;\;\;) + A_{21}(\;\;\;) + \dots + A_{n1}(\;\;\;).$

To be linear in the first column, the parentheses above must have no factors from the first column (else they would be quadratic in some components). Now to also be linear in the 2nd column, all of the parentheses above must be linear in all the remaining columns. Therefore, to fill in the parentheses we need a linear operator on columns 2...n. But that is the same kind of operator we set out to make: a linear operator on columns 1..n. Recursion is clearly called for; therefore the parentheses should be filled in with more determinants:

$\det\mathbf{A} = A_{11}\det(\mathbf{M}_1) + A_{21}\det(\mathbf{M}_2) + \dots + A_{n1}\det(\mathbf{M}_n) \quad(\text{so far}).$

We now note that the determinant is linear both in the columns, and in the rows. This means that det M₁ must not have any factors from the first row or the first column of A. Hence, M₁ must be the submatrix of A with the first row and first column stricken out. Similarly, M₂ must be the submatrix of A with the 2nd row and first column stricken out. And so on, through M_n, which must be the submatrix of A with the n-th row and first column stricken out.

We now have an operator that is linear in all the rows and columns of A. So far, this operator is not unique. We could multiply each term in the operator by a constant, and still preserve linearity in all rows and columns:

$\det\mathbf{A} = k_1 A_{11}\det(\mathbf{M}_1) + k_2 A_{21}\det(\mathbf{M}_2) + \dots + k_n A_{n1}\det(\mathbf{M}_n).$

We choose these constants to provide the 2nd property of determinants: antisymmetry. The determinant is antisymmetric on interchange of any two rows. We start by considering swapping the first two rows: define A' ≡ (A with rows 1 and 2 swapped). Recall that M₁ strikes out the first row, and M₂ strikes out the 2nd row, so swapping row 1 with row 2 replaces the first two terms of the determinant:

$\det\mathbf{A}' = k_1 A_{21}\det(\mathbf{M}_1') + k_2 A_{11}\det(\mathbf{M}_2') + \dots$

But M₁' = M₂, and M₂' = M₁. So we have:

$\det\mathbf{A}' = k_1 A_{21}\det(\mathbf{M}_2) + k_2 A_{11}\det(\mathbf{M}_1) + \dots$

This last form is the same as det A, but with k₁ and k₂ swapped. To make our determinant antisymmetric, we must choose constants k₁ and k₂ such that terms 1 and 2 are antisymmetric on interchange of rows 1 and 2. This simply means that k₁ = −k₂. So far, the determinant is unique only up to an arbitrary factor, so we choose the simplest such constants: k₁ = 1, k₂ = −1.

For M₃ through M_n, swapping the first two rows of A swaps the first two rows of M₃ through M_n. Since M₃ through M_n appear inside determinant operators, and such operators are defined to be antisymmetric on interchange of rows, terms 3 through n also change sign on swapping the first two rows of A. Thus, all the terms 1 through n change sign on swapping rows 1 and 2, and det A' = −det A.

We are almost done. We have now a unique determinant operator, with k₁ = 1, k₂ = −1. We must determine k₃ through k_n. So consider swapping rows 1 and 3 of A, which must also negate our determinant: define A″ ≡ (A with rows 1 and 3 swapped).
Again, M₄ through M_n have rows 1 & 3 swapped, and thus terms 4 through n are negated by their determinant operators. Also, M₂ (formed by striking out row 2 of A) has its rows 1 & 2 swapped, and is also thus negated. The terms remaining to be accounted for are terms 1 and 3, $k_1 A_{31}\det(\mathbf{M}_1'')$ and $k_3 A_{11}\det(\mathbf{M}_3'')$. The new M₁ is the same as the old M₃, but with its first two rows swapped. Similarly, the new M₃ is the same as the old M₁, but with its first two rows swapped. Hence, both terms 1 and 3 are negated by their determinant operators, so we must choose k₃ = 1 to preserve that negation.

Finally, proceeding in this way, we can consider swapping rows 1 & 4, etc. We find that the odd numbered k's are all 1, and the even numbered k's are all −1. We could also have started from the beginning by linearizing with column 2, and then we find that the k's are opposite to those for column 1: this time for odd numbered rows, k_odd = −1, and for even numbered rows, k_even = +1. The k's simply alternate sign. This leads to the final form of cofactor expansion about any column c:

$\det\mathbf{A} = (-1)^{1+c} A_{1c}\det(\mathbf{M}_{1c}) + (-1)^{2+c} A_{2c}\det(\mathbf{M}_{2c}) + \dots + (-1)^{n+c} A_{nc}\det(\mathbf{M}_{nc}).$

Note that we can perform a cofactor expansion down any column, or across any row, to compute the determinant of a matrix. We usually choose an expansion order which includes as many zeros as possible, to minimize the computations needed.
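The recursive structure of the expansion translates directly into code; a minimal sketch (expanding down the first column, with an arbitrary test matrix), cross-checked against numpy:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion down the first column (O(n!) - pedagogy only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for i in range(n):
        # M_i: strike out row i and the first column
        minor = [row[1:] for r, row in enumerate(A) if r != i]
        total += (-1)**i * A[i][0] * det_cofactor(minor)
    return total

A = [[2., 1., 3.], [0., 4., 1.], [5., 2., 2.]]
print(det_cofactor(A), np.linalg.det(np.array(A)))   # agree
```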
Proof That the Determinant Is Unique

If we compute the determinant of a matrix two ways, from two different cofactor expansions, do we get the same result? Yes. We here prove the determinant is unique by showing that in a cofactor expansion, every possible combination of elements from the rows and columns appears exactly once. This is true no matter what row or column we expand on. Thus all expansions include the same terms, but just written in a different order. Also, this complete expansion of all combinations of elements is a useful property of the cofactor expansion which has many applications beyond determinants. For example, by performing a cofactor expansion without the alternating signs (in other words, an expansion in minors), we can fully symmetrize a set of functions (such as boson wave functions).

The proof: let's count the number of terms in a cofactor expansion of a determinant for an n×n matrix. We do this by mathematical induction. For the first level of expansion, we choose a row or column, and construct n terms, where each term includes a cofactor (a sub-determinant of an (n−1)×(n−1) matrix). Thus, the number of terms in an n×n determinant is n times the number of terms in an (n−1)×(n−1) determinant. Or, turned around,

$\#\big(\text{terms in } (n{+}1)\times(n{+}1)\big) = (n + 1)\times \#\big(\text{terms in } n\times n\big).$

There is one term in a 1×1 determinant, 2 terms in a 2×2, 6 terms in a 3×3, and thus n! terms in an n×n determinant. Each term is unique within the expansion: by construction, no term appears twice as we work our way through the cofactor expansion.

Let's compare this to the number of terms possible which are linear in every row and column: we have n choices for the first factor, n−1 choices for the second factor, and so on down to 1 choice for the last factor. That is, there are n! ways to construct terms linear in all the rows and columns. That is exactly the number of terms in the cofactor expansion, which means every cofactor expansion is a sum of all possible terms which are linear in the rows and columns. This proves that the determinant is unique up to a sign. To prove the sign of the cofactor expansion is also unique, we can consider one specific term in the sum. Consider the term which is the product of the main diagonal elements. This term is always positive, since TBS ??

Getting Determined

You may have noticed that computing a determinant by cofactor expansion is computationally infeasible for n > ~15. There are n! terms of n factors each, requiring O(n·n!) operations. For n = 15, this is ~10¹³ operations, which would take about a day on a few-GHz computer. For n = 20, it would take years. Is there a better way? Fortunately, yes. It can be done in O(n³) operations, so one can easily compute the determinant for n = 1000 or more.

We do this by using the fact that adding a multiple of any row to another row does not change the determinant (which follows from antisymmetry and linearity). Performing such row operations, we can convert the matrix to upper-right-triangular form, i.e., all the elements of A below the main diagonal are zero:

$\mathbf{A} = \begin{pmatrix} A_{11} & A_{12} & \dots & A_{1n} \\ A_{21} & A_{22} & \dots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & A_{n2} & \dots & A_{nn} \end{pmatrix} \quad\to\quad \mathbf{A}' = \begin{pmatrix} A'_{11} & A'_{12} & \dots & A'_{1n} \\ 0 & A'_{22} & \dots & A'_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \dots & A'_{nn} \end{pmatrix}.$

By construction, det|A'| = det|A|. Using the method of cofactors on A', we expand down the first column of A', and the first column of every submatrix in the expansion. Only the first term in each expansion survives, because all the others are zero. Hence, det|A| is the product of the diagonal elements of A':

$\det|\mathbf{A}| = \det|\mathbf{A}'| = \prod_{i=1}^{n} A'_{ii}.$

Let's look at the row operations needed to achieve upper-right-triangular form. We multiply the first row by (A₂₁/A₁₁) and subtract it from the 2nd row. This makes the first element of the 2nd row zero. Perform this operation for rows 3 through n, and we have made the first column below row 1 all zero. Similarly, we can zero the 2nd column below row 2 by multiplying the (new) 2nd row by (B₃₂/B₂₂) and subtracting it from the 3rd row; perform this again on the 4th row (and so on), and we have the first two columns of the upper-right-triangular form. Iterating for the first (n−1) columns, we complete the upper-right-triangular form. The determinant is now the product of the diagonal elements.

About how many operations did that take? There are n(n−1)/2 row-operations needed, or O(n²). Each row-operation takes from 1 to n multiplies (average n/2), and 1 to n additions (average n/2), summing to O(n) operations. Total operations is then of order

$O(n^2)\times O(n) \sim O(n^3).$

TBS: Proof that det|AB| = det|A| det|B|.
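The O(n³) procedure just described, sketched directly (for simplicity this version assumes nonzero pivots and omits the row-swapping a robust implementation would need):

```python
import numpy as np

def det_by_elimination(A):
    """Reduce to upper-right-triangular form by row operations (which don't change
    the determinant), then multiply the diagonal elements."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for j in range(n - 1):
        for i in range(j + 1, n):
            A[i] -= (A[i, j] / A[j, j]) * A[j]   # assumes A[j, j] != 0
    return np.prod(np.diag(A))

A = np.random.rand(6, 6)
print(det_by_elimination(A), np.linalg.det(A))   # agree
```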
Getting to Home Basis

We often wish to change the basis in which we express vectors and matrix operators, e.g. in quantum mechanics. We use a transformation matrix to transform the components of the vectors from the old basis to the new basis. Note that:

We are not transforming the vectors; we are transforming the components of the vector from one basis to another. The vector itself is unchanged.

There are two ways to visualize the transformation. In the first method, we write the decomposition of a vector into components in matrix form. We use the visualization from above that a matrix times a vector is a weighted sum of the columns of the matrix:

$\mathbf{v} = v_x\mathbf{e}_x + v_y\mathbf{e}_y + v_z\mathbf{e}_z = \begin{pmatrix} \mathbf{e}_x & \mathbf{e}_y & \mathbf{e}_z \end{pmatrix}\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}.$

This is a vector equation which is true in any basis. In the x-y-z basis, it looks like this:

$\mathbf{v} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}, \quad\text{where}\quad \mathbf{e}_x = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\; \mathbf{e}_y = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},\; \mathbf{e}_z = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$

If we wish to convert to the e₁, e₂, e₃ basis, we simply write e_x, e_y, e_z in the 1-2-3 basis:

$(\text{in the 1-2-3 basis}): \quad \mathbf{v} = \begin{pmatrix} a & d & g \\ b & e & h \\ c & f & i \end{pmatrix}\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}, \quad\text{where}\quad \mathbf{e}_x = \begin{pmatrix} a \\ b \\ c \end{pmatrix},\; \mathbf{e}_y = \begin{pmatrix} d \\ e \\ f \end{pmatrix},\; \mathbf{e}_z = \begin{pmatrix} g \\ h \\ i \end{pmatrix}.$

Thus:

The columns of the transformation matrix are the old basis vectors written in the new basis. This is true even for non-ortho-normal bases.

Now let us look at the same transformation matrix, from the viewpoint of its rows. For this, we must restrict ourselves to ortho-normal bases. This is usually not much of a restriction. Recall that the component of a vector v in the direction of a basis vector e_i is given by:

$v_i = \mathbf{e}_i\cdot\mathbf{v} \quad\Rightarrow\quad \mathbf{v} = (\mathbf{e}_x\cdot\mathbf{v})\,\mathbf{e}_x + (\mathbf{e}_y\cdot\mathbf{v})\,\mathbf{e}_y + (\mathbf{e}_z\cdot\mathbf{v})\,\mathbf{e}_z.$

But this is a vector equation, valid in any basis. So the index i above could also be 1, 2, or 3 for the new basis:

$v_1 = \mathbf{e}_1\cdot\mathbf{v},\quad v_2 = \mathbf{e}_2\cdot\mathbf{v},\quad v_3 = \mathbf{e}_3\cdot\mathbf{v}, \qquad \mathbf{v} = (\mathbf{e}_1\cdot\mathbf{v})\,\mathbf{e}_1 + (\mathbf{e}_2\cdot\mathbf{v})\,\mathbf{e}_2 + (\mathbf{e}_3\cdot\mathbf{v})\,\mathbf{e}_3.$

Recall from the section above on matrix multiplication that multiplying a matrix by a vector is equivalent to making a set of dot-products, one from each row, with the vector:

$\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix}\mathbf{v} = \begin{pmatrix} \mathbf{e}_1\cdot\mathbf{v} \\ \mathbf{e}_2\cdot\mathbf{v} \\ \mathbf{e}_3\cdot\mathbf{v} \end{pmatrix}.$

Thus:

The rows of the transformation matrix are the new basis vectors written in the old basis. This is only true for ortho-normal bases.

There is a beguiling symmetry in the above two boxed statements about the columns and rows of the transformation matrix.

For complex vectors, we must use the dot-product defined with the conjugate of the row basis vector, i.e. the rows of the transformation matrix are the hermitian adjoints of the new basis vectors written in the old basis:

$\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} \mathbf{e}_1^\dagger \\ \mathbf{e}_2^\dagger \\ \mathbf{e}_3^\dagger \end{pmatrix}\mathbf{v} = \begin{pmatrix} \mathbf{e}_1^\dagger\mathbf{v} \\ \mathbf{e}_2^\dagger\mathbf{v} \\ \mathbf{e}_3^\dagger\mathbf{v} \end{pmatrix}.$
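A small sketch of the row view of basis changing, using a 2-D rotated orthonormal basis as an arbitrary example:

```python
import numpy as np

# Rows of U are the new (orthonormal) basis vectors written in the old basis
theta = np.pi/6
e1 = np.array([np.cos(theta), np.sin(theta)])
e2 = np.array([-np.sin(theta), np.cos(theta)])
U = np.vstack([e1, e2])

v_old = np.array([1.0, 2.0])
v_new = U @ v_old                  # components of the same vector in the new basis
print(e1 @ v_old, e2 @ v_old)      # each new component is a dot-product with a row
print(U.T @ v_new)                 # columns view takes us back to the old basis
```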
We saw above that the transformation matrix from one basis to another is just the hermitian adjoints of the new basis vectors written in the old basis. We call this matrix U:

$$\mathbf{U} = \begin{pmatrix} \leftarrow\ \mathbf{e}_1^\dagger\ \rightarrow \\ \leftarrow\ \mathbf{e}_2^\dagger\ \rightarrow \\ \leftarrow\ \mathbf{e}_3^\dagger\ \rightarrow \end{pmatrix}, \qquad \mathbf{U}\mathbf{v} = \begin{pmatrix} \mathbf{e}_1^\dagger\mathbf{v}\\ \mathbf{e}_2^\dagger\mathbf{v}\\ \mathbf{e}_3^\dagger\mathbf{v} \end{pmatrix} = \begin{pmatrix} v_1\\ v_2\\ v_3 \end{pmatrix}$$

U transforms vectors, but how do we transform the operator matrix A itself? The simplest way to see this is to note that we can perform the operation A in any basis by transforming the vector back to the original basis, using A in the original basis, and then transforming the result to the new basis:

$$\mathbf{v}_{new} = \mathbf{U}\mathbf{v}_{old} \quad\Rightarrow\quad \mathbf{v}_{old} = \mathbf{U}^{-1}\mathbf{v}_{new}$$

$$(\mathbf{A}\mathbf{v})_{new} = \mathbf{U}\left(\mathbf{A}_{old}\mathbf{v}_{old}\right) = \mathbf{U}\mathbf{A}_{old}\left(\mathbf{U}^{-1}\mathbf{v}_{new}\right) = \left(\mathbf{U}\mathbf{A}_{old}\mathbf{U}^{-1}\right)\mathbf{v}_{new} \qquad\Rightarrow\qquad \mathbf{A}_{new} = \mathbf{U}\mathbf{A}_{old}\mathbf{U}^{-1}$$

where we used the fact that matrix multiplication is associative. Thus:

The unitary transformation that diagonalizes a (complex) self-adjoint matrix is the matrix of normalized eigen-row-vectors.

We can see this another way by starting with:

$$\mathbf{A}\mathbf{U}^{-1} = \mathbf{A}\begin{pmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \end{pmatrix} = \begin{pmatrix} \mathbf{A}\mathbf{e}_1 & \mathbf{A}\mathbf{e}_2 & \mathbf{A}\mathbf{e}_3 \end{pmatrix} = \begin{pmatrix} \lambda_1\mathbf{e}_1 & \lambda_2\mathbf{e}_2 & \lambda_3\mathbf{e}_3 \end{pmatrix} \qquad where\ \mathbf{e}_i\ \text{are the ortho-normal eigenvectors, and } \lambda_i\ \text{are the eigenvalues}$$

Recall the eigenvectors (of self-adjoint matrices) are orthogonal, so we can now pre-multiply by the hermitian conjugate of the eigenvector matrix:

$$\mathbf{U}\mathbf{A}\mathbf{U}^{-1} = \begin{pmatrix} \leftarrow\ \mathbf{e}_1^\dagger\ \rightarrow \\ \leftarrow\ \mathbf{e}_2^\dagger\ \rightarrow \\ \leftarrow\ \mathbf{e}_3^\dagger\ \rightarrow \end{pmatrix}\begin{pmatrix} \lambda_1\mathbf{e}_1 & \lambda_2\mathbf{e}_2 & \lambda_3\mathbf{e}_3 \end{pmatrix} = \begin{pmatrix} \lambda_1&0&0\\ 0&\lambda_2&0\\ 0&0&\lambda_3 \end{pmatrix}$$

where the final equality is because each element of the result is the inner product of two eigenvectors, weighted by an eigenvalue. The only non-zero inner products are between the same eigenvectors (orthogonality), so only diagonal elements are non-zero. Since the eigenvectors are normalized, their inner product is 1, leaving only the weight (i.e., the eigenvalue) as the result.

Warning: Many books write the diagonalization as U^{-1}AU, instead of the correct UAU^{-1}. This is confusing and pointless, and these very books then change their notation when they have to transform a vector, because nearly all books agree that vectors transform with U, and not U^{-1}.
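To see this numerically, here is a minimal sketch (mine, in Python/NumPy; not from the notes). numpy.linalg.eigh returns the ortho-normal eigenvectors of a self-adjoint matrix as columns, so U is their conjugate transpose, i.e. the matrix of eigen-row-vectors:

    import numpy as np

    # A random complex self-adjoint (hermitian) matrix.
    m = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)
    a = m + m.conj().T

    evals, evecs = np.linalg.eigh(a)   # eigenvectors come back as columns
    u = evecs.conj().T                 # rows of U = adjoints of eigenvectors

    # U A U^{-1} should be diagonal, with the eigenvalues on the diagonal.
    a_new = u @ a @ np.linalg.inv(u)   # for unitary U, inv(u) == u.conj().T
    print(np.round(a_new, 10))
    print(np.allclose(np.diag(a_new), evals))   # True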
Contraction of Matrices

You don't see a dot-product of matrices defined very often, but the concept comes up in physics, even if they don't call it a dot-product. We see such products in QM density matrices, and in tensor operations on vectors. We use it below in the Trace section for traces of products. For two matrices of the same size, we define the contraction of two matrices as the sum of the products of the corresponding elements (much like the dot-product of two vectors). The contraction is a scalar. Picture the contraction as overlaying one matrix on top of the other, multiplying the stacked numbers (elements), and adding all the products. We use a colon to convey that the summation is over 2 dimensions (rows and columns) of A and B (whereas the single-dot dot-product of vectors sums over the 1-dimensional list of vector components):

$$\mathbf{A}:\mathbf{B} \equiv \sum_{i,j=1}^{n} a_{ij}b_{ij}$$

For example, for 3×3 matrices:

$$\mathbf{A}:\mathbf{B} = a_{11}b_{11} + a_{12}b_{12} + a_{13}b_{13} + a_{21}b_{21} + a_{22}b_{22} + a_{23}b_{23} + a_{31}b_{31} + a_{32}b_{32} + a_{33}b_{33}$$

which is a single number. If the matrices are complex, we do not conjugate the left matrix (such conjugation is often done in defining the dot-product of complex vectors).

Trace of a Product of Matrices

The trace of a matrix is defined as the sum of the diagonal elements:

$$\mathrm{Tr}(\mathbf{A}) \equiv \sum_{j=1}^{n} a_{jj} \qquad E.g.,\quad \mathrm{Tr}\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix} = a_{11}+a_{22}+a_{33}$$

The trace of a product of matrices comes up often, e.g. in quantum field theory. We first show that Tr(AB) = Tr(BA). Define C ≡ AB, and define r_i as the i-th row of A, and c_j as the j-th column of B. Then the i-th diagonal element of C is r_i·c_i, and

$$\mathrm{Tr}(\mathbf{AB}) = \sum_{i=1}^{n} \mathbf{r}_i\cdot\mathbf{c}_i$$

The diagonal elements of the product C are the sums of the overlays of the rows of A on the columns of B. But this is the same as the overlays of the rows of A on the rows of B^T. Then we sum the overlays, i.e., we overlay A onto B^T, and sum all the products of all the overlaid elements:

$$\mathrm{Tr}(\mathbf{AB}) = \mathbf{A}:\mathbf{B}^T$$

Now consider Tr(BA) = B : A^T. But visually, B : A^T overlays the same pairs of elements as A : B^T, just in the transposed order. When we sum over all the products of the pairs, we get the same sum either way:

$$\mathrm{Tr}(\mathbf{AB}) = \mathrm{Tr}(\mathbf{BA}) \qquad because\quad \mathbf{A}:\mathbf{B}^T = \mathbf{B}:\mathbf{A}^T$$

This leads to the important cyclic property for the trace of the product of several matrices:

$$\mathrm{Tr}(\mathbf{AB\cdots C}) = \mathrm{Tr}(\mathbf{CAB\cdots}) \qquad because\quad \mathrm{Tr}\big((\mathbf{AB\cdots})\mathbf{C}\big) = \mathrm{Tr}\big(\mathbf{C}(\mathbf{AB\cdots})\big)$$

and matrix multiplication is associative. By simple induction, any cyclic rotation of the matrices leaves the trace unchanged.

Linear Algebra Briefs

The determinant equals the product of the eigenvalues:

$$\det|\mathbf{A}| = \prod_{i=1}^{n} \lambda_i \qquad where\ \lambda_i\ \text{are the eigenvalues of } \mathbf{A}$$

This is because the eigenvalues are unchanged through a similarity transformation. If we diagonalize the matrix, the main diagonal consists of the eigenvalues, and the determinant of a diagonal matrix is the product of the diagonal elements (by cofactor expansion).
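All of these identities are cheap to spot-check numerically (a sketch of mine in Python/NumPy, which these notes don't assume):

    import numpy as np

    a, b, c = (np.random.rand(4, 4) for _ in range(3))

    # Contraction: overlay, multiply element-by-element, and sum.
    contract = lambda x, y: np.sum(x * y)

    print(np.isclose(np.trace(a @ b), contract(a, b.T)))    # Tr(AB) = A : B^T
    print(np.isclose(np.trace(a @ b), np.trace(b @ a)))     # Tr(AB) = Tr(BA)
    print(np.isclose(np.trace(a @ b @ c),
                     np.trace(c @ a @ b)))                  # cyclic property

    # Determinant = product of eigenvalues (complex in general for a real
    # matrix, but the product's imaginary part cancels).
    evals = np.linalg.eigvals(a)
    print(np.isclose(np.prod(evals).real, np.linalg.det(a)))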
Probability, Statistics, and Data Analysis

I think probability and statistics are among the most conceptually difficult topics in mathematical physics. We start with a brief overview of the basics, but overall, we assume you are familiar with simple probabilities, and gaussian distributions.

Probability and Random Variables

We assume you have a basic idea of probability, and since we seek understanding over mathematical purity, we give here intuitive definitions. A random variable, say X, is a quantity that you can observe (or measure), multiple times (at least in theory), and is not completely predictable. Each observation of a random variable may give a different value. Random variables may be discrete (the roll of a die), or continuous (the angle of a game spinner after you spin it). A uniform random variable has all its values equally likely. Thus the roll of a die is a uniform discrete random variable, and the angle of a game spinner is a uniform continuous random variable. But in general, the values of a random variable are not necessarily equally likely. For example, a gaussian (aka normal) random variable is more likely to be near the mean.

Given a large sample of observations of any physical quantity X, there will be some structure to the values X assumes. For discrete random variables, each possible value will appear (close to) some fixed fraction of the time in any large sample. The fraction of a large sample in which a given value appears is that value's probability. For a 6-sided die, the probability of rolling 1 is 1/6, i.e. Pr(1) = 1/6. Because probability is a fraction of a total, it is always between 0 and 1 inclusive:

0 ≤ Pr(anything) ≤ 1

Note that one can imagine systems of chance specifically constructed to not provide consistency between samples, at least not on realistic time scales. By definition, then, observations of such a system do not constitute a random variable in the sense of our definition. Strictly speaking, a statistic is a number that summarizes in some way a set of random values. Many people use the word informally, though, to mean the raw data from which we compute true statistics.

Conditional Probability

Probability, in general, is a combination of physics and knowledge: the physics of the system in question, and what you know about its state. Conditional probability specifically addresses probability when the state of the system is partly known. A priori probability generally implies less knowledge of state ("a priori" means "in the beginning" or "beforehand"). But there is no true, fundamental distinction, because all probabilities are in some way dependent on both physics and knowledge.

Suppose you have 1 bag with 2 white and 2 black balls. You draw 2 balls without replacement. What is the chance the 2nd ball will be white? A priori, it's obviously 1/2. However, suppose the first ball is known to be white. Now Pr(2nd ball is white) = 1/3. So we say the conditional probability that the 2nd ball will be white, given that the first ball is white, is 1/3. In symbols:

Pr(2nd ball white | first ball white) = 1/3

Another example of how the conditional probability of an event can be different than the a priori probability of that event: I have a bag of white balls and a bag of black balls. I give you a bag at random, and you draw 2 balls. What is the chance the 2nd ball will be white? A priori, it's 1/2. After seeing the 1st ball is white, now Pr(2nd ball is white) = 1. In this case:

Pr(2nd ball white | first ball white) = 1
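A quick simulation makes the first example concrete (my own sketch in Python; the notes themselves contain no code). We draw two balls without replacement from {W, W, B, B} many times, and compare the a priori fraction with the conditional fraction:

    import random

    trials = 100_000
    second_white = 0          # count: 2nd ball white (a priori)
    first_white = 0           # count: 1st ball white
    both = 0                  # count: 1st and 2nd both white

    for _ in range(trials):
        bag = ['W', 'W', 'B', 'B']
        random.shuffle(bag)
        first, second = bag[0], bag[1]   # draw two without replacement
        if second == 'W':
            second_white += 1
        if first == 'W':
            first_white += 1
            if second == 'W':
                both += 1

    print(second_white / trials)   # ~0.5   : a priori Pr(2nd white)
    print(both / first_white)      # ~0.333 : Pr(2nd white | 1st white)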
Precise Statement of the Question Is Critical

Many arguments arise about probability because the questions are imprecise: each combatant has a different interpretation of the question, but neither realizes the other is arguing a different question. Consider this: You deal 4 cards from a shuffled deck. I tell you 3 of them are aces. What is the probability that the 4th card is also an ace? The question is ambiguous, and could reasonably be interpreted two ways, but the two interpretations have quite different answers. It is very important to know exactly how I have discovered that 3 of them are aces.

Case 1: I look at all 4 cards and say, "At least 3 of these cards are aces." There are 193 ways that 4 cards can hold at least 3 aces, and only 1 of those ways has 4 aces. Therefore, the chance of the 4th card being an ace is 1/193.

Case 2: I look at only 3 of the 4 cards and say, "These 3 cards are aces." There are 49 unseen cards, all equally likely to be the 4th card. Only one of them is an ace. Therefore, the chance of the 4th card being an ace is 1/49.

It may help to show that we can calculate the 1/49 chance from the 193 hands that have at least 3 aces: Of the 192 that have exactly 3 aces, we expect that 1/4 of them = 48 will show aces as their first 3 cards (because the non-ace has a 1/4 chance of being last). Additionally, the one hand of 4 aces will always show aces as its first 3 cards. Hence, of the 193 hands with at least 3 aces, 49 show aces as their first 3 cards, of which exactly 1 is the 4-ace hand. Hence, its conditional probability, given that the first 3 cards are aces, is 1/49.

Let's Make a Deal

This is an example of a problem that confuses many people (including me), and how to properly analyze it. We hope this example illustrates some general methods of analysis that you can use to navigate more general confusing questions. In particular, the methods used here apply to renormalizing entangled quantum states when a measurement of one value is made.

You're in the Big Deal on the game show Let's Make a Deal. There are 3 doors. Hidden behind two of them are goats; behind the other is the Big Prize. You choose door #1. Monty Hall, the MC, knows what's behind each door. He opens door #2, and shows you a goat. Now he asks, do you want to stick with your choice, or switch to door #3? Should you switch?

Without loss of generality (WLOG), we assume you choose door #1 (and of course, it doesn't matter which door you choose). We make a chart of mutually exclusive events, and their probabilities (B = Big Prize, g = goat, listed in door order #1 #2 #3):

    Bgg, Monty shows door #2:  1/6
    Bgg, Monty shows door #3:  1/6
    gBg, Monty shows door #3:  1/3
    ggB, Monty shows door #2:  1/3

After you choose, Monty shows you that door #2 is a goat. So from the population of possibilities, we strike out those that are no longer possible (the rows where he shows door #3), and renormalize the remaining probabilities:

    Bgg, Monty shows door #2:  1/6  ->  1/3
    ggB, Monty shows door #2:  1/3  ->  2/3

Another way to think of this: Monty showing you door #2 is equivalent to saying, "The big prize is either behind the door you picked, or it's behind door #3." Since your chance of having picked right is unaffected by Monty telling you this, Pr(#3) = 2/3. Monty uses his knowledge to pick a door with a goat. That gives you information, which improves your ability to guess right on your second guess.

You can also see it this way: There's a 1/3 chance you picked right the first time. Then you'll switch, and lose. But there's a 2/3 chance you picked wrong the first time. Then you'll switch, and win. So you win twice as often as you lose, much better odds than 1/3 of winning.

Let's take a more extreme example: suppose there are 100 doors, and you pick #1. Now Monty tells you, "The big prize is either behind the door you picked, or it's behind door #57." Should you switch? Of course. The chance you guessed right is tiny. But Monty knows for sure.
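For the unconvinced, a Monte Carlo settles it quickly (my sketch in Python). Which goat door Monty opens when both unpicked doors hide goats doesn't affect the win rates, so the code just takes the first valid one:

    import random

    trials = 100_000
    stick_wins = switch_wins = 0

    for _ in range(trials):
        prize = random.randrange(3)     # door hiding the Big Prize
        pick = 0                        # WLOG you always pick door #1
        # Monty opens a goat door that is neither your pick nor the prize.
        monty = next(d for d in range(3) if d != pick and d != prize)
        switch = next(d for d in range(3) if d != pick and d != monty)
        stick_wins += (pick == prize)
        switch_wins += (switch == prize)

    print(stick_wins / trials)     # ~1/3
    print(switch_wins / trials)    # ~2/3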
How to Lie With Statistics

In 2007, on the front page of newspapers, was a story about a big study of sexual behavior in America. The headline point was that on average, heterosexual men have 7 partners in their lives, and women have only 4. Innumeracy, a book about math and statistics, uses this exact point from a previous study of sexual behavior, and notes that one can easily prove that the average number of heterosexual partners of men and women must be exactly the same (if there are equal numbers of men and women in the population; the US has equal numbers of men and women to better than 1%). The only explanation for the survey results is that most people are lying: men lie on the high side, women lie on the low side. The article goes on to quote all kinds of statistics and facts, oblivious to the fact that most people were lying in the study. So how much can you believe anything the subjects said? Even more amazing to me is that the scientists doing the study seem equally oblivious to the mathematical impossibility of their results. Perhaps some graduate student got a PhD out of this study, too.

The proof: every heterosexual encounter involves a man and a woman. If the partners are new to each other, then it counts as a new partner for both the man and the woman. The average number of partners for men is the total number of new partners for all men divided by the number of men considered. But this is exactly equal to the total number of new partners for all women divided by the number of women considered. QED.

An insightful friend noted, "Maybe to some women, some guys aren't worth counting."

Choosing Wisely: An Informative Puzzle

Here's a puzzle which illuminates the physical meaning of the binomial forms $\binom{n}{k}$. Try it yourself before reading on.

$$\binom{n}{k} \equiv \frac{n!}{k!\,(n-k)!} \qquad (\text{"n choose k"})$$

is the number of ways of choosing k items from n distinct items, without replacement, where the order of choosing doesn't matter. In other words, $\binom{n}{k}$ is the number of combinations of k items taken from n distinct items.

The puzzle: Show in words, without algebra, that

$$\binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k}$$

Some purists may complain that the demonstration below lacks rigor (not true), or that the algebraic demonstration is shorter. However, though the algebraic proof is straightforward, it is dull and uninformative. Physicists may like the demonstration here because it uses the physical meaning of the mathematics to reach an iron-clad conclusion.

The solution: The LHS is the number of ways of choosing k items from n + 1 items. Now there are two distinct subsets of those ways: those ways that include the (n + 1)th item, and those that don't. In the first subset, after choosing the (n + 1)th item, we must choose k − 1 more items from the remaining n, and there are $\binom{n}{k-1}$ ways to do this. In the second subset, we must choose all k items from the first n, and there are $\binom{n}{k}$ ways to do this. Since this covers all the possible ways to choose k items from n + 1 items, it must be that $\binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k}$. QED.
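For the skeptical, the identity is easy to spot-check numerically (a sketch of mine in Python; math.comb is in the standard library from Python 3.8 on):

    from math import comb

    # Check the rule C(n+1, k) = C(n, k-1) + C(n, k) over a grid of n, k.
    ok = all(comb(n + 1, k) == comb(n, k - 1) + comb(n, k)
             for n in range(1, 30) for k in range(1, n + 1))
    print(ok)              # True
    print(comb(52, 4))     # e.g., 270725 distinct 4-card deals from a deck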
Multiple Events

First we summarize the rules for computing the probability of combinations of events from their individual probabilities; then we justify them:

    Pr(A and B) = Pr(A)Pr(B)                   A and B independent
    Pr(A or B)  = Pr(A) + Pr(B)                A and B mutually exclusive
    Pr(not A)   = 1 − Pr(A)
    Pr(A or B)  = Pr(A) + Pr(B) − Pr(A)Pr(B)   A and B independent

For independent events A and B, Pr(A and B) = Pr(A)Pr(B). This follows from the definition of probability as a fraction. If A and B are independent (have nothing to do with each other), then Pr(A) is the fraction of trials with event A. Of the fraction of trials with event A, the fraction that also has B is Pr(B). Therefore, the fraction of the total trials with both A and B is Pr(A and B) = Pr(A)Pr(B).

For mutually exclusive events, Pr(A or B) = Pr(A) + Pr(B). This also follows from the definition of probability as a fraction. The fraction of trials with event A ≡ Pr(A); the fraction with event B ≡ Pr(B). If no trial can contain both A and B, then the fraction with either is simply the sum:

    |-- fraction with A --|-- fraction with B --|    rest of trials    |
    |------------ fraction with A or B ---------|

Pr(not A) = 1 − Pr(A). Since Pr(A) is the fraction of trials with event A, and all trials must either have event A or not, Pr(A) + Pr(not A) = 1. Notice that A and (not A) are mutually exclusive events (a trial can't both have A and not have A), so their probabilities simply add, as above.

By Pr(A or B) we mean Pr(A or B or both). For independent events, you might think that Pr(A or B) = Pr(A) + Pr(B), but this is not so. A simple example shows that it can't be: suppose Pr(A) = Pr(B) = 0.7. Then Pr(A) + Pr(B) = 1.4, which can't be the probability of anything. The reason for the failure of simple addition of probabilities is that doing so counts the probability of (A and B) twice:

    |---- fraction with A ----|
                  |---- fraction with B ----|
    |<- fraction with A and B counted twice in the overlap ->|

Note that Pr(A or B) is equivalent to Pr(A and maybe B) OR Pr(B and maybe A). But Pr(A and maybe B) includes the probability of both A and B, as does the 2nd term; hence it is counted twice. So subtracting the probability of (A and B) makes it counted only once:

Pr(A or B) = Pr(A) + Pr(B) − Pr(A)Pr(B),   A and B independent

A more complete statement, which breaks down (A or B) into mutually exclusive events, is:

Pr(A or B) = Pr(A and not B) + Pr(not A and B) + Pr(A and B)

Since the events on the right hand side are now mutually exclusive, their probabilities add:

Pr(A or B) = Pr(A)[1 − Pr(B)] + Pr(B)[1 − Pr(A)] + Pr(A)Pr(B)
           = Pr(A) + Pr(B) − 2Pr(A)Pr(B) + Pr(A)Pr(B)
           = Pr(A) + Pr(B) − Pr(A)Pr(B)

TBS: Example of rolling 2 dice.
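A brute-force enumeration over a small sample space confirms these rules exactly (my sketch in Python). Take A = "coin shows heads" and B = "die shows 1", which are independent:

    from fractions import Fraction

    outcomes = [(coin, die) for coin in ('H', 'T') for die in range(1, 7)]
    pr = lambda ev: Fraction(sum(ev(o) for o in outcomes), len(outcomes))

    a = lambda o: o[0] == 'H'                 # heads
    b = lambda o: o[1] == 1                   # rolled a 1
    a_and_b = lambda o: a(o) and b(o)
    a_or_b = lambda o: a(o) or b(o)

    print(pr(a_and_b) == pr(a) * pr(b))                  # True: independence
    print(pr(a_or_b) == pr(a) + pr(b) - pr(a) * pr(b))   # True: the "or" rule

Exact rational arithmetic (Fraction) makes the equalities exact rather than approximate, which is the point of an enumeration check.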
Combining Probabilities

Here is a more in-depth view of multiple events, with several examples. This section should be called "Probability Calculus," but most people associate "calculus" with something hard, and I didn't want to scare them off. In fact, "calculus" simply means "a method of calculation."

Probabilities describe binary events: an event either happens, or it doesn't. Therefore, we can use some of the methods of Boolean algebra in probability. Boolean algebra is the mathematics of expressions and variables that can have one of only two values: usually taken to be "true" and "false." We will use only a few simple, intuitive aspects of Boolean algebra here.

An event is something that can either happen, or not (it's binary!). We define the probability of an event as the fraction of time, out of many (possibly hypothetical) trials, that the given event happens. For example, the probability of getting a heads from a toss of a fair coin is 0.5, which we might write as Pr(heads) = 0.5 = 1/2. Probability is a fraction of a whole, and so lies in [0, 1].

We now consider two random events. Two events have one of 3 relationships: independent, mutually exclusive, or conditional (aka conditionally dependent). We will soon see that the first two are special cases of the conditional relationship. We now consider each relationship in turn.

Independent: For now, we define independent events as events that have nothing to do with each other, and no effect on each other. For example, consider two events: tossing a heads, and rolling a 1 on a 6-sided die. Then Pr(heads) = 1/2, and Pr(rolling 1) = 1/6. The events are independent, since the coin cannot influence the die, and the die cannot influence the coin. We define one trial as two actions: a toss and a roll. Since probabilities are fractions, 1/2 of all trials will have heads, and 1/6 of those will roll a 1. Therefore, 1/12 of all trials will contain both a heads and a 1. We see that probabilities of independent events multiply. We write:

Pr(A and B) = Pr(A)Pr(B)    (independent events)

In fact, this is the precise definition of independence: if the probability of two events both occurring is the product of the individual probabilities, then the events are independent. [Aside: This definition extends to PDFs: if the joint PDF of two random variables is the product of their individual PDFs, then the random variables are independent.]

Geometric diagrams are very helpful in understanding the probability calculus. We can picture the probabilities of A, B, and (A and B) as areas. The sample space, or population, is the set of all possible outcomes of trials. We draw that as a rectangle; each point in the rectangle represents one possible outcome. Therefore, the probability of an outcome being within a region of the population is proportional to the area of the region. [The original shows four diagrams: (a) the sample space split into A and ~A; (b) independent A and B; (c) conditionally dependent A and B; (d) mutually exclusive A and B.]

(Diagram (a)) An event A either happens, or it doesn't. Therefore,

Pr(A) + Pr(~A) = 1    (always)

(Diagram (b)) Pr(A) is the same whether B occurs or not, shown by the fraction of B covered by A being the same as the fraction of the sample space covered by A. Therefore, A and B are independent.

(Diagram (c)) The probability of (A or B) is the combined area of A and B, with their overlap counted only once. Geometrically, then, we see:

Pr(A or B) = Pr(A) + Pr(B) − Pr(A and B)    (always)

This is always true, regardless of any dependence between A and B.

Conditionally dependent: From the diagram, when A and B are conditionally dependent, we see that Pr(B) depends on whether A happens or not. Pr(B given that A occurred) is written as Pr(B | A), and read as "probability of B given A." From the ratio of the overlap area (A and B) to the area of A, we see:

Pr(B | A) = Pr(B and A)/Pr(A)    (always)

Mutually exclusive: Two events are mutually exclusive when they cannot both happen (diagram (d)). Thus:

Pr(A and B) = 0,  and  Pr(A or B) = Pr(A) + Pr(B)    (mutually exclusive)

Note that Pr(A or B) follows the rule from above, which always applies. We see that independent events are an extreme case of conditional events: independent events satisfy

Pr(B | A) = Pr(B)    (independent)

since the occurrence of A has no effect on B. Also, mutually exclusive events satisfy

Pr(B | A) = 0    (mutually exclusive)
Summary of Probability Calculus

Always:
    Pr(~A) = 1 − Pr(A);  Pr(entire sample space) = 1    (diagram (a))
    Pr(A or B) = Pr(A) + Pr(B) − Pr(A and B)    (subtract off any double-count of A and B; diagram (c))

A & B independent (all from diagram (b)):
    Pr(A and B) = Pr(A)Pr(B)    (precise definition of independent)
    Pr(A or B) = Pr(A) + Pr(B) − Pr(A)Pr(B)    (using the "and" and "or" rules above)
    Pr(B | A) = Pr(B)    (special case of conditional probability)

A & B mutually exclusive (all from diagram (d)):
    Pr(A and B) = 0    (definition of mutually exclusive)
    Pr(A or B) = Pr(A) + Pr(B)    (nothing to double-count; special case of Pr(A or B) from above)
    Pr(B | A) = Pr(A | B) = 0    (they can't both happen)

Conditional probabilities (all from diagram (c)):
    Pr(B | A) = Pr(B and A) / Pr(A)    (fraction of A that is also B)
    Pr(B and A) = Pr(B | A)Pr(A) = Pr(A | B)Pr(B)    (Bayes' Rule: shows the relationship between Pr(B | A) and Pr(A | B))
    Pr(A or B) = Pr(A) + Pr(B) − Pr(A and B)    (same as the "always" rule above)

Note that the "and" rules are often simpler than the "or" rules.

To B, or To Not B?

Sometimes it's easier to compute Pr(~A) than Pr(A). Then we can find Pr(A) from Pr(A) = 1 − Pr(~A).

Example: What is the probability of rolling 4 or more with two dice? The population has 36 possibilities. To compute this directly, we must count the ways to roll each total from 4 through 12:

$$\Pr(\text{roll} \ge 4) = \frac{3+4+5+6+5+4+3+2+1}{36} = \frac{33}{36}$$

That's a lot of addition. It's much easier to note that there is 1 way to roll 2, and 2 ways to roll 3:

$$\Pr(\text{roll} < 4) = \frac{1+2}{36} = \frac{3}{36}, \qquad \Pr(\text{roll} \ge 4) = 1 - \Pr(\text{roll} < 4) = \frac{33}{36}$$

In particular, the "and" rules are often simpler than the "or" rule. Therefore, when asked for the probability of "this or that," it is sometimes simpler to convert to its complementary "and" statement, compute the "and" probability, and subtract it from 1 to find the "or" probability.

Example: From a standard 52-card deck, draw a single card. What is the chance it is a spade or a face-card (or both)? Note that these events are independent. To compute directly, we use the "or" rule:

$$\Pr(\text{spade}) = \frac14,\quad \Pr(\text{face}) = \frac{3}{13}, \qquad \Pr(\text{spade or face}) = \frac14 + \frac{3}{13} - \frac14\cdot\frac{3}{13} = \frac{13 + 12 - 3}{52} = \frac{22}{52}$$

It may be simpler to compute the probability of drawing neither a spade nor a face-card, and subtract it from 1:

$$\Pr(\sim\text{spade}) = \frac34,\quad \Pr(\sim\text{face}) = \frac{10}{13}, \qquad \Pr(\text{spade or face}) = 1 - \frac34\cdot\frac{10}{13} = 1 - \frac{30}{52} = \frac{22}{52}$$

The benefit of converting to the simpler "and" rule increases with more "or" terms.
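Enumerating the 36 rolls checks both routes to the dice answer (my sketch in Python):

    from fractions import Fraction

    rolls = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]  # 36 ways

    direct = Fraction(sum(d1 + d2 >= 4 for d1, d2 in rolls), len(rolls))
    complement = 1 - Fraction(sum(d1 + d2 < 4 for d1, d2 in rolls), len(rolls))

    print(direct, complement, direct == Fraction(33, 36))  # 11/12 11/12 True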
Example: Remove the 12 face cards from a standard 52-card deck, leaving 40 number cards (aces are 1). Draw a single card. What is the chance it is a spade (S), low (L) (4 or less), or odd (O)? Note that these 3 events are independent. To compute directly, we can count up the number of ways the conditions can be met, and divide by the population of 40 cards. There are 10 spades, 16 low cards, and 20 odd numbers. But we can't just sum those numbers, because we would double (and triple) count many of the cards. [The original includes a Venn diagram for Spade, Low, and Odd.] Instead, we can extend the "or" rules to 3 conditions. Without proof, we state that the direct computation from a 3-term "or" rule is this:

$$\Pr(S) = \frac14,\quad \Pr(L) = \frac{4}{10},\quad \Pr(O) = \frac12$$

$$\Pr(S\ \text{or}\ L\ \text{or}\ O) = \Pr(S) + \Pr(L) + \Pr(O) - \Pr(S)\Pr(L) - \Pr(S)\Pr(O) - \Pr(L)\Pr(O) + \Pr(S)\Pr(L)\Pr(O)$$
$$= \frac14 + \frac{4}{10} + \frac12 - \frac14\cdot\frac{4}{10} - \frac14\cdot\frac12 - \frac{4}{10}\cdot\frac12 + \frac14\cdot\frac{4}{10}\cdot\frac12 = \frac{10 + 16 + 20 - 4 - 5 - 8 + 2}{40} = \frac{31}{40}$$

It is far easier to compute the chance that it is neither spade, nor low, nor odd:

$$\Pr(\sim S) = \frac34,\quad \Pr(\sim L) = \frac{6}{10},\quad \Pr(\sim O) = \frac12$$
$$\Pr(S\ \text{or}\ L\ \text{or}\ O) = 1 - \Pr(\sim S\ \text{and}\ \sim L\ \text{and}\ \sim O) = 1 - \frac34\cdot\frac{6}{10}\cdot\frac12 = 1 - \frac{9}{40} = \frac{31}{40}$$

You may have noticed that converting (S or L or O) into ~(~S and ~L and ~O) is an example of De Morgan's theorem from Boolean algebra.
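Since the population is only 40 cards, direct enumeration is also trivial, and it confirms both computations (my sketch in Python):

    from fractions import Fraction
    from itertools import product

    # The 40-card deck: 4 suits x ranks 1..10 (aces low, faces removed).
    deck = list(product('SHDC', range(1, 11)))

    hits = sum((suit == 'S') or (rank <= 4) or (rank % 2 == 1)
               for suit, rank in deck)
    print(Fraction(hits, len(deck)))   # 31/40, matching both methods above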
Continuous Random Variables and Distributions

Probability is a little more complicated for continuous random variables. A continuous population is a set of random values that can take on values in a continuous interval of real numbers; for example, if I spin a board-game spinner, the little arrow can point in any direction: 0 ≤ θ < 2π. [The original shows a figure of a board-game spinner, with the angle θ measured from θ = 0.]

Further, all angles are equally likely. By inspection, we see that the probability of being in the first quadrant is 1/4, i.e. Pr(0 ≤ θ < π/2) = 1/4. Similarly, the probability of being in any interval dθ is:

$$\Pr(\theta\ \text{in any interval}\ d\theta) = \frac{d\theta}{2\pi}$$

If I ask, "what is the chance that it will land at exactly θ = π?" the probability goes to zero, because the interval dθ goes to zero. In this simple example, the probability of being in any interval dθ is the same as being in any other interval of the same size. In general, however, some systems have a probability per unit interval that varies with the value of the random variable (call it X) (I wish I had a simple, everyday example of this??). So

$$\Pr(X\ \text{in an infinitesimal interval}\ dx\ \text{around}\ x) = \mathrm{pdf}(x)\,dx \qquad where\ \mathrm{pdf}(x) \equiv \text{the probability distribution function}$$

pdf(x) has units of 1/x. By summing mutually exclusive probabilities, the probability of X in any finite interval [a, b] is:

$$\Pr(a \le X \le b) = \int_a^b \mathrm{pdf}(x)\,dx$$

Since any random variable X must have some real value, the total probability of X being between −∞ and +∞ must be 1:

$$\Pr(-\infty < X < \infty) = \int_{-\infty}^{\infty} \mathrm{pdf}(x)\,dx = 1$$

The probability distribution function of a random variable tells you everything there is to know about that random variable.

Population and Samples

A population is a (often infinite) set of all possible values that a random variable may take on, along with their probabilities. A sample is a finite set of values of a random variable, where those values come from the population of all possible values. The same value may be repeated in a sample. We often use samples to estimate the characteristics of a much larger population. A trial or instance is one value of a random variable. There is enormous confusion over the binomial (and similar) distributions, because each instance of a binomial random variable comes from many attempts at an event, where each attempt is labeled either "success" or "failure." Superficially, an attempt looks like a trial, and many sources confuse the terms. In the binomial distribution, n attempts go into making a single trial (or instance) of a binomial random variable.

Variance

The variance of a population is a measure of the spread of any distribution, i.e. it is some measure of how widely spread out values of a random variable are likely to be [there are other measures of spread, too]. The variance of a population or sample is the most important parameter in statistics. Variance is always positive, and is defined as the average squared-difference between the random values and their average value:

$$\mathrm{var}(X) \equiv \left\langle \left(X - \langle X\rangle\right)^2 \right\rangle \qquad where\ \langle\cdot\rangle\ \text{is an operator which takes the average}$$

The units of variance are the square of the units of X. If I multiply a set of random numbers by a constant k, then I multiply the variance by k^2:

$$\mathrm{var}(kX) = k^2\,\mathrm{var}(X) \qquad where\ X\ \text{is any set of random numbers, and var}(\cdot)\ \text{takes the variance}$$

Any function, including variance, with the above property is homogeneous-of-order-2 (2nd order homogeneous??). We will return later to methods of estimating the variance of a population.

Standard Deviation

The standard deviation of a population is another measure of the spread of any distribution, closely related to the variance. Standard deviation is always positive, and is defined as the square root of the variance, which equals the root-mean-square (RMS) of the deviations from the average:

$$\mathrm{dev}(X) \equiv \sqrt{\mathrm{var}(X)} = \sqrt{\left\langle \left(X - \langle X\rangle\right)^2 \right\rangle}$$

The units of standard deviation are the units of X. If I multiply a set of random numbers by a constant k, then I multiply the standard deviation by the same constant k:

$$\mathrm{dev}(kX) = k\,\mathrm{dev}(X)$$

TBS: Why we care about standard deviation, even for non-normal populations. Bounds on percentage of population contained for any population. Stronger bounds for unimodal populations.

Normal (aka Gaussian) Distribution

From mathworld.wolfram.com/NormalDistribution.html: "While statisticians and mathematicians uniformly use the term 'normal distribution' for this distribution, physicists sometimes call it a Gaussian distribution and, because of its curved flaring shape, social scientists refer to it as the 'bell curve.'" A gaussian distribution is one of a family of distributions defined as a population with

$$\mathrm{pdf}(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac12\left(\frac{x-\mu}{\sigma}\right)^2} \qquad where\ \mu \equiv \text{population average},\ \sigma \equiv \text{population standard deviation}$$

[picture??]. μ and σ are parameters: μ can be any real value, and σ > 0 and real. This illustrates a common feature of named distributions: they are usually a family of distributions, parameterized by one or more parameters. The gaussian distribution is a 2-parameter distribution: μ and σ.

New Random Variables From Old Ones

Given two random variables X and Y, we can construct new random variables as functions of x and y (trial values of X and Y). One common such new random variable is simply the sum:

$$Define\quad Z \equiv X + Y, \quad which\ means\quad trial\ i:\ z_i \equiv x_i + y_i$$

Given pdf_X(x) and pdf_Y(y) (all we can know about X and Y), what is pdf_Z(z)? Given a particular value x of X, we see that

$$Given\ x:\qquad \Pr(Z\ \text{within}\ dz\ \text{of}\ z) = \Pr\big(Y\ \text{within}\ dz\ \text{of}\ (z-x)\big)$$

But x is a value of a random variable, so the total Pr(Z within dz of z) is the sum (integral) over all x:

$$\Pr(Z\ \text{within}\ dz\ \text{of}\ z) = \int_{-\infty}^{\infty} dx\ \mathrm{pdf}_X(x)\,\Pr\big(Y\ \text{within}\ dz\ \text{of}\ (z-x)\big), \qquad but\quad \Pr\big(Y\ \text{within}\ dz\ \text{of}\ (z-x)\big) = \mathrm{pdf}_Y(z-x)\,dz$$

$$so\qquad \Pr(Z\ \text{within}\ dz\ \text{of}\ z) = dz\int_{-\infty}^{\infty} dx\ \mathrm{pdf}_X(x)\,\mathrm{pdf}_Y(z-x) \qquad\Rightarrow\qquad \mathrm{pdf}_Z(z) = \int_{-\infty}^{\infty} dx\ \mathrm{pdf}_X(x)\,\mathrm{pdf}_Y(z-x)$$

This integral way of combining two functions, pdf_X(x) and pdf_Y(y), with a parameter z is called the convolution of pdf_X and pdf_Y, which is a function of the parameter z. [The original shows a figure: the convolution evaluated at z is the area under the product pdf_X(x) pdf_Y(z − x).]
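A discrete analog makes the convolution rule tangible, and ties back to the two-dice example (my sketch in Python/NumPy): the pdf of the sum of two dice is the convolution of the two individual pdfs:

    import numpy as np

    # pdf of one fair die (values 1..6), as a discrete distribution.
    die = np.full(6, 1/6)

    # pdf of the sum of two dice = convolution of the individual pdfs.
    two_dice = np.convolve(die, die)          # totals 2..12
    for total, p in zip(range(2, 13), two_dice):
        print(total, round(p * 36))           # 1,2,3,4,5,6,5,4,3,2,1 of 36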
From the above, we can easily deduce pdf_Z(z) if Z ≡ X − Y = X + (−Y). First, we find pdf_{(−Y)}(y) = pdf_Y(−y), and then use the convolution rule:

$$\mathrm{pdf}_Z(z) = \int_{-\infty}^{\infty} dx\ \mathrm{pdf}_X(x)\,\mathrm{pdf}_{(-Y)}(z-x) = \int_{-\infty}^{\infty} dx\ \mathrm{pdf}_X(x)\,\mathrm{pdf}_Y(x-z)$$

Since we are integrating from −∞ to +∞, we can shift x with no effect:

$$\mathrm{pdf}_Z(z) = \int_{-\infty}^{\infty} dx\ \mathrm{pdf}_X(x+z)\,\mathrm{pdf}_Y(x)$$

which is the standard form for the correlation function of two functions, pdf_X(x) and pdf_Y(y). [The original shows a figure: the correlation evaluated at z is the area under the product pdf_X(x + z) pdf_Y(x).]

The pdf of the sum of two random variables is the convolution of the pdfs of those random variables. The pdf of the difference of two random variables is the correlation function of the pdfs of those random variables.

Note that the convolution of a gaussian distribution with a different gaussian is another gaussian. Therefore, the sum of a gaussian random variable with any other gaussian random variable is gaussian.

Some Distributions Have Infinite Variance, or Infinite Average

In principle, the only requirement on a PDF is that it be normalized:

$$\int_{-\infty}^{\infty} \mathrm{PDF}(x)\,dx = 1$$

Given that, it is possible that the variance is infinite (or properly, undefined). For example, consider:

$$\mathrm{PDF}(x) = \begin{cases} 2/x^3, & x \ge 1 \\ 0, & x < 1 \end{cases} \qquad\Rightarrow\qquad \int_1^{\infty} \mathrm{PDF}(x)\,dx = 1,\quad \langle X\rangle = \int_1^{\infty} x\,\mathrm{PDF}(x)\,dx = 2, \quad but\quad \sigma \to \infty$$

The above distribution is normalized, and has finite average, but infinite deviation. Even worse:

$$\mathrm{PDF}(x) = \begin{cases} 1/x^2, & x \ge 1 \\ 0, & x < 1 \end{cases} \qquad\Rightarrow\qquad \int_1^{\infty} \mathrm{PDF}(x)\,dx = 1, \quad and\quad \langle X\rangle \to \infty,\ \sigma \to \infty$$

This distribution is normalized, but has both infinite average and infinite deviation.

Are such distributions physically meaningful? Sometimes. The Lorentzian (aka Breit-Wigner) distribution is common in physics, or at least, a good approximation to physical phenomena. It has infinite average and deviation. Its normal and parameterized forms are:

$$L(x) = \frac{1}{\pi}\,\frac{1}{1+x^2}, \qquad L(x;\,x_0,\gamma) = \frac{1}{\pi\gamma}\,\frac{1}{1+\left((x-x_0)/\gamma\right)^2} \qquad where\ x_0 \equiv \text{location of peak},\ \gamma \equiv \text{half-width at half-maximum}$$

This is approximately the energy distribution of particles created in high-energy collisions. Its CDF is:

$$\mathrm{CDF}_{Lorentzian}(x) = \frac{1}{\pi}\arctan\!\left(\frac{x-x_0}{\gamma}\right) + \frac12$$
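The practical consequence shows up immediately in simulation (my own sketch, Python/NumPy; the "standard Cauchy" distribution is this Lorentzian with x_0 = 0, γ = 1): sample averages of a Lorentzian never settle down as the sample grows, unlike a well-behaved distribution:

    import numpy as np

    rng = np.random.default_rng(0)

    for n in (100, 10_000, 1_000_000):
        lorentzian = rng.standard_cauchy(n)   # x0 = 0, gamma = 1
        gaussian = rng.standard_normal(n)
        # The gaussian average converges toward 0; the Lorentzian average
        # keeps jumping around, since its population average is undefined.
        print(n, round(gaussian.mean(), 4), round(lorentzian.mean(), 4))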
Samples and Parameter Estimation

In statistics, an "efficient" estimator is the most efficient estimator possible [ref??]: there is none better (i.e., none with smaller variance). You can prove mathematically that the average and variance of a sample are the most efficient estimators (least variance) of the population average and variance. It is impossible to do any better, so it's not worth looking for better ways. The most efficient estimators are least-squares estimators, which means that over many samples, they minimize the sum-squared error from the true value. We discuss least-squares vs. maximum-likelihood estimators later.

Note, however, that given a set of measurements, some of them may not actually measure the population of interest (i.e., they may be noise). If you can identify those bad measurements from a sample, you should remove them before estimating any parameter. Usually, in real experiments, there is always some unremovable corruption of the desired signal, and this contributes to the uncertainty in the measurement.

The sample average is defined as:

$$\langle x\rangle \equiv \frac{1}{n}\sum_{i=1}^{n} x_i$$

and is the least-variance estimate of the average ⟨X⟩ of any population. It is unbiased, which means the average of many estimates approaches the true population average:

$$\overline{\langle x\rangle} = \langle X\rangle \qquad where\ \overline{\,\cdot\,}\ \text{denotes the average over many samples (over the given parameter, if not obvious)}$$

Note that the definition of "unbiased" is not that the estimator approaches the true value for large samples; it is that the average of the estimator approaches the true value over many samples, even small samples.

The sample variance and standard deviation are defined as:

$$s^2 \equiv \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \langle x\rangle\right)^2, \qquad s \equiv \sqrt{s^2} \qquad where\ \langle x\rangle\ \text{is the sample average, as above}$$

The sample variance is an efficient and unbiased estimate of Var(X), which means no other estimate of Var(X) is better. Note that s^2 is unbiased, but s is biased, because the square root of the average is not equal to the average of the square root. I.e.,

$$\overline{s^2} = \mathrm{Var}(X), \qquad but\quad \bar{s} \ne \mathrm{Dev}(X) \quad because\quad \bar{s} \ne \sqrt{\overline{s^2}}$$

This exemplifies the importance of properly defining "bias":

$$\lim_{n\to\infty} s = \mathrm{Dev}(X) \qquad even\ though\quad \bar{s} \ne \mathrm{Dev}(X)$$

Sometimes you see variance defined with 1/n, and sometimes with 1/(n − 1). Why? The population variance is defined as the mean-squared deviation from the population average. For a finite population, we find the population variance using 1/N, where N is the number of values in the whole population:

$$\mathrm{Var}(X) = \frac{1}{N}\sum_{i=1}^{N}\left(X_i - \mu\right)^2 \qquad where\ N\ \text{is the # of values in the entire population},\ X_i\ \text{is the } i\text{th value of the population},\ \mu \equiv \text{exact population average}$$

In contrast, the sample variance is the variance of a sample taken from a population. The population average is usually unknown; we can only estimate it as ⟨x⟩. Then, to make s^2 unbiased, one can show that it must use 1/(n − 1), where n is the sample size, not the population size. (Show this??) This is actually a special case of curve fitting, where we fit a constant, ⟨x⟩, to the population. This is a single parameter, and so removes 1 degree of freedom from our fit errors. Hence, the mean-squared fit error has 1 degree of freedom less than the sample size. (Show this with algebra.)

For a sample from a population where the average μ is exactly known, we use n as the weighting for s^2:

$$s^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \mu\right)^2$$

which is just the earlier population formula with X_i → x_i, N → n. Notice that infinite populations with unknown μ can only have samples, and thus always use n − 1. But as n → ∞, it doesn't matter, so we can compute the population variance either way:

$$\mathrm{Var}(X) = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \langle x\rangle\right)^2 = \lim_{n\to\infty}\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \langle x\rangle\right)^2, \qquad because\ n = n-1\ when\ n = \infty.$$
Central Limit Theorem For Continuous And Discrete Populations

The central limit theorem is important because it allows us to estimate some properties of a population given only a sample of the population, with no a priori information. Given a population, we can take a sample of it, and compute its average. If we take many samples, each will (likely) produce a different average. Hence, the average of a sample is a new random variable, created from the original. The central limit theorem says that for any population, as the sample size grows, the sample average becomes a gaussian random variable, with average equal to the population average, and variance equal to the population variance divided by n:

$$\text{Given a random variable } X,\ \text{with mean } \mu \text{ and variance } \sigma^2: \qquad \lim_{n\to\infty}\langle x\rangle \in Gaussian\!\left(\mu,\ \frac{\sigma^2}{n}\right) \qquad where\ \langle x\rangle \equiv \text{sample average}$$

Note that the central limit theorem applies only to multiple samples from a single population (though there are some variations that can be applied to multiple populations). [It is possible to construct large sums of multiple populations whose averages are not gaussian, e.g. in communication theory, inter-symbol interference (ISI). But we will not go further into that.]

How does the Central Limit Theorem apply to a discrete population? If a population is discrete, then any sample average is also discrete. But the gaussian distribution is continuous. So how can the sample average approach a gaussian for large sample size N? Though the sample average is discrete, the density of allowed values increases with N. If you simply plot the discrete values as points, those points approach the gaussian curve. For very large N, the points are so close, they look continuous.

TBS: Why binomial (discrete), Poisson (discrete), and chi-squared (continuous) distributions approach gaussian for large n (or ν).

Uncertainty of Average

The sample average ⟨x⟩ gives us an estimate of the population average μ. The sample average, when taken as a set of values over many samples, is itself a random variable. The Central Limit Theorem (CLT) says that if we know the population standard deviation σ, the sample average ⟨x⟩ will have standard deviation

$$\sigma_{\langle x\rangle} = \frac{\sigma}{\sqrt{n}} \qquad \text{(proof below)}$$

In statistics, σ_⟨x⟩ is called the standard error of the mean. In experiments, σ_⟨x⟩ is the 1-sigma uncertainty in our estimate of the average μ. However, most often we know neither μ nor σ, and must estimate both from our sample, using ⟨x⟩ and s. For large samples, we use simply s:

$$\sigma_{\langle x\rangle} \approx \frac{s}{\sqrt{n}} \qquad \text{for "large" samples, i.e. "large" } n$$

For small samples, we must still use s as our estimate of the population deviation, since we have nothing else. But instead of assuming that ⟨x⟩ is gaussian, we use the exact distribution, which is a little wider, called a T-distribution [W&M ??], which is complicated to write explicitly. It has a parameter, t, similar to the gaussian z-value, which measures deviation from the average in units of the estimated standard error:

$$t \equiv \frac{\langle x\rangle - \mu}{s/\sqrt{n}} \qquad where\ \langle x\rangle \equiv \text{sample average},\ s \equiv \text{sample standard deviation}$$

We then use t, and t-tables, to establish confidence intervals.
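Both claims (gaussian shape aside) are easy to watch happen (my sketch in Python/NumPy), using a decidedly non-gaussian population, the fair die, whose variance is 35/12:

    import numpy as np

    rng = np.random.default_rng(1)
    pop_dev = np.sqrt(35 / 12)    # fair die: mu = 3.5, sigma^2 = 35/12

    for n in (4, 16, 64):
        # 10,000 samples, each of size n, from a discrete uniform population.
        samples = rng.integers(1, 7, size=(10_000, n))
        means = samples.mean(axis=1)
        # CLT: deviation of the sample average ~ sigma / sqrt(n)
        print(n, round(means.std(), 4), round(pop_dev / np.sqrt(n), 4))

The measured spread of the sample averages tracks σ/√n even for modest n, which is the standard error of the mean in action.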
Uncertainty of Uncertainty: How Big Is Infinity?

Sometimes we need to know the uncertainty in our estimate of the population variance (or standard deviation). So let's look more closely at the uncertainty in our estimate s^2 of the population variance σ^2. The random variable (n − 1)s^2/σ^2 has a chi-squared distribution with n − 1 degrees of freedom [W&M Thm 6.16 p201]. So:

$$\frac{(n-1)s^2}{\sigma^2} \in \chi^2_{n-1} \qquad\Rightarrow\qquad \mathrm{Var}\!\left(\frac{(n-1)s^2}{\sigma^2}\right) = 2(n-1) \qquad\Rightarrow\qquad \mathrm{Var}(s^2) = \frac{2\sigma^4}{n-1}, \qquad \mathrm{Dev}(s^2) = \sigma^2\sqrt{\frac{2}{n-1}}$$

However, usually we're more interested in the uncertainty of the standard deviation estimate, rather than the variance. For that, we use the fact that s is a function of s^2: s ≡ (s^2)^{1/2}. For moderate or bigger sample sizes, and confidence ranges up to 95% or so, we can use the approximate formula for the deviation of a function of a random variable (see Functions of Random Variables, elsewhere):

$$Y = f(X) \quad\Rightarrow\quad \mathrm{Dev}(Y) \approx f'(X)\,\mathrm{Dev}(X) \qquad \text{for small Dev}(X)$$

$$s = \left(s^2\right)^{1/2} \quad\Rightarrow\quad \mathrm{Dev}(s) \approx \frac{1}{2\sqrt{s^2}}\,\mathrm{Dev}(s^2) \approx \frac{1}{2\sigma}\,\sigma^2\sqrt{\frac{2}{n-1}} = \frac{\sigma}{\sqrt{2(n-1)}}$$

This allows us to explain the rule of thumb: n > 30 is "statistical infinity." This rule is most often used in estimating the standard error of the mean ⟨x⟩ (see above), given by

$$\sigma_{\langle x\rangle} = \frac{\sigma}{\sqrt{n}} \approx \frac{s}{\sqrt{n}}$$

We don't know the population deviation σ, so we approximate it with s. For small samples, this isn't so good. Then the uncertainty σ_⟨x⟩ needs to include both the true sampling uncertainty in ⟨x⟩ and the uncertainty in s. To be confident that our σ_⟨x⟩ is within our claim, we need to expand our confidence limits, to allow for the chance that s happens to be low. The Student T-distribution exactly handles this correction to our confidence limits on ⟨x⟩ for all sample sizes. However, roughly, an upper bound on σ would be, say, the 95% limit on s, which is about

$$\sigma_{95\%} \approx s + 2\,\mathrm{Dev}(s) \approx s + 2\sigma\frac{1}{\sqrt{2(n-1)}} \approx s + s\sqrt{\frac{2}{n-1}}$$

This might seem circular, because we still have σ (which we don't know) on the right hand side. However, its effect is now reduced by the fraction multiplying it. So the uncertainty in σ is also reduced by this factor, and we can neglect it. Thus to first order, we have

$$\sigma_{95\%} \approx s + s\sqrt{\frac{2}{n-1}} = s\left(1 + \sqrt{\frac{2}{n-1}}\right)$$

So long as √[2/(n − 1)] is small compared to 1, we can ignore it. Plug into our formula for σ_⟨x⟩:

$$\sigma_{\langle x\rangle,\,95\%} \approx \frac{\sigma_{95\%}}{\sqrt{n}} \approx \frac{s}{\sqrt{n}}\left(1 + \sqrt{\frac{2}{n-1}}\right)$$

When n = 30, √[2/(n − 1)] = 0.26. That seems like a lot to me, but n = 30 is the generally agreed upon bound for good confidence that s ≈ σ.

Combining Estimates of Varying Uncertainty

Weight measurements by 1/σ^2: When taking data, our measurements often have varying uncertainty: some samples are better than others. We can still find an average, but what is the best average, and what is σ_⟨x⟩, of a set of samples, each with its own uncertainty σ_i? We find this from the rule that variances add (if the random variables are uncorrelated). In general, if you have a set of estimates of a parameter, but each estimate has a different uncertainty, how do you combine the estimates for the most reliable estimate of the parameter? Clearly, estimates with smaller variance should be given more weight than estimates with larger variance. But exactly how much? In general, you weight each estimate by 1/σ^2. For example, to estimate the average of a population from several samples of different uncertainty (different variance):

$$\langle x\rangle_{est} \approx \frac{\displaystyle\sum_{i=1}^{n} \frac{x_i}{\sigma_i^2}}{\displaystyle\sum_{i=1}^{n} \frac{1}{\sigma_i^2}} \qquad where\ x_i\ \text{are the estimates of the average},\ \sigma_i^2\ \text{are the variances of those estimates}$$

This is just a weighted average, where the denominator is the sum of the weights. There are whole books written on parameter estimation, so it can be a big topic.
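In code, the 1/σ^2 weighting is a one-liner (my sketch in Python/NumPy). I also include the standard companion formula for the combined uncertainty, 1/σ_best^2 = Σ 1/σ_i^2, which follows from the same "variances add" rule but is my addition, not stated in the text above:

    import numpy as np

    # Three measurements of the same quantity, with 1-sigma uncertainties.
    x = np.array([10.3, 9.8, 10.9])
    sigma = np.array([0.2, 0.5, 1.0])

    w = 1.0 / sigma**2                      # weight each estimate by 1/sigma^2
    x_best = np.sum(w * x) / np.sum(w)      # weighted average
    sigma_best = 1.0 / np.sqrt(np.sum(w))   # combined uncertainty (assumption)

    print(round(x_best, 3), round(sigma_best, 3))  # dominated by the 0.2 point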
Functions of Random Variables

It follows from the definition of probability that the average value of any function of a random variable is

$$\langle f(X)\rangle = \int_{-\infty}^{\infty} dx\ f(x)\,\mathrm{pdf}_X(x)$$

We can apply this to our definitions of population average and population variance:

$$\langle X\rangle = \int_{-\infty}^{\infty} dx\ x\,\mathrm{pdf}_X(x) \qquad and \qquad \mathrm{Var}(X) = \int_{-\infty}^{\infty} dx\ \left(x - \langle X\rangle\right)^2\,\mathrm{pdf}_X(x)$$

Statistically Speaking: What Is The Significance of This?

Before we compute any uncertainties, we should understand what they mean. Statistical significance interprets uncertainties. It is one of the most misunderstood, and yet most important, concepts in science. It underlies virtually all experimental and simulation results. Beliefs (correct and incorrect) about statistical significance drive experiment, research, funding, and policy. Understanding statistical significance is a prerequisite to understanding science. This cannot be overstated, and yet many (if not most) scientists and engineers receive no formal training in statistics. The following few pages describe statistical significance, surprisingly using almost no math.

Overview of Statistical Significance

The term "statistically significant" has a precise meaning which is, unfortunately, different than the common meaning of the word "significant." Many experiments compare quantitative measures of two populations, e.g. the IQs of ferrets vs. gophers. In any real experiment, the two measures will almost certainly differ. How should we interpret this difference? We can use statistics to tell us the meaning of the difference. A difference which is not statistically significant in some particular experiment may, in fact, be quite important. But we can only determine its importance if we do another experiment with finer resolution, enough to satisfy our subjective judgment of importance. For this section, I use the word "importance" to mean a subjective assessment of a measured result. The statement "We could not measure a difference" is very different from "There is no important difference."

Statistical significance is a quantitative comparison of the magnitude of an effect and the resolution of the statistics used to measure it. This section requires an understanding of probability and uncertainty. We first summarize what statistical significance is and is not, then give more specific statements and examples. Statistical significance is many things:

- Statistical significance is a measure of an experiment's ability to resolve its own measured result. It is not a measure of the importance of a result.
- Statistical significance is closely related to uncertainty.
- Statistical significance is a quantitative statement of the probability that a result is real, and not a measurement error or the random result of sampling that just happened to turn out that way (by chance).
- "Statistically significant" means "measurable by this experiment." "Not statistically significant" means that we cannot fully trust the result from this experiment alone; the experiment was too crude to have confidence in its own result.
- Statistical significance is a one-way street: if a result is statistically significant, it is real. However, it may or may not be important. In contrast, if a result is not statistically significant, then we don't know if it's real or not. However, we will see that even a "not significant" result can sometimes provide meaningful and useful information.
- If the difference between two results in an experiment is "not statistically significant," that difference may still be very real and important.

Details of Statistical Significance

A meaningful measurement must contain two parts: the magnitude of the result, and the confidence limits on it, both of which are quantitative statements. When we say, "the average IQ of ferrets in our experiment is 102 ± 5 points," we mean that there is a 95% chance that the actual average IQ is between 97 and 107. We could also say that our 95% confidence limits are 97 to 107.
Or, we could say that our 95% uncertainty is ±5 points. The confidence limits are sometimes called "error bars," because on a graph, confidence limits are conventionally drawn as little bars above and below the measured values.

Suppose we test gophers and find that their average IQ is 107 ± 4 points. Can we say "on average, gophers have higher IQs than ferrets"? In other words, is the difference we measured significant, or did it happen just by chance? To assess this, we compute the difference, and its uncertainty (recall that uncorrelated variances add, so the uncertainties add in quadrature):

$$\Delta IQ\ (\text{gophers} - \text{ferrets}) = (107 - 102) \pm \sqrt{4^2 + 5^2} \approx 5 \pm 6$$

This says that the difference lies within our uncertainty, so we are not 95% confident that gophers have higher IQs. Therefore, we still don't know if either population has a higher IQ than the other. Our experiment was not precise enough to measure a difference. This does not mean that there is no difference. However, we can say that there is a 95% chance that the difference is between −1 and 11 (5 ± 6). A given experiment measuring a difference can produce one of two results of statistical significance: (1) the difference is statistically significant; or (2) it is not. In this case, the difference is not (statistically) significant at the 95% level.

In addition, confidence limits yield one of three results of importance: (1) confirm that a difference is important; or (2) confirm that it is not important; or (3) be inconclusive. But the judgment of how much is "important" is outside the scope of the experiment. For example, we may know from prior research that a 10-point average IQ difference makes a population a better source for training pilots, enough better to be important. Note that this is a subjective statement, and its precise meaning is outside our scope here. Five of the six combinations of significance and importance are possible, as shown by the examples following the sketch below.
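The quadrature rule used above (and in the examples that follow) is two lines of code (my sketch in Python; the function name is mine):

    import math

    def difference(x1, u1, x2, u2):
        """Difference (x2 - x1) and its uncertainty, for uncorrelated
        uncertainties u1, u2: variances add, so they add in quadrature."""
        return x2 - x1, math.hypot(u1, u2)

    d, u = difference(102, 5, 107, 4)   # ferrets 102 +- 5, gophers 107 +- 4
    print(f"{d} +- {u:.0f}")            # 5 +- 6: not significant at 95%
    print(d - u, d + u)                 # 95% range: roughly -1 to 11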
Example 1, not significant, and inconclusive importance: With the given numbers, ΔIQ = 5 ± 6, the importance of our result is inconclusive, because we don't know if the average IQ difference is more or less than 10 points.

Example 2, not significant, but definitely not important: Suppose that prior research showed (somehow) that a difference needed to be 20 points to be important. Then our experiment shows that the difference is not important, because the difference is very unlikely to be as large as 20 points. In this case, even though the results are not statistically significant, they are very valuable; they tell us something meaningful and worthwhile, namely, the difference between the average IQs of ferrets and gophers is not important for using them as a source for pilots. The experimental result is valuable, even though not significant, because it establishes an upper bound on the difference.

Example 3, significant, but inconclusive importance: Suppose again that a difference of 10 points is important, but our measurements are: ferrets average 100 ± 3 points, and gophers average 107 ± 2 points. Then the difference is:

$$\Delta IQ\ (\text{gophers} - \text{ferrets}) = (107 - 100) \pm \sqrt{2^2 + 3^2} \approx 7 \pm 4$$

These results are statistically significant: there is better than a 95% chance that the average IQs of ferrets and gophers are different. However, the importance of the result is still inconclusive, because we don't know if the difference is more or less than 10 points.

Example 4, significant and important: Suppose again that a difference of 10 points is important, but we measure that ferrets average 102 ± 3 points, and gophers average 117 ± 2 points. Then the difference is:

$$\Delta IQ\ (\text{gophers} - \text{ferrets}) = (117 - 102) \pm \sqrt{2^2 + 3^2} \approx 15 \pm 4$$

Now the difference is both statistically significant and important, because there is a 95% chance that the difference is > 10 points. We are better off choosing gophers to go to pilot school.

Example 5, significant, but not important: Suppose our measurements resulted in

$$\Delta IQ = 5 \pm 4$$

Then the difference is significant, but not important, because we are confident that the difference is < 10. This result establishes an upper bound on the difference. In other words, our experiment was precise enough that if the difference were important (i.e., big enough to matter), then we'd have measured it.

Finally, note that we cannot have a result that is not significant, but important. Suppose our result was:

$$\Delta IQ = 11 \pm 12$$

The difference is unmeasurably small, and possibly zero, so we certainly cannot say the difference is important. In particular, we can't say the difference is greater than anything.

Thus we see that stating "there is a statistically significant difference" is (by itself) not saying much, because the difference could be tiny, and physically unimportant. We have used here the common confidence limit fraction of 95%, often taken to be ~2σ. The next most common fraction is 68%, or ~1σ. Another common fraction is 99%, taken to be ~3σ. More precise gaussian fractions are 95.45% and 99.73%, but the digits after the decimal point are usually meaningless (i.e., not statistically significant!). Note that we cannot round 99.73% to the nearest integer, because that would be 100%, which is meaningless in this context. Because of the different confidence fractions in use, you should always state your fractions explicitly. You can state your confidence fraction once, at the beginning, or along with your uncertainty, e.g. 10 ± 2 (1σ).

Caveat: We are assuming random errors, which are defined as those that average out with larger sample sizes. Systematic errors do not average out, and result from biases in our measurements. For example, suppose the IQ test was prepared mostly by gophers, using gopher cultural symbols and metaphors unfamiliar to most ferrets. Then gophers of equal intelligence will score higher IQs because the test is not fair. This bias changes the meaning of all our results, possibly drastically.

Ideally, when stating a difference, one should put a lower bound on it that is physically important, and give the probability (confidence) that the difference is important. E.g. "We are 95% confident the difference is at least 10 points" (assuming that 10 points on this scale matters).
Meaningful: "Our data show there is a 95% likelihood that the IQ difference between groups A and B is greater than 10 points."

Statistical significance summary: Statistical significance is a quantitative statement about an experiment's ability to resolve its own result. Importance is a subjective assessment of a measurement that may be guided by other experiments, and/or gut feel. Statistical significance says nothing about whether the measured result is important or not.

Predictive Power: Another Way to Be Significant, but Not Important
Suppose that we have measured IQs of millions of ferrets and gophers over decades. Suppose their population IQs are gaussian, and given by (note the switch to 1σ uncertainties):

    ferrets:  101 ± 20
    gophers:  103 ± 20    (1σ)

The average difference is small, but because we have millions of measurements, the uncertainty in the average is even smaller, and we have a statistically significant difference between the two groups. Suppose we have only one slot open in pilot school, but two applicants: a ferret and a gopher. Who should get the slot? We haven't measured these two individuals, but we might say, "Gophers have significantly higher IQs than ferrets, so we'll accept the gopher." Is this valid?

To quantitatively assess the validity of this reasoning, let us suppose (simplistically) that pilot students with an IQ of 95 or better are 20% more likely to succeed than those with IQ < 95. From the given statistics, 65.5% of gophers have IQs > 95, vs. 61.8% of ferrets. The relative probabilities of success are then:

    ferrets:  0.382 + 0.618(1.2) = 1.12
    gophers:  0.345 + 0.655(1.2) = 1.13

So a random gopher is less than 0.7% more likely to succeed than a random ferret. This is pretty unimportant. In other words, species (between ferrets and gophers) is not a good predictor of success. It's so bad that many, many other factors will be better predictors of success. Height, eyesight, years of schooling, and sports ability are probably all better predictors. The key point is this:

Differences in average, between two populations, which are much smaller than the deviations within the populations, are poor predictors of individual outcomes.
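Those fractions are easy to check numerically with the standard-normal CDF, Φ(z) = (1 + erf(z/√2))/2, available via erfc() in the C math library. A minimal sketch (the 20% success bonus for IQ ≥ 95 is the toy assumption from the text):

    #include <math.h>
    #include <stdio.h>

    // Fraction of a gaussian population (mean mu, std dev sigma) above x.
    static double frac_above(double x, double mu, double sigma)
    {
        double z = (x - mu) / sigma;
        return 0.5 * erfc(z / sqrt(2.));   // = 1 - Phi(z)
    }

    int main(void)
    {
        double pf = frac_above(95., 101., 20.);   // ferrets above 95: ~0.618
        double pg = frac_above(95., 103., 20.);   // gophers above 95: ~0.655
        // Toy model: IQ >= 95 is 20% more likely to succeed (weight 1.2 vs 1.0)
        double sf = (1. - pf) + 1.2*pf;           // ~1.12
        double sg = (1. - pg) + 1.2*pg;           // ~1.13
        printf("ferrets %.3f  gophers %.3f  advantage %.2f%%\n",
               sf, sg, 100.*(sg/sf - 1.));        // advantage ~0.65%
        return 0;
    }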
Bias and the Hood (Unbiased vs. Maximum-Likelihood Estimators)
In experiments, we frequently have to estimate parameters from data. There is a very important difference between unbiased and maximum-likelihood estimates, even though sometimes they are the same. Sadly, two of the most popular experimental statistics books completely destroy these concepts, and their distinction. [Both books try to derive unbiased estimates using the principle of maximum likelihood, which is impossible since the two concepts are very different. The incorrect argument goes through the exercise of deriving the formula for sample variance from the principle of maximum likelihood, and (of course) gets the wrong answer! Hand waving is then applied to wiggle out of the mistake.] Everything in this section applies to arbitrary distributions, not just gaussian. We follow these steps:

1. Terse definitions, which won't be entirely clear at first.
2. Example of estimating the variance of a population (things still fuzzy).
3. Silly example of the need for maximum-likelihood in repeated trials.
4. Real-world physics examples of different situations leading to different choices between unbiased and maximum-likelihood.

Terse definitions: In short:

An unbiased statistic is one whose average is exactly right: the average of many samples of an unbiased statistic is closer to the right answer than one sample is. In the limit of an infinite number of estimates, the average of an unbiased statistic is exactly the population parameter.

A maximum-likelihood statistic is one which is most likely to have produced the given data.

Note that if a statistic is biased, then the average of many maximum-likelihood estimates does not get you closer to the right answer. In other words, given a fixed set of data, maximum-likelihood estimates have some merit, but biased ones can't be combined well with other sets of data (perhaps future data, not yet taken). This concept should become more clear below.

Which is better, an unbiased estimate or a maximum-likelihood estimate? It depends on what your goals are.

Example of population variance: Given a sample of n values from a population, an unbiased estimate of the population variance is

    σ² ≈ [1/(n − 1)] Σᵢ₌₁ⁿ (xᵢ − x̄)²    (unbiased estimate)

If we take several samples of the population, compute an unbiased estimate of the variance for each sample, and average those estimates, we'll get a better estimate of the population variance. Generally, unbiased estimators are those that minimize the sum-squared error from the true value (principle of least-squares).

However, suppose we only get one shot at estimating the population variance? Suppose Monty Hall says, "I'll give you a zillion dollars if you can estimate the variance (to within some tolerance)." What estimate should we give him? Since we only get one chance, we don't care about the average of many estimates being accurate. We want to give Mr. Hall the variance estimate that is most likely to be right. One can show that the most likely estimate is given by using n in the denominator, instead of (n − 1):

    σ² ≈ (1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)²    (maximum-likelihood estimate)

This is the estimate most likely to win the prize. Perhaps more realistically, if you need to choose how long to fire a retro-rocket to land a spacecraft on the moon, do you choose (a) the burn time that, averaged over many spacecraft, reaches the moon, or (b) the burn time that is most likely to land your one-and-only craft on the moon?

In the case of variance, the maximum-likelihood estimate is smaller than the unbiased estimate by a factor of (n − 1)/n. If we were to make many maximum-likelihood estimates, each one would be small by the same factor. The average would then also be small by that factor. No amount of averaging would ever fix this error. Our average estimate of the population variance would not get better with more estimates.

You might conclude that maximum-likelihood estimates are only good for situations where you get a single trial. However, we now show that maximum-likelihood estimates can be useful even when there are many trials of a statistical process.
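The (n − 1)/n bias is easy to see in a simulation. A minimal sketch (the crude sum-of-uniforms gaussian generator is my assumption, chosen only for self-containment): the average of many 1/n estimates converges to σ²(n − 1)/n, while the 1/(n − 1) estimates converge to σ².

    #include <stdio.h>
    #include <stdlib.h>

    // Crude standard-normal-ish variate: sum of 12 uniforms minus 6
    // (mean 0, variance 1); good enough to demonstrate estimator bias.
    static double gauss(void)
    {
        double s = 0.;
        for (int i = 0; i < 12; i++) s += rand() / (RAND_MAX + 1.0);
        return s - 6.;
    }

    int main(void)
    {
        const int n = 5, trials = 200000;     // small n makes the bias obvious
        double avg_ml = 0., avg_unb = 0.;
        for (int t = 0; t < trials; t++) {
            double x[5], mean = 0., ss = 0.;
            for (int i = 0; i < n; i++) { x[i] = gauss(); mean += x[i]; }
            mean /= n;
            for (int i = 0; i < n; i++) ss += (x[i] - mean)*(x[i] - mean);
            avg_ml  += ss / n;                // maximum-likelihood estimate
            avg_unb += ss / (n - 1);          // unbiased estimate
        }
        // True variance is 1; expect the ML average near (n-1)/n = 0.8
        printf("avg ML: %.3f   avg unbiased: %.3f\n",
               avg_ml/trials, avg_unb/trials);
        return 0;
    }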
A silly example: You are a medieval peasant barely keeping your family fed. Every morning, the benevolent king goes to the castle tower overlooking the public square, and tosses out a gold coin to the crowd. Whoever catches it, keeps it. Being better educated than most medieval peasants, each day you record how far the coin goes, and generate a pdf (probability distribution function) for the distance from the tower. It looks like this:

[Figure: Gold coin toss distance pdf. The distribution is skewed, so the most-likely distance is notably different from the average distance.]

The most-likely distance is notably different than the average distance. Given this information, where do you stand each day? Answer: at the most-likely distance, because that maximizes your payoff not only for one trial, but across many trials over a long time. The best estimator is in the eye of the beholder: as a peasant, you don't care much for least-squares, but you do care about most money.

Note that the previous example of landing a spacecraft is the same as the gold coin question: even if you launch many spacecraft, for each one you would give the burn most likely to land the craft. The average of many failed landings has no value.

Real physics examples:

Example 1: Suppose you need to generate a beam of ions, all moving at very close to the same speed. You generate your ions in a plasma, with a Maxwellian thermal speed distribution (roughly the same shape as the gold coin toss pdf). Then you send the ions through a velocity selector to pick out only those very close to a single speed. You can tune your velocity selector to pick any speed. Now ions are not cheap, so you want your velocity selector to get the most ions from the speed distribution that it can. That speed is the most-likely speed, not the average speed. So here again, we see that "most likely" has a valid use even in repeated trials of random processes.

Example 2: Suppose you are tracing out the orbit of the moon around the earth by measuring the distance between the two. Any given day's measurement has limited ability to trace out an entire orbit, so you must make many measurements over several years. You have to fit a model of the moon's orbit to this large set of measurements. You'd like your fit to get better as you collect more data. Therefore, you choose to make unbiased estimates of the distance, so that on average, over time, your estimate of the orbit gets better and better. If instead you chose each day's maximum-likelihood estimator, you'd be off of the average (in the same direction) every day, and no amount of averaging would ever fix that.

Wrap up: When you have symmetric, unimodal distributions (symmetric around a single maximum), the unbiased and maximum-likelihood estimates are identical. This is true, for example, for the average of a gaussian distribution. For asymmetric or multi-modal distributions, the unbiased and maximum-likelihood estimates are different, and have different properties. In general, unbiased estimates are the most efficient estimators, which means they have the smallest variance of all possible estimators. Unbiased estimators are also least-squares estimators, which means they minimize the sum-squared error from the true value. This property follows from being unbiased, since the average of a population is the least-squares estimate of all its values.

Correlation and Dependence
To take a sample of a random variable X, we get a value of Xᵢ for each sample point i, i = 1 ... n. Sometimes when we take a sample, for each sample point we get not one, but two, random variables, Xᵢ and Yᵢ. The two random variables Xᵢ and Yᵢ may or may not be related to each other. We define the joint probability distribution function of X and Y such that

    Pr(x < X < x + dx  and  y < Y < y + dy) = pdf_XY(x, y) dx dy

This is just a 2-dimensional version of a typical pdf.
Since X and Y are random variables, we could look at either of them and find its individual pdf: pdf_X(x), and pdf_Y(y). If X and Y have nothing to do with each other (i.e., X and Y are independent), then a fundamental axiom of probability says that the probability of finding x < X < x + dx and y < Y < y + dy is the product of the two pdfs:

    pdf_XY(x, y) = pdf_X(x) pdf_Y(y)    (if X and Y are independent)

The above equation is the definition of statistical independence: two random variables are independent if and only if their joint distribution function is the product of the individual distribution functions.

A very different concept is correlation. Correlation is a measure of how linearly related two random variables are. It turns out that we can define correlation mathematically by the correlation coefficient:

    ρ ≡ Cov(X, Y)/(σ_X σ_Y)    where    Cov(X, Y) ≡ ⟨(X − X̄)(Y − Ȳ)⟩

For a discrete variable,

    σ_xy ≡ (1/N) Σᵢ₌₁ᴺ (xᵢ − x̄)(yᵢ − ȳ)    where N ≡ # of elements in the population

If ρ = 0, then X and Y are uncorrelated. If ρ ≠ 0, then X and Y are correlated. Note that

    ρ = 0    ⟺    Cov(X, Y) = 0,    or for a discrete variable, Σᵢ (xᵢ − x̄)(yᵢ − ȳ) = 0

where Cov(X, Y) is the covariance of X and Y. Two random variables are uncorrelated if and only if their covariance, defined above, is zero.

Being independent is a stronger statement than uncorrelated. Random variables which are independent are necessarily uncorrelated (proof??). But variables which are uncorrelated can be highly dependent. For example, suppose we have a random variable X which is uniformly distributed over [−1, 1]. Now define a new random variable Y such that Y = X². Clearly, Y is dependent on X, but Y is uncorrelated with X. Y and X are dependent because, given either, we know a lot about the other. They are uncorrelated because for every Y value, there is one positive and one negative value of X. So for every value of (X − X̄)(Y − Ȳ), there is its negative as well. The average is therefore 0; hence, Cov(X, Y) = 0. A crucial point is:

Variances add for uncorrelated variables, even if they are dependent.

This is easy to show. Given that X and Y are uncorrelated,

    Var(X + Y) ≡ ⟨[(X + Y) − (X̄ + Ȳ)]²⟩ = ⟨[(X − X̄) + (Y − Ȳ)]²⟩
               = ⟨(X − X̄)²⟩ + 2⟨(X − X̄)(Y − Ȳ)⟩ + ⟨(Y − Ȳ)²⟩
               = ⟨(X − X̄)²⟩ + ⟨(Y − Ȳ)²⟩ = Var(X) + Var(Y)

All we needed to prove that variances add is that Cov(X, Y) = 0.
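A quick numerical illustration of the uncorrelated-but-dependent example above (a sketch; it uses the standard C rand() for the uniform variate):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const int N = 1000000;
        double sx = 0., sy = 0., sxy = 0., sxx = 0., syy = 0.;
        for (int i = 0; i < N; i++) {
            double x = 2.*rand()/(RAND_MAX + 1.0) - 1.;  // uniform on [-1, 1)
            double y = x*x;                              // fully dependent on x
            sx += x;  sy += y;  sxy += x*y;  sxx += x*x;  syy += y*y;
        }
        double mx = sx/N, my = sy/N;
        double cov = sxy/N - mx*my;                      // ~0: uncorrelated
        double vx  = sxx/N - mx*mx, vy = syy/N - my*my;
        printf("rho = %f\n", cov / sqrt(vx*vy));         // prints ~0
        return 0;
    }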
Data Fitting (Curve Fitting)
Suppose we have an ideal process, with an ideal curve mapping an independent variable x to a dependent variable y. Now we take a set of measurements of this process, that is, we measure a set of data pairs (xᵢ, yᵢ), below left:

[Figure: left, the ideal curve y(x) with non-ideal (scattered) data; right, the same data with a straight-line guess for y(x).]

Suppose further we don't know the ideal curve, but we have to guess it. Typically, we make a guess of the general form of the curve from theoretical or empirical information, but we leave the exact parameters of the curve free. For example, we may guess that the form of the curve is a straight line (above right), y = mx + b, but we leave the slope and intercept (m and b) of the curve unknown. (We might guess another form, with other, possibly more, parameters.) Then we fit our curve to the data, which means we compute the values of m and b which best fit the data.

"Best" means that the values of m and b minimize some measure of error, called the figure of merit, compared to all other values of m and b. For data with constant uncertainty, the most common figure of merit is the sum-squared error:

    SSE ≡ Σᵢ₌₁ⁿ errorᵢ² ≡ Σᵢ₌₁ⁿ (measurementᵢ − curveᵢ)² = Σᵢ₌₁ⁿ (yᵢ − f(xᵢ))²
    where f(x) is our fitting function

In our example of fitting to a straight line, for given values of m and b, we have:

    SSE = Σᵢ₌₁ⁿ errorᵢ² = Σᵢ₌₁ⁿ [yᵢ − (m xᵢ + b)]²

Curve fitting is the process of finding the values of all our unknown parameters such that (for constant uncertainty) they minimize the sum-squared error from our data. The (measurement − curve) is often written as (O − C) for (observed − computed). We discuss data with varying uncertainty later, but in that more general case, we adjust parameters to minimize the χ² parameter.

Multiple Linear Regression and Polynomial Fitting
Fitting a polynomial to data is actually a simple example of multiple linear regression (see also the Numerical Analysis section for exact polynomial fits). A simple example of linear regression is: you measure some function y of an independent variable x, i.e., you measure y(x) for some set of x = {xᵢ}. You have a model for y(x) which is a linear combination of basis functions:

    y(x) = Σₘ₌₁ᵏ bₘ fₘ(x) = b₁f₁(x) + b₂f₂(x) + ... + b_k f_k(x)

You use multiple linear regression to find the coefficients bₘ of the basis functions fₘ which compose the measured function y(x). Note that:

Linear regression is not fitting data to a straight line. Fitting data to a line is called "fitting data to a line" (seriously).

The funky part is understanding what are the random variables or predictors to which we perform the regression. Most intermediate statistics texts cover multiple linear regression, e.g., [W&M p353], but we remind you of some basic concepts here:

1. Multiple linear regression predicts the value of some random variable yᵢ from k (possibly correlated) predictors, x_mi, m = 1, 2, ... k. The predictors may or may not be random variables. In the example above, the predictors are x_mi = fₘ(xᵢ).

2. It's linear prediction, so our prediction model is that y is a linear combination of the x's, i.e., for each i:

    yᵢ = b₀ + b₁x₁ᵢ + b₂x₂ᵢ + ... + b_k x_kᵢ = b₀ + Σₘ₌₁ᵏ bₘ x_mᵢ

3. Multiple linear regression determines the unknown regression coefficients b₀, b₁, ... b_k from n samples of the y and xₘ, by solving the following k + 1 linear equations in k + 1 unknowns [W&M p355]:

    b₀ n + b₁ Σᵢ x₁ᵢ + b₂ Σᵢ x₂ᵢ + ... + b_k Σᵢ x_kᵢ = Σᵢ yᵢ
    and for each m = 1, 2, ... k:
    b₀ Σᵢ x_mᵢ + b₁ Σᵢ x_mᵢ x₁ᵢ + b₂ Σᵢ x_mᵢ x₂ᵢ + ... + b_k Σᵢ x_mᵢ x_kᵢ = Σᵢ x_mᵢ yᵢ

Again, all the yᵢ and x_mᵢ are given; we solve for the bₘ. Numerically, we slap the sums on the left into a matrix, and the constants on the right into a vector, and call a function (e.g., gaussj() in Numerical Recipes) to solve for the unknowns.

Polynomials are just a special case of multiple linear regression [W&M p357], where we are predicting yᵢ from powers of xᵢ. As such, we let x_mᵢ = (xᵢ)ᵐ, and proceed with standard multiple linear regression:

    b₀ n + b₁ Σᵢ xᵢ + b₂ Σᵢ xᵢ² + ... + b_k Σᵢ xᵢᵏ = Σᵢ yᵢ
    and for each m = 1, 2, ... k:
    b₀ Σᵢ xᵢᵐ + b₁ Σᵢ xᵢᵐ⁺¹ + b₂ Σᵢ xᵢᵐ⁺² + ... + b_k Σᵢ xᵢᵐ⁺ᵏ = Σᵢ xᵢᵐ yᵢ
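To make the "slap the sums into a matrix" step concrete, here is a minimal sketch for a quadratic fit (k = 2), with a tiny Gaussian-elimination solver standing in for gaussj() (function names are mine; no pivoting, so it is not robust for ill-conditioned data):

    #include <math.h>
    #include <stdio.h>

    // Fit y = b0 + b1*x + b2*x^2 by building and solving the normal equations.
    void polyfit2(const double *x, const double *y, int n, double b[3])
    {
        double A[3][4] = {{0}};                    // augmented matrix [A | rhs]
        for (int i = 0; i < n; i++)
            for (int r = 0; r < 3; r++) {
                for (int c = 0; c < 3; c++)
                    A[r][c] += pow(x[i], r + c);   // sums of x^(r+c)
                A[r][3] += pow(x[i], r) * y[i];    // sums of x^r * y
            }
        for (int p = 0; p < 3; p++)                // naive Gaussian elimination
            for (int r = p + 1; r < 3; r++) {
                double f = A[r][p] / A[p][p];
                for (int c = p; c < 4; c++) A[r][c] -= f * A[p][c];
            }
        for (int r = 2; r >= 0; r--) {             // back-substitution
            b[r] = A[r][3];
            for (int c = r + 1; c < 3; c++) b[r] -= A[r][c] * b[c];
            b[r] /= A[r][r];
        }
    }

    int main(void)
    {
        double x[] = {-2, -1, 0, 1, 2}, y[] = {5, 2, 1, 2, 5};  // y = 1 + x^2
        double b[3];
        polyfit2(x, y, 5, b);
        printf("b0=%g b1=%g b2=%g\n", b[0], b[1], b[2]);        // ~1, 0, 1
        return 0;
    }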
Goodness of Fit

Chi-Squared Distribution
You don't really need to understand the χ² distribution to understand the χ² parameter, but we start here. Notation: X ∈ D(x) means X is a random variable with probability distribution function (pdf) D(x).

Chi-squared (χ²) distributions are a family of distributions characterized by one parameter, called ν (Greek nu). (Contrast with the gaussian distribution, which has two parameters: the mean, μ, and standard deviation, σ.) So we say chi-squared is a 1-parameter distribution; ν is almost always an integer. The simplest case is ν = 1: if we define a new random variable X from a gaussian random variable χ, as

    X ≡ χ²    where χ ∈ gaussian(μ = 0, σ = 1), i.e. avg = 0, std deviation = 1

then X has a χ²₁ distribution. I.e., χ²_{ν=1}(x) is the probability distribution function (pdf) of the square of a gaussian. For general ν, χ²_ν(x) is the pdf of the sum of the squares of ν gaussian random variables:

    X ≡ Σᵢ₌₁^ν χᵢ²    where χᵢ ∈ gaussian(μ = 0, σ = 1)

Thus the random variable X above has a χ²_ν distribution. [picture??]

Chi-squared random variables are always ≥ 0, since they are sums of squares of gaussian random variables. Since the gaussian distribution is continuous, the chi-squared distributions are also continuous. From the definition, we can also see that the sum of two chi-squared random variables is another chi-squared random variable:

    Let A ∈ χ²ₙ, B ∈ χ²ₘ;    then A + B ∈ χ²ₙ₊ₘ

By the central limit theorem, this means that for large ν, chi-squared itself approaches gaussian. We can show that

    ⟨χ²₁⟩ = 1,  var(χ²₁) = 2    ⇒    ⟨χ²_ν⟩ = ν,  var(χ²_ν) = 2ν,  dev(χ²_ν) = √(2ν)

Chi-Squared Parameter
As seen above, χ² is a continuous probability distribution. However, there is also a goodness-of-fit test which computes a parameter also called chi-squared. This parameter is drawn from a distribution that is often close to a χ² distribution, but be careful to distinguish between the parameter χ² and the distribution χ². The chi-squared parameter is not required to be from a chi-squared distribution, though it often is. All the chi-squared parameter really requires is that the variances of our errors add, which is to say that our errors are uncorrelated (not necessarily independent). The χ² parameter is valid for any distribution of uncorrelated errors. The χ² parameter has a χ² distribution only if the errors are gaussian. However, for large ν, the χ² distribution approaches gaussian, as does the sum of many values of any distribution. Therefore:

The χ² distribution is a reasonable approximation to the distribution of any χ² parameter with ν >~ 20, even if the errors are not gaussian.

If we know the standard deviation σ of our measurement error, then the set of {error divided by σ} has standard deviation 1:

    dev(error/σ) = 1,    var(error/σ) = 1

[where dev(X), also written σ_X, is the standard deviation of the random variable X, and var(X), also written σ²_X, is its variance.]

As a special case, but not required for a χ² parameter, if our errors are gaussian,

    error/σ ∈ gaussian(0, 1)    ⇒    (error/σ)² ∈ χ²₁

Often, the uncertainties vary from measurement to measurement. In that case, we are fitting a curve to data triples: (xᵢ, yᵢ, σᵢ).
Still, the error divided by uncertainty for any single measurement is unit deviation:

    dev(errorᵢ/σᵢ) = 1,    var(errorᵢ/σᵢ) = 1    for all i

If we have n measurements, with uncorrelated errors, then because variances add:

    var[Σᵢ₌₁ⁿ errorᵢ/σᵢ] = n.    For gaussian errors:  Σᵢ₌₁ⁿ (errorᵢ/σᵢ)² ∈ χ²ₙ

Returning to our ideal process from above, with a curve mapping an independent variable x to a dependent variable y, we now take a set of measurements with known errors σᵢ:

[Figure: the ideal curve y(x), with measured data points scattered about it by their known errors σᵢ.]

Then our parameter χ² is defined as

    χ² ≡ Σᵢ₌₁ⁿ (errorᵢ/σᵢ)² = Σᵢ₌₁ⁿ [(measurementᵢ − curveᵢ)/σᵢ]²    [∈ χ²ₙ, if gaussian errors]

If n is large, this sum will be close to its average, and (for zero-mean errors),

    ⟨χ²⟩ = Σᵢ₌₁ⁿ var(errorᵢ/σᵢ) = n

Now suppose we have fit a curve to our data, i.e., we guessed a form, and found the parameters which minimize the χ² parameter. If our fit is good, then our curve is close to the real dependence curve for y as a function of x, and our errors will be purely random (no systematic error). We now compute the χ² parameter for our fit, as if our fit-curve were the ideal curve above:

    χ² = Σᵢ₌₁ⁿ (errorᵢ/σᵢ)² = Σᵢ₌₁ⁿ [(measurementᵢ − fitᵢ)/σᵢ]²

If our fit is good, the number χ² will likely be close to n. (We will soon modify the distribution of the χ² parameter, but for now, it illustrates our principle.) If our fit is bad, there will be significant systematic fit error in addition to our random error, and our χ² parameter will be much larger than n. Summarizing:

If χ² is close to n, then our errors are no worse than our measurement errors, and the fit is good. If χ² is much larger than n, then our errors are worse than our measurement errors, so our fit must be bad.

Degrees of freedom: So far we have ignored something called degrees of freedom. Consider again the hypothetical fit to a straight line. We are free to choose our parameters m and b to define our fit-line. But in a set of n data points, we could (if we wanted) choose our m and b so that the line goes exactly through two of the data points:

[Figure: a straight line drawn exactly through two of the n data points.]

This guarantees that two of our fit errors are zero. If n is large, it won't significantly affect the other errors. Then instead of χ² being the sum of n squared-errors, it is approximately the sum of (n − 2) squared-errors, because our fit procedure guarantees that 2 of the errors are zero. In this case, var(χ²) ≈ 2(n − 2). A rigorous analysis shows that for the best-fit line (which probably doesn't go through any of the data points), and gaussian measurement errors, var(χ²) = 2(n − 2), exactly. This concept generalizes quite far:
- even if we don't fit 2 points exactly to the line,
- even if our fit-curve is not a line,
- even if we have more than 2 fit parameters,
the effect is to reduce the χ² parameter to be a sum of less than n squared-errors. The effective number of squared-errors in the sum is called the degrees of freedom (dof):

    dof ≡ n − (# of fit parameters)

Thus the statistics of our χ² parameter are really

    ⟨χ²⟩ = dof,    dev(χ²) = √(2·dof)
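A minimal sketch of the bookkeeping above (the function name and argument layout are mine):

    #include <math.h>

    // Chi-squared parameter for a fit: data (x_i, y_i, sigma_i) vs. fit values.
    // With dof = n - (# of fit parameters), a good fit gives chi2 ~ dof,
    // with deviation sqrt(2*dof); reduced chi-squared is chi2/dof.
    double chi_squared(const double *y, const double *fit,
                       const double *sigma, int n)
    {
        double chi2 = 0.;
        for (int i = 0; i < n; i++) {
            double e = (y[i] - fit[i]) / sigma[i];  // error in units of sigma
            chi2 += e * e;
        }
        return chi2;
    }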
Reduced Chi-Squared Parameter
Since it is awkward for everyone to know n, the number of points in our fit, we simply divide our chi-squared parameter by dof, to get the reduced chi-squared parameter. Then it has these statistics:

    reduced χ² ≡ χ²/dof = (1/dof) Σᵢ₌₁ⁿ [(measurementᵢ − fitᵢ)/σᵢ]²

    ⟨reduced χ²⟩ = dof/dof = 1,    dev(reduced χ²) = dev(χ²)/dof = √(2·dof)/dof = √(2/dof)

If reduced χ² is close to 1, the fit is good. If reduced χ² is much larger than 1, the fit is bad. By "much larger" we mean several deviations away from 1, and the deviation gets smaller with larger dof (larger n).

Of course, our confidence in χ² or reduced χ² depends on how many data points went into computing it, and on our confidence in our measurement errors, σᵢ. Remarkably, one reference on χ² [which I don't remember] says that our estimates of measurement errors, σᵢ, should come from a sample of at least five! That seems to me to be a very small number to have much confidence in σ.

Fitting To Histograms
Data analysis often requires fitting a function to binned data, that is, fitting a predicted probability distribution to a histogram of measured values. While such fitting is very commonly done, it is much less commonly understood. There are important subtleties often overlooked. This section assumes you are familiar with the binomial distribution, the χ² goodness-of-fit parameter (described earlier), and some basic statistics.

The general method for fitting a model to a histogram of data is this:
- Start with n data points (measurements), and a parameterized model for the PDF of those data.
- Bin the data into a histogram (a code sketch of this step appears at the end of this subsection).
- Find the model parameters which best fit the data histogram.

For example, a gaussian distribution is a 2-parameter model; the parameters are the average, μ, and standard deviation, σ. If we believe our data should follow a gaussian distribution, and we want to know the μ and σ of that distribution, we might bin the data into a histogram, and fit the gaussian PDF to it:

[Figure: Sample histogram with a 2-parameter model PDF (μ and σ): at each bin center xᵢ, the model predicts a bin count, modelᵢ; the measured bin count is cᵢ; their difference is the fit error. The fit model is gaussian in this example, but could be any pdf with any parameters.]

We must define "best fit." Usually, we use the χ² (chi-squared) goodness-of-fit parameter as the figure of merit (FOM): the smaller χ², the better the fit. Fitting to a histogram is a special case of general χ² fitting. Therefore, we need to know two things for each bin: (1) the predicted (model) count, and (2) the uncertainty in the measured count. We find these things in the next section.

(This gaussian fit is a simplified example. In reality, if we think the distribution is gaussian, we would compute the sample average and standard deviation directly, using the standard formulas. More on this later. In general, the model is more complicated, and there is no simple formula to compute the parameters. For now, we use this as an example because it is a familiar model to many.)

Chi-squared For Histograms
We now develop the χ² figure of merit for fitting to a histogram. A sample is a set of n measurements (data points). In principle, we could take many samples of data. For each sample, there is one histogram, i.e., there is an infinite population of samples, each with its own histogram. But we have only one sample. The question is: how well does our one histogram represent the population of samples, and therefore, the population of data measurements?
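Before developing the statistics, here is the promised minimal sketch of the binning step from the general method above (uniform-width bins; the names are mine):

    // Bin n data points into nbins uniform-width bins spanning [lo, hi).
    // counts[] must hold nbins ints; out-of-range points are ignored.
    void bin_data(const double *data, int n, double lo, double hi,
                  int nbins, int *counts)
    {
        double width = (hi - lo) / nbins;
        for (int i = 0; i < nbins; i++) counts[i] = 0;
        for (int i = 0; i < n; i++) {
            if (data[i] < lo || data[i] >= hi) continue;  // out of range: skip
            int b = (int)((data[i] - lo) / width);
            counts[b]++;
        }
    }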
To develop the χ² figure of merit for the fit, we must understand the statistics of a single histogram bin, from the population of all histograms that we might have produced from different samples. The key point is this: given a sample of n data points, and a particular histogram bin numbered i, each data point in the sample is either in the bin (with probability pᵢ), or it's not (with probability 1 − pᵢ). Therefore, the count in the i-th histogram bin is binomially distributed, with some probability pᵢ, and n trials. (See standard references on the binomial distribution if this is not clear.) Furthermore, this is true of every histogram bin:

The number of counts in each histogram bin is a binomial random variable. Each bin has its own probability, pᵢ, but all bins share the same number of trials, n.

Recall that a binomial distribution is a discrete distribution, i.e., it gives the probability of finding values of a whole-number random variable; in this case, it gives the probability of finding a given number of counts in a given histogram bin. The binomial distribution has two parameters:
    p — the probability of a given data point being in the bin;
    n — the number of data points in the sample, and therefore the number of trials in the binomial distribution.
Recall that the binomial distribution has average, c̄, and variance, σ², given by:

    c̄ = np,    σ² = np(1 − p)    (binomial distribution)

For a large number of histogram bins, N_bins, the probability of being in a given bin is of order p ~ 1/N_bins, which is small. Therefore, we approximate

    σ² ≈ np = c̄    (n >> 1, p << 1)

We find c̄ for a bin from the pdf model: typically, we assume the bins are narrow, and the probability of being in a bin is just

    pᵢ ≈ pdf_X(xᵢ) Δxᵢ    = Pr(being in bin i)

Then the model average (expected) count is Pr(being in bin) times the number of data points, n:

    modelᵢ ≡ c̄ᵢ = n pdf_X(xᵢ) Δxᵢ    (narrow bins)
    where xᵢ ≡ bin center,  Δxᵢ ≡ bin width,  pdf_X(xᵢ) ≡ model pdf at bin center

For example, for a gaussian histogram:

    pdf_X(x; μ, σ) = [1/(σ√(2π))] exp[−(x − μ)²/(2σ²)]

However, one can use any more sophisticated method to properly integrate the pdf to find c̄ for each bin. We now know the two things we need for evaluating a general χ² goodness-of-fit parameter: for each histogram bin, we know (1) the model average count, modelᵢ, and (2) the variance of the measured count, which is also approximately modelᵢ. We now compute χ² for the model pdf (given a set of model parameters) in the usual way:

    χ² ≡ Σᵢ₌₁^{N_bins} (cᵢ − modelᵢ)²/modelᵢ

    where cᵢ ≡ the measured count in the i-th bin, modelᵢ ≡ the model average count in the i-th bin

If your model predicts a count of zero (modelᵢ = 0 for some i), then the χ² sum blows up (division by zero); see the fix for low-count bins below.
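A minimal sketch of this FOM as code (the function name is mine):

    // Chi-squared FOM for a histogram fit: measured counts c[] vs. model
    // counts model[] = n * pdf(x_i) * dx (narrow-bin approximation).
    // The variance of each bin count is taken as model[i] (binomial, p << 1).
    double hist_chi2(const int *c, const double *model, int nbins)
    {
        double chi2 = 0.;
        for (int i = 0; i < nbins; i++) {
            double d = c[i] - model[i];
            chi2 += d * d / model[i];   // blows up if model[i] == 0; see text
        }
        return chi2;
    }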
Reducing the Effect of Noise
To find the best-fit parameters, we take our given sample histogram, and try different values of the pdf(x) parameters (in this example, μ and σ) to find the combination which produces the minimum χ². Notice that the low-count bins carry more weight than the higher-count bins: χ² weights the terms by 1/modelᵢ. This reveals the first common misunderstanding:

A fit to a histogram is driven by the tails, not by the central peak.

This is usually bad. Tails are often the worst part of the model (theory), and often the most contaminated (percentage-wise) by noise: background levels, crosstalk, etc. Three methods help reduce these problems:
- limiting the weight of low-count bins
- truncating the histogram
- rebinning

Limiting the weight: The tails of the model distribution are often less than 1, and approach zero. This gives them extremely high weights compared to other bins. Since the model is probably inadequate at these low bin counts (due to noise, etc.), one can limit the denominator in the χ² sum to at least 1; this also avoids division by zero:

    χ² ≡ Σᵢ₌₁^{N_bins} (cᵢ − modelᵢ)²/dᵢ    where dᵢ ≡ { modelᵢ  if modelᵢ > 1;  1  otherwise }

This is an ad-hoc approach, and the minimum weight can be anything; it doesn't have to be 1. Notice, though, that this modified χ² value is still a monotonic function of the model parameters, which is critical for stable parameter fits (it avoids local minima; see Practical Considerations below).

Truncating the histogram: Alternatively, we can truncate the histogram on the left and right sides to those bins with a reasonable number of counts, substantially above the noise (below left). [Bev p110] recommends a minimum bin count of 10, based on a desire for gaussian errors. I don't think that matters much. In truth, the minimum count completely depends on the noise level.

[Figure: avoiding noisy tails by (left) truncating the histogram to the bins between x_s and x_f, or (right) rebinning: three adjacent narrow bins with model counts 1.2, 3.9, 10.8 and measured counts 3, 8, 3 are combined into one wider bin.]

Truncation requires renormalizing: we normalize the model within the truncated limits to the data count within those same limits:

    modelᵢ = n_norm pdf_X(xᵢ) Δxᵢ    where    n_norm ≡ [Σᵢ₌ₛᶠ cᵢ] / [Σᵢ₌ₛᶠ pdf_X(xᵢ) Δxᵢ]
    and s, f are the start and final bins to include

You might think that we should use the model, not the data histogram, to choose our truncation limits. After all, why should we let sampling noise affect our choice of bins? This approach fails miserably, however, because our bin choices change as we vary our parameters in the hunt for the optimum χ². Changing which bins are included in the FOM causes unphysical steps in χ² as we vary our parameters, making many local minima. This makes the fit unstable, and generally unusable. For stability: truncate your histogram based on the data, and keep it fixed during the parameter search.

Rebinning: Alternatively, bins don't have to be of uniform width [Bev p175], so combining adjacent bins into a single, wider bin with a higher count can help improve the signal-to-noise ratio (SNR) in that bin (above right). Note that when rebinning, we evaluate the theoretical count as the sum of the original
(narrow) bin theoretical counts. In the example of the diagram above right, the theoretical and measured counts for the new (wider) bin 1 are

    model₁ = 1.2 + 3.9 + 10.8 = 15.9    and    c₁ = 3 + 8 + 3 = 14

Other Histogram Fit Considerations
Slightly correlated bin counts: Bin counts are binomially distributed (a measurement is either in a bin, or it's not). However, there is a small negative correlation between any two bins, because the fact that a measurement lies in one bin means it doesn't lie in any other bin. Recall that the χ² parameter relies on uncorrelated errors between bins, so a histogram slightly violates that assumption. With a moderate number of bins (> ~15 ??), this is usually negligible.

Overestimating the low-count model: If there are a lot of low-count bins in your histogram, you may find that the fit tends to overestimate the low-count bins, and underestimate the high-count bins (diagram below). When properly normalized, the sum of overestimates and underestimates must be zero: the sum of bin counts equals the sum of the model predicted counts.

[Figure: a model pdf that overestimates the low-count tail bins and underestimates the high-count peak bins. χ² is artificially reduced by overestimating low-count bins, and underestimating high-count bins.]

But since low-count bins weigh more than high-count bins, and since an overestimated model reduces χ² (the model value modelᵢ appears in the denominator of each χ² term), the overall χ² is reduced if low-count bins are overestimated, and high-count bins are underestimated. This effect can only happen if your model has the freedom to bend in the way necessary: i.e., it can be a little high in the low-count regions, and simultaneously a little low in the high-count regions. Most realistic models have this freedom. If the model is reasonably good, this effect can cause reduced χ² to be consistently less than 1 (which should be impossible). I don't know of a simple fix for this. It helps to limit the weight of low-count bins to (say) 1, as described above. However, once again, the best approach is to minimize the number of low-count bins in the fit.

Noise not zero-mean: For counting experiments, such as those that fill in histograms with data, all bin counts are zero or positive. Any noise will add positive counts, and therefore noise cannot be zero-mean. If you know the pdf of the noise, then you can put it in the model, and everything should work out fine. However, if you have a lot of un-modeled noise, you should see that your reduced χ² is significantly greater than 1, indicating a poor fit. Some people have tried playing with the denominator in the χ² sum to try to get more accurate fit parameters in the presence of noise, but there is little theoretical justification for this, and it usually amounts to ad-hoc tweaking to get the answers you want.

Non-χ² figure of merit: One does not have to use χ² as the fit figure of merit. If the model is not very good, or if there are problems as mentioned above, other FOMs might work better. The most common alternative is probably least-squares, which means minimizing the sum-squared error:

    SSE ≡ Σᵢ₌₁^{N_bins} (cᵢ − modelᵢ)²    (sum-squared error)

This is like χ², where the denominator in each term of the sum is always 1.

Guidance Counselor: Practical Considerations for Computer Code to Fit Data
Computer code for finding the best-fit parameters is usually divided into two pieces: one piece you buy, and one piece you have to write yourself:
- You buy a generic optimization algorithm, which varies parameters without knowledge of what they mean, looking for the minimum figure-of-merit (FOM). For each trial set of parameters, it calls your FOM function to compute the FOM as a function of the current trial parameters.
- You write the FOM function, which computes the FOM as a function of the given parameters.
Generic optimizers usually minimize the figure-of-merit, consistent with the FOM being a "cost" or "error" that we want reduced. If instead you want to maximize a FOM, return its negative to the minimizer. Generic optimization algorithms are available off-the-shelf, e.g., [Numerical Recipes].
However, they are sometimes simplistic, and in the real world, often fail with arithmetic faults (overflow, underflow, domain error, etc.). The fault (no pun intended) lies not in their algorithm, but in their failure to tell you what you need to do to avoid such failures:

Your job is to write a bullet-proof figure-of-merit function.

This is harder than it sounds, but quite do-able with proper care. A bullet-proof FOM function requires only two things:
- Proper validation of all parameters.
- A properly bad FOM for invalid parameters (a "guiding error").
Guiding errors are similar to penalty functions, but they operate outside the valid parameter space, rather than inside it.

A simple example: Suppose you wish to numerically find the minimum of the figure-of-merit function

    f(p) = 1/p + √p

(below left). Suppose the physics is such that only p > 1 is sensible.

[Figure: three versions of the FOM f(p) = 1/p + √p, plotted for p from about −1 to 4. Left and middle: bad figure-of-merit (FOM) functions. Right: a bullet-proof FOM, with guiding errors outside the valid region p > 1.]

Your optimization-search algorithm will try various values of p, evaluating f(p) at each step, looking for the minimum. You might write your FOM function like this:

    float fom(float p)
    {
        return 1./p + sqrt(p);
    }

But the search function knows nothing of p, or which values of p are valid. It may well try p = −1. Then your function crashes with a domain error in the sqrt() function. You fix it with (above middle):

    float fom(float p)
    {
        if(p < 0.) return 4.;
        return 1./p + sqrt(p);
    }

Since you know 4 is much greater than the true minimum, you hope this will fix the problem. You run the code again, and now it crashes with a divide-by-zero error, because the optimizer tried p = 0. Easy fix:

    float fom(float p)
    {
        if(p <= 0.) return 4.;
        return 1./p + sqrt(p);
    }

Now the optimizer crashes with an overflow error, at p < −(max_float). The big flat region to the left confuses the optimizer. It searches negatively for a value of p that makes the FOM increase, but it never finds one, and gets an overflow trying. Your flat value for p ≤ 0 is no good. It needs to grow upward to the left, to provide guidance to the optimizer:

    float fom(float p)
    {
        if(p <= 0.) return 4. + fabs(p - 1.);   // fabs() = absolute value
        return 1./p + sqrt(p);
    }

Now the optimizer settles at p = −10⁻⁶, with FOM ≈ 5: it found the local minimum just to the left of zero. Your function is still ill-behaved. Since only p > 1 is sensible, you make yet another fix (above right):

    float fom(float p)
    {
        if(p <= 1.) return 4. + fabs(p - 1.);
        return 1./p + sqrt(p);
    }

Finally, the optimizer returns the minimum FOM of 1.89 at p = 1.59. After 5 tries, you have made your FOM function bullet-proof:

A bullet-proof FOM has only one minimum, which it monotonically approaches from both sides, even with invalid parameters, and it never crashes on any parameter set.

In this example, the FOM is naturally bullet-proof from the right. However, if it weren't, the absolute value of (p − 1) on the error return value provides a V-shape which guides the optimizer into the valid range from either side. Such guiding errors are analogous to so-called penalty functions, but better, because they take effect only for invalid parameter choices, thus leaving the valid parameter space completely free for full optimization.

Multi-parameter FOMs: Most fit models use several parameters, pᵢ, and the optimizer searches over all of them iteratively to find a minimum.
Your FOM function must be bullet-proof over all parameters: it must check each parameter for validity, and must return a large (guaranteed unoptimal) result for invalid inputs. It must also slope the function toward valid values, i.e., provide a "restoring force" on the invalid parameters toward the valid region. Typically, with multiple parameters pᵢ, one uses

    guiding_bad_FOM = big# + Σᵢ₌₁ᴺ |pᵢ − validᵢ|    where validᵢ ≡ a valid value for pᵢ

This guides the minimization search when any parameter is outside its valid range.

[Figure: g(p) with V-shaped guiding errors on both sides of the valid range of p, leading the optimizer into the valid region. Guiding errors lead naturally to a valid solution, and are better than traditional penalty functions.]

A final note: The big# for invalid parameters may need to be much bigger than you think. In my thesis research, I used reduced χ² as my FOM, and the true minimum FOM is near 1. I started with 1,000,000 as my big#, but it wasn't big enough! I was fitting to histograms with nearly a thousand counts in several bins. When the trial model bin count was small, the error was about 1,000, and the sum-squared error over several bins was > 1,000,000. This caused the optimizer to settle on an invalid set of parameter values as the minimum! I had to raise big# to 10⁹.

Numerical Analysis

Round-Off Error, And How to Reduce It
Floating point numbers are stored in a form of scientific notation, with a mantissa and exponent. E.g.,

    1.23 × 10⁴⁵ has mantissa m = 1.23 and exponent e = 45

Computer floating point stores only a finite number of digits. float (aka single-precision) stores at least 6 digits; double stores at least 15 digits. We'll work out some examples in 6-digit decimal scientific notation; actual floating point numbers are stored in a binary form, but the concepts remain the same. (See "IEEE Floating Point" in this document.)

Precision loss due to summation: Adding floating point numbers with different exponents results in round-off error:

      1.234 56 × 10²          1.234 56    × 10²
    + 6.111 11 × 10⁰   =    + 0.061 111 1 × 10²
                            = 1.295 67    × 10²

where 0.000 001 1 of the result is lost, because the computer can only store 6 digits. (Similar round-off error occurs if the exponent of the result is bigger than both of the addend exponents.) When adding many numbers of similar magnitude (as is common in statistical calculations), the round-off error can be quite significant:

    float sum = 1.23456789;              // Demonstrate precision loss in sums
    printf("%.9f\n", sum);               // show # significant digits
    for(i = 2; i < 10000; i++)
        sum += 1.23456789;
    printf("Sum of 10,000 = %.9f\n", sum);

    1.234567881                8 significant digits
    Sum of 10,000 = 12343.28   only 4 significant digits

You lose about 1 digit of accuracy for each power of 10 in n, the number of terms summed, i.e.,

    digit loss ≈ log₁₀ n

When summing numbers of different magnitudes, you get a better answer by adding the small numbers first, and the larger ones later. This minimizes the round-off error on each addition. E.g., consider summing 1/n for 1,000,000 integers. We do it in both single and double precision, so you can see the error:

    float sum = 0.;   double dsum = 0.;
    // sum the inverses of the first 1 million integers, in order
    for(i = 1; i <= 1000000; i++)
        sum += 1./i,  dsum += 1./i;
    printf("sum: %f\ndsum: %f.  Relative error = %.6f\n",
           sum, dsum, (dsum-sum)/dsum);

    sum:  14.357358
    dsum: 14.392727.
    Relative error = 0.002457

This was summed in the worst possible order: largest to smallest, and (in single precision) we lose about 5 digits of accuracy, leaving only 3 digits. Now sum in reverse (smallest to largest):

    float sumb = 0.;   double dsumb = 0.;
    for(i = 1000000; i >= 1; i--)
        sumb += 1./i,  dsumb += 1./i;
    printf(" sumb: %f\ndsumb: %f.  Relative error = %.6f\n",
           sumb, dsumb, (dsumb-sumb)/dsumb);

     sumb: 14.392652
    dsumb: 14.392727.  Relative error = 0.000005

The single-precision sum is now good to 5 digits, losing only 1 or 2.

[In my research, I needed to fit a polynomial to 6000 data points, which involves many sums of 6000 terms, and then solving linear equations. I needed 13 digits of accuracy, which easily fits in double precision (double, 15–17 decimal digits). However, the precision loss due to summing was over 3 digits, and my results failed. Simply changing the sums to long double, then converting the sums back to double, and doing all other calculations in double, solved the problem. The dominant loss was in the sums, not in solving the equations.]

Summing from smallest to largest is very important for evaluating polynomials, which are widely used for transcendental functions. Suppose we have a 5th-order polynomial, f(t):

    f(t) = a₀ + a₁t + a₂t² + a₃t³ + a₄t⁴ + a₅t⁵

which might suggest a computer implementation as:

    f = a0 + a1*t + a2*t*t + a3*t*t*t + a4*t*t*t*t + a5*t*t*t*t*t;

Typically, the terms get progressively smaller with higher order. Then the above sequence is in the worst order: biggest to smallest. (It also takes 15 multiplies.) It is more accurate (and faster) to evaluate the polynomial as:

    f = ((((a5*t + a4)*t + a3)*t + a2)*t + a1)*t + a0;

This form adds small terms of comparable size first, progressing to larger ones, and requires only 5 multiplies.

How To Extend Precision In Sums Without Using Higher Precision Variables
(Handy for statistical calculations.) You can avoid round-off error in sums without using higher precision variables with a simple trick. For example, let's sum an array of n numbers:

    sum = 0.;
    for(i = 0; i < n; i++)
        sum += a[i];

This suffers from precision loss, as described above. The trick is to actually measure the round-off error of each addition, and save that error for the next iteration (this is known as compensated, or Kahan, summation):

    sum = 0.;
    error = 0.;                          // the carry-in from the last add
    for(i = 0; i < n; i++)
    {
        newsum = sum + (a[i] + error);   // include the lost part of prev add
        diff   = newsum - sum;           // what was really added
        error  = (a[i] + error) - diff;  // the round-off error
        sum    = newsum;
    }

The error variable is always small compared to the sum, because it is the round-off error. Keeping track of it effectively doubles the number of accurate digits in the sum, until it is lost in the final addition. Even then, error still tells you how far off your sum is. For all practical purposes, this eliminates any precision loss due to sums. Let's try summing the inverses of integers again, in the bad order, but with this trick:

    float newsum, diff, sum = 0., error = 0.;
    for(i = 1; i <= 1000000; i++)
    {
        newsum = sum + (1./i + error);
        diff   = newsum - sum;           // what was really added
        error  = (1./i + error) - diff;  // the round-off error
        sum    = newsum;
    }
    printf("  sum: %f\ndsumb: %f.  Relative error = %.6f, error = %g\n",
           sum, dsumb, (dsumb-sum)/dsumb, error);

      sum: 14.392727
    dsumb: 14.392727.
    Relative error = -0.000000, error = -1.75335e-07

As claimed, the sum is essentially perfect.

Numerical Integration
The above method of sums is extremely valuable in numerical integration. Typically, for accurate numerical integration, one must carefully choose an integration step size: the increment by which you change the variable of integration. E.g., in time-step integration, it is the time step-size. If you make the step size too big, accuracy suffers because the rectangles (or other approximations) under the curve don't follow the curve well. If you make the step size too small, accuracy suffers because you're adding tiny increments to large numbers, and the round-off error is large. You must thread the needle of step-size, getting it just right for best accuracy. This fact is independent of the integration interpolation method. By virtually eliminating round-off error in the sums (using the method above), you eliminate the lower bound on step size. You can then choose a small step-size, and be confident your answer is right. It might take more computer time, but integrating 5 times slower and getting the right answer is vastly better than integrating 5 times faster and getting the wrong answer.

Sequences of Real Numbers
Suppose we want to generate the sequence 2.01, 2.02, ... 2.99, 3.00. A simple approach is this:

    real s;
    for(s = 2.01; s <= 3.; s += 0.01) ...

The problem with this is round-off error: 0.01 is inexact in binary (has round-off error). This error accumulates 100 times in the above loop, making the last value 100 times more wrong than the first. In fact, the loop might run 101 times instead of 100. The fix is to use integers where possible, because they are exact:

    real s;   int i;
    for(i = 201; i <= 300; i++)
        s = i/100.;

When the increment is itself a variable, note that multiplying a real by an integer incurs only a single round-off error:

    real s, base, incr;   int i;
    for(i = 1; i <= max; i++)
        s = base + i*incr;

Hence, every number in the sequence has only one round-off error.

Root Finding
In general, a root of a function f(x) is a value of x for which f(x) = 0. It is often not possible to find the roots analytically, and it must be done numerically. [TBS: binary search]

Simple Iteration Equation
Some forms of f() make root finding easy and fast: if you can rewrite the equation in this form,

    f(x) = 0    ⟺    x = g(x)

then you may be able to iterate, using each value of g() as the new estimate of the root, r. This is the simplest method of root finding, and generally the slowest to converge. It may be suitable if you have only a few thousand solutions to compute, but may be too slow for millions of calculations.

You start with a guess that is close to the root; call it r₀. Then

    r₁ = g(r₀),    r₂ = g(r₁),    ...    rₙ₊₁ = g(rₙ)

If g() has the right property (specifically, |g′(x)| < 1 near the root), this sequence will converge to the solution. We describe this necessary property through some examples.

Suppose we wish to solve x − √x/2 = 0 numerically. First, we re-arrange it to isolate x on the left side: x = √x/2 (below left).

[Figure: two iteration equations for the same problem, each plotted against y = x: left, y = √x/2; right, y = 4x². The left converges; the right fails.]

From the graph, we might guess r₀ ≈ 0.2. Then we would find

    r₁ = √(0.2)/2 ≈ 0.2236,  r₂ = √(r₁)/2 ≈ 0.2364,  r₃ ≈ 0.2431,  r₄ ≈ 0.2465,
    r₅ ≈ 0.2483,  r₆ ≈ 0.2491,  r₇ ≈ 0.2496

We see that the iterations approach the exact answer of 0.25.
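A minimal sketch of that converging iteration as code:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        // Fixed-point iteration for x = sqrt(x)/2, starting from the guess 0.2
        double r = 0.2;
        for(int i = 1; i <= 20; i++)
        {
            r = sqrt(r) / 2.;                // r_{n+1} = g(r_n)
            printf("r%-2d = %.6f\n", i, r);  // creeps toward the root 0.25
        }
        return 0;
    }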
But we could have re-arranged the equation differently:

    √x = 2x,    x = 4x²    (above right)

Starting with the same guess, r₀ = 0.2, we get this sequence:

    r₁ = 4(0.2)² = 0.16,  r₂ = 4r₁² ≈ 0.1024,  r₃ ≈ 0.0419,  r₄ ≈ 0.0070

But they are not converging on the nearby root; the sequence diverges away from it. So what's the difference? Look at a graph of what's happening, magnified around the equality:

[Figure: left, y = √x/2 vs. y = x near the root 0.25, with r₀, r₁, r₂ stepping toward the root; right, y = 4x² vs. y = x, with r₀, r₁, r₂ stepping away from it.]

When the curve is flatter than y = x (above left), then trial roots that are too small get bigger, and trial roots that are too big get smaller. So iteration approaches the root. When the curve is steeper than y = x (above right), trial roots that are too small get even smaller, and too big get even bigger: the opposite of what we want. So for positive-slope curves, the condition for convergence is

    |Δy/Δr| < 1    in the region |rᵢ − r| < |r₀ − r|,    where r is the exact root

Consider another case, where the curve has negative slope. Suppose we wish to solve x − cos x = 0 (x in radians). We re-write it as x = cos⁻¹ x. On the other hand, we could take the cosine of both sides and get an equivalent equation: x = cos x. Which will converge? Again, look at the graphs:

[Figure: left, y = cos⁻¹ x vs. y = x near the root 0.739, with iterates bouncing outward; right, y = cos x vs. y = x, with iterates r₀, r₁, r₂, r₃ spiraling in to 0.739.]

So long as the magnitude of the slope is < 1, the iterations converge. When the magnitude of the slope is > 1, they diverge. We can now generalize to all curves of any slope. The general condition for convergence is

    |Δy/Δr| < 1    in the region |rᵢ − r| < |r₀ − r|,    where r is the exact root

The flatter the curve, the faster the convergence. Given this, we could have easily predicted that the converging form of our iteration equation is x = cos x, because the magnitude of the slope of cos x is always < 1, and that of cos⁻¹ x is always > 1. Note, however, that if the magnitude of the derivative is > 1/2, then a binary search will be faster than iteration.

Newton-Raphson Iteration
The above method of variable iteration is kind of blind, in that it doesn't use any property of the given functions to advantage. Newton-Raphson iteration is a method of finding roots that uses the derivative of the given function to provide more reliable and faster convergence. Newton-Raphson uses the original form of the equation:

    f(x) = x − √x/2 = 0

The idea is to use the derivative of the function to approximate its slope to the root (below left). We start with the same guess, r₀ = 0.2.

[Figure: left, f(x) = x − √x/2 with the tangent at r₀ = 0.2 extended to the x-axis; right, f(x) = 4x² − x with its tangent.]

    Δr ≈ −f(rᵢ)/f′(rᵢ)    (note f′(rᵢ) ≠ 0)

With f(x) = x − x^(1/2)/2 and f′(x) = 1 − x^(−1/2)/4:

    Δr = −(rᵢ − rᵢ^(1/2)/2)/(1 − rᵢ^(−1/2)/4) = −(2rᵢ − 4rᵢ^(3/2))/(1 − 4rᵢ^(1/2))

Here's a sample computer program fragment, and its output:

    // Newton-Raphson iteration
    r = 0.2;
    for(i = 1; i < 10; i++)
    {
        r -= (2.*r - 4.*r*sqrt(r)) / (1. - 4.*sqrt(r));
        printf("r%d  %.16f\n", i, r);
    }

    r1  0.2535322165454392
    r2  0.2500122171752588
    r3  0.2500000001492484
    r4  0.2500000000000000

In 4 iterations, we get essentially the exact answer, to the double-precision accuracy of 16 digits. This is much faster than the variable-isolation method above.
In fact, it illustrates a property of some iterative algorithms called quadratic convergence. Quadratic convergence is when the fractional error (aka relative error) gets squared on each iteration, which doubles the number of significant digits on each iteration. You can see this clearly above, where r₁ has 2 accurate digits, r₂ has 4, r₃ has 9, and r₄ has at least 16 (maybe more). [Derivation of quadratic convergence??]

Also, Newton-Raphson does not have the restriction on the slope of any function, as does variable isolation. We can use it just as well on the reverse formula (previous diagram, right):

    f(x) = 4x² − x,    f′(x) = 8x − 1,    Δrᵢ = −f(rᵢ)/f′(rᵢ) = −(4rᵢ² − rᵢ)/(8rᵢ − 1)

with these computer results:

    r1  0.2666666666666667
    r2  0.2509803921568627
    r3  0.2500038147554742
    r4  0.2500000000582077
    r5  0.2500000000000000

This converges essentially just as fast, and clearly shows quadratic convergence.

If you are an old geek like me, you may remember the iterative method of finding square roots on an old 4-function calculator: to find √a, divide a by r, then average the result with r. Repeat as needed:

    rₙ₊₁ = (rₙ + a/rₙ)/2

You may now recognize that as Newton-Raphson iteration:

    f(r) = r² − a = 0,    f′(r) = 2r
    rₙ₊₁ = rₙ − f(rₙ)/f′(rₙ) = rₙ − (rₙ² − a)/(2rₙ) = rₙ − rₙ/2 + a/(2rₙ) = (rₙ + a/rₙ)/2

If you are truly a geek, you tried the averaging method for cube roots:

    rₙ₊₁ = (rₙ + a/rₙ²)/2

While you found that it converged, it was very slow; cube-root(16) with r₀ = 2 gives only 2 digits after 10 iterations. Now you know that the proper Newton-Raphson iteration for cube roots is:

    f(r) = r³ − a = 0,    f′(r) = 3r²
    rₙ₊₁ = rₙ − (rₙ³ − a)/(3rₙ²) = (2/3)rₙ + a/(3rₙ²) = (1/3)(2rₙ + a/rₙ²)

which gives a full 17 digits in 5 iterations for r₀ = 2, and shows (of course) quadratic convergence:

    r1  2.6666666666666665
    r2  2.5277777777777777
    r3  2.5198669868999541
    r4  2.5198421000355395
    r5  2.5198420997897464

It is possible for Newton-Raphson to cycle endlessly, if the initial estimate of the root is too far off, and the function has an inflection point between two successive iterations:

[Figure: failure of Newton-Raphson iteration: tangents on opposite sides of an inflection point bounce the iterate back and forth endlessly around f(x) = 0.]

It is fairly easy to detect this failure in code, and pull in the root estimate before iterating again.

Pseudo-Random Numbers
We use the term "random number" to mean "pseudo-random number," for brevity. Uniformly distributed random numbers are equally likely to be anywhere in a range, typically (0, 1). Uniformly distributed random numbers are the starting point for many other statistical applications. Computers can easily generate uniformly distributed random numbers with the linear congruential method described in [Numerical Recipes in C, 2nd ed., p284??]. [New info 10/2010: the 3rd ed. describes better LFSR-based generators.] E.g., the best such generator (known at publication) is:

    // Uniform random value, 0 < v < 1, i.e. on (0,1) exclusive.
    // Numerical Recipes in C, 2nd ed., p284
    static uint32 seed = 1;   // starting point

    vflt rand_uniform(void)
    {
        do
            seed = 1664525L*seed + 1013904223L;   // period 2^32-1
        while(seed == 0);
        rand_calls++;         // count calls for repetition check
        return seed / 4294967296.;
    }   // rand_uniform()

Many algorithms which use such random numbers fail on 0 or 1, so this generator never returns them.
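A quick sanity check of the generator above (a sketch; it assumes the author's typedefs uint32 and vflt map to a 32-bit unsigned integer and double, and it supplies the rand_calls counter): the sample mean should approach 1/2, and the variance 1/12.

    #include <stdio.h>

    typedef unsigned int uint32;   // assumed 32-bit
    typedef double vflt;

    static uint32 seed = 1;
    static long rand_calls = 0;

    vflt rand_uniform(void)        // generator from the listing above
    {
        do
            seed = 1664525L*seed + 1013904223L;
        while(seed == 0);
        rand_calls++;
        return seed / 4294967296.;
    }

    int main(void)
    {
        const int N = 1000000;
        double s = 0., s2 = 0.;
        for(int i = 0; i < N; i++)
        {
            double u = rand_uniform();
            s += u;  s2 += u*u;
        }
        double mean = s/N, var = s2/N - mean*mean;
        printf("mean %.4f (expect 0.5000), var %.4f (expect 0.0833)\n",
               mean, var);
        return 0;
    }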
After a long simulation with a large number of calls, it's a good idea to check rand_calls to be sure it's < ~400,000,000 = 10% of the period. This insures the numbers are essentially random, and not predictable.

Arbitrary distribution random numbers: To generate any distribution from a uniform random number, use

    R = cdf_R⁻¹(U)   where R is the random variable of the desired distribution,
                     cdf_R⁻¹ is the inverse of the desired cumulative distribution function of R,
                     and U is a uniform random number on (0,1).

To see why, recall that the cumulative distribution function gives the probability of a random variable being less than or equal to its argument:

    cdf_X(a) ≡ Pr(X ≤ a) = ∫_{−∞}^{a} dx pdf_X(x),   where X is a random variable.

[Figure: steps to generating the pdf on the left: the pdf(x), its cdf(x), and the inverse cdf⁻¹(u).]

Also, the pdf of a function F = f(X) of a random variable is (see Probability and Statistics elsewhere in this document):

    pdf_F(x) = pdf_X(x) / |f'(x)|,   where f'(x) is the derivative of f(x).

Then let Q ≡ cdf_R⁻¹(U). Using pdf_U(u) = 1 on [0, 1]:

    pdf_Q(r) = pdf_U(u) / (d/du) cdf_R⁻¹(u) = 1 / (dr/du) = du/dr = (d/dr) cdf_R(r) = pdf_R(r),

as desired. ?? Need a simple picture.

Generating Gaussian Random Numbers

The inverse cdf method is a problem for gaussian random numbers, because there is no closed-form expression for the cdf of a gaussian:

    cdf(a) = (1/√(2π)) ∫_{−∞}^{a} dx e^{−x²/2}   (gaussian)

But [Knu] describes a clever way based on polar coordinates to use two uniform random numbers to generate a gaussian. He gives the details, but the final result is this:

    gaussian = √(−2 ln u) cos θ,   where θ is uniform on (0, 2π), and u is uniform on (0,1)

    /* Gaussian random value, 0 mean, unit variance.  From Knuth, "The Art of
       Computer Programming, Vol. 2: Seminumerical Algorithms," 2nd Ed., p. 117.
       It is exactly normal if rand_uniform() is uniform. */
    PUBLIC double rand_gauss(void)
    {
        double theta = (2.*M_PI) * rand_uniform();
        return sqrt( -2. * log(rand_uniform()) ) * cos(theta);
    }   // rand_gauss()

Generating Poisson Random Numbers

Poisson random numbers are integers; we say the Poisson distribution is discrete:

[Figure: example of generating the (discrete) Poisson distribution: pdf(n), the staircase cdf(n), and the inverse cdf⁻¹(u).]

We can still use the inverse-cdf method to generate them, but in an iterative way. The code starts with a helper function, poisson( ), that computes the probability of exactly n events in a Poisson distribution with an average of avg events:

    // ---------------------------------------------------------------------------
    PUBLIC vflt poisson(    // Pr(exactly n events in interval)
        vflt avg,           // average events in interval
        int  n)             // n to compute Pr() of
    {
        vflt factorial;
        int i;

        if(n <= 20) factorial = fact[n];
        else
        {
            factorial = fact[20];
            for(i = 21; i <= n; i++) factorial *= i;
        }
        return exp(-avg) * pow(avg, n) / factorial;
    }   // poisson()

    /*----------------------------------------------------------------------------
    Generates a Poisson random value (an integer), which must be <= 200.
    Prefix 'irand_...' emphasizes the discreteness of the Poisson distribution.
    ----------------------------------------------------------------------------*/
    PUBLIC int irand_poisson(   // Poisson random integer <= 200
        double avg)             // avg # "events"
    {
        int i;
        double cpr;     // uniform probability

        // Use inverse-cdf(uniform) for Poisson distribution, where
        // inverse-cdf() consists of flat, discontinuous steps
        cpr = rand_uniform();
        for(i = 0; i <= 200; i++)   // safety limit of 200
        {
            cpr -= poisson(avg, i);
            if(cpr <= 0) break;
        }
        return i;       // 201 indicates an error
    }   // irand_poisson()

Other example random number generators: TBS.

Generating Weirder Random Numbers

Sometimes you need to generate more complex distributions, such as a combination of a gaussian with a uniform background of noise. This is a raised gaussian:

[Figure: construction of a raised gaussian random variable from a uniform pdf plus a gaussian pdf.]

Since this distribution has a uniform component, it is only meaningful if it's limited to some finite width. To generate distributions like this, you can compose two different distributions, and use the principle:

    The PDF of a random choice of two random variables is the weighted sum of the individual PDFs.

For example, the PDF for an RV (random variable) which is taken from X 20% of the time, and Y the remaining 80% of the time is:

    pdf(z) = 0.2 pdf_X(z) + 0.8 pdf_Y(z)

In this example, the two component distributions are uniform and gaussian. Suppose the uniform part of the pdf has amplitude 0.1 over the interval (0, 2). Then it accounts for 0.2 of all the random values. The remainder are gaussian, which we assume to have a mean of 1.0 and σ = 1. Then the random value can be generated from 3 more fundamental random values:

    // Raised Gaussian random value: gaussian part: mean=1, sigma=1
    // Uniform part (20% chance): interval (0, 2)
    if(rand_uniform() <= 0.2) random_variable = rand_uniform()*2.0;
    else                      random_variable = rand_gauss() + 1.0;    // mean = 1, sigma = 1

Exact Polynomial Fits

It's sometimes handy to make an exact fit of a quadratic, cubic, or quartic polynomial to 3, 4, or 5 data points, respectively.

[Figure: 3 points for a 2nd order fit at x = −1, 0, 1; 4 points for 3rd order at x = −1, 0, 1, 2; 5 points for 4th order at x = −2, −1, 0, 1, 2.]

The quadratic case illustrates the principle simply. We seek a quadratic function

    y(x) = a_2 x² + a_1 x + a_0

which exactly fits 3 equally spaced points, at x = −1, x = 0, and x = 1, with values y_-1, y_0, and y_1, respectively (shown above). So long as your actual data are equally spaced, you can simply scale and offset to the x values −1, 0, and 1. We can directly solve for the coefficients a_2, a_1, and a_0:

    y(−1) = a_2 − a_1 + a_0 = y_-1
    y(0)  =             a_0 = y_0
    y(1)  = a_2 + a_1 + a_0 = y_1

    ⟹  a_0 = y_0,   a_1 = (y_1 − y_-1)/2,   a_2 = (y_-1 + y_1)/2 − y_0

Similar formulas for the 3rd and 4th order fits yield this code:

    // ---------------------------------------------------------------------------
    // fit3rd() computes 3rd order fit coefficients.  4 mult/div, 8 adds
    PUBLIC void fit3rd(
        double ym1, double y0, double y1, double y2)
    {
        a0 = y0;
        a2 = (ym1 + y1)/2. - y0;
        a3 = (2.*ym1 + y2 - 3.*y0)/6. - a2;
        a1 = y1 - y0 - a2 - a3;
    }   // fit3rd()

    // ---------------------------------------------------------------------------
    // fit4th() computes 4th order fit coefficients.
    // 6 mult/div, 13 adds
    PUBLIC void fit4th(
        double ym2, double ym1, double y0, double y1, double y2)
    {
        b0 = y0;
        b4 = (y2 + ym2 - 4*(ym1 + y1) + 6*y0)/24.;
        b2 = (ym1 + y1)/2. - y0 - b4;
        b3 = (y2 - ym2 - 2.*(y1 - ym1))/12.;
        b1 = (y1 - ym1)/2. - b3;
    }   // fit4th()

TBS: Alternative 3rd order (4 point) symmetric fit, with x ∈ {−3, −1, 1, 3}.

Two's Complement Arithmetic

Two's complement is a way of representing negative numbers in binary. It is universally used for integers, and rarely used for floating point. This section assumes the reader is familiar with positive binary numbers and simple binary arithmetic.

[Figure: the 4-bit word 0110, most significant bit (MSB) at the left, least significant bit (LSB) at the right, with bit weights 2³ 2² 2¹ 2⁰.]

Two's complement uses the most significant bit (MSB) of an integer as a sign bit: zero means the number is ≥ 0; 1 means the number is negative. Two's complement represents non-negative numbers as ordinary binary, with the sign bit = 0. Negative numbers have the sign bit = 1, but are stored in a special way: for a b-bit word, a negative number n (n < 0) is stored as if it were unsigned with a value of 2^b + n. This is shown below, using a 4-bit word as a simple example (the leftmost bit is the sign bit):

    bits    unsigned    signed
    0000        0          0
    0001        1          1
    0010        2          2
    0011        3          3
    0100        4          4
    0101        5          5
    0110        6          6
    0111        7          7
    1000        8         -8
    1001        9         -7
    1010       10         -6
    1011       11         -5
    1100       12         -4
    1101       13         -3
    1110       14         -2
    1111       15         -1

With two's complement, a 4-bit word can store integers from −8 to +7. E.g., −1 is stored as 16 − 1 = 15. This rule is usually defined as follows (which completely obscures the purpose): Let n = −a, a > 0. To negate a: complement it (change all 0s to 1s and 1s to 0s), then add 1. Example: n = −4, a = 4 = 0100. Complement it: 1011. Add 1: 1100, the stored pattern for −4 = 16 − 4 = 12.

Let's see how two's complement works in practice. There are 4 possible addition cases:

(1) Adding two positive numbers: so long as the result doesn't overflow, we simply add normally (in binary).

(2) Adding two negative numbers: Recall that when adding unsigned integers, if we overflow our 4 bits, the carries out of the MSB are simply discarded. This means that the result of adding a + c is actually (a + c) mod 16. Now, let n and m be negative numbers in two's complement, so their bit patterns are 16 + n and 16 + m. If we add their bit patterns as unsigned integers, we get

    (16 + n) + (16 + m) = 32 + (n + m) → (mod 16) → 16 + (n + m),   (n + m) < 0

which is the two's complement representation of (n + m) < 0. E.g.,

      -2      1110      16 + (-2)
    + -3    + 1101    + 16 + (-3)
      -5      1011      16 + (-5)

So with two's complement, adding negative numbers uses the same algorithm as adding unsigned integers! That's why we use two's complement.

(3) Adding a negative and a positive number, with positive result:

    (16 + n) + a = 16 + (n + a) → (mod 16) → n + a,   (n + a) ≥ 0

E.g.,

      -2      1110      16 + (-2)
    +  5    + 0101    +  5
       3      0011       3

(4) Adding a negative and a positive number, with negative result:

    (16 + n) + a = 16 + (n + a),   (n + a) < 0

E.g.,

      -6      1010      16 + (-6)
    +  3    + 0011    +  3
      -3      1101      16 + (-3)

In all cases:

    With two's complement arithmetic, adding signed integers uses the same algorithm as adding unsigned integers! That's why we use two's complement.

The computer hardware need not know which numbers are signed, and which are unsigned: it adds the same way no matter what. It works the same with subtraction: subtracting two's complement numbers is the same as subtracting unsigned numbers.
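To see the 2^b + n rule on a real machine, here is a small sketch (mine, not from the text) using 8-bit integers, where b = 8 and a negative n is stored as 256 + n:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int8_t n = -2, m = -3;
        // viewed as unsigned, the stored patterns are 256 + n and 256 + m
        printf("-2 stored as %u, -3 stored as %u\n",
               (unsigned)(uint8_t)n, (unsigned)(uint8_t)m);     // 254, 253
        // add the bit patterns as unsigned: (254 + 253) mod 256 = 251 = 256 + (-5)
        uint8_t sum = (uint8_t)((uint8_t)n + (uint8_t)m);
        // reading the pattern back as signed gives -5
        // (two's complement on all common machines)
        printf("sum pattern %u = signed %d\n", (unsigned)sum, (int)(int8_t)sum);
        return 0;
    }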
It even works multiplying to the same word size:

    mixed signs:    (16 + n)·a = 16a + na → (mod 16) → 16 + na,           (n < 0, a > 0, na < 0)
    both negative:  (16 + n)(16 + m) = 256 + 16(n + m) + nm → (mod 16) → nm,   (n < 0, m < 0, nm > 0)

In reality, word sizes are usually 32 (or maybe 16) bits. Then in general, we store b-bit negative numbers (n < 0) as 2^b + n. E.g., for 16 bits, (n < 0) → 65536 + n.

How Many Digits Do I Get, 6 or 9?

How many decimal digits of accuracy do I get with a binary floating point number? You often see a range: 6 to 9 digits. Huh? We jump ahead, and assume here that you understand binary floating point (see below for explanation).

Wobble, but don't fall down: The idea of number of digits of accuracy is somewhat flawed. Six digits of accuracy near 100,000 is ~10 times worse than 6 digits of accuracy near 999,999. The smallest increment is 1 in the least-significant digit. One in 100,000 is accuracy of 10^-5; 1 in 999,999 is almost 10^-6, or 10 times more accurate.

Aside: The wobble of a floating point number is the ratio of the lowest accuracy to the highest accuracy for a fixed number of digits. It is always equal to the base in which the floating point number is expressed, which is 10 in this example. The wobble of binary floating point is 2. The wobble of hexadecimal floating point (mostly obsolete now) is 16.

We assume IEEE-754 compliant numbers (see later section). To insure, say, 6 decimal digits of accuracy, the worst-case binary accuracy must exceed the best-case decimal accuracy. For IEEE single-precision, there are 23 fraction bits (and one implied-1 bit), so the worst case accuracy is 2^-23 = 1.2×10^-7. The best 6-digit accuracy is 10^-6; the best 7-digit accuracy is 10^-7. Thus we see that single-precision guarantees 6 decimal digits, but almost gets 7, i.e. most of the time, it actually achieves 7 digits. The table in the next section summarizes 4 common floating point formats.

How many digits do I need? Often, we need to convert a binary number to decimal, write it to a file, and then read it back in, converting it back to binary. An important question is, how many decimal digits do we need to write to insure that we get back exactly the same binary floating point number we started with? In other words, how many binary digits do I get with a given number of decimal digits? (This is essentially the reverse of the preceding section.) We choose our number of decimal digits to insure full binary accuracy (assuming our conversion software is good, which is not always the case). Our worst-case decimal accuracy has to exceed our best-case binary accuracy. For single precision, the best accuracy is 2^-24 = 6.0×10^-8. The worst case accuracy of 9 decimal digits is 10^-8, so we need 9 decimal digits to fully represent IEEE single precision. Here's a table of precisions for 4 common formats:

    Format            Fraction  Minimum decimal            Decimal digits for         Decimal
                      bits      digits accuracy            exact replication          digits range
    IEEE single         23      2^-23  = 1.2×10^-7  => 6   2^-24  = 6.0×10^-8  => 9    6-9
    IEEE double         52      2^-52  = 2.2×10^-16 => 15  2^-53  = 1.1×10^-16 => 17   15-17
    x86 long double     63      2^-63  = 1.1×10^-19 => 18  2^-64  = 5.4×10^-20 => 21   18-21
    SPARC REAL*16      112      2^-112 = 1.9×10^-34 => 33  2^-113 = 9.6×10^-35 => 36   33-36

These numbers of digits agree exactly with the quoted ranges in the IEEE Floating Point section, and the ULP table in the underflow section.
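A quick check of the "digits for exact replication" column, as a sketch (mine, not from the text): for a float, 9 printed digits always round-trip back to the identical bits, while 6 digits can land on a neighboring float:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        float x = 1.0f/3.0f, y;
        char dec[32];

        sprintf(dec, "%.9g", x);        // 9 digits: always recovers the same float
        y = (float)strtod(dec, NULL);
        printf("9 digits: %s  exact? %d\n", dec, x == y);   // exact? 1

        sprintf(dec, "%.6g", x);        // 6 digits: may not recover the same bits
        y = (float)strtod(dec, NULL);
        printf("6 digits: %s  exact? %d\n", dec, x == y);   // exact? 0 here
        return 0;
    }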
In C, then, to insure exact binary accuracy when writing, and then reading, in decimal, for double precision, use

    sprintf(dec, "%.17g", x);

How Far Can I Go?

A natural question is: What is the range, in decimal, of numbers that can be represented by the IEEE formats? The answer is dominated by the number of bits in the binary exponent. This table shows it:

Range and Precision of Storage Formats

    Format            Significant  Smallest Normal      Largest              Decimal
                      Bits         Number               Number               Digits
    IEEE single           24       1.175... ×10^-38     3.402... ×10^+38      6-9
    IEEE double           53       2.225... ×10^-308    1.797... ×10^+308     15-17
    x86 long double       64       3.362... ×10^-4932   1.189... ×10^+4932    18-21
    SPARC REAL*16        113       3.362... ×10^-4932   1.189... ×10^+4932    33-36

Software Engineering

Software engineering is much more than computer programming: it is the art and science of designing and implementing programs efficiently, over the long term, across multiple developers. Software engineering maximizes productivity and fun, and minimizes annoyance and roadblocks. Engineers first design, then implement, systems that are useful, fun, and efficient. Hackers just write code. Software engineering includes:

- Documentation: lots of it in the code as comments.
- Documentation: design documents that give an overview and conceptual view that is infeasible to convey in code comments.
- Coding guidelines: for consistency among developers. Efficiency can only be achieved by cooperation among the developers, including a consistent coding style that allows others to quickly understand the code. E.g., physics.ucsd.edu/~emichels/Coding%20Guidelines.pdf.
- Clean code: it is easy to read and follow.
- Maintainable code: it functions in a straightforward and comprehensible way, so that it can be changed easily and still work.

Notice that all of the above are subjective assessments. That's the nature of all engineering: Engineering is lots of tradeoffs, with subjective approximations of the costs and benefits. Don't get me wrong: sometimes I hack out code. The judgment comes in knowing when to hack and when to design.

Fun quotes:

Whenever possible, ignore the coding standards currently in use by thousands of developers in your project's target language and environment. - Roedy Green, How To Write Unmaintainable Code, www.strauss.za.com/sla/code_std.html

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. - Brian W. Kernighan

Coding guidelines make everyone's life easier, even yours. - Eric L. Michelsen

Object Oriented Programming

This is a much used and abused term, with no definitive definition. The goal of Object Oriented Programming (OOP) is to allow reusable code that is clean and maintainable. The best definition I've seen of OOP is that it uses a language and approach with these properties:

- User defined data types, called classes, which allow (1) a single object (data entity) to have multiple data items, and (2) provide user-defined methods (functions and operators) for manipulating objects of that class.
- Information hiding: a class can define a public interface which hides the implementation details from the code which uses the class.
- Overloading: the same named function or operator can be invoked on multiple data types, including both built-in and user-defined types. The language chooses which of the same-named functions to invoke based on the data types of its arguments.
- Inheritance: new data types can be created by extending existing data types. The derived class inherits all the data and methods of the base class, but can add data, and override (overload) any methods it chooses with its own, more specialized versions.
- Polymorphism: this is more than overloading. Polymorphism allows derived-class objects to be handled by (often older) code which only knows about the base class (i.e., which does not even know of the existence of the derived class). Even though the application code knows nothing of the derived class, the data object itself insures calling proper specialized methods for itself. In C++, polymorphism is implemented with virtual functions.

OOP does not have to be a new paradigm. It is usually more effective to make it an improvement on the good software engineering practices you already use.

The Best of Times, the Worst of Times

We give here some ways to speed up common computations, using matrices as examples. The principles are applicable to almost any computation performed over a large amount of data. For the vast majority of programs, execution time is so short that it doesn't matter how efficient it is; clarity and simplicity are more important than speed. In rare cases, time is a concern. For some simple examples, we show how to easily cut your execution times to 1/3 of original. We also show that things are not always so simple as they seem. This section assumes knowledge of computer programming with simple classes (the beginning of object oriented programming). This topic is potentially huge, so we can only touch on some basics. The main point here is:

    Computer memory management is the key to fast performance.

We proceed along these lines:

- We start with a simple C++ class for matrix addition. We give run times for this implementation (the worst of times).
- A simple improvement greatly improves execution times (the best of times).
- We try another expected improvement, but things are not as expected.
- We describe the general operation of memory cache (pronounced "cash") in simple terms.
- Moving on to matrix multiplication, we find that our previous tricks don't work well.
- However, due to the cache, adding more operations greatly improves the execution times.

The basic concept in improving matrix addition is to avoid C++'s hidden copy operations. However:

    Computer memory access is tricky, so things aren't always what you'd expect.

Nonetheless, we can be efficient, even without details of the computer hardware. The tricks are due to computer hardware called RAM cache, whose general principles we describe later, but whose details are beyond our scope.

First, here is a simple C++ class for matrix creation, destruction, and addition. (For simplicity, our sample code has no error checking; real code, of course, does. In this case, we literally don't want reality to interfere with science.) The class data for a matrix are the number of rows, the number of columns, and a pointer to the matrix elements (data block).
    typedef double T;       // matrix elements are double precision

    class ILmatrix          // 2D matrix
    {
    public:
        int nr, nc;         // # rows & columns
        T *db;              // pointer to data

        ILmatrix(int r, int c);             // create matrix of given size
        ILmatrix(const ILmatrix &b);        // copy constructor
        ~ILmatrix();                        // destructor

        T * operator [](int r) const {return db + r*nc;};  // subscripting
        ILmatrix & operator =(const ILmatrix& b);           // assignment
        ILmatrix operator +(const ILmatrix& b) const;       // matrix add
    };

The matrix elements are indexed starting from 0, i.e. the top-left corner of matrix a is referenced as a[0][0]. Following the data are the minimum set of methods (procedures) for matrix addition. Internally, the pointer db points to the matrix elements (data block). The subscripting operator finds a linear array element as (row)(#columns) + column. Here is the code to create, copy, and destroy matrices:

    // create matrix of given size (constructor)
    ILmatrix::ILmatrix(int r, int c) : nr(r), nc(c)     // set nr & nc here
    {
        db = new T[nr*nc];      // allocate data block
    }   // ILmatrix(r, c)

    // copy a matrix (copy constructor)
    ILmatrix::ILmatrix(const ILmatrix & b)
    {
        int r, c;
        nr = b.nr, nc = b.nc;           // matrix dimensions
        if(b.db)
        {
            db = new T[nr*nc];          // allocate data block
            for(r = 0; r < nr; r++)     // copy the data
                for(c = 0; c < nc; c++)
                    (*this)[r][c] = b[r][c];
        }
    }   // copy constructor

    // destructor
    ILmatrix::~ILmatrix()
    {
        if(db) {delete[] db;}           // free existing data
        nr = nc = 0, db = 0;            // mark it empty
    }

    // assignment operator
    ILmatrix & ILmatrix::operator =(const ILmatrix& b)
    {
        int r, c;
        for(r = 0; r < nr; r++)         // copy the data
            for(c = 0; c < nc; c++)
                (*this)[r][c] = b[r][c];
        return *this;
    }   // operator =()

The good stuff: With the tedious preliminaries done, we now implement the simplest matrix addition method. It adds two matrices element by element, and returns the result as a new matrix:

    ILmatrix ILmatrix::operator +(const ILmatrix& b) const
    {
        int r, c;
        ILmatrix result(nr, nc);
        for (r=0; r < nr; r++)
            for (c=0; c < nc; c++)
                result[r][c] = (*this)[r][c] + b[r][c];
        return result;      // invokes copy constructor!
    }   // operator +()

How long does this simple code take? To test it, we standardize on 300×300 and 400×400 matrix sizes, each on two different computers: computer 1 is a c. 2000 Compaq Workstation W6000 with a 1.7 GHz Xeon. Computer 2 is a Gateway Solo 200 ARC laptop with a 2.4 GHz CPU. We time 100 matrix additions:

    int n = 300;            // matrix dimension
    ILmatrix a(n,n), b(n,n), d(n,n);
    d = a + b;              // prime memory caches
    for(i = 0; i < 100; i++)
        d = a + b;

With modern operating systems, you may have to run your code several times before the execution times stabilize. [This may be due to internal operations of allocating memory, and flushing data to disk.] We find that, on computer 1, it takes ~1.36 ± 0.10 s to execute 100 simple matrix additions (see table at end of this section). Wow, that seems like a long time. Each addition is 90,000 floating point adds; 100 additions is 9 million operations. Our 2.4 GHz machine should execute 2.4 additions per ns. Where's all the time going?

C++ has a major flaw. Though it was pretty easy to create our matrix class:

    C++ copies your data twice in a simple class operation on two values.

So besides our actual matrix addition, C++ copies the result twice before it reaches the matrix d.
The first copy happens at the return result statement in our matrix addition function. Since the variable result will be destroyed (go out of scope) when the function returns, C++ must copy it to a temporary variable in the main program. Notice that the C++ language has no way to tell the addition function that the result is headed for the matrix d. So the addition function has no choice but to copy it into a temporary matrix, created by the compiler and hidden from programmers. The second copy is when the temporary matrix is assigned to the matrix d. Each copy operation copies 90,000 8-byte double-precision numbers, ~720k bytes. That's a lot of copying.

Instead of writing our own loops to copy data, we can call the library function memcpy( ), which is specifically optimized for copying blocks of data. Our copy constructor is now:

    ILmatrix::ILmatrix(const ILmatrix & b)
    {
        int r, c;
        nr = b.nr, nc = b.nc;       // matrix dimensions
        if(b.db)
        {
            db = new T[nr*nc];                  // allocate data block
            memcpy(db, b.db, sizeof(T)*nr*nc);  // copy the data
        }
    }   // copy constructor

Similarly for the assignment operator. This code takes 0.98 ± 0.10 s, 28% better than the old code. Not bad for such a simple change, but still bad: we still have two needless copies going on.

For the next improvement, we note that C++ can pass two matrix operands to an operator function, but not three. Therefore, if we do one copy ourselves, we can then perform the addition in place, and avoid the second copy. For example:

    // Faster code to implement d = a + b:
    d = a;      // the one and only copy operation
    d += b;     // += adds b to the current value of d

We can simplify this main code to a single line as:

    (d = a) += b;

The expression in parentheses copies a to d, and evaluates as the matrix d, which we can then act on with the += operator. To implement this code, we need to add a += operator function to our class:

    ILmatrix & ILmatrix::operator +=(const ILmatrix & b)
    {
        int r, c;
        for (r = 0; r < nr; r++)
            for (c = 0; c < b.nc; c++)
                (*this)[r][c] += b[r][c];
        return *this;       // returns by reference, NO copy!
    }

This code runs in 0.45 ± 0.02 s, or 1/3 the original time! The price, though, is somewhat uglier code.

Perhaps we can do even better. Instead of using operator functions, which are limited to only two matrix arguments, we can write our own addition function, with any arguments we want. The main code is now:

    mat_add(d, a, b);       // add a + b, putting result in d

Requiring the new function mat_add( ):

    // matrix addition to new matrix: d = a + b
    ILmatrix & mat_add(ILmatrix & d, const ILmatrix & a, const ILmatrix & b)
    {
        int r, c;
        for (r = 0; r < d.nr; r++)
            for (c = 0; c < d.nc; c++)
                d[r][c] = a[r][c] + b[r][c];
        return d;           // returned by reference, NO copy constructor
    }

This runs in 0.49 ± 0.02 s, slightly worse than the one-copy version. It's also even uglier than the previous version. How can this be? Memory access, including data copying, is dominated by the effects of a complex piece of hardware called memory cache. There are hundreds of different variations of cache designs, and even if you know the exact design, you can rarely predict its exact effect on real code. We will describe cache shortly, but even then, there is no feasible way to know exactly why the zero-copy code is slower than one-copy. This result also held true for the 400×400 matrix on computer 1, and the 300×300 matrix on computer 2, but not the 400×400 matrix on computer 2.
All we can do is try a few likely cases, and go with the general trend. More on this later.

Beware

Leaving out a single character from your code can produce code that works, but runs over 2 times slower than it should. For example, in the function definition of mat_add, if we leave out the & before argument a:

    ILmatrix & mat_add(ILmatrix & d, const ILmatrix a, const ILmatrix & b)

then the compiler passes a to the function by copying it! This completely defeats our goal of zero copy. [Guess how I found this out.] Also notice that the memcpy( ) optimization doesn't apply to this last method, since it has no copies at all.

Below is a summary of matrix addition. The best code choice was a single copy, with in-place addition. It is medium ugly. While there was a small discrepancy with this on computer 2, 400×400, it's not worth the required additional ugliness.

                             Computer 1 times (ms, ~±100 ms)    Computer 2 times (ms, ~±100 ms)
    Algorithm                300×300         400×400            300×300         400×400
    d = a + b, loop copy     1360 = 100%     5900 = 100%        1130 = 100%     2180 = 100%
    d = a + b, memcpy( )      985 =  72%     4960 =  84%         950 =  84%     1793 =  82%
    (d = a) += b              445 =  33%     3850 =  65%         330 =  29%      791 =  36%
    mat_add(d, a, b)          490 =  36%     4400 =  75%         371 =  33%      721 =  33%

Run times for matrix addition with various algorithms. Uncertainties are very rough ±1σ. Best performing algorithms are highlighted.

Cache Value

In the old days, computations were slower than memory accesses. Therefore, we optimized by increasing memory use, and decreasing computations. Today, things are exactly reversed:

    Modern CPUs (c. 2009) can compute about 50 times faster than they can access main memory.

Therefore, the biggest factor in overall speed is efficient use of memory. To help reduce the speed degradation of slow memory, computers use a memory cache: a small memory that is very fast. A typical main memory is 1 Gb, while a typical cache is 1 Mb, or 1000x smaller. The CPU can access cache memory as fast as it can compute, so cache is ~50x faster than main memory. The cache is invisible to program function, but is critical to program speed. The programmer usually does not have access to details about the cache, but she can use general cache knowledge to greatly reduce run time.

[Figure: (Left) Computer memory (RAM) is a linear array of bytes. (Middle) For convenience, we draw it as a 2D array, of arbitrary width; sample sequential storage of matrix A and matrix B is shown. (Right) A small, very fast memory cache sits in the data path between the CPU and the big, slow RAM, and keeps a copy of recently used memory locations, so they can be quickly used again.]

The cache does two things (diagram above):

1. Cache remembers recently used memory values, so that if the CPU requests any of them again, the cache provides the value instantly, and the slow main memory access does not happen.

2. Cache looks ahead to fetch memory values immediately following the one just used, before the CPU might request it. If the CPU in fact later requests the next sequential memory location, the cache provides the value instantly, having already fetched it from slow main memory.

The cache is small, and eventually fills up. Then, when the CPU requests new data, the cache must discard old data, and replace it with the new. Therefore, if the program jumps around memory a lot, the benefits of the cache are reduced.
If a program works repeatedly over a small region of memory (say, a few hundred k bytes), the benefits of cache increase. Typically, cache can follow four separate regions of memory concurrently. This means you can interleave accesses to four different regions of memory, and still retain the benefits of cache. Therefore, we have three simple rules for efficient memory use:

    For efficient memory use: (1) access memory sequentially, or at most in small steps, (2) reuse values as much as possible in the shortest time, and (3) access few memory regions concurrently, preferably no more than four.

There is huge variety in computer memory designs, so these rules are general, and behavior varies from machine to machine, sometimes greatly. Our data below demonstrate this.

We can now understand some of our timing data given above. We see that the one-copy algorithm unexpectedly takes less time than the zero-copy algorithm. The one-copy algorithm accesses only two memory regions at a time: first matrices a and d for the copy, then matrices b and d for the add. The zero-copy algorithm accesses three regions at a time: a, b, and d. This is probably reducing cache efficiency. Recall that the CPU is also fetching instructions (the program) concurrently with the data, which is at least a fourth region. Exact program layout in memory is virtually impossible to know. Also, the cache on this old computer may not support 4-region concurrent access. The newer machine, computer 2, probably has a better cache, and the one- and zero-copy algorithms perform very similarly.

Here's a new question for matrix addition: the code given earlier loops over rows in the outer loop, and columns in the inner loop. What if we reversed them, and looped over columns on the outside, and rows on the inside? The result is 65% longer run time, on both machines. Here's why: the matrices are stored by rows, i.e. each row is consecutive memory locations. Looping over columns on the inside accesses memory sequentially, taking advantage of cache look-ahead. When reversed, the program jumps from row to row on the inside, giving up any benefit from look-ahead. The cost is quite substantial. This concept works on almost every machine.

Caution

FORTRAN stores arrays in the opposite order from C and C++. In FORTRAN, the first index is cycled most rapidly, so you should code with the outer loop on the second index, and the inner loop on the first index. E.g.,

    DO C = 1, N
        DO R = 1, N
            A(R, C) = blah blah ...
        ENDDO
    ENDDO

Scaling behavior: Matrix addition is an O(N²) operation, so increasing from 300×300 to 400×400 increases the computations by a factor of 1.8. On the older computer 1, the runtime penalty is much larger, between 4.5x and 9x slower. On the newer computer 2, the difference is much closer, between 1.8x and 2.2x slower. This is likely due to cache size. A 300×300 double precision matrix takes 720 k bytes, or under a MB. A 400×400 matrix takes 1280 k bytes, just over one MB. It could be that on computer 1, with the smaller matrix, a whole matrix or two fits in cache, but with the large matrix, cache is overflowed, and more (slow) main memory accesses are needed. The newer computer probably has bigger caches, and may fit both sized matrices fully in cache.

Cache Withdrawal: Matrix Multiplication

We now show that the above tricks don't work well for large-matrix multiplication, but a different trick cuts multiplication run time dramatically.
To start, we use a simple matrix multiply in the main code:

    d = a * b;

The straightforward matrix multiply operator is this:

    // matrix multiply to temporary
    ILmatrix ILmatrix::operator *(const ILmatrix & b) const
    {
        int r, c, k;
        ILmatrix result(nr, b.nc);      // temporary for result
        T sum;

        for(r = 0; r < nr; r++)
        {
            for(c = 0; c < b.nc; c++)
            {
                sum = 0.;
                for(k = 0; k < nc; k++)
                    sum += (*this)[r][k] * b[k][c];
                result[r][c] = sum;
            }
        }
        return result;      // invokes copy constructor!
    }   // operator *()

While matrix addition is an O(N²) operation, matrix multiplication is an O(N³) operation. Multiplying two 300×300 matrices is about 54,000,000 floating point operations, which is much slower than addition. Timing the simple multiply routine, similarly to timing matrix addition, but with only 5 multiplies, we find it takes 7.8 ± 0.1 s on computer 1.

First we try the tricks we already know to improve and avoid data copies: we started already with memcpy( ). We compare the two-copy, one-copy, and zero-copy algorithms as with addition, but this time, 5 of the 6 trials show no measurable difference. Matrix multiply is so slow that the copy times are insignificant. The one exception is the one-copy algorithm on computer 2, which shows a significant reduction of ~35%. This is almost certainly due to some quirk of memory layout and the cache, but we can't identify it precisely. However, if we have to choose from these 3 algorithms, we choose the one-copy (which coincidentally agrees with the matrix addition favorite). And certainly, we drop the ugly 3-argument mat_mult( ) function, which gives no benefit.

Now we'll improve our matrix multiply greatly, by adding more work to be done. The extra work will result in more efficient memory use, which pays off handsomely in reduced runtime. Notice that in matrix multiplication, for each element of the result, we access a row of the first matrix a, and a column of the second matrix b. But we learned from matrix addition that accessing a column is much slower than accessing a row. And in matrix multiplication, we have to access the same column N times. Extra bad. If only we could access both matrices by rows! Well, we can. We first make a temporary copy of matrix b, and transpose it. Now the columns of b become the rows of b^T. We perform the multiply as rows of a with rows of b^T. The copy time is insignificant for multiplication, so the cost of one copy and one transpose (similar to a copy) is negligible. But the benefit of cache look-ahead is large. The transpose method reduces runtime by 30% to 50%.

Further thought reveals that we only need one column of b at a time. We can use it N times, and discard it. Then move on to the next column of b. This reduces memory usage, because we only need extra storage for one column of b, not for the whole transpose of b. It costs us nothing in operations, and reduces memory. That can only help our cache performance. In fact, it cuts runtime by about another factor of two, to about one third of the original runtime, on both machines. (It does require us to loop over columns of b on the outer loop, and rows of a on the inner loop, but that's no burden.)

Note that optimizations that at first were insignificant, say reducing runtime by 10%, may become significant after the runtime is cut by a factor of 3. That original 10% is now 30%, and may be worth doing.
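The text gives no listing for the column-copy method, so here is a minimal sketch of the idea (mine, written in plain C over raw row-major arrays rather than the ILmatrix class; the function name is hypothetical):

    #include <stdlib.h>

    // d = a * b for n x n row-major matrices, copying one column of b at
    // a time so every inner loop runs along rows (consecutive memory)
    void mat_mult_colcopy(int n, const double *a, const double *b, double *d)
    {
        double *bcol = malloc(n * sizeof(double));  // storage for ONE column of b
        for(int c = 0; c < n; c++)          // outer loop: columns of b
        {
            for(int k = 0; k < n; k++)      // copy column c of b; reused n times below
                bcol[k] = b[k*n + c];
            for(int r = 0; r < n; r++)      // inner loop: rows of a
            {
                double sum = 0.;
                for(int k = 0; k < n; k++)  // both arrays now accessed sequentially
                    sum += a[r*n + k] * bcol[k];
                d[r*n + c] = sum;
            }
        }
        free(bcol);
    }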
                                 Computer 1 times (ms, ~±100 ms)    Computer 2 times (ms, ~±100 ms)
    Algorithm                    300×300        400×400             300×300        400×400
    d = a * b                    7760 = 100%    18,260 = 100%       5348 = 100%    16,300 = 100%
    (d = a) *= b                 7890 = 102%    18,210 = 100%       3485 =  65%    11,000 =  67%
    mat_mult(d, a, b)            7720 =  99%    18,170 = 100%       5227 =  98%    16,200 =  99%
    d = a * b, transpose b       4580 =  59%    12,700 =  70%       2900 =  54%     7800 =  48%
    (d = a) *= b, transpose b    4930 =  64%    12,630 =  69%       4250 =  79%    11,000 =  67%
    d = a * b, copy b column     2710 =  35%     7875 =  43%        3100 =  58%     8000 =  49%
    (d = a) *= b, copy b column  2945 =  38%     7835 =  43%        2100 =  39%     5400 =  33%

Run times for matrix multiplication with various algorithms. Uncertainties are very rough ±1σ. Best performing algorithms are highlighted.

Cache Summary

In the end, exact performance is nearly impossible to predict. However, general knowledge of cache, and following the three rules for efficient cache use (given above), will greatly improve your runtimes. Conflicts in memory between pieces of data and instructions cannot be precisely controlled. Sometimes even tiny changes in code will cross a threshold of cache, and cause huge changes in performance.

IEEE Floating Point Formats And Concepts

Much of this section is taken from http://docs.sun.com/source/806-3568/ncg_math.html , an excellent article introducing IEEE floating point. However, many clarifications are made here.

What Is IEEE Arithmetic?

In brief, IEEE 754 specifies exactly how floating point operations are to occur, and to what precision. It does not specify how the floating point numbers are stored in memory. Each computer makes its own choice for how to store floating point numbers. We give some popular formats later. In particular, IEEE 754 specifies a binary floating point standard, with:

- Two basic floating-point formats: single and double.
- The IEEE single format has a significand (aka mantissa) precision of 24 bits, and is 32 bits overall. The IEEE double format has a significand precision of 53 bits, and is 64 bits overall.
- Two classes of extended floating-point formats: single extended and double extended. The standard specifies only the minimum precision and size. For example, an IEEE double extended format must have a significand precision of at least 64 bits and occupy at least 79 bits overall.
- Accuracy requirements on floating-point operations: add, subtract, multiply, divide, square root, remainder, round numbers in floating-point format to integer values, convert between different floating-point formats, convert between floating-point and integer formats, and compare. The remainder and compare operations must be exact. Other operations must minimally modify the exact result according to prescribed rounding modes.
- Accuracy requirements for conversions between decimal strings and binary floating-point numbers. Within specified ranges, these conversions must be exact, if possible, or minimally modify such exact results according to prescribed rounding modes. Outside the specified ranges, these conversions must meet a specified tolerance that depends on the rounding mode.
- Five types of floating-point exceptions, and the conditions for the occurrence of these exceptions. The five exceptions are invalid operation, division by zero, overflow, underflow, and inexact.
- Four rounding directions: toward the nearest representable value, with "even" values preferred whenever there are two nearest representable values; toward negative infinity (down); toward positive infinity (up); and toward 0 (chop).
- Rounding precision; for example, if a system delivers results in double extended format, the user should be able to specify that such results be rounded to either single or double precision.

The IEEE standard also recommends support for user handling of exceptions. IEEE 754 floating-point arithmetic offers users great control over computation. It simplifies the task of writing numerically sophisticated, portable programs not only by imposing rigorous requirements, but also by allowing implementations to provide refinements and enhancements to the standard.

Storage Formats

The IEEE floating-point formats define the fields that compose a floating-point number, the bits in those fields, and their arithmetic interpretation, but not how those formats are stored in memory. A storage format specifies how a number is stored in memory. Each computer defines its own storage formats, though they are obviously all related. High level languages have different names for floating point data types, which usually correspond to the IEEE formats as shown:

IEEE Formats and Language Types

    IEEE Precision     C, C++         Fortran
    single             float          REAL or REAL*4
    double             double         DOUBLE PRECISION or REAL*8
    double extended    long double    REAL*16 [e.g., SPARC]

Note that in many implementations, REAL*16 is different than long double. IEEE 754 specifies exactly the single and double floating-point formats, and it defines ways to extend each of these two basic formats. The long double and REAL*16 types shown above are two double extended formats compliant with the IEEE standard.

[Figure: bit layouts of the four storage formats (sign s, biased exponent e, fraction f; LSB at bit 0):
Single (6-9 decimal digits): s = bit 31, e = bits 30-23, f = bits 22-0.
Double (15-17 decimal digits): s = bit 63, e = bits 62-52, f = bits 51-0.
Double-Extended, SPARC (33-36 decimal digits): s = bit 127, e = bits 126-112, f = bits 111-0.
Double-Extended, long double, x86 (18-21 decimal digits): bits 95-80 unused, s = bit 79, e = bits 78-64, j = bit 63, f = bits 62-0.]

The following sections describe each of the floating-point storage formats on SPARC and x86 platforms.

When a Bias Is a Good Thing

IEEE floating point uses biased exponents, where the actual exponent is the unsigned value of the e field minus a constant, called a bias:

    exponent = e − bias

The bias makes the e field an unsigned integer, and the smallest numbers have the smallest e field (as well as the smallest exponent). This format allows (1) floating point numbers to sort in the same order as if their bit patterns were integers; and (2) true floating point zero to be naturally represented by an all-zero bit pattern. These might seem insignificant, but they are quite useful, and so biased exponents are nearly universal.

Single Format

The IEEE single format consists of three fields: a 23-bit fraction, f; an 8-bit biased exponent, e; and a 1-bit sign, s. These fields are stored contiguously in one 32-bit word, as shown above.
The table below shows the three constituent fields s, e, and f, and the value represented in single format:

    Single-Format Fields                                Value
    1 ≤ e ≤ 254                                         (−1)^s × 2^(e−127) × 1.f   (normal numbers)
    e = 0; f ≠ 0 (at least one bit in f is nonzero)     (−1)^s × 2^(−126) × 0.f    (denormalized numbers)
    e = 0; f = 0 (all bits in f are zero)               (−1)^s × 0.0               (signed zero)
    s = 0/1; e = 255; f = 0 (all bits in f are zero)    +/−∞                       (infinity)
    s = either; e = 255; f ≠ 0                          NaN                        (Not-a-Number)

Notice that when 1 ≤ e ≤ 254, the value is formed by inserting the binary radix point to the left of the fraction's most significant bit, and inserting an implicit 1-bit to the left of the binary point, thus representing a whole number plus fraction, called the significand, where 1 ≤ significand < 2. The implicit bit's value is not explicitly given in the single-format bit pattern, but is implied by the biased exponent field.

A denormalized number (aka subnormal number) is one which is too small to be represented by an exponent in the range 1 ≤ e ≤ 254. The difference between a normal number and a denormalized number is that the bit to the left of the binary point of a normal number is 1, but that of a denormalized number is 0. The 23-bit fraction combined with the implicit leading significand bit provides 24 bits of precision in single-format normal numbers.

Examples of important bit patterns in the single-storage format are shown below. The maximum positive normal number is the largest finite number representable in IEEE single format. The minimum positive denormalized number is the smallest positive number representable in IEEE single format. The minimum positive normal number is often referred to as the underflow threshold. (The decimal values are rounded to the number of figures shown.)

Important Bit Patterns in IEEE Single Format

    Common Name                         Bit Pattern (Hex)    Approximate Value
    +0                                  0000 0000            0.0
    −0                                  8000 0000            −0.0
    1                                   3f80 0000            1.0
    2                                   4000 0000            2.0
    maximum normal number               7f7f ffff            3.40282347e+38
    minimum positive normal number      0080 0000            1.17549435e−38
    maximum subnormal number            007f ffff            1.17549421e−38
    minimum positive subnormal number   0000 0001            1.40129846e−45
    +∞                                  7f80 0000            +∞ (positive infinity)
    −∞                                  ff80 0000            −∞ (negative infinity)
    Not-a-Number (NaN)                  7fc0 0000 (e.g.)     NaN

A NaN (Not a Number) can be represented with many bit patterns that satisfy the definition of a NaN; the value of the NaN above is just one example.
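As a sketch (mine, not from the Sun article), we can pull the s, e, f fields out of a float and check the table's entry for 1.0 (bit pattern 3f80 0000):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float x = 1.0f;
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);     // reinterpret the bits, don't convert

        unsigned s = bits >> 31;            // 1-bit sign
        unsigned e = (bits >> 23) & 0xff;   // 8-bit biased exponent
        unsigned f = bits & 0x7fffff;       // 23-bit fraction
        // for 1.0f: bits = 3f800000, s = 0, e = 127 (exponent 0), f = 0
        printf("%08x: s=%u e=%u (exponent %d) f=%06x\n",
               (unsigned)bits, s, e, (int)e - 127, f);
        return 0;
    }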
Double Format

The IEEE double format is the obvious extension of the single format, and also consists of three fields: a 52-bit fraction, f; an 11-bit biased exponent, e; and a 1-bit sign, s. These fields are stored in two consecutive 32-bit words. In the SPARC architecture, the higher address 32-bit word contains the least significant 32 bits of the fraction, while in the x86 architecture the lower address 32-bit word contains the least significant 32 bits of the fraction.

The table below shows the three constituent fields s, e, and f, and the value represented in double format:

    Double-Format Fields                                        Value
    1 ≤ e ≤ 2046                                                (−1)^s × 2^(e−1023) × 1.f   (normal numbers)
    e = 0; f ≠ 0 (at least one bit in f is nonzero)             (−1)^s × 2^(−1022) × 0.f    (denormalized numbers)
    e = 0; f = 0 (all bits in f are zero)                       (−1)^s × 0.0                (signed zero)
    s = 0/1; e = 2047; f = 0 (all bits in f are zero)           +/−∞                        (infinity)
    s = either; e = 2047; f ≠ 0 (at least one bit in f is 1)    NaN                         (Not-a-Number)

This is the obvious analog of the single format, and retains the implied 1-bit in the significand. The 52-bit fraction combined with the implicit leading significand bit provides 53 bits of precision in double-format normal numbers.

Below, the 2nd column has two hexadecimal numbers. For the SPARC architecture, the left one is the lower addressed 32-bit word; for the x86 architecture, the left one is the higher addressed word. The decimal values are rounded to the number of figures shown.

Important Bit Patterns in IEEE Double Format

    Common Name                         Bit Pattern (Hex)            Approximate Value
    +0                                  00000000 00000000            0.0
    −0                                  80000000 00000000            −0.0
    1                                   3ff00000 00000000            1.0
    2                                   40000000 00000000            2.0
    max normal number                   7fefffff ffffffff            1.797 693 134 862 3157e+308
    min positive normal number          00100000 00000000            2.225 073 858 507 2014e−308
    max denormalized number             000fffff ffffffff            2.225 073 858 507 2009e−308
    min positive denormalized number    00000000 00000001            4.940 656 458 412 4654e−324
    +∞                                  7ff00000 00000000            +∞ (positive infinity)
    −∞                                  fff00000 00000000            −∞ (negative infinity)
    Not-a-Number                        7ff80000 00000000 (e.g.)     NaN

A NaN (Not a Number) can be represented with many bit patterns that satisfy the definition of a NaN; the value of the NaN above is just one example.

Double-Extended Format (SPARC)

The SPARC floating-point quadruple-precision format conforms to the IEEE definition of double-extended format. The quadruple-precision format occupies four 32-bit words and consists of three fields: a 112-bit fraction, f; a 15-bit biased exponent, e; and a 1-bit sign, s. These fields are stored contiguously. The lowest addressed word has the sign, exponent, and the 16 most significant bits of the fraction. The highest addressed 32-bit word contains the least significant 32 bits of the fraction. Below shows the three constituent fields and the value represented in quadruple-precision format.

    Double-Extended Fields (SPARC)                               Value
    1 ≤ e ≤ 32766                                                (−1)^s × 2^(e−16383) × 1.f   (normal numbers)
    e = 0, f ≠ 0 (at least one bit in f is nonzero)              (−1)^s × 2^(−16382) × 0.f    (denormalized numbers)
    e = 0, f = 0 (all bits in f are zero)                        (−1)^s × 0.0                 (signed zero)
    s = 0/1, e = 32767, f = 0 (all bits in f are zero)           +/−∞                         (infinity)
    s = either, e = 32767, f ≠ 0 (at least one bit in f is 1)    NaN                          (Not-a-Number)

In the hex digits below, the left-most number is the lowest addressed 32-bit word.
Important Bit Patterns in IEEE Double-Extended Format (SPARC)

    Name                Bit Pattern (SPARC, hex)                     Approximate Value
    +0                  00000000 00000000 00000000 00000000          0.0
    −0                  80000000 00000000 00000000 00000000          −0.0
    1                   3fff0000 00000000 00000000 00000000          1.0
    2                   40000000 00000000 00000000 00000000          2.0
    max normal          7ffeffff ffffffff ffffffff ffffffff          1.189 731 495 357 231 765 085 759 326 628 0070 e+4932
    min normal          00010000 00000000 00000000 00000000          3.362 103 143 112 093 506 262 677 817 321 7526 e−4932
    max subnormal       0000ffff ffffffff ffffffff ffffffff          3.362 103 143 112 093 506 262 677 817 321 7520 e−4932
    min pos subnormal   00000000 00000000 00000000 00000001          6.475 175 119 438 025 110 924 438 958 227 6466 e−4966
    +∞                  7fff0000 00000000 00000000 00000000          +∞
    −∞                  ffff0000 00000000 00000000 00000000          −∞
    Not-a-Number        7fff8000 00000000 00000000 00000000 (e.g.)   NaN

Double-Extended Format (x86)

The important difference in the x86 long-double format is the lack of an implicit leading 1-bit in the significand. Instead, the 1-bit is explicit, and always present in normalized numbers. This clearly violates the spirit of the IEEE standard. However, big companies carry a lot of clout with standards bodies, so Intel claims this double-extended format conforms to the IEEE definition of double-extended formats, because IEEE 754 does not specify how (or if) the leading 1-bit is stored.

X86 long-double consists of four fields: a 63-bit fraction, f; a 1-bit explicit leading significand bit, j; a 15-bit biased exponent, e; and a 1-bit sign, s. In the x86 architectures, these fields are stored contiguously in ten successively addressed 8-bit bytes. However, the UNIX System V Application Binary Interface Intel 386 Processor Supplement (Intel ABI) requires that double-extended parameters and results occupy three consecutive 32-bit words in the stack, with the most significant 16 bits of the highest addressed word being unused, as shown below.

[Figure: Double-Extended (long double) Format (x86), as above: bits 95-80 unused, s = bit 79, e = bits 78-64, j = bit 63, f = bits 62-0, LSB at bit 0.]

The lowest addressed 32-bit word contains the least significant 32 bits of the fraction, f[31:0], with bit 0 being the least significant bit of the entire fraction. Though the upper 16 bits of the highest addressed 32-bit word are unused by x86, they are essential for conformity to the Intel ABI, as indicated above. Below shows the four constituent fields and the value represented by the bit pattern. x = don't care.

    Double-Extended Fields (x86)                               Value
    j = 0, 1 ≤ e ≤ 32766                                       Unsupported
    j = 1, 1 ≤ e ≤ 32766                                       (−1)^s × 2^(e−16383) × 1.f   (normal numbers)
    j = 0, e = 0; f ≠ 0 (at least one bit in f is nonzero)     (−1)^s × 2^(−16382) × 0.f    (denormalized numbers)
    j = 1, e = 0                                               (−1)^s × 2^(−16382) × 1.f    (pseudo-denormal numbers)
    j = 0, e = 0, f = 0 (all bits in f are zero)               (−1)^s × 0.0                 (signed zero)
    j = 1; s = 0/1; e = 32767; f = 0 (all bits in f are zero)  +/−∞                         (infinity)
    j = 1; s = x; e = 32767; f = .1xxx...xx                    QNaN                         (quiet NaNs)
    j = 1; s = x; e = 32767; f = .0xxx...xx ≠ 0
        (at least one of the x in f is 1)                      SNaN                         (signaling NaNs)

Notice that bit patterns in x86 double-extended format do not have an implicit leading significand bit. The leading significand bit is given explicitly as a separate field, j. However, when e ≠ 0, any bit pattern with j = 0 is unsupported: such a bit pattern as an operand in floating-point operations provokes an invalid operation exception.
The union of the fields j and f in the double extended format is called the significand. The significand is formed by inserting the binary radix point between the leading bit, j, and the fraction's most significant bit.

In the x86 double-extended format, a bit pattern whose leading significand bit j is 0 and whose biased exponent field e is also 0 represents a denormalized number, whereas a bit pattern whose leading significand bit j is 1 and whose biased exponent field e is nonzero represents a normal number. Because the leading significand bit is represented explicitly rather than being inferred from the exponent, this format also admits bit patterns whose biased exponent is 0, like the subnormal numbers, but whose leading significand bit is 1. Each such bit pattern actually represents the same value as the corresponding bit pattern whose biased exponent field is 1, i.e., a normal number, so these bit patterns are called pseudo-denormals. Pseudo-denormals are merely an artifact of the x86 double-extended storage format; they are implicitly converted to the corresponding normal numbers when they appear as operands, and they are never generated as results.

Below are some important bit patterns in the double-extended storage format. The 2nd column has three hex numbers. The first number is the 16 least significant bits of the highest addressed 32-bit word (recall that the upper 16 bits of this 32-bit word are unused), and the right one is the lowest addressed 32-bit word.

Important Bit Patterns in Double-Extended (x86) Format and their Values

    Common Name                           Bit Pattern (x86)            Approximate Value
    +0                                    0000 00000000 00000000       0.0
    −0                                    8000 00000000 00000000       −0.0
    1                                     3fff 80000000 00000000       1.0
    2                                     4000 80000000 00000000       2.0
    max normal                            7ffe ffffffff ffffffff       1.189 731 495 357 231 765 05 e+4932
    min positive normal                   0001 80000000 00000000       3.362 103 143 112 093 506 26 e−4932
    max subnormal                         0000 7fffffff ffffffff       3.362 103 143 112 093 506 08 e−4932
    min positive subnormal                0000 00000000 00000001       3.645 199 531 882 474 602 53 e−4951
    +∞                                    7fff 80000000 00000000       +∞
    −∞                                    ffff 80000000 00000000       −∞
    quiet NaN with greatest fraction      7fff ffffffff ffffffff       QNaN
    quiet NaN with least fraction         7fff c0000000 00000000       QNaN
    signaling NaN with greatest fraction  7fff bfffffff ffffffff       SNaN
    signaling NaN with least fraction     7fff 80000000 00000001       SNaN

A NaN (Not a Number) can be represented by any of the bit patterns that satisfy the definition of NaN. The most significant bit of the fraction field determines whether a NaN is quiet (bit = 1) or signaling (bit = 0).

Precision in Decimal Representation

This section covers the precisions of the IEEE single and double formats, and the double-extended formats on SPARC and x86. See the earlier section on How Many Digits Do I Get? for more information. The IEEE standard specifies the set of numerical values representable in a binary format. Each format has some number of bits of precision (e.g., single has 24 bits).
But the decimal numbers of roughly the same precision do not match exactly the binary numbers, as you can see on the number line:

[Figure: number line showing the discrete decimal values 10^n, 10^(n+1), 10^(n+2) interleaved with the discrete binary values 2^m, 2^(m+1), 2^(m+2). Comparison of a Set of Numbers Defined by Decimal and Binary Representation.]

Because the decimal numbers are different than the binary numbers, estimating the number of significant decimal digits corresponding to b significant binary bits requires some definition. Reformulate the problem in terms of converting floating-point numbers between binary and decimal. You might convert from decimal to binary and back to decimal, or from binary to decimal and back to binary. It is important to notice that because the sets of numbers are different, conversions are in general inexact. If done correctly, converting a number from one set to a number in the other set results in choosing one of the two neighboring numbers from the second set (which one depends on rounding).

All binary numbers can be represented exactly in decimal, but usually this requires unreasonably many digits to do so. What really matters is how many decimal digits are needed, to insure no loss in converting from binary to decimal and back to binary. Most decimal numbers cannot be represented exactly in binary (because decimal fractions include a factor of 5, which requires infinitely repeating binary digits). For example, run the following Fortran program:

        REAL Y, Z
        Y = 838861.2
        Z = 1.3
        WRITE(*,40) Y
  40    FORMAT("y: ",1PE18.11)
        WRITE(*,50) Z
  50    FORMAT("z: ",1PE18.11)

The output should resemble:

  y:  8.38861187500E+05
  z:  1.29999995232E+00

The difference between the value 8.388612 x 10^5 assigned to y and the value printed out is 0.0125, which is seven decimal orders of magnitude smaller than y. So the accuracy of representing y in IEEE single format is about 6 to 7 significant digits, or y has about 6 significant digits. Similarly, the difference between the value 1.3 assigned to z and the value printed out is 0.00000004768, which is eight decimal orders of magnitude smaller than z. The accuracy of representing z in IEEE single format is about 7 to 8 significant digits, or z has about 7 significant digits.

See Appendix F of http://docs.sun.com/source/806-3568/ncg_references.html for references on base conversion. They say that particularly good references are Coonen's thesis and Sterbenz's book.

Underflow

Underflow occurs, roughly speaking, when the result of an arithmetic operation is so small that it cannot be stored in its intended destination format without suffering a rounding error that is larger than usual; in other words, when the result is smaller than the smallest normal number.

Underflow Thresholds in Each Precision

  single                   smallest normal number     1.175 494 35 e-38
                           largest subnormal number   1.175 494 21 e-38
  double                   smallest normal number     2.225 073 858 507 201 4 e-308
                           largest subnormal number   2.225 073 858 507 200 9 e-308
  double extended (x86)    smallest normal number     3.362 103 143 112 093 506 26 e-4932
                           largest subnormal number   3.362 103 143 112 093 505 90 e-4932
  double extended (SPARC)  smallest normal number     3.362 103 143 112 093 506 262 677 817 321 752 6 e-4932
                           largest subnormal number   3.362 103 143 112 093 506 262 677 817 321 752 0 e-4932

The positive subnormal numbers are those numbers between the smallest normal number and zero.
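As a quick illustration (my own sketch, not from the IEEE text): halving the smallest positive normal float lands squarely in the subnormal range.

  #include <stdio.h>
  #include <float.h>

  /* A minimal sketch of a subnormal: half the smallest normal float
     is representable only as a subnormal number. */
  int main(void) {
      float min_normal = FLT_MIN;        /* 1.17549435e-38           */
      float half       = min_normal / 2; /* subnormal, ~5.877472e-39 */
      printf("min normal = %g\n", min_normal);
      printf("half       = %g  (subnormal, not zero)\n", half);
      printf("hex        = %a\n", half); /* prints 0x1p-127           */
      return 0;
  }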
Subtracting two (positive) tiny numbers that are near the smallest normal number might produce a subnormal number. Or, dividing the smallest positive normal number by two produces a subnormal result, as the sketch above shows. The presence of subnormal numbers provides greater precision to floating-point calculations that involve small numbers, although the subnormal numbers themselves have fewer bits of precision than normal numbers.

Gradual underflow produces subnormal numbers (rather than returning the answer zero) when the mathematically correct result has magnitude less than the smallest positive normal number. There are several other ways to deal with such underflow. One way, common in the past, was to flush those results to zero. This method is known as Store 0 and was the default on most mainframes before the advent of the IEEE standard.

The mathematicians and computer designers who drafted IEEE Standard 754 considered several alternatives, while balancing the desire for a mathematically robust solution with the need to create a standard that could be implemented efficiently.

How Does IEEE Arithmetic Treat Underflow?

IEEE Standard 754 requires gradual underflow. This method requires defining two representations for stored values, normal and subnormal. Recall that the IEEE value for a normal floating-point number is:

  (-1)^s x 2^(e - bias) x 1.f

where s is the sign bit, e is the biased exponent, and f is the fraction. Only s, e, and f need to be stored to fully specify the number. Because the leading bit of the significand is 1 for normal numbers, it need not be stored (but may be). The smallest positive normal number that can be stored, then, has the negative exponent of greatest magnitude and a fraction of all zeros. Even smaller numbers can be accommodated by considering the leading bit to be zero rather than one. In the double-precision format, this effectively extends the minimum exponent from 10^-308 to 10^-324, because the fraction part is 52 bits long (roughly 16 decimal digits). These are the subnormal numbers; returning a subnormal number (rather than flushing an underflowed result to zero) is gradual underflow.

Clearly, the smaller a subnormal number, the fewer nonzero bits in its fraction; computations producing subnormal results do not enjoy the same bounds on relative roundoff error as computations on normal operands. However, the key fact is:

Gradual underflow implies that underflowed results never suffer a loss of accuracy any greater than that which results from ordinary roundoff error. Addition, subtraction, comparison, and remainder are always exact when the result is very small.

Recall that the IEEE value for a subnormal floating-point number is:

  (-1)^s x 2^(-bias + 1) x 0.f

where s is the sign bit, the biased exponent e is zero, and f is the fraction. Note that the implicit power-of-two bias is one greater than the bias in the normal format, and the leading bit of the fraction is zero.

Gradual underflow allows you to extend the lower range of representable numbers. It is not smallness that renders a value questionable, but its associated error. Algorithms exploiting subnormal numbers have smaller error bounds than other systems. The next section provides some mathematical justification for gradual underflow.

The purpose of subnormal numbers is not to avoid underflow/overflow entirely, as some other arithmetic models do. Rather, subnormal numbers eliminate underflow as a cause for concern for a variety of computations (typically, multiply followed by add).
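Here is a small sketch of that "exact when tiny" property (my illustration): subtracting two nearby numbers near the smallest normal gives the exact subnormal answer instead of zero.

  #include <stdio.h>
  #include <float.h>

  /* With gradual underflow, the difference of two nearby tiny numbers
     is exact (a subnormal), so x != y implies x - y != 0.  On hardware
     with flush-to-zero enabled, d would instead print as 0. */
  int main(void) {
      float x = 1.5f * FLT_MIN;
      float y = FLT_MIN;
      float d = x - y;    /* exactly 0.5 * FLT_MIN, a subnormal */
      printf("x - y = %g  (nonzero, and exact)\n", d);
      return 0;
  }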
For a more detailed discussion, see Underflow and the Reliability of Numerical Software by James Demmel, and Combatting the Effects of Underflow and Overflow in Determining Real Roots of Polynomials by S. Linnainmaa.

The presence of subnormal numbers in the arithmetic means that untrapped underflow (which implies loss of accuracy) cannot occur on addition or subtraction. If x and y are within a factor of two, then x - y is error-free. This is critical to a number of algorithms that effectively increase the working precision at critical places in algorithms.

In addition, gradual underflow means that errors due to underflow are no worse than usual roundoff error. This is a much stronger statement than can be made about any other method of handling underflow, and this fact is one of the best justifications for gradual underflow.

Most of the time, floating-point results are rounded:

  computed result = true result + roundoff

How large can the roundoff be? One convenient measure of its size is called a unit in the last place, abbreviated ulp. The least significant bit of the fraction of a floating-point number is its last place. The value represented by this bit (e.g., the absolute difference between the two numbers whose representations are identical except for this bit) is a unit in the last place of that number. If the true result is rounded to the nearest representable number, then clearly the roundoff error is no larger than half a unit in the last place of the computed result. In other words, in IEEE arithmetic with rounding mode to nearest,

  0 <= |roundoff| <= 1/2 ulp of the computed result.

Note that an ulp is a relative quantity. An ulp of a very large number is itself very large, while an ulp of a tiny number is itself tiny. This relationship can be made explicit by expressing an ulp as a function: ulp(x) denotes a unit in the last place of the floating-point number x. Moreover, an ulp of a floating-point number depends on the floating-point precision. For example, this shows the values of ulp(1) in each of the four floating-point formats described above:

ulp(1) in Four Different Precisions

  single                   ulp(1) = 2^-23  ~ 1.192093e-07
  double                   ulp(1) = 2^-52  ~ 2.220446e-16
  double extended (x86)    ulp(1) = 2^-63  ~ 1.084202e-19
  double extended (SPARC)  ulp(1) = 2^-112 ~ 1.925930e-34

Recall that only a finite set of numbers can be exactly represented in any computer arithmetic. As the magnitudes of numbers get smaller and approach zero, the gap between neighboring representable numbers narrows. Conversely, as the magnitude of numbers gets larger, the gap between neighboring representable numbers widens. For example, imagine you are using a binary arithmetic that has only 3 bits of precision. Then, between any two powers of 2, there are 2^3 = 8 representable numbers:

[Figure: number line for a 3-bit-precision arithmetic, showing 8 representable numbers in each binade; the gap between numbers doubles from one exponent to the next.]

In the IEEE single format, the difference in magnitude between the two smallest positive subnormal numbers is approximately 10^-45, whereas the difference in magnitude between the two largest finite numbers is approximately 10^31! In the sketch and table below, nextafter(x, +inf) denotes the next representable number after x as you move towards +inf.
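A few entries of the gap table can be reproduced directly with the C99 nextafterf() (a sketch of mine, for the single format):

  #include <stdio.h>
  #include <math.h>

  /* Print the gap from x to the next representable single-format number,
     reproducing several rows of the table below. */
  int main(void) {
      float xs[] = {0.0f, 1.0f, 2.0f, 16.0f, 128.0f, 1.0e20f};
      for (int i = 0; i < 6; i++) {
          float x  = xs[i];
          float nx = nextafterf(x, INFINITY);
          printf("x = %-14.8g next = %-14.8g gap = %g\n",
                 x, nx, (double)(nx - x));
      }
      return 0;
  }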
Gaps Between Representable Single-Format Floating-Point Numbers

  x               nextafter(x, +inf)   Gap
  0.0             1.4012985e-45        1.4012985e-45
  1.1754944e-38   1.1754945e-38        1.4012985e-45
  1.0             1.0000001            1.1920929e-07
  2.0             2.0000002            2.3841858e-07
  16.000000       16.000002            1.9073486e-06
  128.00000       128.00002            1.5258789e-05
  1.0000000e+20   1.0000001e+20        8.7960930e+12
  9.9999997e+37   1.0000001e+38        1.0141205e+31

Any conventional set of representable floating-point numbers has the property that the worst effect of one inexact result is to introduce an error no worse than the distance to one of the representable neighbors of the computed result. When subnormal numbers are added to the representable set and gradual underflow is implemented, the worst effect of one inexact or underflowed result is to introduce an error no greater than the distance to one of the representable neighbors of the computed result.

In particular, in the region between zero and the smallest normal number, the distance between any two neighboring numbers equals the distance between zero and the smallest subnormal number. Subnormal numbers eliminate the possibility of introducing a roundoff error that is greater than the distance to the nearest representable number.

Because roundoff error is less than the distance to any of the representable neighbors of the true result, many important properties of a robust arithmetic environment hold, including these:

  - x != y  <=>  x - y != 0
  - (x - y) + y ~ x, to within a rounding error in the larger of x and y
  - 1/(1/x) ~ x, when x is a normalized number, implying 1/x != 0

An old-fashioned underflow scheme is Store 0, which flushes underflow results to zero. Store 0 violates the first and second properties whenever x - y underflows. Also, Store 0 violates the third property whenever 1/x underflows.

Let LAMBDA represent the smallest positive normalized number, which is also known as the underflow threshold. Then the error properties of gradual underflow and Store 0 can be compared in terms of LAMBDA:

  gradual underflow:  |error| < one ulp of LAMBDA
  Store 0:            |error| can be nearly as large as LAMBDA

Even in single precision, the round-off error is millions of times worse with Store 0 than with gradual underflow.

Two Examples of Gradual Underflow Versus Store 0

The following are two well-known mathematical examples. The first example computes an inner product:

  sum = 0;
  for (i = 0; i < n; i++) {
      sum = sum + a[i] * y[i];
  }

With gradual underflow, the result is as accurate as roundoff allows. In Store 0, a small but nonzero sum could be delivered that looks plausible but is wrong in nearly every digit. To avoid these sorts of problems, clever programmers must scale their calculations, which is only possible if they can anticipate where underflow might occur.

The second example, deriving a complex quotient, is not amenable to scaling:

  a + ib = (p + iq) / (r + is)
         = [ (p + q(s/r)) + i (q - p(s/r)) ] / ( r + s(s/r) ),    assuming |s/r| <= 1
           (otherwise scale by r/s instead of s/r).

It can be shown that, despite roundoff, (1) the computed complex result differs from the exact result by no more than what would have been the exact result if p + iq and r + is each had been perturbed by no more than a few ulps, and (2) this error analysis holds in the face of underflows, except that when both a and b underflow, the error is bounded by a few ulps of |a + ib|. Neither conclusion is true when underflows are flushed to zero.

This algorithm for computing a complex quotient is robust, and amenable to error analysis, in the presence of gradual underflow. A C sketch of it follows.
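Here is the scaled quotient above written out in C (my sketch; this scaling is commonly attributed to Smith, but the document does not name it):

  #include <stdio.h>
  #include <math.h>

  /* Scaled complex quotient (p + iq)/(r + is): scaling by s/r (or r/s)
     avoids overflow and most harmful underflow in the intermediates. */
  static void cdiv(double p, double q, double r, double s,
                   double *a, double *b) {
      if (fabs(r) >= fabs(s)) {
          double t = s / r, d = r + s * t;
          *a = (p + q * t) / d;
          *b = (q - p * t) / d;
      } else {
          double t = r / s, d = r * t + s;
          *a = (p * t + q) / d;
          *b = (q * t - p) / d;
      }
  }

  int main(void) {
      double a, b;
      cdiv(1.0, 2.0, 3.0, 4.0, &a, &b);  /* (1+2i)/(3+4i) = 0.44 + 0.08i */
      printf("%g + %gi\n", a, b);
      return 0;
  }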
A similarly robust, easily analyzed, and efficient algorithm for computing the complex quotient in the face of Store 0 does not exist. In Store 0, the burden of worrying about low-level, complicated details shifts from the implementer of the floating-point environment to its users.

The class of problems that succeed in the presence of gradual underflow, but fail with Store 0, is larger than the fans of Store 0 may realize. Many frequently used numerical techniques fall in this class:

  - Linear equation solving
  - Polynomial equation solving
  - Numerical integration
  - Convergence acceleration
  - Complex division

Does Underflow Matter?

In the absence of gradual underflow, user programs need to be sensitive to the implicit inaccuracy threshold. For example, in single precision, if underflow occurs in some parts of a calculation, and Store 0 is used to replace underflowed results with 0, then accuracy can be guaranteed only to around 10^-31, not 10^-38, the usual lower range for single-precision exponents. This means that programmers need to implement their own method of detecting when they are approaching this inaccuracy threshold, or else abandon the quest for a robust, stable implementation of their algorithm.

Some algorithms can be scaled so that computations don't take place in the constricted area near zero. However, scaling the algorithm and detecting the inaccuracy threshold can be difficult and time-consuming for each numerical program.

Fourier Transforms and Digital Signal Processing

We assume the reader is familiar with basic sampling and Fourier Transform principles. In particular, you must be familiar with decomposing a function into an orthonormal basis of functions. We describe important (often overlooked) properties of Discrete Fourier Transforms. We start with the most general (and simplest) case, then proceed through more specialized cases. Topics:

  - Complex sequences, and complex Fourier Transform (it's actually easier to start with the complex case, and specialize to real numbers later)
  - Sampling and the Model of Digitization
  - Even number of points vs. odd number of points
  - Basis Functions and Orthogonality
  - Real sequences: even and odd # points
  - Normalization and Parseval's Theorem
  - Continuous vs. discrete time and frequency; finite vs. infinite time and frequency
  - Non-uniformly spaced samples

This section assumes you are familiar with complex arithmetic and exponentials. Understanding phasors is very helpful, but not essential (see Funky Electromagnetic Concepts for a discussion of phasors).

Brief Definitions:

Fourier Series represents a periodic continuous function as an infinite sum of sinusoids at discrete frequencies:

  s(t) = SUM_{k = -inf}^{+inf} S_k e^{i 2 pi k f_1 t}

where the S_k are complex (phasors), t = time, f_1 = 1/period (in cycle/s or Hz), and omega_1 = 2 pi f_1 (in rad/s). f_1 = 1/period, the lowest nonzero frequency, is called the fundamental frequency. f_0 = 0, always.

Fourier Transform (FT) represents a continuous function as an integral of sinusoids over continuous frequencies:

  s(t) = INT_{-inf}^{+inf} S(f) e^{i 2 pi f t} df = (1/2pi) INT_{-inf}^{+inf} S(omega) e^{i omega t} d omega

where S(omega) is complex. We do not discuss this here. The function s(t) is not periodic, so there is no fundamental frequency. S(omega) is a phasor-valued function of angular frequency.
Discrete Fourier Transform (DFT) represents a finite sequence of numbers as a finite sum of sinusoids:

  s_j = SUM_{k=0}^{n-1} S_k e^{i 2 pi (k/n) j}

where the S_k are complex (phasors), j = 0, ..., n-1 is the sample index, f_1 = 1/period (in cycle/s), and omega_1 = 2 pi f_1 (in rad/s). The sequence s_j may be thought of as either periodic, or undefined outside the sampling interval. As in the Fourier Series, the fundamental frequency is 1/period, or equivalently 1/(sampled interval), and f_0 = 0, always. [Since a DFT essentially treats the input as periodic, it might be better called a Discrete Fourier Series (rather than Transform), but Discrete Fourier Transform is completely standard.]

Fast Fourier Transform (FFT) is an algorithm for implementing special cases of DFT.

Inverse Discrete Fourier Transform (IDFT) gives the sequence of numbers s_j from the DFT components. The general digital Fourier Transform is a Discrete Fourier Transform (DFT). An FFT is an algorithm for special cases of DFT.

Model of Digitization

All realistic systems which digitize analog signals must comprise at least the following components:

[Figure: analog signal -> anti-alias Low Pass Filter (LPF) -> filtered analog signal -> Analog to Digital Converter (driven by a sample clock at f_samp) -> digital samples s_j, j = 0, 1, 2, ... Minimum components of a Digital Signal Processing system, with uniformly spaced samples.]

Complex Sequences and Complex Fourier Transform

It's actually easier to start with the complex case, and specialize to real numbers later. Given a sequence of n complex numbers s_j, we can write the sequence as a sum of sinusoids, i.e. complex exponentials:

Inverse Discrete Fourier Transform:

  s_j = SUM_{k=0}^{n-1} S_k e^{i 2 pi (k/n) j}

where j = 0, ..., n-1 is the sample index, k/n = the frequency of the kth component (in cycle/sample), and S_k = the complex frequency component (phasor).

Note that there are n original complex numbers, and n complex frequency components, so no information is lost. The transform is exact, unique, and reversible. (In other words, this is not a fit.) The above equation forces all normalization conventions. We use the simple scheme wherein a function equals the sum of its components (with no factors of 2 pi or anything else).

Often, the index j is a measure of time or distance, and the sequence comprises samples of a signal taken at equal intervals. Without loss of generality, we will refer to j as a measure of time, but it could be anything. Note that the equation above actually defines the Inverse Discrete Fourier Transform (IDFT), because it gives the original sequence from the Fourier components. [Mathematicians often reverse the definitions of DFT and IDFT, by putting a minus sign in the exponent of the IDFT equation above. Engineers and physicists usually use the convention given here.]
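Here is the IDFT convention above in a few lines of C (my own sketch, using the C99 complex type):

  #include <stdio.h>
  #include <complex.h>
  #include <math.h>

  #define N  8
  #define PI 3.14159265358979323846

  /* Synthesize s_j = sum_k S_k exp(+i 2 pi k j / N).  The conjugate pair
     S_1 = S_{N-1} = 0.5 produces the real signal cos(2 pi j / N). */
  int main(void) {
      double complex S[N] = {0};
      S[1] = 0.5;  S[N-1] = 0.5;
      for (int j = 0; j < N; j++) {
          double complex s = 0;
          for (int k = 0; k < N; k++)
              s += S[k] * cexp(I * 2 * PI * k * j / N);
          printf("s[%d] = %+.4f %+.4fi\n", j, creal(s), cimag(s));
      }
      return 0;
  }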
Sampling

Each number in the sequence is called a sample, because such sequences are often generated by sampling a continuous signal s(t). For n samples, there are n frequency components, S_k, each at normalized frequency k/n (defined soon):

[Figure: n = 10 complex samples s_j, j = 0 ... 9, over the sample interval (aka signal period), and their complex frequency components S_k, k = 0 ... 9, with basis sinusoids shown for k = 0, 1, 2, and 9; the fundamental frequency is marked. For simplicity in this diagram, the samples, sinusoids, and component amplitudes are shown as real, but in general, they are all complex valued.]

Note that there are a full n sample times in the sample interval (aka signal period), not (n - 1). The above representation is used by many DFT functions in computer libraries. Also, there is no need for any other frequencies, because k = 10 has exactly the same values at all the sample points as k = 0. If the samples are from a continuous signal that had a frequency component at k = 10, then that component will be aliased down to k = 0, and added to the actual k = 0 component. It is forever lost, and cannot be recovered from the samples, nor distinguished from the k = 0 (DC) component. The same aliasing occurs for any two frequencies k and k + n.

The above definition is the only correct meaning for aliasing. Many (most?) people misuse this word to mean other things (e.g., harmonics or sidebands).

To avoid a dependence on n, we usually label the frequencies as fractions. For n samples, there are n frequencies, measured in units of cycles/sample, and running from f = 0 to f = (1 - 1/n) cycles/sample. The n normalized frequencies are

  f_k = k/n,  k = 0, 1, ..., n-1;  that is,  f in { 0, 1/n, 2/n, 3/n, ..., (n-1)/n }.

There is no f = 1, just as there is no k = n, because f = 1 is an alias of f = 0. The Fourier components are written as S(f), a function of f, so we re-label the above diagram with normalized frequencies:

[Figure: the same n = 10 samples and spectrum, relabeled with normalized frequencies f = 0, .1, .2, ..., .9.]

Normalized frequencies are equivalent to measuring time in units of the sample time, and frequencies in cycles/sample.

For theoretical analysis, it is often more convenient to have the frequency range be -0.5 < f <= 0.5, instead of 0 <= f < 1. Since any frequency f is equivalent to (an alias of) f - 1, we can simply move the frequencies in the range 0.5 < f < 1 down to -0.5 < f < 0:

[Figure: n = 10 spectrum S(f) shown both on f = 0, .1, ..., .9 and shifted to f = -.4, -.3, ..., .5.]

For an even number of samples (and frequencies, diagram above), the resulting frequency set is necessarily asymmetric, because there is no f = -0.5, but there is an f = +0.5. For an odd number of points (below), the frequency set is symmetric, and there is neither f = -0.5 nor f = +0.5:

[Figure: n = 5 spectrum S(f) shown both on f = 0, .2, .4, .6, .8 and shifted to f = -.4, -.2, 0, .2, .4.]

Some references say that sampling a signal is like setting it to zero everywhere except the sample times. It is not. This is a common misconception. It is well refuted by [Oppenheim and Schafer, and dozens of other signal processing experts]. It is easy to show that that claim is not true, in several ways. One simple way is this: For a band-limited signal, I can reconstruct the signal between the sample times from just the samples alone.
That makes no sense if sampling amounted to zeroing the signal between samples, and then transforming.

Basis Functions and Orthogonality

The basis functions of the DFT are the discrete-time exponentials, which are equivalent to sines and cosines:

  b_k(j) = e^{i 2 pi (k/n) j},   j = 0, 1, ..., n-1;   k = 0, 1, ..., n-1
  (or equivalently:  n even: k = -n/2 + 1, ..., -1, 0, 1, ..., n/2;
                     n odd:  k = -int(n/2), ..., -1, 0, 1, ..., +int(n/2))

Note that: The DFT and FT are simply decompositions of functions into basis functions, just like in ordinary quantum mechanics. The transform equations are just the inner products of the given functions with the basis functions.

The basis functions are orthogonal, normalized (in our convention) such that <b_k, b_m> = n delta_km. Proof:

  <b_k, b_m> = SUM_{j=0}^{n-1} b_k*(j) b_m(j)
             = SUM_{j=0}^{n-1} e^{-i 2 pi (k/n) j} e^{i 2 pi (m/n) j}
             = SUM_{j=0}^{n-1} e^{i 2 pi ((m-k)/n) j}.

  For k = m, every term is 1, so <b_k, b_k> = n.
  For k != m, use SUM_{j=0}^{n-1} r^j = (1 - r^n)/(1 - r), with r = e^{i 2 pi (m-k)/n}.
  Then r^n = e^{i 2 pi (m-k)} = 1, so the sum is 0.  Hence <b_k, b_m> = n delta_km.

The orthogonality condition allows us to immediately write the DFT from the definition of the IDFT above:

Discrete Fourier Transform:

  S_k = (1/n) SUM_{j=0}^{n-1} s_j e^{-i 2 pi (k/n) j}

where S_k = the kth complex frequency component, and k/n = the normalized frequency of the kth component.

Note that there are 2n independent real numbers in the complex sequence s_j, and there are also 2n independent real numbers in the complex spectrum S_k, as there must be (same number of degrees of freedom). A short numerical sketch of this forward DFT follows.
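The sketch below (mine) computes the forward DFT in this convention, and also checks numerically the Parseval relation derived in the Normalization section further on, SUM_j |s_j|^2 = n SUM_k |S_k|^2:

  #include <stdio.h>
  #include <complex.h>
  #include <math.h>

  #define N  8
  #define PI 3.14159265358979323846

  /* Forward DFT:  S_k = (1/N) sum_j s_j exp(-i 2 pi k j / N),
     with a check of Parseval: sum_j |s_j|^2 = N sum_k |S_k|^2. */
  int main(void) {
      double complex s[N], S[N];
      double Es = 0, ES = 0;
      for (int j = 0; j < N; j++) {             /* an arbitrary test signal */
          s[j] = cos(2 * PI * j / N) + 0.5 * I * j;
          Es += creal(s[j] * conj(s[j]));
      }
      for (int k = 0; k < N; k++) {
          S[k] = 0;
          for (int j = 0; j < N; j++)
              S[k] += s[j] * cexp(-I * 2 * PI * k * j / N);
          S[k] /= N;
          ES += creal(S[k] * conj(S[k]));
      }
      printf("sum|s|^2 = %.6f   N*sum|S|^2 = %.6f\n", Es, N * ES);
      return 0;
  }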
Real Sequences

An important special case of sequence s_j is a real-valued sequence, which is a special case of a complex-valued sequence. For

  s_j = SUM_{k ~ -n/2}^{n/2} S_k e^{i 2 pi (k/n) j}

to be real, the S_k must occur in complex conjugate pairs, i.e., the spectrum S_k must be conjugate symmetric:

  S_{-k} = S_k*   (for s_j real, and |k| < n/2).

This implies that S_0 is always real, which is also clear since S_0 is just the average of the real sequence. Note that there is no k = -n/2. There are n independent real numbers in the real sequence s_j. We now consider two subcases: n is even, and n is odd.

For n even,

  s_j = SUM_{k = -n/2+1}^{n/2} S_k e^{i 2 pi (k/n) j}   (n even),

and we use the asymmetric frequency range -0.5 < f <= 0.5, which corresponds to -n/2 < k <= n/2 (below left). For an even number of points, since there are no conjugates to k = 0 or k = n/2, we must have that S_0 and S_{n/2} are real. All other S_k are conjugate symmetric.

[Figure: (Left) n = 10, s_j real: spectrum S(f) at f = -.4, -.3, ..., .5; S_0 and S_5 are real, the rest conjugate symmetric. (Right) n = 9, s_j real: spectrum S(f) at f = -.44, -.33, ..., .44; S_0 is real, the rest conjugate symmetric.]

Therefore, in the spectrum, there are (n/2 - 1) independent complex frequency components, plus two real components, totaling n independent real numbers in the spectrum, matching the n independent real numbers in the sequence s_j. In terms of sine and cosine components (rather than the complex components), there are (n/2 + 1) independent cosine components, and (n/2 - 1) independent sine components.

For an odd number of points (above right),

  s_j = SUM_{k = -(n-1)/2}^{(n-1)/2} S_k e^{i 2 pi (k/n) j}   (n odd),

there is no k = n/2 component, and again there is no conjugate to k = 0. Therefore, we must have that S_0 is real. All other S_k are conjugate symmetric. Therefore, in the spectrum, there are (n - 1)/2 independent complex frequency components, plus one real component (S_0), totaling n independent real numbers in the spectrum, matching the n independent real numbers in the sequence s_j. In terms of sine and cosine components (rather than the complex components), there are (n + 1)/2 independent cosine components, and (n - 1)/2 independent sine components.

Normalization and Parseval's Theorem

When the original sequence represents something akin to samples of voltage over time, we can speak of energy in the signal. The energy of the signal is the sum of the energies of each sample:

  E_j = G |s_j|^2,  where G = "conductance", chosen to be 1;
  E = SUM_{j=0}^{n-1} E_j = SUM_{j=0}^{n-1} |s_j|^2.

The energies of the sinusoidal components in the DFT add as well, because the sinusoids are orthogonal (show why??):

  E = SUM_{k=0}^{n-1} E_k.

Parseval's Theorem equates the energy of the original sequence to the energy of the sinusoidal components, by providing the constant of proportionality. First, we evaluate the energy of a single sinusoid:

  E_k = SUM_{j=0}^{n-1} |S_k e^{i 2 pi (k/n) j}|^2 = n |S_k|^2,

so that

  E = SUM_{k=0}^{n-1} E_k = n SUM_{k=0}^{n-1} |S_k|^2 = SUM_{j=0}^{n-1} |s_j|^2.

Besides our normalization choice above, there are several other choices in common use. In general, between the DFT, IDFT, and Parseval's Theorem, you can choose a normalization for one, which then fixes the normalization for the other two. For example, some people choose to make the DFT and IDFT more symmetric by defining:

  IDFT:  s_j = (1/sqrt(n)) SUM_{k=0}^{n-1} S_k e^{+i 2 pi (k/n) j}
  DFT:   S_k = (1/sqrt(n)) SUM_{j=0}^{n-1} s_j e^{-i 2 pi (k/n) j}     (alternate normalizations)

Continuous and Discrete, Finite and Infinite

TBS: Finite length implies discrete frequencies; infinite length implies continuous frequencies. Discrete time implies finite frequencies; continuous time implies infinite frequencies. Finite length is equivalent to periodic.

White Noise and Correlation

White noise has, on average, all frequency components equal (named by incorrect analogy with white light); samples of white noise are uncorrelated. Non-white noise has unequal frequency components (on average); samples of non-white noise are necessarily correlated. (Show this??)

Why Oversampling Does Not Improve Signal-to-Noise Ratio

Sometimes it might seem that if I oversample a signal (i.e., sample above the Nyquist rate), the noise power stays constant (= noise variance is constant), but I have more samples of the signal which I can average. Therefore, by oversampling, I should be able to improve my SNR by averaging out more noise, but keeping all the signal. This reasoning is wrong, of course, because it implies that by sampling arbitrarily fast, I can filter out arbitrarily large amounts of noise, and ultimately recover anything from almost nothing. So what's wrong with this reasoning?

Let's take an example. Suppose I sample a signal at 100 samples/sec, with white noise. Then my Nyquist frequency is 50 Hz, and I must use a 50 Hz Low Pass Filter (LPF) for anti-aliasing before sampling. This LPF leaves me with 50 Hz worth of noise power (= variance).
Now suppose I double the sampling frequency to 200 samples/sec. To maintain white noise, I must open my anti-alias filter cutoff to the new Nyquist frequency, 100 Hz. This doubles my noise power. Now I have twice as many samples of the signal, with twice as much noise power. I can run a LPF to reduce the noise (say, averaging adjacent samples). At best, I cut the noise by half, reducing it back to its 100 sample/sec value, and reducing my sample rate by 2. Hence, I'm right back where I was when I just sampled at 100 samples/sec in the first place.

[Figure: three discrete noise spectra. (a) f_samp = 100 samples/sec: white noise up to the 50 Hz Nyquist frequency. (b) f_samp = 200 samples/sec: white noise up to the 100 Hz Nyquist frequency. (c) f_samp = 200 samples/sec with the anti-alias LPF still at 50 Hz: correlated (non-white) noise occupying only half of the 100 Hz Nyquist band.]

But wait! Why open my anti-alias LPF? Let's try keeping the LPF at 50 Hz, and sampling at 200 samples/sec. But then, my noise occupies only half of the sampling bandwidth: it occupies only 50 Hz of the 100 Hz Nyquist band. Hence, the noise is not white, which means adjacent noise samples are correlated! Hence, when I average adjacent samples, the noise variance does not decrease by a factor of 2. The factor of 2 gain only occurs with uncorrelated noise. In the end, oversampling buys me nothing.

Filters

TBS?? FIR vs. IIR. Because the data set can be any size, and arbitrarily large: The transfer function of an FIR or IIR filter is continuous.

Consider some filter. We must carefully distinguish between the filter in general, which can be applied to any data set (with any n), and the filter as applied to one particular data set. Any given data set has only discrete frequencies; if we apply the filter to the data set, the data set's frequencies will be multiplied by the filter's transfer function at those frequencies. But we can apply any size data set to the filter, with frequency components, f = k/n, anywhere in the Nyquist interval. For every data set, the filter has a transfer function at all its frequencies. Therefore, the filter in general has a continuous transfer function.

[Figure: three plots of the same continuous transfer function H(f). Data sets with different n sample the transfer function H(f) at different points. H(f), in general, is a continuous curve, defined at all points in the Nyquist interval f in [0, 1) or (-0.5, +0.5].]

What Happens to a Sine Wave Deferred?

... Maybe it just sags, like a heavy load. Or does it explode? [Sincere apologies to Langston Hughes.]

You may ask, "The DFT has only a discrete set of frequencies. Can I use a DFT to estimate the frequency of an unknown signal? What happens if I sample a sinusoid of a frequency in between the allowed DFT frequencies? What is its spectrum?" Good questions. We now demonstrate.

We choose n = 40 samples, which means the allowed frequencies are k(1/n), k = -19, ..., 0, ..., 20, measured in cycles per sample (or equivalently, in units of the sampling rate, f_samp). The frequency spacing is 1/n = 0.025 cycle/sample. No other frequencies exist in the DFT (we say no others are allowed). First, we show an allowed-frequency sinusoid of f = 10/n = 0.25 cycle/sample (k = 10). Since the signal is real, the spectrum is conjugate symmetric (S_{-k} = S_k*); therefore, I show only the positive frequencies, and double their magnitudes:

  s_j = cos(2 pi (k/n) j),   f = k/n cycle/sample.

A small numerical sketch of this experiment follows.
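This sketch (mine) computes the doubled positive-frequency DFT magnitudes for the allowed frequency f = 0.25 and the in-between frequency f = 0.2625 used below:

  #include <stdio.h>
  #include <complex.h>
  #include <math.h>

  #define N  40
  #define PI 3.14159265358979323846

  /* Magnitude spectrum of cos(2 pi f j), positive frequencies doubled.
     (Strictly, k = 0 and k = N/2 should not be doubled; it does not
     matter for these two signals, whose peaks lie elsewhere.) */
  static void spectrum(double f) {
      printf("f = %.4f:\n", f);
      for (int k = 0; k <= N / 2; k++) {
          double complex S = 0;
          for (int j = 0; j < N; j++)
              S += cos(2 * PI * f * j) * cexp(-I * 2 * PI * k * j / N);
          double mag = 2 * cabs(S) / N;
          if (mag > 1e-9) printf("  k=%2d  |S| = %.4f\n", k, mag);
      }
  }

  int main(void) {
      spectrum(0.25);     /* a single line at k = 10        */
      spectrum(0.2625);   /* leakage spread over all k      */
      return 0;
  }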
[Figure: (Left) A sampled sinusoid of f = 0.25, n = 40. (Right) As expected, its magnitude spectrum (DFT) has exactly one component, at f = 0.25.]

[Aside: Notice that when the sample points are connected by straight lines, the sinusoid doesn't look sinusoidal, but recall that connecting with straight lines is not the proper way to interpolate between samples.]

Now we go off-frequency by half the frequency spacing: f = 10.5/n = 0.2625 cycle/sample, halfway between two allowed frequencies:

[Figure: (Left) A sampled sinusoid of f = 0.2625, n = 40. (Right) Its magnitude spectrum (DFT) has components everywhere, but is peaked around f = 0.2625.]

Not too surprisingly, the components are peaked at the allowed frequencies closest to the sinusoid frequency, but there are also components at all other frequencies. This is the artifact of sampling a pure sinusoid of a non-allowed frequency for a finite time. Finally, instead of being half-way between allowed frequencies, suppose we're only 0.2 of the way, f = 10.2/n = 0.255 cycle/sample:

[Figure: (Left) A sampled sinusoid of f = 0.255, n = 40. (Right) Its magnitude spectrum (DFT) has components everywhere, is asymmetric, and is peaked at f = 0.25.]

These examples show that a DFT, with its fixed frequencies, can give only a rough estimate of an unknown sinusoid's frequency. The estimate gets worse if the unknown signal is not exactly a sinusoid, because that means it has an even smaller spectral peak, with more components spread around the spectrum. Other methods exist for estimating the frequency of an unknown signal, even one that is non-uniformly sampled in time. If the signal is fairly sinusoidal, one can correlate with a sinusoidal basis frequency, and numerically search for the frequency with maximum correlation. This avoids the discrete-frequency limitation of a DFT. Other methods usually require many periods of data, e.g. [Leahy, Ap J, 1983, vol 266, p160].

Nonuniform Sampling and Arbitrary Basis Functions

So far, we have used a signal sampled uniformly in time. We now show that one can find a Fourier transform of a signal with any set of n samples, uniform or not. This has many applications: some experiments (such as lunar laser ranging) cannot sample the signal uniformly for practical, economic, or political reasons. Magnetic Resonance Imaging (MRI) often uses non-uniform sampling to reduce imaging time, which can be an hour or more for a patient.

We write the required transform as a set of simultaneous equations, with t_j as the arbitrary sample times, and keeping (for now) the uniformly spaced frequencies f_k:

  s(t_j) = SUM_{k=0}^{n-1} S_k exp(i 2 pi f_k t_j),   j = 0, ..., n-1,

or, in matrix form, the column (s(t_0), ..., s(t_{n-1}))^T equals the n x n matrix with elements

  M_jk = exp(i 2 pi f_k t_j)

acting on the column of coefficients (S_0, ..., S_{n-1})^T. How can we find the required coefficients, S_k?
The exponential functions are no longer orthogonal over the sample times; they are only orthogonal over uniformly spaced samples. Nonetheless, we have n unknowns (S_0, ..., S_{n-1}), and n equations. So long as the basis functions are linearly independent over the sample times, we can (in principle) solve for the needed coefficients, S_k. We have now greatly expanded our ability to decompose arbitrary samples into basis functions:

We can decompose a signal over any set of sample times into any set of linearly independent (not necessarily orthogonal) basis functions.

Note that Parseval's theorem does not apply to the coefficients, since the basis functions (evaluated at the non-uniform sample points) are no longer orthogonal. Also, S_0 is no longer the average of the signal values, since the sinusoids may have nonzero average over the sample points.

There is one more subtlety: what is the fundamental frequency f_1? Equivalently, what is the signal period? The two are related, because f_1 = 1/period. There is no unique answer to this. However, since a finite signal transforms as if it is periodic, the period cannot be (t_{n-1} - t_0), since the first and last samples would then have to be identical. The period must be longer than that. A convenient choice is to simply mimic what happens when the samples are uniform. In that case,

  period = (n/(n-1)) (t_{n-1} - t_0),   f_1 = 1/period.

This choice for period reproduces the traditional DFT when the samples are uniform, and is usually adequate for non-uniform samples, as well.

Example: DFT of a real, non-uniformly sampled sequence: We can set up the matrix equation to be solved by recalling the frequency layout for even and odd n, and applying the above. We set t_0 = 0, and define n/2 = floor(n/2). For illustration of the last two columns, we take n odd:

  s(t_j) = SUM_{k=0}^{n/2} [ C_k cos(k omega t_j) + D_k sin(k omega t_j) ],   where omega = 2 pi / period,

or, in matrix form, row j of the matrix is

  ( 1.0   cos(omega t_j)   sin(omega t_j)   ...   cos((n/2) omega t_j)   sin((n/2) omega t_j) ),

acting on the column of coefficients (C_0, C_1, D_1, ..., C_{n/2}, D_{n/2})^T, which equals the column of samples (s(t_0), ..., s(t_{n-1}))^T. This gives us the sine and cosine components separately. For n even, the highest frequency component is k = n/2, or omega = 2 pi (k/n) = pi rad/sample, and the final column of sin() is not present.

Note that this is not a fit; it is an exact, reversible transformation. The matrix is the set of all the basis functions (across each row), evaluated at the sample points (down each column). The matrix has no summations in it, and depends on the sample points, but not on the sample values. A small numerical sketch of this construction follows.
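This sketch (mine) builds the basis matrix for 4 nonuniform real samples with the even-n column layout {1, cos(omega t), sin(omega t), cos(2 omega t)}, and solves it by naive Gaussian elimination (no pivoting, which is fine for this illustrative data):

  #include <stdio.h>
  #include <math.h>

  #define N  4
  #define PI 3.14159265358979323846

  int main(void) {
      double t[N] = {0.0, 0.9, 2.1, 3.3};        /* nonuniform times  */
      double s[N] = {1.0, -0.5, 0.25, 0.8};      /* sample values     */
      double period = (t[N-1] - t[0]) * N / (N - 1.0);
      double w = 2 * PI / period;                /* fundamental freq  */
      double A[N][N+1];                          /* augmented matrix  */

      for (int j = 0; j < N; j++) {              /* basis at each t_j */
          A[j][0] = 1.0;
          A[j][1] = cos(w * t[j]);
          A[j][2] = sin(w * t[j]);
          A[j][3] = cos(2 * w * t[j]);
          A[j][4] = s[j];
      }
      for (int c = 0; c < N; c++)                /* forward eliminate */
          for (int r = c + 1; r < N; r++) {
              double m = A[r][c] / A[c][c];
              for (int k = c; k <= N; k++) A[r][k] -= m * A[c][k];
          }
      double S[N];
      for (int r = N - 1; r >= 0; r--) {         /* back-substitute   */
          S[r] = A[r][N];
          for (int k = r + 1; k < N; k++) S[r] -= A[r][k] * S[k];
          S[r] /= A[r][r];
      }
      for (int k = 0; k < N; k++) printf("S[%d] = %g\n", k, S[k]);
      return 0;
  }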
Example: basis functions as powers of x: In the continuous world, a Taylor series is a decomposition of a function into powers of (x - a), which are a set of linearly independent (but not orthogonal) basis functions. Despite this lack of orthogonality, Taylor devised a clever way to evaluate the basis-function coefficients without solving simultaneous equations.

Example: sampled standard basis functions: We could choose a standard (continuous) mathematical basis set, such as Bessel functions, J_k(t). For n sample points, t_1, ..., t_n, the Bessel functions are linearly independent, and we can solve for the coefficients, A_k. We need a scale factor for the time (equivalent to 2 pi k/n in the Fourier transform). For example, we might use alpha = the (n-1)th zero of J_{n-1}(t). Then:

  s(t_j) = SUM_{k=0}^{n-1} A_k J_k(alpha t_j / t_n),   j = 1, ..., n.

We have n equations and n unknowns, A_0, ..., A_{n-1}, so we can solve for the A_k.

Old-fashioned FFT implementations required you to have N = a power of 2 number of samples (64, 1024, etc.). Modern FFT implementations are general to any number of samples, and use the prime decomposition of N to provide the fastest and most accurate DFT known. The worst case is when N is prime, and no FFT optimization is possible: the DFT is evaluated directly from the defining summations. But with modern computers, this is so fast that we don't care.

In the old days, if people had a non-power-of-2 number of data points, they used to pad their data, typically (and horribly) by just adding zeros to the end until they reached a power of 2. This introduced artifacts into the spectrum, which often obscured or destroyed the very information they sought [Ham p??]. With a modern FFT implementation, there is no need for it, anyway. If for some reason, you absolutely must constrain N to some preferred values, it is much better to throw away some data points than to add fake ones.

Two Dimensional Fourier Transforms

One dimensional Fourier transforms often have time or space as the independent variable. Two dimensional transforms almost always have space, say (x, y), as the independent variables. The most common 2D transform is of pictures. In the continuous world of light, lenses can physically project a Fourier transform of an image based on optics, with no computations. This allows for filtering the image with opaque masks, and re-transforming back to the original-but-filtered image, all at the speed of light with no computer. But digitized images store the image as pixels, each with some light intensity. These are computationally processed by computer.

Basis Functions

TBS. Not sines and cosines, or products of sines and cosines. Products of complex exponentials. Wave fronts at various angles, discrete k_x and k_y.

Note on Continuous Fourier Series and Uniform Convergence

The continuous Fourier Series is defined for a periodic signal s(t) over a continuous range of times, t in [0, T):
  s(t) = SUM_{k = -inf}^{+inf} S_k e^{i k omega_0 t},

where k omega_0 is the frequency of the kth component, and S_k is the complex frequency component. Note that the time interval is continuous, but the frequency components are discrete. In general, periodic signals lead to discrete frequency components.

The continuous Fourier Series is not always uniformly convergent. Therefore, the order of integrations and summations cannot always be interchanged. Non-uniform convergence is illustrated by the famous Gibbs phenomenon: when we transform a square wave to the frequency domain (aka Fourier space), and retain only a finite number of frequency components to then transform back to the time domain, the square wave comes back with overshoot: wiggles that are large near the discontinuities:

[Figure: Gibbs phenomenon. (Left) After losing high frequencies, the reconstructed square wave has overshoot and wiggles near each edge of the original square wave. (Right) Retaining more frequencies reduces wiggle time, but not amplitude.]

As we include more and more frequency components, the wiggles get narrower, but do not get lower in amplitude. This means that there are always some time points for which the inverse transform does not converge to the original square wave. Such wiggles are commonly observed in many electronic systems, which must necessarily filter out high frequency components above some cut-off frequency. However:

Continuous signals have Fourier Series that converge uniformly.

This applies to most physical phenomena, so interchanging integration and summation is valid [F&W p217+]. This is true even if the derivative of the signal is discontinuous.

Tensors, Without the Tension

Approach

We'll present tensors as follows:

  1. Two physical examples: magnetic susceptibility, and deformable solids
  2. A non-example: when is a matrix not a tensor?
  3. Forward looking definitions (don't get stuck on these)
  4. Review of vector spaces and notation (don't get stuck on this, either)
  5. A short, but at first unhelpful, definition (really, really don't get stuck on this)
  6. A discussion which clarifies the above definition
  7. Examples, including dot-products and cross-products as tensors
  8. Higher rank tensors
  9. Change of basis
  10. Non-orthonormal systems: contravariance and covariance
  11. Indefinite metrics of Special and General Relativity
  12. Mixed basis linear functions (transformation matrices, the Pauli vector)

Tensors are all about vectors. They let you do things with vectors you never thought possible. We define tensors in terms of what they do (their linearity properties), and then show that linearity implies the transformation properties. This gets most directly to the true importance of tensors. [Most references define tensors in terms of transformations, but then fail to point out the all-important linearity properties.] We also take a geometric approach, treating vectors and tensors as geometric objects that exist independently of their representation in any basis. Inevitably, though, there is a fair amount of unavoidable algebra.

Later, we introduce contravariance and covariance in terms of non-orthonormal coordinates, but first with a familiar positive-definite metric from classical mechanics. This makes for a more intuitive understanding of contra- and co-variance, before applying the concept to the more bizarre indefinite metrics of special and general relativity.

There is deliberate repetition of several points, because it usually takes me more than once to grok something. So I repeat: If you don't understand something, read it again once, then keep reading. Don't get stuck on one thing. Often, the following discussion will clarify an ambiguity.

Two Physical Examples

We start with two physical examples: magnetic susceptibility, and deformation of a solid. We start with matrix notation, because we assume it is familiar to you. Later we will see that matrix notation is not ideal for tensor algebra.

Magnetic Susceptibility

We assume you are familiar with susceptibility of magnetic materials: when placed in an H-field, magnetizable (susceptible) materials acquire a magnetization, which adds to the resulting B-field. In simple cases, the susceptibility chi is a scalar:

  M = chi H,   where M = the magnetization, chi = the susceptibility, and H = the applied magnetic field.

The susceptibility in this simple case is the same in any direction; i.e., the material is isotropic. However, there exist materials which are more magnetizable in some directions than others.
E.g., imagine a cubic lattice of axially-symmetric molecules which are more magnetizable along the molecular axis than perpendicular to it:

[Figure: M vs. H applied along each of x, y, z, with chi_xx = 2 along the more-magnetizable x (molecular) axis and chi_yy = chi_zz = 1 along the less-magnetizable y and z axes. Magnetization, M, as a function of external field, H, for a material with a tensor-valued susceptibility, chi. In each direction, the magnetization is proportional to the applied field, but is larger in the x-direction than y or z.]

In this example, for an arbitrary H-field, we have

  M = chi H,   M = (M_x, M_y, M_z),  H = (H_x, H_y, H_z),
                       ( 2 0 0 )
  where  chi_ij  =     ( 0 1 0 )
                       ( 0 0 1 )

Note that in general, M is not parallel to H (below, dropping the z axis for now):

[Figure: H and M = (2H_x, H_y) in the x-y plane: M need not be parallel to H for a material with a tensor-valued chi.]

But M is a linear function of H, which means:

  M(k1 H1 + k2 H2) = k1 M(H1) + k2 M(H2).

This linearity is reflected in the fact that matrix multiplication is linear:

  chi (k1 H1 + k2 H2) = k1 chi H1 + k2 chi H2 = k1 M(H1) + k2 M(H2).

The matrix notation might seem like overkill, since chi is diagonal, but it is only diagonal in this basis of x, y, and z. We'll see in a moment what happens when we change basis. First, let us understand what the matrix chi_ij really means. Recall the visualization of pre-multiplying a vector by a matrix: a matrix chi times a column vector H is a weighted sum of the columns of chi:

              ( chi_xx )       ( chi_xy )       ( chi_xz )
  chi H = H_x ( chi_yx ) + H_y ( chi_yy ) + H_z ( chi_yz )
              ( chi_zx )       ( chi_zy )       ( chi_zz )

We can think of the matrix chi as a set of 3 column vectors: the first is the magnetization vector for H = e_x; the 2nd column is M for H = e_y; the 3rd column is M for H = e_z. Since magnetization is linear in H, the magnetization for any H can be written as the weighted sum of the magnetizations for each of the basis vectors:

  M(H) = H_x M(e_x) + H_y M(e_y) + H_z M(e_z),   where e_x, e_y, e_z are the unit vectors in x, y, z.

This is just the matrix multiplication above: M = chi H. (We're writing all indexes as subscripts for now; later on we'll see that M, chi, and H should be indexed as M^i, chi^i_j, and H^i.) A small numerical sketch follows.
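This sketch (mine) applies chi = diag(2, 1, 1) to an H-field and shows that M is not parallel to H:

  #include <stdio.h>

  /* M = chi H:  M_i = chi_ij H_j, with the example susceptibility above. */
  int main(void) {
      double chi[3][3] = {{2, 0, 0}, {0, 1, 0}, {0, 0, 1}};
      double H[3] = {1.0, 1.0, 0.0};
      double M[3] = {0, 0, 0};
      for (int i = 0; i < 3; i++)
          for (int j = 0; j < 3; j++)
              M[i] += chi[i][j] * H[j];
      printf("H = (%g, %g, %g)\n", H[0], H[1], H[2]);
      printf("M = (%g, %g, %g)   (not parallel to H)\n", M[0], M[1], M[2]);
      return 0;
  }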
Now let's change bases from e_x, e_y, e_z to some e_1, e_2, e_3, defined below. We use a simple transformation, but the 1-2-3 basis is not orthonormal:

[Figure: old basis e_x, e_y, e_z and new basis e_1, e_2, e_3; e_1 and e_2 are in the x-y plane, but are neither orthogonal nor normal. For simplicity, we choose e_3 = e_z. Here, b and c are negative.]

To find the transformation equations to the new basis, we first write the old basis vectors in the new basis. We've chosen for simplicity a transformation in the x-y plane, with the z-axis unchanged:

  e_x = a e_1 + b e_2
  e_y = c e_1 + d e_2
  e_z = e_3

Now write a vector, v, in the old basis, and substitute out the old basis vectors for the new basis. We see that the new components are a linear combination of the old components:

  v = v_x e_x + v_y e_y + v_z e_z
    = v_x (a e_1 + b e_2) + v_y (c e_1 + d e_2) + v_z e_3
    = (a v_x + c v_y) e_1 + (b v_x + d v_y) e_2 + v_z e_3
    = v^1 e_1 + v^2 e_2 + v^3 e_3,

  where  v^1 = a v_x + c v_y,   v^2 = b v_x + d v_y,   v^3 = v_z.

Recall that matrix multiplication is defined to be the operation of linear transformation, so we can write this basis transformation in matrix form:

  ( v^1 )   ( a c 0 ) ( v_x )         ( a )       ( c )       ( 0 )
  ( v^2 ) = ( b d 0 ) ( v_y )  =  v_x ( b ) + v_y ( d ) + v_z ( 0 )
  ( v^3 )   ( 0 0 1 ) ( v_z )         ( 0 )       ( 0 )       ( 1 )

The columns of the transformation matrix are the old basis vectors written in the new basis. This is illustrated explicitly on the right hand side, which is just v_x e_x + v_y e_y + v_z e_z.

Finally, we look at how the susceptibility matrix chi_ij transforms to the new basis. We saw above that the columns of chi are the M vectors for H = each of the basis vectors. So right away, we must transform each column of chi with the transformation matrix above, to convert it to the new basis. Since matrix multiplication AB is distributive across the columns of B, we can write the transformation of all 3 columns in a single expression by pre-multiplying with the above transformation matrix:

                                  ( a c 0 ) ( 2 0 0 )   ( 2a c 0 )
  Step 1 of chi in new basis  =   ( b d 0 ) ( 0 1 0 ) = ( 2b d 0 )
                                  ( 0 0 1 ) ( 0 0 1 )   ( 0  0 1 )

But we're not done. This first step expressed the column vectors in the new basis, but the columns of the RHS (right hand side) are still the Ms for basis vectors e_x, e_y, e_z. Instead, we need the columns of chi_new to be the M vectors for e_1, e_2, e_3. Please don't get bogged down yet in the details, but we do this transformation similarly to how we transformed the column vectors. We transform the contributions to M due to e_x, e_y, e_z to that due to e_1 by writing e_1 in terms of e_x, e_y, e_z:

  e_1 = e e_x + f e_y   =>   M(e_1) = e M(e_x) + f M(e_y)

Similarly,

  e_2 = g e_x + h e_y   =>   M(e_2) = g M(e_x) + h M(e_y)
  e_3 = e_z             =>   M(e_3) = M(e_z)

Essentially, we need to transform among the columns, i.e. transform the rows of chi. These two transformations (once of the columns, and once of the rows) are the essence of a rank-2 tensor:

A tensor matrix (rank-2 tensor) has columns that are vectors, and simultaneously, its rows are also vectors. Therefore, transforming to a new basis requires two transformations: once for the rows, and once for the columns (in either order).

[Aside: The details (which you can skip at first): We just showed that we transform the rows using the inverse of our previous transformation. The reason for the inverse is related to the up/down indexes mentioned earlier; please be patient. In matrix notation, we write the row transformation as post-multiplying by the transpose of the needed transformation:

                ( a c 0 ) ( 2 0 0 ) ( e f 0 )^T
  Final chi_new=( b d 0 ) ( 0 1 0 ) ( g h 0 )
                ( 0 0 1 ) ( 0 0 1 ) ( 0 0 1 )      ]

[Another aside: A direction-dependent susceptibility requires chi to be promoted from a scalar to a rank-2 tensor (skipping any rank-1 tensor). This is necessary because a rank-0 tensor (a scalar) and a rank-2 tensor can both act on a vector (H) to produce a vector (M). There is no sense to a rank-1 (vector) susceptibility, because there is no simple way a rank-1 tensor (a vector) can act on another vector H to produce an output vector M. More on this later.]
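A sketch of the two-sided transformation (mine), in the 2-D x-y block only, assuming the standard similarity-transform packaging chi_new = T chi T^(-1) of the column and row transformations described above; the numbers a..d are made up for illustration:

  #include <stdio.h>

  static void mat_mul(double R[2][2], double A[2][2], double B[2][2]) {
      for (int i = 0; i < 2; i++)
          for (int j = 0; j < 2; j++) {
              R[i][j] = 0;
              for (int k = 0; k < 2; k++) R[i][j] += A[i][k] * B[k][j];
          }
  }

  int main(void) {
      double a = 1.0, b = -0.3, c = -0.4, d = 1.2;   /* illustrative only */
      double T[2][2]    = {{a, c}, {b, d}};
      double det        = a * d - b * c;
      double Tinv[2][2] = {{ d / det, -c / det}, {-b / det, a / det}};
      double chi[2][2]  = {{2, 0}, {0, 1}};
      double tmp[2][2], chi_new[2][2];

      mat_mul(tmp, T, chi);          /* step 1: transform the columns */
      mat_mul(chi_new, tmp, Tinv);   /* step 2: transform the rows    */

      /* check on H = (1, 1), for which M = chi H = (2, 1):            */
      double Hn[2]  = {a * 1 + c * 1, b * 1 + d * 1};  /* H, new basis */
      double Mn[2]  = {a * 2 + c * 1, b * 2 + d * 1};  /* M, new basis */
      double Mn2[2] = {chi_new[0][0] * Hn[0] + chi_new[0][1] * Hn[1],
                       chi_new[1][0] * Hn[0] + chi_new[1][1] * Hn[1]};
      printf("M_new direct    = (%g, %g)\n", Mn[0], Mn[1]);
      printf("chi_new * H_new = (%g, %g)\n", Mn2[0], Mn2[1]);
      return 0;
  }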
Mechanical Strain

A second example of a tensor is the mechanical strain tensor. When I push on a deformable material, it deforms. A simple model is just a spring, with Hooke's law:

  delta_x = + (1/k) F_applied

We write the formula with a plus sign, because (unlike freshman physics spring questions) we are interested in how a body deforms when we apply a force to it. For an isotropic material, we can push in any direction, and the deformation is parallel to the force. This makes the above equation a vector equation:

  delta_x = s F,   where s = 1/k = the strain constant.

Strain is defined as the displacement of a given point under force. [Stress is the force per unit area applied to a body. Stress produces strain.] In an isotropic material, the strain constant is a simple scalar. Note that if we transform to another basis for our vectors, the strain constant is unchanged. That's the definition of a scalar:

A scalar is a number that is the same in any coordinate system. A scalar is a rank-0 tensor.

The scalar is unchanged even in a non-ortho-normal coordinate system. But what if our material is a bunch of microscopic blobs connected by stiff rods, like atoms in a crystal?

[Figure: (Left) A constrained crystal structure under a force F. (Middle) The deformation vector delta_x is not parallel to the force. (Right) More extreme geometries lead to a larger angle between the force and displacement.]

The diagram shows a 2D example: pushing in the x-direction results in both x and y displacements. The same principle could result in a 3D delta_x, with some component into the page. For small deformations, the deformation is linear with the force: pushing twice as hard results in twice the displacement. Pushing with the sum of two (not necessarily parallel) forces results in the sum of the individual displacements. But the displacement is not proportional to the force (because the displacement is not parallel to it). In fact, each component of force results in a deformation vector. Mathematically:

                ( s_xx s_xy s_xz ) ( F_x )         ( s_xx )       ( s_xy )       ( s_xz )
  delta_x = sF =( s_yx s_yy s_yz ) ( F_y )  =  F_x ( s_yx ) + F_y ( s_yy ) + F_z ( s_yz )
                ( s_zx s_zy s_zz ) ( F_z )         ( s_zx )       ( s_zy )       ( s_zz )

Much like the anisotropy of the magnetization in the previous example, the anisotropy of the strain requires us to use a rank-2 tensor to describe it. The linearity of the strain with force allows us to write the strain tensor as a matrix. Linearity also guarantees that we can change to another basis using a method similar to that shown above for the susceptibility tensor. Specifically, we must transform both the columns and the rows of the strain tensor s. Furthermore, the linearity of deformation with force also insures that we can use non-orthonormal bases, just as well as orthonormal ones.

When Is a Matrix Not a Tensor?

I would say that most matrices are not tensors. A matrix is a tensor when its rows and columns are both vectors. This implies that there is a vector space, basis vectors, and the possibility of changing basis.
When Is a Matrix Not a Tensor?

I would say that most matrices are not tensors. A matrix is a tensor when its rows and columns are both vectors. This implies that there is a vector space, basis vectors, and the possibility of changing basis.

As a counter example, consider the following graduate physics problem: Two pencils, an eraser, and a ruler cost $2.20. Four pencils, two erasers, and a ruler cost $3.45. Four pencils, an eraser, and two rulers cost $3.85. How much does each item cost?

We can write this as simultaneous equations, and as shorthand in matrix notation (costs in cents):

$$\begin{aligned} 2p + e + r &= 220 \\ 4p + 2e + r &= 345 \\ 4p + e + 2r &= 385 \end{aligned} \qquad\text{or}\qquad \begin{pmatrix} 2 & 1 & 1 \\ 4 & 2 & 1 \\ 4 & 1 & 2 \end{pmatrix} \begin{pmatrix} p \\ e \\ r \end{pmatrix} = \begin{pmatrix} 220 \\ 345 \\ 385 \end{pmatrix}$$

It is possible to use a matrix for this problem because the problem takes linear combinations of the costs of 3 items. Matrix multiplication is defined as the process of linear combinations, which is the same process as linear transformations. However, the above matrix is not a tensor, because there are no vectors of school supplies, no bases, and no linear combinations of (say) part eraser and part pencil. Therefore, the matrix has no well-defined transformation properties. Hence, it is a lowly matrix, but no tensor. However, later (in We Don't Need No Stinking Metric) we'll see that under the right conditions, we can form a vector space out of seemingly unrelated quantities. (Solving the system itself is ordinary linear algebra; see the sketch below.)
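For completeness, a quick check of the school-supplies system with NumPy. Note the matrix here is pure bookkeeping, not a tensor.

```python
import numpy as np

# Costs in cents: 2p + e + r = 220, 4p + 2e + r = 345, 4p + e + 2r = 385
A = np.array([[2, 1, 1],
              [4, 2, 1],
              [4, 1, 2]], dtype=float)
c = np.array([220, 345, 385], dtype=float)

p, e, r = np.linalg.solve(A, c)
print(p, e, r)    # 35.0 55.0 95.0 -- pencil, eraser, ruler, in cents
```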
An ordinary vector associates a number with each direction of space:

$$\mathbf{v} = v^x\,\mathbf{x} + v^y\,\mathbf{y} + v^z\,\mathbf{z}$$

The vector v associates the number v^x with the x-direction; it associates the number v^y with the y-direction, and the number v^z with the z-direction. The above tensor examples illustrate the basic nature of a rank-2 tensor: it associates a vector with each direction of space:

$$\mathbf{T} = \begin{pmatrix} T^{xx} \\ T^{yx} \\ T^{zx} \end{pmatrix} \mathbf{x} + \begin{pmatrix} T^{xy} \\ T^{yy} \\ T^{zy} \end{pmatrix} \mathbf{y} + \begin{pmatrix} T^{xz} \\ T^{yz} \\ T^{zz} \end{pmatrix} \mathbf{z}$$

Some Definitions and Review

These definitions will make more sense as we go along. Don't get stuck on these:

ordinary vector = contravariant vector = contravector = ( 1 0 ) tensor
1-form = covariant vector = covector = ( 0 1 ) tensor. (Yes, there are 4 different ways to say the same thing.)
covariant: "the same". E.g., General Relativity says that the mathematical form of the laws of physics is covariant (i.e., the same) with respect to arbitrary coordinate transformations. This is a completely different meaning of "covariant" than the one above.
rank: the number of indexes of a tensor; T_ij is a rank-2 tensor; R^i_jkl is a rank-4 tensor. Rank is unrelated to the dimension of the vector space in which the tensor operates.
MVE: mathematical vector element. Think of it as a vector for now.
Caution: a rank ( 0 1 ) tensor is a 1-form, but a rank ( 0 2 ) tensor is not always a 2-form. [Don't worry about it, but just for completeness, a 2-form (or any n-form) has to be fully anti-symmetric in all pairs of vector arguments.]

Notation:
(a, b, c) is a row vector; (a, b, c)^T is a column vector (the transpose of a row vector).
To satisfy our pathetic word processor, we write ( m n ), even though the m is supposed to be directly above the n.
T (bold) is a tensor, without reference to any basis or representation.
T^ij is the matrix of components of T, contravariant in both indexes, with an understood basis.
T(v, w) is the result of T acting on v and w.
v (bold) or v with an over-arrow are two equivalent ways to denote a vector, without reference to any basis or representation. Note that a vector is a rank-1 tensor.
ã (a with an over-tilde) denotes a covariant vector (aka 1-form), without reference to any basis or representation.
a_i: the components of the covector (1-form) a, in an understood basis.

Vector Space Summary

Briefly, a vector space comprises a field of scalars, a group of vectors, and the operation of scalar multiplication of vectors (details below). Quantum mechanical vector spaces have two additional characteristics: they define a dot-product between two vectors, and they define linear operators which act on vectors to produce other vectors.

Before understanding tensors, it is very helpful, if not downright necessary, to understand vector spaces. Funky Quantum Concepts has a more complete description of vector spaces. Here is a very brief summary: a vector space comprises a field of scalars, a group of vectors, and the operation of scalar multiplication of vectors. The scalars can be any mathematical field, but are usually the real numbers, or the complex numbers (e.g., quantum mechanics). For a given vector space, the vectors are a class of things, which can be one of many possibilities (physical vectors, matrices, kets, bras, tensors, ...). In particular, the vectors are not necessarily lists of scalars, nor need they have anything to do with physical space. Vector spaces have the following properties, which allow solving simultaneous linear equations both for unknown scalars, and unknown vectors:

Scalars: form a commutative group (closure, unique identity, inverses) under operation +; scalars, excluding 0, form a commutative group under operation (·); distributive property of (·) over +.
Mathematical vectors: form a commutative group (closure, unique identity, inverses) under operation +; scalar multiplication of a vector produces another vector; distributive property of scalar multiplication over both scalar + and vector +.

With just the scalars, you can solve ordinary scalar linear equations such as:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &= c_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n &= c_2 \\ &\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n &= c_n \end{aligned} \qquad \text{written in matrix form as } \mathbf{a}\mathbf{x} = \mathbf{c}$$

All the usual methods of linear algebra work to solve the above equations: Cramer's rule, Gaussian elimination, etc. With the whole vector space, you can solve simultaneous linear vector equations for unknown vectors, such as

$$\begin{aligned} a_{11}\mathbf{v}_1 + a_{12}\mathbf{v}_2 + \dots + a_{1n}\mathbf{v}_n &= \mathbf{w}_1 \\ &\ \vdots \\ a_{n1}\mathbf{v}_1 + a_{n2}\mathbf{v}_2 + \dots + a_{nn}\mathbf{v}_n &= \mathbf{w}_n \end{aligned} \qquad \text{written in matrix form as } \mathbf{a}\mathbf{v} = \mathbf{w}$$

where a is again a matrix of scalars. The same methods of linear algebra work just as well to solve vector equations as scalar equations (see the sketch at the end of this summary).

Vector spaces may also have these properties:
Dot-product produces a scalar from two vectors.
Linear operators act on vectors to produce other vectors.

The key points of mathematical vectors are (1) we can form linear combinations of them to make other vectors, and (2) any vector can be written as a linear combination of basis vectors:

$$\mathbf{v} = (v^1, v^2, v^3) = v^1\mathbf{e}_1 + v^2\mathbf{e}_2 + v^3\mathbf{e}_3$$

where e_1, e_2, e_3 are basis vectors, and v^1, v^2, v^3 are the components of v in the e_1, e_2, e_3 basis. Note that v^1, v^2, v^3 are numbers, while e_1, e_2, e_3 are vectors. There is a (kind of bogus) reason why basis vectors are written with subscripts, and vector components with superscripts, but we'll get to that later.
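As a sketch of solving for unknown vectors (with made-up scalar coefficients and known vectors): NumPy solves for unknown vectors exactly as for unknown scalars, treating each unknown vector as a row.

```python
import numpy as np

# Solve a V = W for unknown vectors v1, v2 (rows of V), given a scalar
# matrix a and known vectors w1, w2 (rows of W). Values are illustrative.
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
W = np.array([[5.0, 0.0, 1.0],     # w1
              [6.0, 7.0, 0.0]])    # w2

V = np.linalg.solve(a, W)          # same linear algebra as the scalar case
v1, v2 = V
assert np.allclose(a[0, 0] * v1 + a[0, 1] * v2, W[0])   # first vector equation
```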
The dimension of a vector space, N, is the number of basis vectors needed to construct every vector in the space. Do not confuse the dimension of physical space (typically 1D, 2D, 3D, or (in relativity) 4D) with the dimension of the mathematical objects used to work a problem. For example, a 3×3 matrix is an element of the vector space of 3×3 matrices. This is a 9-D vector space, because there are 9 basis matrices needed to construct an arbitrary matrix.

Given a basis, components are equivalent to the vector. Components alone (without a basis) are insufficient to be a vector.

[Aside: Note that for position vectors defined by r = (r, θ, φ), the numbers r, θ, and φ are not the components of a vector. The tip-off is that with two vectors, you can always add their components to get another vector. Clearly,

$$\mathbf{r}_1 + \mathbf{r}_2 \ne (r_1 + r_2,\ \theta_1 + \theta_2,\ \phi_1 + \phi_2),$$

so (r, θ, φ) cannot be the components of a vector. This failure to add is due to r being a displacement vector from the origin, where there is no consistent basis: e.g., what is e_r at the origin? At points off the origin, there is a consistent basis: e_r, e_θ, and e_φ are well-defined. (See the numerical sketch at the end of this section.)]

When Vectors Collide

There now arises a collision of terminology: to a physicist, "vector" usually means a physical vector in 3- or 4-space, but to a mathematician, "vector" means an element of a mathematical vector-space. These are two different meanings, but they share a common aspect: linearity (i.e., we can form linear combinations of vectors to make other vectors, and any vector can be written as a linear combination of basis vectors). Because of that linearity, we can have general rank-n tensors whose components are arbitrary elements of a mathematical vector-space. To make the terminology confusion worse, an ( m n ) tensor whose components are simple numbers is itself a vector-element of the vector-space of ( m n ) tensors.

Mathematical vector-elements of a vector space are much more general than physical vectors (e.g. force, or velocity), though physical vectors and tensors are elements of mathematical vector spaces. To be clear, we'll use "MVE" to refer to a mathematical vector-element of a vector space, and "vector" to mean a normal physics vector (3-vector or 4-vector). Recall that MVEs are usually written as a set of components in some basis, just like vectors are. In the beginning, we choose all the input MVEs to be vectors. If you're unclear about what an MVE is, just think of it as a physical vector for now, like force.
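Returning to the aside above: a short NumPy check that (r, θ, φ) triples do not add like components, while the underlying Cartesian components do.

```python
import numpy as np

def to_cartesian(r, theta, phi):
    """Spherical (physics convention) -> Cartesian."""
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def to_spherical(v):
    x, y, z = v
    r = np.linalg.norm(v)
    return r, np.arccos(z / r), np.arctan2(y, x)

p1 = (1.0, np.pi / 2, 0.0)           # unit displacement along +x
p2 = (1.0, np.pi / 2, np.pi / 2)     # unit displacement along +y

true_sum = to_spherical(to_cartesian(*p1) + to_cartesian(*p2))
naive_sum = tuple(np.add(p1, p2))    # componentwise (r, theta, phi) "addition"
print(true_sum)    # (sqrt(2), pi/2, pi/4) -- the correct vector sum
print(naive_sum)   # (2, pi, pi/2)         -- wrong: (r, theta, phi) are not components
```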
Tensors vs. Symbols

There are lots of tensors: metric tensors, electromagnetic tensors, Riemann tensors, etc. There are also symbols: Levi-Civita symbols, Christoffel symbols, etc. What's the difference? Symbols aren't tensors. Symbols look like tensors, in that they have components indexed by multiple indices, they are referred to basis vectors, and are summed with tensors. But they are defined to have specific components, which may depend on the basis, and therefore symbols don't change basis (transform) the way tensors do. Hence, symbols are not geometric entities, with a meaning in a manifold, independent of coordinates. For example, the Levi-Civita symbol is defined to have specific constant components in all bases. It doesn't follow the usual change-of-basis rules. Therefore, it cannot be a tensor.

Notational Nightmare

If you come from a differential geometry background, you may wonder about some insanely confusing notation. It is a fact that dr and dx are two different things:

$$d\mathbf{r} = (dx, dy, dz) \text{ is a vector, but } \widetilde{dx} = d(x) \text{ is a 1-form.}$$

We don't use the second notation (or exterior derivatives) in this chapter, but we might in the Differential Geometry chapter.

Tensors? What Good Are They?

A Short, Complicated Definition

It is very difficult to give a short definition of a tensor that is useful to anyone who doesn't already know what a tensor is. Nonetheless, you've got to start somewhere, so we'll give a short definition, to point in the right direction, but it may not make complete sense at first (don't get hung up on this, skip if needed):

A tensor is an operator on one or more mathematical vector elements (MVEs), linear in each operand, which produces another mathematical vector element.

The key point is this (which we describe in more detail in a moment):

Linearity in all the operands is the essence of a tensor.

I should add that the basis vectors for all the MVEs must be the same (or tensor products of the same) for an operator to qualify as a tensor. But that's too much to put in a short definition. We clarify this point later. Note that a scalar (i.e., a coordinate-system-invariant number, but for now, just a number) satisfies the definition of a mathematical vector element.

Many definitions of tensors dwell on the transformation properties of tensors. This is mathematically valid, but such definitions give no insight into the use of tensors, or why we like them. Note that to satisfy the transformation properties, all the input vectors and output tensors must be expressed in the same basis (or tensor products of that basis with itself).

Some coordinate systems require distinguishing between contravariant and covariant components of tensors; superscripts denote contravariant components; subscripts denote covariant components. However, orthonormal positive-definite systems, such as the familiar Cartesian, spherical, and cylindrical systems, do not require such a distinction. So for now, let's ignore the distinction, even though the following notation properly represents both contravariant and covariant components. Thus, in the following text, contravariant components are written with superscripts, and covariant components are written with subscripts, but we don't care right now. Just think of them all as components in an arbitrary coordinate system.

Building a Tensor

Oversimplified, a tensor operates on vectors to produce a scalar or a vector. Let's construct a tensor which accepts (operates on) two 3-vectors to produce a scalar. (We'll see later that this is a rank-2 tensor.) Let the tensor T act on vectors a and b to produce a scalar, s; in other words, this tensor is a scalar function of two vectors:

$$s = \mathbf{T}(\mathbf{a}, \mathbf{b})$$

Call the first vector a = (a^1, a^2, a^3) in some basis, and the second vector b = (b^1, b^2, b^3) (in the same basis). A tensor, by definition, must be linear in both a and b; if we double a, we double the result, if we triple b, we triple the result, etc. Also, T(a + c, b) = T(a, b) + T(c, b), and T(a, b + d) = T(a, b) + T(a, d). So the result must involve at least the product of a component of a with a component of b. Let's say the tensor takes a^2 b^1 as that product, and additionally multiplies it by a constant, T_21. Then we have built a tensor acting on a and b, and it is linear in both:

$$\mathbf{T}(\mathbf{a}, \mathbf{b}) = T_{21}\,a^2 b^1$$
For example, T(a, b) = 7 a^2 b^1. But, if we add to this some other weighted product of some other pair of components, the result is still a tensor: it is still linear in both a and b:

$$\mathbf{T}(\mathbf{a}, \mathbf{b}) = T_{13}\,a^1 b^3 + T_{21}\,a^2 b^1. \qquad \text{Example: } \mathbf{T}(\mathbf{a}, \mathbf{b}) = 4\,a^1 b^3 + 7\,a^2 b^1$$

In fact, we can extend this to the weighted sum of all combinations of components, one each from a and b. Such a sum is still linear in both a and b:

$$\mathbf{T}(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{3}\sum_{j=1}^{3} T_{ij}\,a^i b^j \qquad \text{Example: } T_{ij} = \begin{pmatrix} 2 & 6 & 4 \\ 7 & 5 & 1 \\ 6 & 0 & 8 \end{pmatrix}$$

Further, nothing else can be added to this that is linear in a and b. A tensor is the most general linear function of a and b that exists, i.e. any linear function of a and b can be written as a 3×3 matrix. (We'll see that the rank of a tensor is equal to the number of its indices; T is a rank-2 tensor.) The T_ij are the components of the tensor (in the basis of the vectors a and b). At this point, we consider the components of T, a, and b all as just numbers.

Why does a tensor have a separate weight for each combination of components, one from each input mathematical vector element (MVE)? Couldn't we just weight each input MVE as a whole? No, because that would restrict tensors to only some linear functions of the inputs. Any linear function of the input vectors can be represented as a tensor.

Note that tensors, just like vectors, can be written as components in some basis. And just like vectors, we can transform the components from one basis to another. Such a transformation does not change the tensor itself (nor does it change a vector); it simply changes how we represent the tensor (or vector). More on transformations later.

Tensors don't have to produce scalar results! Some tensors accept one or more vectors, and produce a vector for a result. Or they produce some rank-r tensor for a result. In general, a rank-n tensor accepts m vectors as inputs, and produces a rank (n − m) tensor as a result. Since any tensor is an element of a mathematical vector space, tensors can be written as linear combinations of other (same rank & type) tensors. So even when a tensor produces another (lower rank) tensor as an output, the tensor is still a linear function of all its input vectors. It's just a tensor-valued function, instead of a scalar-valued function. For example, consider the force on a charge: a B-field operates on a vector, qv, to produce a vector, f. Thus, we can think of the B-field as a rank-2 tensor which acts on a vector to produce a vector; it's a vector-valued function of one vector.

Also, in general, tensors aren't limited to taking just vectors as inputs. Some tensors take rank-2 tensors as inputs. For example, the quadrupole moment tensor operates on the 2nd derivative matrix of the potential (the rank-2 Hessian tensor) to produce the (scalar) work stored in the quadrupole of charges. And a density matrix in quantum mechanics is a rank-2 tensor that acts on an operator matrix (rank-2 tensor) to produce the ensemble average of that operator.
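Here is the "weighted sum of all combinations" as code, using the example component matrix above; the asserts spot-check linearity in each slot separately.

```python
import numpy as np

# The example rank-2 tensor from the text (components in some fixed basis).
T = np.array([[2.0, 6.0, 4.0],
              [7.0, 5.0, 1.0],
              [6.0, 0.0, 8.0]])

def T_of(a, b):
    """s = T(a, b) = sum_ij T_ij a^i b^j -- the most general linear function."""
    return a @ T @ b

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])

# Bilinearity: linear in the first slot, and linear in the second slot.
assert np.isclose(T_of(2 * a, b), 2 * T_of(a, b))
assert np.isclose(T_of(a + c, b), T_of(a, b) + T_of(c, b))
```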
Tensors in Action

Let's consider how rank-0, rank-1, and rank-2 tensors operate on a single vector. Recall that in tensor-talk, a scalar is an invariant number, i.e. it is the same number in any coordinate system.

Rank-0: A rank-0 tensor is a scalar, i.e. a coordinate-system-independent number. Multiplying a vector by a rank-0 tensor (a scalar) produces a new vector. Each component of the vector contributes to the corresponding component of the result, and each component is weighted equally by the scalar, a:

$$\mathbf{v} = v^x\mathbf{i} + v^y\mathbf{j} + v^z\mathbf{k}, \qquad a\mathbf{v} = a v^x\,\mathbf{i} + a v^y\,\mathbf{j} + a v^z\,\mathbf{k}$$

Rank-1: A rank-1 tensor a operates on (contracts with) a vector to produce a scalar. Each component of the input vector contributes a number to the result, but each component is weighted separately by the corresponding component of the tensor a:

$$\mathbf{a}(\mathbf{v}) = a_x v^x + a_y v^y + a_z v^z = \sum_{i=1}^{3} a_i v^i$$

Note that a vector is itself a rank-1 tensor. Above, instead of considering a acting on v, we can equivalently consider that v acts on a: a(v) = v(a). Both a and v are of equal standing.

Rank-2: Filling one slot of a rank-2 tensor with a vector produces a new vector. Each component of the input vector contributes a vector to the result, and each input vector component weights a different vector.

[Figure: (a) A hypothetical rank-2 tensor with an x-vector (red), a y-vector (green), and a z-vector (blue). (b) The tensor acting on the vector (1, 1, 1), producing a vector (heavy black): each component (column) vector of the tensor is weighted by 1, and summed. (c) The tensor acting on the vector (0, 2, 0.5), producing a vector (heavy black): the x-vector is weighted by 0, and so does not contribute; the y-vector is weighted by 2, so contributes double; the z-vector is weighted by 0.5, so contributes half.]

$$\mathbf{B}(\_, \mathbf{v}) = \begin{pmatrix} B^x{}_x & B^x{}_y & B^x{}_z \\ B^y{}_x & B^y{}_y & B^y{}_z \\ B^z{}_x & B^z{}_y & B^z{}_z \end{pmatrix}\begin{pmatrix} v^x \\ v^y \\ v^z \end{pmatrix} = v^x\begin{pmatrix} B^x{}_x \\ B^y{}_x \\ B^z{}_x \end{pmatrix} + v^y\begin{pmatrix} B^x{}_y \\ B^y{}_y \\ B^z{}_y \end{pmatrix} + v^z\begin{pmatrix} B^x{}_z \\ B^y{}_z \\ B^z{}_z \end{pmatrix} = \sum_{j} B^i{}_j\,v^j\,\mathbf{e}_i$$

The columns of B are the vectors which are weighted by each of the input vector components, v^j; or equivalently, the columns of B are the vector weights for each of the input vector components.

Example of a simple rank-2 tensor: the moment-of-inertia tensor, I_ij. Every blob of matter has one. We know from mechanics that if you rotate an arbitrary blob around an arbitrary axis, the angular momentum vector of the blob does not in general line up with the axis of rotation. So what is the angular momentum vector of the blob? It is a vector-valued linear function of the angular velocity vector, i.e. given the angular velocity vector, you can operate on it with the moment-of-inertia tensor, to get the angular momentum vector. Therefore, by the definition of a tensor as a linear operation on a vector, the relationship between angular momentum vector and angular velocity vector can be given as a tensor; it is the moment-of-inertia tensor. It takes as an input the angular velocity vector, and produces as output the angular momentum vector; therefore it is a rank-2 tensor:

$$\mathbf{L} = \mathbf{I}(\boldsymbol{\omega}, \_), \qquad 2\,KE = \mathbf{I}(\boldsymbol{\omega}, \boldsymbol{\omega})$$

[Since I is constant in the blob frame, it rotates in the lab frame. Thus, in the lab frame, the above equations are valid only at a single instant in time. In effect, I is a function of time, I(t).]

[?? This may be a bad example, since I is only a Cartesian tensor [L&L3, p ??], which is not a real tensor. Real tensors can't have finite displacements on a curved manifold, but blobs of matter have finite size. If you want to get the kinetic energy, you have to use the metric to compute L. Is there a simple example of a real rank-2 tensor??]
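A quick numerical sketch of both uses of I (illustrative values): filling one slot gives a vector, filling both gives a scalar; the nonzero cross product shows L is not parallel to ω.

```python
import numpy as np

# Hypothetical moment-of-inertia tensor (symmetric, illustrative values).
I = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  2.0, 0.0],
              [ 0.0,  0.0, 4.0]])

omega = np.array([1.0, 2.0, 0.5])    # angular velocity

L = I @ omega                        # I(omega, _): one slot filled -> a vector
KE = 0.5 * omega @ I @ omega         # (1/2) I(omega, omega): both slots -> a scalar

print(L)                             # L is generally NOT parallel to omega
print(np.cross(L, omega))            # nonzero cross product shows the misalignment
```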
Note that some rank-2 tensors operate on two vectors to produce a scalar, and some (like I) can either act on one vector to produce a vector, or act on two vectors to produce a scalar (twice the kinetic energy). More of that, and higher rank tensors, later.

Tensor Fields

A vector is a single mathematical object, but it is quite common to define a field of vectors. A field in this sense is a function of space. A vector field defines a vector for each point in a space. For example, the electric field is a vector-valued function of space: at each point in space, there is an electric field vector. Similarly, a tensor is a single mathematical object, but it is quite common to define a field of tensors. At each point in space, there is a tensor. The metric tensor field is a tensor-valued function of space: at each point, there is a metric tensor. Almost universally, the word "field" is omitted when calling out tensor fields: when you say "metric tensor," everyone is expected to know it is a tensor field. When you say "moment of inertia tensor," everyone is expected to know it is a single tensor (not a field).

Dot Products and Cross Products as Tensors

Symmetric tensors are associated with elementary dot-products, and anti-symmetric tensors are associated with elementary cross-products. A dot product is a linear operation on two vectors: A·B = B·A, which produces a scalar. Because the dot-product is a linear function of two vectors, it can be written as a tensor. (Recall that any linear function of vectors can be written as a tensor.) Since it takes two rank-1 tensors, and produces a rank-0 tensor, the dot-product is a rank-2 tensor. Therefore, we can achieve the same result as a dot-product with a rank-2 symmetric tensor that accepts two vectors and produces a scalar; call this tensor g:

$$\mathbf{g}(\mathbf{A}, \mathbf{B}) = \mathbf{g}(\mathbf{B}, \mathbf{A}) \equiv \mathbf{A}\cdot\mathbf{B}$$

g is called the metric tensor: it produces the dot-product (aka scalar product) of two vectors. Quite often, the metric tensor varies as a function of the generalized coordinates of the system; then it is a metric tensor field. It happens that the dot-product is symmetric, A·B = B·A; therefore, g is symmetric. If we write the components of g as a matrix, the matrix will be symmetric, i.e. it will equal its own transpose. (Do I need to expand on this??)

On the other hand, a cross product is an anti-symmetric linear operation on two vectors, which produces another vector: A × B = −B × A. Therefore, we can associate one vector, say B, with a rank-2 anti-symmetric tensor, that accepts one vector and produces another vector:

$$\mathbf{B}(\_, \mathbf{A}) = -\mathbf{B}(\mathbf{A}, \_)$$

For example, the Lorentz force law: F = v × B. We can write B as a ( 1 1 ) tensor:

$$\mathbf{F} = \mathbf{v}\times\mathbf{B} = \mathbf{B}(\_, \mathbf{v}) = \begin{pmatrix} 0 & B_z & -B_y \\ -B_z & 0 & B_x \\ B_y & -B_x & 0 \end{pmatrix}\begin{pmatrix} v^x \\ v^y \\ v^z \end{pmatrix} = (v^y B_z - v^z B_y)\,\mathbf{i} + (v^z B_x - v^x B_z)\,\mathbf{j} + (v^x B_y - v^y B_x)\,\mathbf{k}$$

We see again how a rank-2 tensor, B, contributes a vector for each component of v:

B^i{}_x e_i = −B_z j + B_y k (the first column of B) is weighted by v^x.
B^i{}_y e_i = B_z i − B_x k (the 2nd column of B) is weighted by v^y.
B^i{}_z e_i = −B_y i + B_x j (the 3rd column of B) is weighted by v^z.

[Figure: A rank-2 tensor acting on a vector to produce their cross-product, drawn column by column for B_x, B_y, B_z > 0.]

TBS: We can also think of the cross product as a fully anti-symmetric rank-3 tensor, which acts on 2 vectors to produce a vector (their cross product). This is the anti-symmetric symbol ε_ijk (not a tensor).
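A sketch of the B-as-matrix idea (illustrative field and velocity values), checked against numpy.cross:

```python
import numpy as np

def B_tensor(B):
    """Antisymmetric rank-2 tensor such that B_tensor(B) @ v == v x B."""
    Bx, By, Bz = B
    return np.array([[0.0,  Bz, -By],
                     [-Bz, 0.0,  Bx],
                     [ By, -Bx, 0.0]])

B = np.array([0.0, 0.0, 2.0])        # field along z (illustrative)
v = np.array([1.0, 0.0, 0.0])        # velocity along x

F = B_tensor(B) @ v                  # the tensor acting on one vector
assert np.allclose(F, np.cross(v, B))
```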
Note that both the dot-product and cross-product are linear in both of their operands. For example:

$$(\alpha\mathbf{A} + \beta\mathbf{C})\cdot\mathbf{B} = \alpha\,\mathbf{A}\cdot\mathbf{B} + \beta\,\mathbf{C}\cdot\mathbf{B}, \qquad \mathbf{A}\times(\eta\mathbf{B} + \phi\mathbf{D}) = \eta\,\mathbf{A}\times\mathbf{B} + \phi\,\mathbf{A}\times\mathbf{D}$$

Linearity in all the operands is the essence of a tensor.

Note also that a rank of a tensor contracts with (is summed over) a rank of one of its operands to eliminate both of them: one rank of the B-field tensor contracts with one input vector, leaving one surviving rank of the B-field tensor, which is the vector result. Similarly, one rank of the metric tensor, g, contracts with the first operand vector; another rank of g contracts with the second operand vector, leaving a rank-0 (scalar) result.

The Danger of Matrices

There are some dangers to thinking of tensors as matrices: (1) it doesn't work for rank-3 or higher tensors, and (2) non-commutation of matrix multiplication is harder to follow than the more-explicit summation convention. Nonetheless, the matrix conventions are these:

Contravariant components and basis covectors (up indexes) → column vector. E.g.,

$$\mathbf{v} = \begin{pmatrix} v^1 \\ v^2 \\ v^3 \end{pmatrix}, \qquad \text{basis 1-forms: } \begin{pmatrix} \tilde{\mathbf{e}}^1 \\ \tilde{\mathbf{e}}^2 \\ \tilde{\mathbf{e}}^3 \end{pmatrix}$$

Covariant components and basis contravectors (down indexes) → row vector:

$$\tilde{\mathbf{w}} = (w_1, w_2, w_3), \qquad \text{basis vectors: } (\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$$

Matrix rows and columns are indicated by spacing of the indexes, and are independent of their up-ness or down-ness. The first matrix index is always the row; the second, the column:

$$T^r{}_c,\quad T_{rc},\quad T_r{}^c,\quad T^{rc}, \qquad \text{where } r = \text{row index},\ c = \text{column index}$$

Tensor equations can be written as equations with tensors as operators (written in bold): KE = ½ I(ω, ω). Or, they can be written in component form:

$$KE = \tfrac12\,I_{ij}\,\omega^i\omega^j \qquad (1)$$

We'll be using lots of tensor equations written in component form, so it is important to know how to read them. Note that some standard notations almost require component form: in GR, the Ricci tensor is R_μν, and the Ricci scalar is R:

$$G_{\mu\nu} = R_{\mu\nu} - \tfrac12\,R\,g_{\mu\nu}$$

In component equations, tensor indexes are written explicitly. There are two kinds of tensor indexes: dummy (aka summation) indexes, and free indexes. Dummy indexes appear exactly twice in any term. Free indexes appear only once in each term, and the same free indexes must appear in each term (except for scalar terms). In the above equation, both μ and ν are free indexes, and there are no dummy indexes. In eq. (1) above, i and j are both dummy indexes and there are no free indexes.

Dummy indexes appear exactly twice in any term and are used for implied summation, e.g.

$$KE = \tfrac12\,I_{ij}\,\omega^i\omega^j \quad\equiv\quad KE = \tfrac12\sum_{i=1}^{3}\sum_{j=1}^{3} I_{ij}\,\omega^i\omega^j$$

Free indexes are a shorthand for writing several equations at once. Each free index takes on all possible values for it. Thus

$$C^i = A^i + B^i \quad\equiv\quad C^x = A^x + B^x,\ \ C^y = A^y + B^y,\ \ C^z = A^z + B^z \qquad \text{(3 equations)}$$

and

$$G_{\mu\nu} = R_{\mu\nu} - \tfrac12 R g_{\mu\nu} \quad\equiv\quad G_{00} = R_{00} - \tfrac12 R g_{00},\ \ G_{01} = R_{01} - \tfrac12 R g_{01},\ \ \dots,\ \ G_{33} = R_{33} - \tfrac12 R g_{33} \qquad \text{(16 equations)}$$

It is common to have both dummy and free indexes in the same equation.
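The two kinds of index map directly onto einsum notation; here is a small sketch (reusing the illustrative inertia tensor from above): a repeated letter is a dummy index and gets summed, an unrepeated letter is a free index and survives into the result.

```python
import numpy as np

I = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  2.0, 0.0],
              [ 0.0,  0.0, 4.0]])
w = np.array([1.0, 2.0, 0.5])

# Both indexes dummy (summed): scalar result. KE = (1/2) I_ij w^i w^j
KE = 0.5 * np.einsum('ij,i,j->', I, w, w)

# One dummy (j), one free (i): vector result, i.e. 3 equations at once.
L = np.einsum('ij,j->i', I, w)

assert np.isclose(KE, 0.5 * w @ I @ w)
assert np.allclose(L, I @ w)
```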
Thus the GR statement of conservation of energy and momentum uses μ as a dummy index, and ν as a free index:

$$\nabla_\mu T^{\mu\nu} = 0 \quad\equiv\quad \sum_{\mu=0}^{3}\nabla_\mu T^{\mu 0} = 0,\ \ \sum_{\mu=0}^{3}\nabla_\mu T^{\mu 1} = 0,\ \ \sum_{\mu=0}^{3}\nabla_\mu T^{\mu 2} = 0,\ \ \sum_{\mu=0}^{3}\nabla_\mu T^{\mu 3} = 0 \qquad \text{(4 equations)}$$

Notice that scalars apply to all values of free indexes, and don't need indexes of their own. However, any free indexes must match on all tensor terms. It is nonsense to write something like

$$A_{ij} = B_i + C_j \qquad \text{(nonsense)}$$

However, it is reasonable to have, e.g., angular momentum:

$$M_{ij} = r_i p_j - r_j p_i$$

Since tensors are linear operations, you can add or subtract any two tensors that take the same type of arguments and produce the same type of result. Just add the tensor components individually. E.g.,

$$E_{ij} = g_{ij} + S_{ij}, \qquad \mathbf{U} = \mathbf{S} + \mathbf{T} \ \equiv\ U_{ij} = S_{ij} + T_{ij}, \quad i, j = 1, \dots, N$$

You can also scalar multiply a tensor. Since these properties of tensors are the defining requirements for a vector space, all the tensors of given rank and index types compose a vector space, and every tensor is an MVE in its space. This implies that a tensor field can be differentiated (or integrated), and in particular, it has a gradient.

Higher Rank Tensors

When considering higher rank tensors, it may be helpful to recall that multi-dimensional matrices can be thought of as lower-dimensional matrices with each element itself a vector or matrix. For example, a 3×3 matrix can be thought of as a column vector of 3 row-vectors. Matrix multiplication works out the same whether you consider the 3×3 matrix as a 2-D matrix of numbers, or a 1-D column of row vectors:

$$(x, y, z)\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = (ax + dy + gz,\ bx + ey + hz,\ cx + fy + iz)$$

or

$$x\,(a, b, c) + y\,(d, e, f) + z\,(g, h, i) = (ax + dy + gz,\ bx + ey + hz,\ cx + fy + iz)$$

Using this same idea, we can compare the gradient of a scalar field, which is a ( 0 1 ) tensor field (a 1-form), with the gradient of a rank-2 (say ( 0 2 )) tensor field, which is a ( 0 3 ) tensor field. First, the gradient of a scalar field is a ( 0 1 ) tensor field with 3 components, where each component is a number-valued function:

$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right) \text{ can be written as } \mathbf{D} = D_1\,\tilde{\mathbf{e}}^1 + D_2\,\tilde{\mathbf{e}}^2 + D_3\,\tilde{\mathbf{e}}^3$$
$$\text{where } D_1 = \frac{\partial f}{\partial x},\ D_2 = \frac{\partial f}{\partial y},\ D_3 = \frac{\partial f}{\partial z}, \text{ and } \tilde{\mathbf{e}}^1, \tilde{\mathbf{e}}^2, \tilde{\mathbf{e}}^3 \text{ are basis (co)vectors.}$$

The gradient operates on an infinitesimal displacement vector to produce the change in the function when you move through the given displacement:

$$df = \mathbf{D}(d\mathbf{r}) = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial z}\,dz$$

Now let R be a ( 0 2 ) tensor field, and T be its gradient. T is a ( 0 3 ) tensor field, but can be thought of as a ( 0 1 ) tensor field where each component is itself a ( 0 2 ) tensor.

[Figure: A rank-3 tensor considered as a set of 3 rank-2 tensors: an x-tensor, a y-tensor, and a z-tensor.]

The gradient operates on an infinitesimal displacement vector to produce the change in the ( 0 2 ) tensor field when you move through the given displacement:

$$\mathbf{T} = \nabla\mathbf{R} = \frac{\partial\mathbf{R}}{\partial x}\,\tilde{\mathbf{e}}^x + \frac{\partial\mathbf{R}}{\partial y}\,\tilde{\mathbf{e}}^y + \frac{\partial\mathbf{R}}{\partial z}\,\tilde{\mathbf{e}}^z$$

where each $\partial\mathbf{R}/\partial k$ is itself a ( 0 2 ) tensor: the 3×3 matrix of derivatives $\partial R_{ij}/\partial k$.
Contracting with a displacement v, each component of v weights one of the three rank-2 tensors, $T_{ijx} v^x + T_{ijy} v^y + T_{ijz} v^z$:

$$d\mathbf{R} = \mathbf{T}(\mathbf{v}) \quad\equiv\quad dR_{ij} = T_{ijk}\,v^k, \qquad k = x, y, z$$

Note that if R had been a ( 2 0 ) (fully contravariant) tensor, then its gradient would be a ( 2 1 ) mixed tensor. Taking the gradient of any field simply adds a covariant index, which can then be contracted with a displacement vector to find the change in the tensor field when moving through the given displacement.

The contraction considerations of the previous section still apply: a rank of a tensor operator contracts with a rank of one of its inputs to eliminate both. In other words, each rank of input tensors eliminates one rank of the tensor operator. The rank of the result is the number of surviving ranks from the tensor operator:

$$\text{rank(tensor)} = \text{rank(inputs)} + \text{rank(result)} \qquad\text{or}\qquad \text{rank(result)} = \text{rank(tensor)} - \text{rank(inputs)}$$

Tensors of Mathematical Vector Elements: The operation of a tensor on vectors involves multiplying components (one from the tensor, and one from each input vector), and then summing. E.g.,

$$\mathbf{T}(\mathbf{a}, \mathbf{b}) = T_{ij}\,a^i b^j = T_{11}\,a^1 b^1 + \dots$$

Similar to the above example, the T_ij components could themselves be a vector of a mathematical vector space (i.e., could be MVEs), while the a^i and b^j components are scalars of that vector space. In the example above, we could say that each of the T_ij;x, T_ij;y, and T_ij;z is a rank-2 tensor (an MVE in the space of rank-2 tensors), and the components of v are scalars in that space (in this case, real numbers).

Tensors In General

In complete generality then, a tensor T is a linear operation on one or more MVEs: T(a, b, ...). Linearity implies that T can be written as a numeric weight for each combination of components, one component from each input MVE. Thus, the linear operation performed by T is equivalent to a weighted sum of all combinations of components of the input MVEs. (Since T and the a, b, ... are simple objects, not functions, there is no concept of derivative or integral operations. Derivatives and integrals are linear operations on functions, but not linear functions of MVEs.) Given the components of the inputs a, b, ..., and the components of T, we can contract T with (operate with T on) the inputs to produce an MVE result. Note that all input MVEs have to have the same basis. Also, T may have units, so the output units are arbitrary. Note that in generalized coordinates, different components of a tensor may have different units (much like the vector parameters r and θ have different units).
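The rank bookkeeping is easy to see with einsum. Here a made-up rank-3 "gradient" (random values standing in for actual derivatives) contracts with a small displacement to give a rank-2 change, as in dR_ij = T_ijk v^k:

```python
import numpy as np

# Hypothetical rank-3 tensor T_ijk = dR_ij/dx^k at a point (random stand-in values).
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3, 3))
dr = np.array([0.01, -0.02, 0.005])     # small displacement vector

# Contract the gradient's covariant index k with the displacement:
dR = np.einsum('ijk,k->ij', T, dr)
assert dR.shape == (3, 3)               # rank(result) = rank(tensor) - rank(inputs) = 3 - 1
```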
Change of Basis: Transformations

Since tensors are linear operations on MVEs, we can represent a tensor by components. If we know a tensor's operations on all combinations of basis vectors, we have fully defined the tensor. Consider a rank-2 tensor T acting on two vectors, a and b. We expand T, a, and b into components, using the linearity of the tensor:

$$\mathbf{T}(\mathbf{a}, \mathbf{b}) = \mathbf{T}(a^1\mathbf{i} + a^2\mathbf{j} + a^3\mathbf{k},\ b^1\mathbf{i} + b^2\mathbf{j} + b^3\mathbf{k})$$
$$= a^1 b^1\,\mathbf{T}(\mathbf{i}, \mathbf{i}) + a^2 b^1\,\mathbf{T}(\mathbf{j}, \mathbf{i}) + a^3 b^1\,\mathbf{T}(\mathbf{k}, \mathbf{i}) + a^1 b^2\,\mathbf{T}(\mathbf{i}, \mathbf{j}) + a^2 b^2\,\mathbf{T}(\mathbf{j}, \mathbf{j}) + a^3 b^2\,\mathbf{T}(\mathbf{k}, \mathbf{j}) + a^1 b^3\,\mathbf{T}(\mathbf{i}, \mathbf{k}) + a^2 b^3\,\mathbf{T}(\mathbf{j}, \mathbf{k}) + a^3 b^3\,\mathbf{T}(\mathbf{k}, \mathbf{k})$$

$$\text{Define } T_{ij} \equiv \mathbf{T}(\mathbf{e}_i, \mathbf{e}_j), \ \text{ where } \mathbf{e}_1 = \mathbf{i},\ \mathbf{e}_2 = \mathbf{j},\ \mathbf{e}_3 = \mathbf{k}; \ \text{ then } \ \mathbf{T}(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{3}\sum_{j=1}^{3} T_{ij}\,a^i b^j$$

The tensor's values on all combinations of input basis vectors are the components of the tensor (in the basis of the input vectors).

Now let's transform T to another basis. To change from one basis to another, we need to know how to find the new basis vectors from the old ones, or equivalently, how to transform components in the old basis to components in the new basis. We write the new basis with primes, and the old basis without primes. Because vector spaces demand linearity, any change of basis can be written as a linear transformation of the basis vectors or components, so we can write (eq. numbers from Talman):

$$\mathbf{e}'_i = \sum_{k=1}^{N} \Lambda^k{}_i\,\mathbf{e}_k \qquad \text{[Tal 2.4.5]}$$
$$v'^i = \sum_{k=1}^{N} \left(\Lambda^{-1}\right)^i{}_k\,v^k = \left(\Lambda^{-1}\right)^i{}_k\,v^k \qquad \text{[Tal 2.4.8]}$$

where the last form uses the summation convention. There is a very important difference between equations 2.4.5 and 2.4.8: the first is a set of 3 vector equations, expressing each of the new basis vectors in the old basis; the second is a set of 3 number equations, taking a vector's old components into the new basis.

[Aside: Let's look more closely at the difference between equations 2.4.5 and 2.4.8. The first is a set of 3 vector equations, expressing each of the new basis vectors in the old basis. Basis vectors are vectors, and hence can themselves be expressed in any basis:

$$\begin{aligned} \mathbf{e}'_1 &= \Lambda^1{}_1\,\mathbf{e}_1 + \Lambda^2{}_1\,\mathbf{e}_2 + \Lambda^3{}_1\,\mathbf{e}_3 \\ \mathbf{e}'_2 &= \Lambda^1{}_2\,\mathbf{e}_1 + \Lambda^2{}_2\,\mathbf{e}_2 + \Lambda^3{}_2\,\mathbf{e}_3 \\ \mathbf{e}'_3 &= \Lambda^1{}_3\,\mathbf{e}_1 + \Lambda^2{}_3\,\mathbf{e}_2 + \Lambda^3{}_3\,\mathbf{e}_3 \end{aligned} \qquad\text{or more simply}\qquad \begin{aligned} \mathbf{e}'_1 &= a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3 \\ \mathbf{e}'_2 &= b_1\mathbf{e}_1 + b_2\mathbf{e}_2 + b_3\mathbf{e}_3 \\ \mathbf{e}'_3 &= c_1\mathbf{e}_1 + c_2\mathbf{e}_2 + c_3\mathbf{e}_3 \end{aligned}$$

where the a's are the components of e'_1 in the old basis, the b's are the components of e'_2 in the old basis, and the c's are the components of e'_3 in the old basis.

In contrast, equation 2.4.8 is a set of 3 number equations, relating the components of a single vector, taking its old components into the new basis. In other words, in the first equation, we are taking new basis vectors and expressing them in the old basis (new → old). In the second equation, we are taking old components and converting them to the new basis (old → new). The two equations go in opposite directions; that is why it is natural that the two equations use inverse matrices to achieve those conversions. However, because of the inverse matrices in these equations, vector components are said to transform "contrary" (oppositely) to basis vectors, so they are called contravariant vectors.

I think it is misleading to say that contravariant vectors transform "oppositely" to basis vectors. In fact, that is impossible. Basis vectors are contravectors, and transform like any other contravector. A vector of (1, 0, 0) (in some basis) is a basis vector. It may also happen to be the value of some physical vector. In both cases, the expression of the vector (1, 0, 0) (old basis) in the new basis is the same.]
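A numerical check of the two directions (with made-up Λ values): the same geometric vector, with new components obtained from the inverse matrix.

```python
import numpy as np

# Columns of Lam are the new basis vectors e'_k written in the old basis
# (eq. 2.4.5); illustrative invertible values.
Lam = np.array([[1.0, 0.3, 0.0],
                [0.0, 1.0, 0.0],
                [0.2, 0.0, 1.0]])
Lam_inv = np.linalg.inv(Lam)

v_old = np.array([1.0, 2.0, 3.0])
v_new = Lam_inv @ v_old          # components use the inverse matrix (eq. 2.4.8)

# The geometric vector is unchanged: summing (new component) x (new basis
# vector, expressed in the old basis) reproduces the old components.
assert np.allclose(Lam @ v_new, v_old)
```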
Now we can use 2.4.5 to evaluate the components of T in the primed basis:

$$T'_{ij} = \mathbf{T}(\mathbf{e}'_i, \mathbf{e}'_j) = \mathbf{T}\!\left(\sum_{k=1}^{N}\Lambda^k{}_i\,\mathbf{e}_k,\ \sum_{l=1}^{N}\Lambda^l{}_j\,\mathbf{e}_l\right) = \sum_{k=1}^{N}\sum_{l=1}^{N}\Lambda^k{}_i\,\Lambda^l{}_j\,\mathbf{T}(\mathbf{e}_k, \mathbf{e}_l) = \Lambda^k{}_i\,\Lambda^l{}_j\,T_{kl}$$

Notice that there is one use of the transformation matrix Λ for each index of T to be transformed.

Matrix View of Basis Transformation

The concept of tensors seems clumsy at first, but it's a very fundamental concept. Once you get used to it, tensors are essentially simple things (though it took me 3 years to understand how simple they are). The rules for transformations are pretty direct. Transforming a rank-n tensor requires using the transformation matrix n times. A vector is rank-1, and transforms by a simple matrix multiply, or in tensor terms, by a summation over indices. Here, since we must distinguish the row basis from the column basis, we put the primes on the indices, to indicate which index is in the new basis, and which is in the old basis:

$$\mathbf{a}' = \Lambda\,\mathbf{a} \quad\equiv\quad \begin{pmatrix} a^{0'} \\ a^{1'} \\ a^{2'} \\ a^{3'} \end{pmatrix} = \begin{pmatrix} \Lambda^{0'}{}_0 & \Lambda^{0'}{}_1 & \Lambda^{0'}{}_2 & \Lambda^{0'}{}_3 \\ \Lambda^{1'}{}_0 & \Lambda^{1'}{}_1 & \Lambda^{1'}{}_2 & \Lambda^{1'}{}_3 \\ \Lambda^{2'}{}_0 & \Lambda^{2'}{}_1 & \Lambda^{2'}{}_2 & \Lambda^{2'}{}_3 \\ \Lambda^{3'}{}_0 & \Lambda^{3'}{}_1 & \Lambda^{3'}{}_2 & \Lambda^{3'}{}_3 \end{pmatrix}\begin{pmatrix} a^0 \\ a^1 \\ a^2 \\ a^3 \end{pmatrix}$$

Notice that when you sum over (contract over) two indices, they disappear, and you're left with the unsummed index. So above, when we sum over old-basis indices, we're left with a new-basis vector.

Rank-2 example: The electromagnetic field tensor F is rank-2, and transforms using the transformation matrix twice, by two summations over indices, transforming both indexes. This is clumsy to write in matrix terms, because you have to use the transpose of the transformation matrix to transform the rows; this transposition has no physical significance. In the rank-2 (or higher) case, the tensor notation is both simpler and more physically meaningful:

$$F'^{\mu'\nu'} = \Lambda^{\mu'}{}_\mu\,\Lambda^{\nu'}{}_\nu\,F^{\mu\nu} \qquad\equiv\qquad \mathsf{F}' = \Lambda\,\mathsf{F}\,\Lambda^T$$

In general, you have to transform every index of a tensor, each index requiring one use of the transformation matrix (see the einsum sketch below).
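A sketch of the "one Λ per index" rule (random stand-in values for F and Λ), confirming the einsum form matches the matrix form with its physically meaningless transpose:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(4, 4))                        # stand-in rank-2 tensor components
Lam = np.eye(4) + 0.1 * rng.normal(size=(4, 4))    # stand-in transformation matrix

# One use of Lambda per index: F'^{mn} = Lam^m_a Lam^n_b F^{ab}
F_new = np.einsum('ma,nb,ab->mn', Lam, Lam, F)

# Matrix form: pre-multiply for the columns, post-multiply by the transpose for the rows.
assert np.allclose(F_new, Lam @ F @ Lam.T)
```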
Non-Orthonormal Systems: Contravariance and Covariance

Many systems cannot be represented with orthonormal coordinates, e.g. the (surface of a) sphere. Dealing with non-orthonormality requires a more sophisticated view of tensors, and introduces the concepts of contravariance and covariance. Consider the following problem from classical mechanics: a pendulum is suspended from a pivot point which slides horizontally on a spring. The generalized coordinates are (a, θ).

[Figure: The (a, θ) configuration space of the pendulum on a sliding pivot: displacements d**r** along curves of constant θ (varying a) and constant a (varying θ), from (a, θ) to (a + da, θ + dθ).]

To compute kinetic energy, we need to compute |v|², conveniently done in some orthogonal coordinates, say x and y. We start by converting the generalized coordinates to the orthonormal x-y coordinates, to compute the length of a physical displacement from the changes in generalized coordinates:

$$x = a + l\sin\theta, \quad dx = da + l\cos\theta\,d\theta; \qquad y = -l\cos\theta, \quad dy = l\sin\theta\,d\theta$$
$$ds^2 = dx^2 + dy^2 = da^2 + 2l\cos\theta\,da\,d\theta + l^2\cos^2\theta\,d\theta^2 + l^2\sin^2\theta\,d\theta^2 = da^2 + 2l\cos\theta\,da\,d\theta + l^2\,d\theta^2$$

We have just computed the metric tensor field, which is a function of position in the (a, θ) configuration space. We can write the metric tensor field components by inspection:

$$\text{Let } x^1 \equiv a,\ x^2 \equiv \theta: \qquad ds^2 = g_{ij}\,dx^i dx^j \quad\Rightarrow\quad g_{ij} = \begin{pmatrix} 1 & l\cos\theta \\ l\cos\theta & l^2 \end{pmatrix}$$

Then |v|² = ds²/dt². A key point here is that the same metric tensor computes a physical displacement from generalized coordinate displacements, or a physical velocity from generalized coordinate velocities, or a physical acceleration from generalized coordinate accelerations, etc., because time is the same for any generalized coordinate system (no relativity here!). Note that we symmetrize the cross-terms of the metric, g_ij = g_ji, which is necessary to insure that g(v, w) = g(w, v).

Now consider the scalar product of two vectors. The same metric tensor (field) helps compute the scalar product (dot-product) of any two (infinitesimal) vectors, from their generalized coordinates:

$$d\mathbf{v}\cdot d\mathbf{w} = \mathbf{g}(d\mathbf{v}, d\mathbf{w}) = g_{ij}\,dv^i\,dw^j$$

Since the metric tensor takes two input vectors, is linear in both, and produces a scalar result, it is a rank-2 tensor. Also, since g(v, w) = g(w, v), g is a symmetric tensor.

Now, let's define a scalar field as a function of the generalized coordinates; say, the potential energy:

$$U(a, \theta) = \frac{k}{2}a^2 - mgl\cos\theta$$

It is quite useful to know the gradient of the potential energy:

$$\mathbf{D} \equiv \nabla U = \frac{\partial U}{\partial a}\,\tilde{\mathbf{e}}^a + \frac{\partial U}{\partial \theta}\,\tilde{\mathbf{e}}^\theta, \qquad dU = \mathbf{D}(d\mathbf{r}) = \frac{\partial U}{\partial a}\,da + \frac{\partial U}{\partial \theta}\,d\theta$$

The gradient takes an infinitesimal displacement vector d**r** = (da, dθ), and produces a differential in the value of potential energy, dU (a scalar). Further, dU is a linear function of the displacement vector. Hence, by definition, the gradient at each point in a-θ space is a rank-1 tensor, i.e. the gradient is a tensor field.

Do we need to use the metric (computed earlier) to make the gradient operate on d**r**? No! The gradient operates directly on d**r**, without the need for any assistance by a metric. So the gradient is a rank-1 tensor that can directly contract with a vector to produce a scalar. This is markedly different from the dot-product case above, where the first vector (a rank-1 tensor) could not contract directly with an input vector to produce a scalar. So clearly:

There are two kinds of rank-1 tensors: those (like the gradient) that can contract directly with an input vector, and those that need the metric to help them operate on an input vector.

Those tensors that can operate directly on a vector are called covariant tensors, and those that need help are called contravariant, for reasons we will show soon. To indicate that D is covariant, we write its components with subscripts, instead of superscripts. Its basis vectors are covariant vectors:

$$\mathbf{D} = D_a\,\tilde{\mathbf{e}}^a + D_\theta\,\tilde{\mathbf{e}}^\theta, \qquad \text{where } \tilde{\mathbf{e}}^a, \tilde{\mathbf{e}}^\theta \text{ are covariant basis vectors}$$

In general, covariant tensors result from differentiation operators on other (scalar or) tensor fields: gradient, covariant derivative, exterior derivative, Lie derivative, etc.
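Circling back to the pendulum: the metric computation above is easy to reproduce symbolically. A minimal SymPy sketch (the printed form may differ slightly in ordering):

```python
import sympy as sp

a, theta, l = sp.symbols('a theta l', real=True)
da, dtheta = sp.symbols('da dtheta', real=True)

x = a + l * sp.sin(theta)    # pivot position plus pendulum-bob offset
y = -l * sp.cos(theta)

dx = sp.diff(x, a) * da + sp.diff(x, theta) * dtheta
dy = sp.diff(y, a) * da + sp.diff(y, theta) * dtheta

ds2 = sp.trigsimp(sp.expand(dx**2 + dy**2))
print(ds2)   # expect: da**2 + 2*l*cos(theta)*da*dtheta + l**2*dtheta**2
```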
Note that just as we can say that D acts on d**r**, we can say that d**r** is a rank-1 tensor that acts on D to produce dU:

$$d\mathbf{r}(\mathbf{D}) = \mathbf{D}(d\mathbf{r}) = \frac{\partial U}{\partial x^i}\,dx^i = \frac{\partial U}{\partial a}\,da + \frac{\partial U}{\partial \theta}\,d\theta$$

The contractions are the same with either acting on the other, so the definitions are symmetric.

Interestingly, when we compute small oscillations of a system of particles, we need both the potential matrix, which is the gradient of the gradient of the potential field, and the mass matrix, which really gives us kinetic energy. The potential matrix is fully covariant, and we need no metric to compute it. The kinetic energy matrix requires us to compute absolute magnitudes of |v|², and so requires us to compute the metric.

We know that a vector, which is a rank-1 tensor, can be visualized as an arrow. How do we visualize this covariant tensor, in a way that reveals how it operates on a vector (an arrow)? We use a set of equally spaced parallel planes. Let D be a covariant tensor (aka 1-form):

$$\mathbf{D}(\mathbf{v}_1 + \mathbf{v}_2) = \mathbf{D}(\mathbf{v}_1) + \mathbf{D}(\mathbf{v}_2)$$

[Figure: Visualization of a covariant vector (1-form) as oriented parallel planes, with example vectors: D(v₁), D(v₂) > 0; D(v₃) < 0. The 1-form is a linear operator on vectors (see text).]

The value of D on a vector, D(v), is the number of planes pierced by the vector when laid on the parallel planes. Clearly, D(v) depends on the magnitude and direction of v. It is also a linear function of v: the sum of planes pierced by two different vectors equals the number of planes pierced by their vector sum. There is an orientation to the planes: one side is negative, and the other positive. Vectors crossing in the negative-to-positive direction pierce a positive number of planes; vectors crossing in the positive-to-negative direction pierce a negative number of planes.

Note also we could redraw the two axes arbitrarily oblique (non-orthogonal), and rescale the axes arbitrarily, but keeping the intercept values of the planes with the axes unchanged (thus stretching the arrows and planes). The number of planes pierced would be the same, so the two diagrams above are equivalent. Hence, this geometric construction of the operation of a covector on a contravector is completely general, and even applies to vector spaces which have no metric (aka non-metric spaces). All you need for the construction is a set of arbitrary basis vectors (not necessarily orthonormal), and the values D(e_i) on each, and you can draw the parallel planes that illustrate the covector. The "direction" of D, analogous to the direction of a vector, is normal to (perpendicular to) the planes used to graphically represent D.

What Goes Up Can Go Down: Duality of Contravariant and Covariant Vectors

Recall the dot-product is given by

$$d\mathbf{v}\cdot d\mathbf{w} = \mathbf{g}(d\mathbf{v}, d\mathbf{w}) = g_{ij}\,dv^i\,dw^j$$

If I fill only one slot of g with v, and leave the 2nd slot empty, then g(v, _ ) is a linear function of one vector, and can be directly contracted with that vector; hence g(v, _ ) is a rank-1 covariant vector. For any given contravariant vector v^i, I can define this dual covariant vector, g(v, _ ), which has N components I'll call v_i:

$$\mathbf{g}(\mathbf{v}, \_)\ \text{has components}\ v_i = g_{ik}\,v^k$$

So long as I have a metric, the contravariant and covariant forms of v contain equivalent information, and are thus two ways of expressing the same vector (geometric object).
The covariant representation can contract directly with a contravariant vector, and the contravariant representation can contract directly with a covariant vector, to produce the dot-product of the two vectors. Therefore, we can use the metric tensor to lower the components of a contravariant vector into their covariant equivalents. Note that the metric tensor itself has been written with two covariant (lower) indexes, because it contracts directly with two contravariant vectors to produce their scalar-product.

Why do I need two forms of the same vector? Consider the vector force:

$$\mathbf{F} = m\mathbf{a} \quad\text{or}\quad F^i = m\,a^i \qquad \text{(naturally contravariant)}$$

Since position x^i is naturally contravariant, so is its derivative v^i, and 2nd derivative, a^i. Therefore, force is naturally contravariant. But force is also the gradient of potential energy:

$$\mathbf{F} = -\nabla U \quad\text{or}\quad F_i = -\frac{\partial U}{\partial x^i} \qquad \text{(naturally covariant)}$$

Oops! Now force is naturally covariant! But it's the same force as above. So which is more natural for force? Neither. Use whichever one you need. Nurture supersedes nature.

The inverse of the metric tensor matrix is the contravariant metric tensor, g^ij. It contracts directly with two covariant vectors to produce their scalar product. Hence, we can use g^ij to raise the index of a covariant vector to get its contravariant components:

$$v^i = g^{ik}\,v_k, \qquad\text{where}\qquad g^{ik}\,g_{kj} = \delta^i{}_j$$

Notice that raising and lowering works on the metric tensor itself. Note that in general, even for symmetric tensors, T^i{}_j ≠ T_j{}^i, and T^i{}_j ≠ T_i{}^j. For rank-2 or higher tensors, each index is separately of the contravariant or covariant type. Each index may be raised or lowered separately from the others. Each lowering requires a contraction with the fully covariant metric tensor; each raising requires a contraction with the fully contravariant metric tensor.

In Euclidean space with orthonormal coordinates, the metric tensor is the identity matrix. Hence, the covariant and contravariant components of any vector are identical. This is why there is no distinction made in elementary treatments of vector mathematics; displacements, gradients, everything, are simply called "vectors."

The space of covectors is a vector space, i.e. it satisfies the properties of a vector space. However, it is called dual to the vector space of contravectors, because covectors operate on contravectors to produce scalar invariants. A thing is "dual" to another thing if the dual can act on the original thing to produce a scalar, and vice versa. E.g., in QM, bras are dual to kets. Vectors in the dual space are covectors. Just like basis contravectors, basis covectors always have components (in their own basis)

$$\tilde{\mathbf{e}}^1 = (1, 0, 0, \dots), \quad \tilde{\mathbf{e}}^2 = (0, 1, 0, \dots), \quad \tilde{\mathbf{e}}^3 = (0, 0, 1, \dots), \quad \text{etc.}$$

and we can write an arbitrary covector as

$$\tilde{\mathbf{f}} = f_1\,\tilde{\mathbf{e}}^1 + f_2\,\tilde{\mathbf{e}}^2 + f_3\,\tilde{\mathbf{e}}^3 + \dots$$

TBS: construction and units of a dual covector from its contravector.

The Real Summation Convention

The summation convention says repeated indexes in an arithmetic expression are implicitly summed (contracted). We now understand that only a contravariant/covariant pair can be meaningfully summed. Two covariant or two contravariant indexes require contracting with the metric tensor to be meaningful. Hence, the real Einstein summation convention is that any two matching indexes, one up (contravariant) and one down (covariant), are implicitly summed (contracted). Two matching contravariant or covariant indexes are meaningless, and not allowed.
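A sketch of raising and lowering in code, using the pendulum metric from above evaluated at one point (l = 2, cos θ = 0.5): only an up/down pair contracts directly.

```python
import numpy as np

g = np.array([[1.0, 1.0],
              [1.0, 4.0]])              # g_ij at one point: nonsingular, symmetric
g_inv = np.linalg.inv(g)                # g^ij, the contravariant metric

v_up = np.array([3.0, -1.0])            # contravariant components v^i
v_dn = g @ v_up                         # lower: v_i = g_ij v^j
assert np.allclose(g_inv @ v_dn, v_up)  # raise it back: v^i = g^ij v_j

# Scalar product: a covariant/contravariant pair contracts directly;
# two contravariant vectors need the metric's help.
w_up = np.array([0.5, 2.0])
assert np.isclose(v_dn @ w_up, v_up @ g @ w_up)
```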
Now we can see why basis contravectors are written e₁, e₂, ... (with subscripts), and basis covectors are written e¹, e², ... (with superscripts). It is purely a trick to comply with the real summation convention, which requires summations be performed over one up index and one down index. Then we can write a vector as a linear combination of the basis vectors, using the summation convention:

$$\mathbf{v} = v^1\mathbf{e}_1 + v^2\mathbf{e}_2 + v^3\mathbf{e}_3 = v^i\,\mathbf{e}_i, \qquad \tilde{\mathbf{a}} = a_1\tilde{\mathbf{e}}^1 + a_2\tilde{\mathbf{e}}^2 + a_3\tilde{\mathbf{e}}^3 = a_i\,\tilde{\mathbf{e}}^i$$

Note well that there is nothing covariant about e_i, even though it has a subscript; there is nothing contravariant about ẽ^i, even though it has a superscript. It's just a notational trick.

Transformation of Covariant Indexes

It turns out that the components of a covariant vector transform with the same matrix as used to express the new (primed) basis vectors in the old basis:

$$f'_k = f_j\,\Lambda^j{}_k \qquad \text{[Tal 2.4.11]}$$

Again, somewhat bogusly, eq. 2.4.11 is said to transform "covariantly" with (the same as) the basis vectors, so f_i is called a covariant vector. For a rank-2 tensor such as T_ij, each index of T_ij transforms like the basis vectors (i.e., covariantly with the basis vectors). Hence, each index of T_ij is said to be a covariant index. Since both indexes are covariant, T_ij is sometimes called "fully covariant."

Indefinite Metrics: Relativity

In short, a covariant index of a tensor is one which can be contracted with (summed over) a contravariant index of an input MVE to produce a meaningful resultant MVE. In relativity, the metric tensor has some negative signs. The scalar-product is a frame-invariant interval. No problem. All the math, raising, and lowering, works just the same. In special relativity, the metric ends up simply putting minus signs where you need them to get SR intervals. The covariant form of a vector has the minus signs "pre-loaded," so it contracts directly with a contravariant vector to produce a scalar.

Let's use the sign convention where η_μν = diag(1, −1, −1, −1). When considering the dual 1-forms for Minkowski space, the only unusual aspect is that the 1-form for time increases in the opposite direction as the vector for time. For the space components, the dual 1-forms increase in the same direction as the vectors. This means that

$$\mathbf{e}_t\cdot\mathbf{e}_t = 1, \qquad \mathbf{e}_x\cdot\mathbf{e}_x = -1, \qquad \mathbf{e}_y\cdot\mathbf{e}_y = -1, \qquad \mathbf{e}_z\cdot\mathbf{e}_z = -1$$

as it should for the Minkowski metric.
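The "minus signs pre-loaded" picture, as a small sketch with illustrative 4-vector components:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

x_up = np.array([2.0, 1.0, 0.0, 0.0])    # contravariant 4-vector (t, x, y, z)
x_dn = eta @ x_up                        # covariant form: minus signs pre-loaded
print(x_dn)                              # [ 2. -1. -0. -0.]

interval = x_dn @ x_up                   # frame-invariant: t^2 - x^2 - y^2 - z^2
assert np.isclose(interval, 2.0**2 - 1.0**2)
```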
Is a Transformation Matrix a Tensor?

Sort of. When applied to a vector, it converts components from the old basis to the new basis. It is clearly a linear function of its argument. However, a tensor usually has all its inputs and outputs in the same basis (or tensor products of that basis). But a transformation matrix is specifically constructed to take inputs in one basis, and produce outputs in a different basis. Essentially, the columns are indexed by the old basis, and the rows are indexed by the new basis. It basically works like a tensor, but the transformation rule is that to transform the columns, you use a transformation matrix for the old basis; to transform the rows, you use the transformation matrix for the new basis.

Consider a vector

$$\mathbf{v} = v^1\mathbf{e}_1 + v^2\mathbf{e}_2 + v^3\mathbf{e}_3$$

This is a vector equation, and despite its appearance, it is true in any basis, not just the (e₁, e₂, e₃) basis. If we write e₁, e₂, e₃ as vectors in some new (e_x, e_y, e_z) basis, the vector equation above still holds:

$$\mathbf{v} = v^1\!\left[(\mathbf{e}_1)_x\mathbf{e}_x + (\mathbf{e}_1)_y\mathbf{e}_y + (\mathbf{e}_1)_z\mathbf{e}_z\right] + v^2\!\left[(\mathbf{e}_2)_x\mathbf{e}_x + (\mathbf{e}_2)_y\mathbf{e}_y + (\mathbf{e}_2)_z\mathbf{e}_z\right] + v^3\!\left[(\mathbf{e}_3)_x\mathbf{e}_x + (\mathbf{e}_3)_y\mathbf{e}_y + (\mathbf{e}_3)_z\mathbf{e}_z\right]$$
$$= \left[v^1(\mathbf{e}_1)_x + v^2(\mathbf{e}_2)_x + v^3(\mathbf{e}_3)_x\right]\mathbf{e}_x + \left[v^1(\mathbf{e}_1)_y + v^2(\mathbf{e}_2)_y + v^3(\mathbf{e}_3)_y\right]\mathbf{e}_y + \left[v^1(\mathbf{e}_1)_z + v^2(\mathbf{e}_2)_z + v^3(\mathbf{e}_3)_z\right]\mathbf{e}_z$$

The vector v is just a weighted sum of basis vectors, and therefore the columns of the transformation matrix are the old basis vectors expressed in the new basis. E.g., to transform the components of a vector from the (e₁, e₂, e₃) to the (e_x, e_y, e_z) basis, the transformation matrix is

$$\Lambda = \begin{pmatrix} (\mathbf{e}_1)_x & (\mathbf{e}_2)_x & (\mathbf{e}_3)_x \\ (\mathbf{e}_1)_y & (\mathbf{e}_2)_y & (\mathbf{e}_3)_y \\ (\mathbf{e}_1)_z & (\mathbf{e}_2)_z & (\mathbf{e}_3)_z \end{pmatrix}$$

You can see directly that the first column is e₁ written in the x-y-z basis; the 2nd column is e₂ in the x-y-z basis; and the 3rd column is e₃ in the x-y-z basis.

In quantum mechanics, the Pauli vector is a "vector" of three 2×2 matrices: the Pauli matrices. Each 2×2 complex-valued matrix corresponds to a spin-1/2 operator in some x, y, or z direction. It is a 3rd-rank object in the tensor product space of R³ ⊗ C² ⊗ C², i.e. xyz ⊗ spinor ⊗ spinor. The xyz rank is clearly in a different basis than the complex spinor ranks, since xyz is a completely different vector space than spin-1/2 spinor space. However, it is a linear operator on various objects, so each rank transforms according to the transformation matrix for its basis:

$$\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z), \qquad \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

It's interesting to note that the term "tensor product" produces, in general, an object of mixed bases, and often, mixed vector spaces. Nonetheless, the term "tensor" seems to be used most often for mathematical objects whose ranks are all in the same basis.

Cartesian Tensors

Cartesian tensors aren't quite tensors, because they don't transform into non-Cartesian coordinates properly. (Note that despite their name, Cartesian tensors are not a special kind of tensor; they aren't really tensors. They're tensor wanna-bes.) Cartesian tensors have two failings that prevent them from being true tensors: they don't distinguish between contravariant and covariant components, and they treat finite displacements in space as vectors. In non-orthogonal coordinates, you must distinguish contravariant and covariant components. In non-Cartesian coordinates, only infinitesimal displacements are vectors.

Details: Recall that in Cartesian coordinates, there is no distinction between contravariant and covariant components of a tensor. This allows a certain sloppiness that one can only get away with if one sticks to Cartesian coordinates. This means that Cartesian tensors only transform reliably by rotations from one set of Cartesian coordinates to a new, rotated set of Cartesian coordinates. Since both the new and old bases are Cartesian, there is no need to distinguish contravariant and covariant components in either basis, and the transformation (to a rotated coordinate system) works.
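A sketch of the one transformation a Cartesian tensor does handle reliably, a rotation (reusing the illustrative inertia tensor from earlier):

```python
import numpy as np

# Rotation by 30 degrees about z -- for rotations, R^-1 = R^T, so no
# contravariant/covariant distinction is needed.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

I = np.array([[ 3.0, -1.0, 0.0],     # illustrative inertia tensor
              [-1.0,  2.0, 0.0],
              [ 0.0,  0.0, 4.0]])
w = np.array([1.0, 2.0, 0.5])

I_rot = R @ I @ R.T                  # one use of R per index
# Angular momentum transforms consistently as a vector:
assert np.allclose(R @ (I @ w), I_rot @ (R @ w))
```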
Cartesian Tensors

Cartesian tensors aren't quite tensors, because they don't transform into non-Cartesian coordinates properly. (Note that despite their name, Cartesian tensors are not a special kind of tensor; they aren't really tensors. They're tensor wanna-bes.) Cartesian tensors have two failings that prevent them from being true tensors: they don't distinguish between contravariant and covariant components, and they treat finite displacements in space as vectors. In non-orthogonal coordinates, you must distinguish contravariant and covariant components. In non-Cartesian coordinates, only infinitesimal displacements are vectors.

Details: Recall that in Cartesian coordinates, there is no distinction between contravariant and covariant components of a tensor. This allows a certain sloppiness that one can only get away with if one sticks to Cartesian coordinates. This means that Cartesian tensors only transform reliably by rotations from one set of Cartesian coordinates to a new, rotated set of Cartesian coordinates. Since both the new and old bases are Cartesian, there is no need to distinguish contravariant and covariant components in either basis, and the transformation (to a rotated coordinate system) works.

For example, the moment of inertia tensor is a Cartesian tensor. There is no problem in its first use, to compute the angular momentum of a blob of mass given its angular velocity:

$$\vec L = \mathbf I(\vec\omega,\,\_\,): \qquad \begin{pmatrix} L^x \\ L^y \\ L^z \end{pmatrix} = \begin{pmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} \end{pmatrix} \begin{pmatrix} \omega^x \\ \omega^y \\ \omega^z \end{pmatrix}$$

But notice that if I accepts a contravariant vector, then I's components for that input slot must be covariant. However, I produces a contravariant output, so its output components are contravariant. So far, so good. But now we want to find the kinetic energy. Well,

$$KE = \frac12\,\vec\omega\cdot\vec L = \frac12\,\vec\omega\cdot\mathbf I(\vec\omega,\,\_\,)$$

But we have a dot-product of two contravariant vectors. To evaluate that dot-product, in a general coordinate system, we have to use the metric:

$$KE = \frac12\,\omega_i\,I^i{}_j\,\omega^j = \frac12\,g_{ik}\,\omega^k\,I^i{}_j\,\omega^j \;\ne\; \frac12\,\omega^i\,I^i{}_j\,\omega^j$$

However, in Cartesian coordinates, the metric matrix is the identity matrix, the contravariant components equal the covariant components, and the final not-equals above becomes an equals. Hence, we neglect the distinction between contravariant components and covariant components, and incorrectly sum the components of I on the components of ω, even though both are contravariant in the 2nd sum. In general coordinates, the direct sum for the dot-product doesn't work, and you must use the metric tensor for the final dot-product.

Example of failure of finite displacements: TBS. The electric quadrupole tensor acts on two copies of the finite displacement vector to produce the electric potential at that displacement. Even in something as simple as polar coordinates, this method fails.

The Real Reason Why the Kronecker Delta Is Symmetric

TBS: Because it is a mixed tensor, $\delta^\mu{}_\nu$. Symmetry can only be assessed by comparing interchange of two indices of the same up- or down-ness (contravariance or covariance). We can lower, say, μ in $\delta^\mu{}_\nu$ with the metric:

$$g_{\alpha\mu}\,\delta^\mu{}_\nu = g_{\alpha\nu}$$

The result is the metric, which is always symmetric. Hence $\delta^\mu{}_\nu$ is a symmetric tensor, but not because its matrix looks symmetric. In general, a mixed rank-2 symmetric tensor does not have a symmetric matrix representation. Only when both indices are up, or both down, is its matrix symmetric. The Kronecker delta is a special case that does not generalize. Things are not always what they seem.
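A numpy sketch of the Cartesian shortcut and its failure (the skewed basis and the components are made-up numbers): in a non-orthogonal basis the dot-product needs g_ij, while plain component-wise summation gives the wrong answer.

```python
import numpy as np

E = np.array([[1., 1.],        # columns: a skewed 2-D basis e1=(1,0), e2=(1,1)
              [0., 1.]])
g = E.T @ E                    # metric: g_ij = e_i . e_j

w  = np.array([2., 3.])        # contravariant components of omega (made up)
L_ = np.array([1., 4.])        # contravariant components of L (made up)

naive  = 0.5 * w @ L_          # direct component sum: wrong in skewed coordinates
proper = 0.5 * w @ g @ L_      # (1/2) omega^i g_ij L^j

w_c, L_c = E @ w, E @ L_       # the same vectors in Cartesian components
print(proper, 0.5 * w_c @ L_c) # 18.5 18.5  (agree)
print(naive)                   # 7.0        (wrong)
```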
Tensor Appendices

Pythagorean Relation for 1-forms

Demonstration that 1-forms satisfy the Pythagorean relation for magnitude. [Figure: three 1-forms drawn as families of planes, plus a generic one: $\tilde a = 0\,\mathbf{dx} + 1\,\mathbf{dy}$, $|\tilde a| = 1$; $\tilde a = 1\,\mathbf{dx} + 1\,\mathbf{dy}$, $|\tilde a| = \sqrt2$; $\tilde a = 2\,\mathbf{dx} + 1\,\mathbf{dy}$, $|\tilde a| = \sqrt5$; $\tilde a = a\,\mathbf{dx} + b\,\mathbf{dy}$, $|\tilde a| = \sqrt{a^2 + b^2}$, each with its max-crossing unit vector.] Here, dx is the x basis 1-form, and dy is the y basis 1-form.

From the diagram, a max-crossing vector (perpendicular to the planes of ã) has (x, y) components (1/b, 1/a). Dividing by its magnitude, we get a unit vector:

$$\vec u = \frac{(1/b)\,\mathbf e_x + (1/a)\,\mathbf e_y}{\sqrt{1/b^2 + 1/a^2}}$$

The magnitude of a 1-form is the scalar resulting from the 1-form's action on a max-crossing unit vector:

$$\tilde a(\vec u) = \frac{a\,(1/b) + b\,(1/a)}{\sqrt{1/b^2 + 1/a^2}} = \frac{(a^2 + b^2)/(ab)}{\sqrt{a^2 + b^2}\,/(ab)} = \sqrt{a^2 + b^2}$$

Here's another demonstration that 1-forms satisfy the Pythagorean relation for magnitude. The magnitude of a 1-form is the inverse of the plane spacing. [Figure: the first plane of ã = a dx + b dy cuts the x-axis at A = 1/a and the y-axis at B = 1/b; X is the foot of the perpendicular from the origin O to that plane.] By similar triangles,

$$\triangle OXA \sim \triangle BOA \quad\Rightarrow\quad \frac{OX}{BO} = \frac{OA}{BA}, \qquad OA = \frac1a,\quad BO = \frac1b,\quad BA = \sqrt{\frac1{a^2} + \frac1{b^2}}$$
$$OX = \frac{(1/a)(1/b)}{\sqrt{1/a^2 + 1/b^2}} = \frac1{\sqrt{a^2 + b^2}} \quad\Rightarrow\quad |\tilde a| = \frac1{OX} = \sqrt{a^2 + b^2}$$

Geometric Construction of the Sum of Two 1-Forms

[Figure: planes of ã, b̃, and ã + b̃, with vectors v_a, v_b, and a test vector x such that ã(x) = 2, b̃(x) = 1, (ã + b̃)(x) = 3; also ã(v_a) = 1, b̃(v_a) = 0, ã(v_b) = 0, b̃(v_b) = 1.]

To construct the sum of two 1-forms, ã + b̃:
1. Choose an origin at the intersection of a plane of ã and a plane of b̃.
2. Draw vector v_a from the origin along the planes of b̃, so b̃(v_a) = 0, and of length such that ã(v_a) = 1. [This is the dual vector of ã.]
3. Similarly, draw v_b from the origin along the planes of ã, so ã(v_b) = 0, and b̃(v_b) = 1. [This is the dual vector of b̃.]
4. Draw a plane through the heads of v_a and v_b. This defines the orientation of (ã + b̃).
5. Draw a parallel plane through the common point (the origin). This defines the spacing of the planes of (ã + b̃).
6. Draw all other planes parallel, and with the same spacing. This is the geometric representation of (ã + b̃).

Now we can easily draw the test vector x, such that ã(x) = 2 and b̃(x) = 1.

Fully Anti-symmetric Symbols Expanded

Everyone hears about them, but few ever see them. They are quite sparse: the 3-D fully anti-symmetric symbol has 6 nonzero values out of 27; the 4-D one has 24 nonzero values out of 256.

3-D, from the 6 permutations ijk: 123+, 132−, 312+, 321−, 231+, 213−:

$$\varepsilon_{1jk} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}, \qquad \varepsilon_{2jk} = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \qquad \varepsilon_{3jk} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

4-D, from the 24 permutations αβγδ:

0123+ 0132− 0312+ 0321− 0231+ 0213−
1023− 1032+ 1302− 1320+ 1230− 1203+
2013+ 2031− 2301+ 2310− 2130+ 2103−
3012− 3021+ 3201− 3210+ 3120− 3102+

That is, $\varepsilon_{\alpha\beta\gamma\delta} = +1$ for even permutations of 0123, −1 for odd permutations, and 0 whenever any index repeats; written out as 4×4 matrices $\varepsilon_{\alpha\beta\gamma\delta}$ for each fixed pair (α, β), every entry follows from this sign rule.
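A short Python generator for the fully antisymmetric symbol in any dimension (my own sketch, not from the text); the sign is the permutation's parity, computed by counting inversions.

```python
from itertools import permutations
import numpy as np

def levi_civita(n):
    """Rank-n fully antisymmetric symbol as an n-dimensional array."""
    eps = np.zeros((n,) * n, dtype=int)
    for perm in permutations(range(n)):
        # parity = number of inversions, mod 2
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        eps[perm] = -1 if inv % 2 else 1
    return eps

eps3 = levi_civita(3)
print(np.count_nonzero(eps3))              # 6 of 27 entries nonzero
eps4 = levi_civita(4)
print(np.count_nonzero(eps4))              # 24 of 256
print(eps4[0, 1, 2, 3], eps4[0, 1, 3, 2])  # 1 -1
```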
Metric? We Don't Need No Stinking Metric!

Examples of Useful, Non-metric Spaces

Non-metric spaces are everywhere. A non-metric space has no concept of distance between arbitrary points, or even between arbitrary nearby points (points with infinitesimal coordinate differences). However: non-metric spaces have no concept of distance, but many still have a well-defined concept of area, in the sense of an integral. For example, consider a plot of velocity (of a particle in 1D) vs. time. [Figure: three useful non-metric spaces: (left) velocity vs. time; (middle) pressure vs. volume; (right) momentum vs. position. In each case, there is no distance, but there is area.]

The area under the velocity curve is the total displacement covered. The area under the P-V curve is the work done by an expanding fluid. The area under the momentum-position curve (p-q) is the action of the motion in classical mechanics. Though the points in each of these plots exist on 2D manifolds, the two coordinates are incomparable (they have different units). It is meaningless to ask what the distance is between two arbitrary points on the plane. For example, points A and B on the v-t curve differ in both velocity and time, so how could we define a distance between them (how can we add m/s and seconds)?

In the above cases, we have one coordinate value as a function of the other, e.g. velocity as a function of time. We now consider another case: rather than consider the function as one of the coordinates in a manifold, we consider the manifold as comprising only the independent variables. Then the function is defined on that manifold. As usual, keeping track of the units of all the quantities will help in understanding both the physical and mathematical principles.

For example, the speed of light in air is a function of 3 independent variables: temperature, pressure, and humidity. At 633 nm, the effects amount to speed changes of about +1 ppm per kelvin, −0.4 ppm per mm-Hg of pressure, and +0.01 ppm per 1% change in relative humidity (RH) (see http://patapsco.nist.gov/ ). So, approximately,

$$s(T, P, H) = s_0 + aT - bP + cH$$

where a ≈ 300 (m/s)/K, b ≈ 120 (m/s)/mm-Hg, and c ≈ 3 (m/s)/% are positive constants, and the function s is the speed of light at the given conditions, in m/s. Our manifold is the set of TPH triples, and s is a function on that manifold. We can consider the TPH triple as a (contravariant, column) vector: (T, P, H)^T. These vectors constitute a 3D vector space over the field of reals. s( ) is a real function on that vector space. Note that the 3 components of a vector each have different units: the temperature is measured in kelvins (K), the pressure in mm-Hg, and the relative humidity in %. Note also that there is no metric on (T, P, H) space (which is bigger, 1 K or 1 mm-Hg?). However, the gradient of s is still well defined:

$$\nabla s = \frac{\partial s}{\partial T}\,\mathbf{dT} + \frac{\partial s}{\partial P}\,\mathbf{dP} + \frac{\partial s}{\partial H}\,\mathbf{dH} = a\,\mathbf{dT} - b\,\mathbf{dP} + c\,\mathbf{dH}$$

What are the units of the gradient? As with the vectors, each component has different units: the first is in (m/s) per kelvin; the second in (m/s) per mm-Hg; the third in (m/s) per %. The gradient has different units than the vectors, and is not a part of the original vector space. The gradient ∇s operates on a vector (ΔT, ΔP, ΔH)^T to give the change in speed from one set of conditions, say (T_0, P_0, H_0), to conditions incremented by that vector, (T_0 + ΔT, P_0 + ΔP, H_0 + ΔH).

One often thinks of the gradient as having a second property: it specifies the direction of steepest increase of the function s. But without a metric, "steepest" is not defined. Which is steeper, moving one unit in the temperature direction, or one unit in the humidity direction? In desperation, we might ignore our units of measure, and choose the Euclidean metric (thus equating one unit of temperature with one unit of pressure and one unit of humidity); then the gradient produces a direction of steepest increase. However, with no justification for such a choice of metric, the result is probably meaningless.
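A tiny Python sketch of the linearized gradient acting on a condition-change vector, using the constants a, b, c quoted above; the sample displacements are made up.

```python
# Linearized speed-of-light gradient; each component has its own units.
a, b, c = 300.0, 120.0, 3.0          # (m/s)/K, (m/s)/mm-Hg, (m/s)/%
grad_s = (a, -b, c)                  # 1-form components (ds/dT, ds/dP, ds/dH)

def delta_s(dT, dP, dH):
    """Change in speed (m/s) for a condition-change vector (dT, dP, dH)."""
    return grad_s[0]*dT + grad_s[1]*dP + grad_s[2]*dH

print(delta_s(1.0, 0.0, 0.0))        # +300.0 m/s per kelvin
print(delta_s(0.5, -2.0, 10.0))      # 150 + 240 + 30 = 420.0 m/s
```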
What about basis vectors? The obvious choice is, including units, (1 K, 0 mm-Hg, 0 %)^T, (0 K, 1 mm-Hg, 0 %)^T, and (0 K, 0 mm-Hg, 1 %)^T, or, omitting units: (1, 0, 0), (0, 1, 0), and (0, 0, 1). Note that these are not unit vectors, because there is no such thing as a unit vector here: there is no metric by which to measure one unit. Also, if I ascribe units to the basis vectors, then the components of an arbitrary vector in that basis are dimensionless.

Now let's change the basis: suppose now I measure temperature in some unit equal to 1/2 K (almost the Rankine scale). Now all my temperature measurements double, i.e. T_new = 2 T_old. In other words, (1/2 K, 0, 0)^T is a different basis than (1 K, 0, 0)^T. As expected for a covariant component, the temperature component of the gradient is cut in half when the basis vector halves. So when the half-size gradient component operates on the double-size temperature vector component, the product remains invariant: the speed of light is a function of temperature, not of the units in which you measure temperature.

The above basis change was a simple change of scale of one component in isolation. The other common basis change is a rotation of the axes, mixing the old basis vectors. Can we rotate axes when the units are different for each component? Surprisingly, we can. [Figure: the (T, P, H) axes, with three oblique basis vectors e_1, e_2, e_3.] We simply define new basis vectors as linear combinations of old ones, which is all that a rotation does. For example, suppose we measured the speed of light on 3 different days, and the environmental conditions were different on those 3 days. We choose those measurements as our basis, say e_1 = (300 K, 750 mm-Hg, 20 %), e_2 = (290 K, 760 mm-Hg, 30 %), and e_3 = (290 K, 770 mm-Hg, 10 %). These basis vectors are not orthogonal, but are (of course) linearly independent.

Suppose I want to know the speed of light at (296 K, 752 mm-Hg, 28 %). I decompose this into my new basis and get (0.6, 0.6, −0.2). I compute the speed of light function in the new basis, and then compute its gradient, to get

$$\nabla s = d_1\,\tilde{\mathbf e}^1 + d_2\,\tilde{\mathbf e}^2 + d_3\,\tilde{\mathbf e}^3$$

in the basis 1-forms dual to the e_i. I then operate on the vector with the gradient to find the change in speed: Δs = ∇s(0.6, 0.6, −0.2) = 0.6 d_1 + 0.6 d_2 − 0.2 d_3.

We could extend this to a more complex function, and then the gradient is not constant. For example, a more accurate equation for the speed of light is

$$s(T, P, H) = c_0\left(1 - f\,\frac{P}{273 + T} + g\,H\,(T^2 + 160)\right)$$

where f ≈ 7.86 × 10⁻⁴ and g ≈ 1.5 × 10⁻¹¹ are constants. Now the gradient is a function of position (in TPH space), and there is still no metric.

Comment on the metric: In desperation, you might define a metric, i.e. the length of a vector, to be Δs, the change in the speed of light due to the environmental changes defined by that vector. However, such a metric is in general non-Euclidean (not a Pythagorean relationship), indefinite (non-zero vectors can have zero or negative lengths), and still doesn't define a meaningful dot product. Our more-accurate equation for the speed of light provides examples of these failures.
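The decomposition into the measurement-day basis is just a linear solve; here is a numpy sketch using the three basis vectors above.

```python
import numpy as np

B = np.array([[300., 290., 290.],     # columns: e1, e2, e3 (rows: T, P, H)
              [750., 760., 770.],
              [ 20.,  30.,  10.]])
target = np.array([296., 752., 28.])  # the conditions to decompose

comps = np.linalg.solve(B, target)    # components in the e1, e2, e3 basis
print(comps)                          # [ 0.6  0.6 -0.2]  (up to float rounding)
print(B @ comps)                      # reproduces the target conditions
```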
References:
[Knu] Knuth, Donald, The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, 2nd ed., p. 117.
[Mic] Michelsen, Eric L., Funky Quantum Concepts, unpublished. http://physics.ucsd.edu/~emichels/FunkyQuantumConcepts.pdf
[Sch] Schutz, Bernard, A First Course in General Relativity, Cambridge University Press, 1990.
[Sch2] Schutz, Bernard, Geometrical Methods of Mathematical Physics, Cambridge University Press, 1980.
[Tal] Talman, Richard, Geometric Mechanics, John Wiley and Sons, 2000.

Differential Geometry

Manifolds

A manifold is a space: a set of points with coordinate labels. We are free to choose coordinates many ways, but a manifold must be able to have coordinates that are real numbers. We are familiar with metric manifolds, where there is a concept of distance. However, there are many useful manifolds which have no metric, e.g. phase space (see "We Don't Need No Stinking Metric" above). Even when a space is non-metric, it still has concepts of locality and continuity. Such locality and continuity are defined in terms of the coordinates, which are real numbers. It may also have a volume, e.g. the oft-mentioned phase-space volume. It may seem odd that there's no definition of distance, but there is one of volume. Volume in this case is simply defined in terms of the coordinates, dV = dx¹ dx² dx³ ..., and has no absolute meaning.

Coordinate Bases

Coordinate bases are basis vectors derived from coordinates on the manifold. They are extremely useful, and built directly on basic multivariate calculus. Coordinate bases can be defined a few different ways. Perhaps the simplest comes from considering a small displacement vector on a manifold. We use 2D polar coordinates (r, θ) as our example. A coordinate basis can be defined as the basis in which the components of an infinitesimal displacement vector are just the differentials of the coordinates:

$$d\vec p = (dr, d\theta)$$

[Figure: (left) nearby points p and p + dp, with the components of the displacement vector equal to the coordinate differentials; (right) coordinate basis vectors e_r and e_θ drawn around the manifold. Note that the basis vector e_θ far from the origin must be bigger than near it, because a small change in angle, dθ, causes a bigger displacement vector far from the origin than near it.]

The advantage of a coordinate basis is that it makes dot products, such as a gradient dotted into a displacement, appear in the simplest possible form:

$$\text{Given } f(r, \theta): \qquad df = \nabla f(d\vec p) = \frac{\partial f}{\partial r}\,dr + \frac{\partial f}{\partial\theta}\,d\theta, \qquad d\vec p = (dr, d\theta)$$

The last equality is assured from elementary multivariate calculus. The basis vectors are defined by differentials, but are themselves finite vectors. Any physical vector, finite or infinitesimal, can be expressed in the coordinate basis, e.g. velocity, which is finite.
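A sympy sketch of df in the polar coordinate basis; the scalar field f is an arbitrary made-up example.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
f = r**2 * sp.sin(theta)                 # an arbitrary scalar field f(r, theta)

dr, dtheta = sp.symbols('dr dtheta')     # components of a small displacement
df = sp.diff(f, r)*dr + sp.diff(f, theta)*dtheta   # df = f_r dr + f_theta dtheta
print(df)    # 2*r*sin(theta)*dr + r**2*cos(theta)*dtheta
```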
Vectors as derivatives: There is a huge confusion about writing basis vectors as derivatives. From our study of tensors (earlier), we know that a vector can be considered an operator on a 1-form, which produces a scalar. We now describe how vector fields can be considered operators on scalar functions, which produce scalar fields. I don't like this view, since it is fairly arbitrary, confuses the much more consistent tensor view, and is easily replaced with tensor notation. We will see that, in fact, the derivative "basis vectors" are operators which create 1-forms (dual-basis components), not traditional basis vectors. The vector basis is then implicitly defined as the dual of the dual-basis, which is always the coordinate basis. In detail:

We know from the Tensors chapter that the gradient of a scalar field is a 1-form with partial derivatives as its components. For example:

$$\nabla f(x, y, z) = \frac{\partial f}{\partial x}\,\mathbf{dx} + \frac{\partial f}{\partial y}\,\mathbf{dy} + \frac{\partial f}{\partial z}\,\mathbf{dz}, \qquad \text{where } \mathbf{dx}, \mathbf{dy}, \mathbf{dz} \text{ are basis 1-forms}$$

Many texts define vectors in terms of their action on scalar functions (aka scalar fields), e.g. [Wald p15]. Given a point (x, y, z) and a function f(x, y, z), the definition of a vector v amounts to

$$\vec v = (v^x, v^y, v^z) \quad\text{such that}\quad \vec v[f] \equiv \vec v\cdot\nabla f = v^x\frac{\partial f}{\partial x} + v^y\frac{\partial f}{\partial y} + v^z\frac{\partial f}{\partial z} \quad\text{(a scalar field)}$$

Roughly, the action of v on f produces a scaled directional derivative of f. Given some small displacement dt, as a fraction of |v| and in the direction of v, v tells you how much f will change when moving from (x, y, z) to (x + v^x dt, y + v^y dt, z + v^z dt):

$$df = \vec v[f]\,dt \qquad\text{or}\qquad \frac{df}{dt} = \vec v[f]$$

If t is time, and v is a velocity, then v[f] is the time rate of change of f. While this notation is compact, I'd rather write it simply as the dot-product of v and ∇f, which is more explicit, and consistent with tensors:

$$df = \vec v\cdot\nabla f\,dt \qquad\text{or}\qquad \frac{df}{dt} = \vec v\cdot\nabla f$$

The definition of v above requires an auxiliary function f, which is messy. We remove f by redefining v as an operator:

$$\vec v \equiv v^x\frac{\partial}{\partial x} + v^y\frac{\partial}{\partial y} + v^z\frac{\partial}{\partial z} \qquad\text{(an operator)}$$

Given this form, it looks like ∂/∂x, ∂/∂y, and ∂/∂z are some kind of basis vectors. Indeed, standard terminology is to refer to ∂/∂x, ∂/∂y, and ∂/∂z as the "coordinate basis" for vectors, but they are really operators for creating 1-forms! Then

$$\vec v[f] = v^i\frac{\partial f}{\partial x^i} = v^x\frac{\partial f}{\partial x} + v^y\frac{\partial f}{\partial y} + v^z\frac{\partial f}{\partial z} = \vec v\cdot\nabla f \qquad\text{(a scalar field)}$$

The vector v contracts directly with the 1-form ∇f (without need of any metric); hence v is a vector implicitly defined in the basis dual to the 1-form ∇f. Note that if v = v(x, y, z) is a vector field, then

$$\vec v[f](x, y, z) = \vec v(x, y, z)\cdot\nabla f(x, y, z) \qquad\text{(a scalar field)}$$

These derivative operators can be drawn as basis vectors in the usual manner, as arrows on the manifold. They are just the coordinate basis vectors shown earlier. For example, consider polar coordinates (r, θ). [Figure: coordinate basis vectors e_r and e_θ at several points around the manifold. e_r happens to be unit magnitude everywhere, but e_θ is not.]
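A sympy sketch of a vector acting as a derivative operator, v[f] = v · ∇f; the field and vector are made up.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y + sp.sin(z)          # a sample scalar field

def apply_vector(v, f):
    """v[f] = v . grad(f): the vector acting as a derivative operator."""
    vx, vy, vz = v
    return vx*sp.diff(f, x) + vy*sp.diff(f, y) + vz*sp.diff(f, z)

v = (1, 2, 0)                     # a constant vector, for simplicity
print(apply_vector(v, f))         # 2*x**2 + 2*x*y
```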
The manifold in this case is simply the flat plane, ℝ². The r-coordinate basis vectors are all the same size, but have different directions at different places. The θ-coordinate basis vectors get larger with r, and also vary in direction around the manifold.

Covariant Derivatives

Notation: Due to word-processor limitations, the notations $\partial\vec h/\partial r$ and $\vec h_{,r}$ are used interchangeably below. This description is similar to one in [Sch]. We start with the familiar concepts of derivatives, and see how that evolves into the covariant derivative.

Given a real-valued function of one variable, f(x), we want to know how f varies with x near a value a. The answer is the derivative of f(x), where

$$df = f'(a)\,dx \qquad\text{and therefore}\qquad f(a + dx) \approx f(a) + df = f(a) + f'(a)\,dx$$

Extending to two variables, g(x, y), we'd like to know how g varies in the 2-D neighborhood around a point (a, b), given a displacement vector dr = (dx, dy). We can compute its gradient:

$$\nabla g = \frac{\partial g}{\partial x}\,\mathbf{dx} + \frac{\partial g}{\partial y}\,\mathbf{dy}, \qquad\text{and therefore}\qquad g(a + dx,\, b + dy) \approx g(a, b) + \nabla g(d\vec r)$$

The gradient is also called a directional derivative, because the rate at which g changes depends on the direction in which you move away from the point (a, b). The gradient extends to a vector-valued function (a vector field) h(x, y) = h^x(x, y) i + h^y(x, y) j:

$$\nabla\vec h = \begin{pmatrix} \dfrac{\partial h^x}{\partial x} & \dfrac{\partial h^x}{\partial y} \\[2mm] \dfrac{\partial h^y}{\partial x} & \dfrac{\partial h^y}{\partial y} \end{pmatrix}, \qquad d\vec h = \nabla\vec h(d\vec r) = \frac{\partial\vec h}{\partial x}\,dx + \frac{\partial\vec h}{\partial y}\,dy$$

We see that the columns of ∇h are vectors which are weighted by dx and dy, and then summed to produce a vector result. Therefore, ∇h is linear in the displacement vector dr = (dx, dy). This linearity ensures that it transforms like a duck . . . I mean, like a tensor. Thus ∇h is a rank-2 ($^1_1$) tensor: it takes a single vector input, and produces a vector result.

So far, all this has been in rectangular coordinates. Now we must consider what happens in curvilinear coordinates, such as polar. Note that we're still in a simple, flat space. (We'll get to curved spaces later.) Our goal is still to find the change in the vector value of h( ), given an infinitesimal vector change of position, dx = (dx¹, dx²). We use the same approach as above, where a vector-valued function comprises two (or n) real-valued component functions:

$$\vec h(x^1, x^2) = h^1(x^1, x^2)\,\mathbf e_1 + h^2(x^1, x^2)\,\mathbf e_2$$

However, in this general case, the basis vectors are themselves functions of position (previously the basis vectors were constant everywhere). So h( ) is really

$$\vec h(x^1, x^2) = h^1(x^1, x^2)\,\mathbf e_1(x^1, x^2) + h^2(x^1, x^2)\,\mathbf e_2(x^1, x^2)$$

Hence, partial derivatives of the component functions alone are no longer sufficient to define the change in the vector value of h( ); we must also account for the change in the basis vectors. [Figure: two displacements dx = (dx¹, dx²) between points where the basis vectors e_1(x¹, x²) and e_2(x¹, x²) differ: in one case the components stay constant but the vector changes; in the other the vector stays constant but the components change.] Note that a component of the derivative is distinctly not the same as the derivative of the component (see diagram). Therefore, the ith component of the derivative depends on all the components of the vector field.

We compute partial derivatives of the vector field h(x¹, x²) using the product rule:

$$\frac{\partial\vec h}{\partial x^1} = \frac{\partial h^1}{\partial x^1}\,\mathbf e_1 + h^1\frac{\partial\mathbf e_1}{\partial x^1} + \frac{\partial h^2}{\partial x^1}\,\mathbf e_2 + h^2\frac{\partial\mathbf e_2}{\partial x^1} = \sum_{j=1}^{n}\left(\frac{\partial h^j}{\partial x^1}\,\mathbf e_j + h^j\,\frac{\partial\mathbf e_j}{\partial x^1}\right) \tag{1}$$

This is a vector equation: all terms are vectors, each with components in all n basis directions. It is equivalent to n numerical component equations. Note that ∂h/∂x¹ has components in both (or all n) directions. Of course, we can write similar equations for the derivative in any coordinate direction x^k:

$$\frac{\partial\vec h}{\partial x^k} = \sum_{j=1}^{n}\left(\frac{\partial h^j}{\partial x^k}\,\mathbf e_j + h^j\,\frac{\partial\mathbf e_j}{\partial x^k}\right)$$

Because we must frequently work with components and component equations, rather than whole vector equations, let us now consider only the ith component of the above:

$$\left(\frac{\partial\vec h}{\partial x^k}\right)^i = \frac{\partial h^i}{\partial x^k} + \sum_{j=1}^{n} h^j\left(\frac{\partial\mathbf e_j}{\partial x^k}\right)^i \tag{2}$$
The first term moves out of the summation because each of the first terms in the summation of eq. (1) is a vector pointing exactly in the e_j direction: only the j = i term contributes to the ith component, and a purely e_j-directed vector contributes nothing to the ith component when j ≠ i. Recall that these equations are true for any arbitrary coordinate system; we have made no assumptions about unit length or orthogonal basis vectors. Note that

$$\nabla_k\vec h \equiv \frac{\partial\vec h}{\partial x^k} = \text{the } k\text{th (covariant) component of } \nabla\vec h$$

Since ∇h is a rank-2 tensor, the kth covariant component of ∇h is the kth column of ∇h:

$$\nabla\vec h = \begin{pmatrix} \left(\dfrac{\partial\vec h}{\partial x^1}\right)^{\!1} & \left(\dfrac{\partial\vec h}{\partial x^2}\right)^{\!1} \\[3mm] \left(\dfrac{\partial\vec h}{\partial x^1}\right)^{\!2} & \left(\dfrac{\partial\vec h}{\partial x^2}\right)^{\!2} \end{pmatrix}$$

Since the change in h( ) is linear with small changes in position,

$$d\vec h = \nabla\vec h(d\vec x), \qquad\text{where } d\vec x = (dx^1, dx^2)$$

Going back to equations (1) and (2), we can now write the full covariant derivative of h( ) in 3 ways: vector, verbose component, and compact component:

$$\nabla_k\vec h = \frac{\partial\vec h}{\partial x^k} = \sum_{j=1}^{n}\left(\frac{\partial h^j}{\partial x^k}\,\mathbf e_j + h^j\,\frac{\partial\mathbf e_j}{\partial x^k}\right)$$
$$\left(\nabla_k\vec h\right)^i = \frac{\partial h^i}{\partial x^k} + \sum_{j=1}^{n} h^j\left(\frac{\partial\mathbf e_j}{\partial x^k}\right)^i$$
$$\nabla_k h^i = \frac{\partial h^i}{\partial x^k} + h^j\,\Gamma^i{}_{jk}, \qquad\text{where}\quad \Gamma^i{}_{jk} \equiv \left(\frac{\partial\mathbf e_j}{\partial x^k}\right)^i$$

Aside: Some mathematicians complain that you can't define the Christoffel symbols as derivatives of basis vectors, because you can't compare vectors from two different points of a manifold without already having the Christoffel symbols (aka the connection). Physicists, including Schutz [Sch], say that physics defines how to compare vectors at different points of a manifold, and thus you can calculate the Christoffel symbols. In the end, it doesn't really matter. Either way, by physics or by fiat, the Christoffel symbols are, in fact, the derivatives of the basis vectors.

Christoffel Symbols

Christoffel symbols are the covariant derivatives of the basis vector fields. TBS. [Figure: derivatives of e_r: moving radially from r to r + dr, de_r = 0.]
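The text defines Γ from derivatives of the basis vectors; an equivalent standard route (for a metric-compatible, torsion-free connection) computes them from the metric. A sympy sketch for polar coordinates:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = (r, th)
g = sp.Matrix([[1, 0], [0, r**2]])    # polar-coordinate metric
ginv = g.inv()

def christoffel(i, j, k):
    """Gamma^i_jk = (1/2) g^il (d_k g_lj + d_j g_lk - d_l g_jk)."""
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[i, l] * (sp.diff(g[l, j], coords[k])
                      + sp.diff(g[l, k], coords[j])
                      - sp.diff(g[j, k], coords[l]))
        for l in range(2)))

print(christoffel(0, 1, 1))   # Gamma^r_{theta theta} = -r
print(christoffel(1, 0, 1))   # Gamma^theta_{r theta} = 1/r
```

These match the derivatives of the polar basis vectors: e_θ grows with r, and its direction change feeds back into the r component.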
Visualization of n-Forms

TBS: 1-forms as oriented planes; 2-forms (in 3 or more dimensions) as oriented parallelograms; 3-forms (in 3 or more dimensions) as oriented parallelepipeds; 4-forms (in 4-space): how are they oriented??

Review of Wedge Products and Exterior Derivative

This is a quick insert that needs proper work. ??

1-D: I don't know of any meaning for a wedge-product in 1-D, or even a vector. Also, the 1-D exterior derivative is a degenerate case, because the exterior of a line segment is just the 2 endpoints, and all functions are scalar functions. In all higher dimensions, the exterior or boundary of a region is a closed path/surface/volume/hyper-volume. In 1-D the boundary of a line segment cannot be closed. So instead of integrating around a closed exterior (aka boundary), we simply take the difference in the function value at the endpoints, divided by a differential displacement. This is simply the ordinary derivative of a function, f′(x).

2-D: The exterior derivative of a scalar function f(x, y) follows the 1-D case, and is similarly degenerate, where the exterior is simply the two endpoints of a differential displacement. Since the domain is a 2-D space, the displacements are vectors, and there are 2 derivatives, one for displacements in x, and one for displacements in y. Hence the exterior derivative is just the one-form gradient of the function:

$$\mathbf{df} = \frac{\partial f}{\partial x}\,\mathbf{dx} + \frac{\partial f}{\partial y}\,\mathbf{dy}$$

In 2-D, the wedge product dx ∧ dy is a two-form, which accepts two vectors to produce the signed area of the parallelogram defined by them. A signed area can be + or −; a counter-clockwise direction is positive, and clockwise is negative.

$$\mathbf{dx}\wedge\mathbf{dy}(\vec v, \vec w) = \det\begin{pmatrix} \mathbf{dx}(\vec v) & \mathbf{dx}(\vec w) \\ \mathbf{dy}(\vec v) & \mathbf{dy}(\vec w) \end{pmatrix} = \det\begin{pmatrix} v^x & w^x \\ v^y & w^y \end{pmatrix} = -\,\mathbf{dx}\wedge\mathbf{dy}(\vec w, \vec v)$$

The exterior derivative of a 1-form is the ratio of the closed path integral of the 1-form to the area of the parallelogram of two vectors, for infinitesimal vectors. This is very similar to the definition of curl, only applied to a 1-form instead of a vector field. [Figure: a small rectangle with sides dx and dy; the path runs over segments 1 (bottom), 2 (right), 3 (top), 4 (left), with 1-form components ω_x(r), ω_x(r + dy), ω_y(r), ω_y(r + dx).]

Consider the horizontal and vertical contributions to the path integral of ω̃(x, y) = ω_x(x, y) dx + ω_y(x, y) dy separately:

$$\int_1 + \int_3 = \omega_x(\vec r)\,dx - \omega_x(\vec r + d\vec y)\,dx = -\frac{\partial\omega_x}{\partial y}\,dx\,dy$$
$$\int_2 + \int_4 = \omega_y(\vec r + d\vec x)\,dy - \omega_y(\vec r)\,dy = +\frac{\partial\omega_y}{\partial x}\,dx\,dy$$

The horizontal (segments 1 & 3) integrals are linear in dx, because that is the length of the path. They are linear in dy, because dy is proportional to the difference in ω_x. Hence, the contribution is linear in both dx and dy, and therefore proportional to the area (dx)(dy). A similar argument holds for the vertical contribution, segments 2 & 4. Therefore, the path integral varies proportionately to the area enclosed by two orthogonal vectors. It is easy to show this is true for any two vectors, and any shaped area bounded by an infinitesimal path. For example, when you butt up two rectangles, the path integral around the combined boundary equals the sum of the individual path integrals, because the contributions from the common segment cancel from each rectangle, and hence omitting them does not change the path integral. The area integrals clearly add.

3-D: In 3-D, the wedge product

$$\mathbf{dx}\wedge\mathbf{dy}\wedge\mathbf{dz}(\vec u, \vec v, \vec w) = \det\begin{pmatrix} \mathbf{dx}(\vec u) & \mathbf{dx}(\vec v) & \mathbf{dx}(\vec w) \\ \mathbf{dy}(\vec u) & \mathbf{dy}(\vec v) & \mathbf{dy}(\vec w) \\ \mathbf{dz}(\vec u) & \mathbf{dz}(\vec v) & \mathbf{dz}(\vec w) \end{pmatrix} = \det\begin{pmatrix} u^x & v^x & w^x \\ u^y & v^y & w^y \\ u^z & v^z & w^z \end{pmatrix}$$

is a 3-form, antisymmetric under exchange of any two arguments ((u, v, w) = −(u, w, v), etc.), which can either:
1. accept 2 vectors to produce an oriented area; it doesn't have a sign, it has a direction. Analogous to the cross-product. Or,
2. accept 3 vectors to produce a signed volume.

The exterior derivative of a scalar or 1-form field is essentially the same as in the 2-D case, except that now the areas defined by vectors are oriented instead of simply signed. In this case, the exterior is a closed surface; the interior is a volume.
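A numpy sketch of the determinant formulas above — signed area for dx∧dy and signed volume for dx∧dy∧dz — with made-up vectors:

```python
import numpy as np

def wedge_area(v, w):
    """dx^dy acting on (v, w): signed area of their parallelogram (2-D)."""
    return np.linalg.det(np.column_stack([v, w]))

v, w = np.array([2.0, 0.0]), np.array([1.0, 3.0])
print(wedge_area(v, w), wedge_area(w, v))   # 6.0 -6.0  (antisymmetric)

def wedge_volume(u, v, w):
    """dx^dy^dz acting on (u, v, w): signed volume (3-D)."""
    return np.linalg.det(np.column_stack([u, v, w]))

u  = np.array([1.0, 0.0, 0.0])
v3 = np.array([0.0, 1.0, 0.0])
w3 = np.array([0.0, 0.0, 1.0])
print(wedge_volume(u, v3, w3), wedge_volume(u, w3, v3))  # 1.0 -1.0
```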
Math Tricks

Here are some math tricks that either come up a lot and are worth knowing about, or are just fun and interesting.

Math Tricks That Come Up A Lot

The Gaussian Integral, $\int_{-\infty}^{\infty} e^{-ax^2}\,dx$. You can look this up anywhere, but here goes: we'll evaluate the basic integral $\int_{-\infty}^{\infty} e^{-x^2}\,dx$, and throw in the a at the end by a simple change of variable. First, we square the integral, then rewrite the second factor calling the dummy integration variable y instead of x:

$$\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)\left(\int_{-\infty}^{\infty} e^{-y^2}\,dy\right) = \iint dx\,dy\; e^{-(x^2 + y^2)}$$

This is just a double integral over the entire x-y plane, so we can switch to polar coordinates. Note that the exponential integrand is constant at constant r, so we can replace the differential area dx dy with 2πr dr:

$$\text{Let } r^2 = x^2 + y^2: \qquad \iint dx\,dy\; e^{-(x^2+y^2)} = \int_0^\infty 2\pi r\, e^{-r^2}\,dr = \pi\left[-e^{-r^2}\right]_0^\infty = \pi$$
$$\Rightarrow\quad \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt\pi, \qquad\text{and, with } x \to \sqrt a\,x: \quad \int_{-\infty}^{\infty} e^{-ax^2}\,dx = \sqrt{\frac{\pi}{a}}$$

Math Tricks That Are Fun and Interesting

$\int \frac{\sin x}{x}\,dx$ — TBS.

Continuous Infinite Crossings: The following function has an infinite number of zero crossings near the origin, but is everywhere continuous (even at x = 0). That seems bizarre to me. Recall the definition: f(x) is continuous at a iff $\lim_{x\to a} f(x) = f(a)$. Then let

$$f(x) = \begin{cases} x\sin\!\left(\dfrac1x\right), & x \ne 0 \\[1mm] 0, & x = 0 \end{cases} \qquad \lim_{x\to 0} f(x) = 0 = f(0) \quad\Rightarrow\quad f \text{ is continuous}$$
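A quick numerical check of the Gaussian result with scipy (my own sketch):

```python
import numpy as np
from scipy import integrate

for a in (1.0, 2.0, 5.0):
    val, _ = integrate.quad(lambda x: np.exp(-a * x**2), -np.inf, np.inf)
    print(val, np.sqrt(np.pi / a))   # each pair agrees
```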
Picture Phasors

Phasors are complex numbers that represent sinusoids. The phasor defines the magnitude and phase of the sinusoid, but not its frequency. See Funky Electromagnetic Concepts for a full description.

Future Funky Mathematical Physics Topics

1. Finish theoretical importance of IBP
2. Finish Legendre transformations
3. Sturm-Liouville
4. Pseudo-tensors (ref. Jackson)
5. Tensor densities
6. $f(z) = \int_{-\infty}^{\infty} dx\; e^{-x^2}/(x - z)$ has no poles, but has a branch cut. Where is the branch cut, and what is the change in f(z) across it?

Appendices

References

[A&S] Abramowitz and Stegun, Handbook of Mathematical Functions.
[Chu] Churchill, Ruel V., Brown, James W., and Verhey, Roger F., Complex Variables and Applications, 1974, McGraw-Hill. ISBN 0-07-010855-2.
[Det] Dettman, John W., Applied Complex Variables, 1965, Dover. ISBN 0-486-64670-X.
[F&W] Fetter, Alexander L., and John Dirk Walecka, Theoretical Mechanics of Particles and Continua, McGraw-Hill Companies, February 1, 1980. ISBN-13: 978-0070206588.
[Jac] Jackson, Classical Electrodynamics, 3rd ed.
[M&T] Marion & Thornton, 4th ed.
[One] O'Neill, Barrett, Elementary Differential Geometry, 2nd ed. ISBN 0-12-526745-2.
[Sch] Schutz, Bernard F., A First Course in General Relativity, Cambridge University Press (January 31, 1985), ISBN 0521277035.
[Sch2] Schutz, Bernard F., Geometrical Methods of Mathematical Physics, Cambridge University Press. ISBN ??
[Sea] Sean, Sean's Applied Math Book, 1/24/2004. http://www.its.caltech.edu/~sean/book.html
[Tal] Talman, Richard, Geometric Mechanics, Wiley-Interscience; 1st edition (October 4, 1999), ISBN 0471157384.
[Tay] Taylor, Angus E., General Theory of Functions and Integration, 1985, Dover. ISBN 0-486-64988-1.
[W&M] Walpole, Ronald E., and Raymond H. Myers, Probability and Statistics for Engineers and Scientists, 3rd edition, 1985, Macmillan Publishing Company. ISBN 0-02-424170-9.

Glossary

Definitions of common mathematical physics terms. Special mathematical definitions are noted by "(math)". These are technical mathematical terms that you shouldn't have to know, but that will make reading math books a lot easier, because they are very common. These definitions try to be conceptual and accurate, but comprehensible to normal people (including physicists, but not mathematicians).

1-1: A mapping from a set A to a set B is 1-1 if every value of B under the map has only one value of A that maps to it. In other words, given the value of B under the map, we can uniquely find the value of A which maps to it. However, see "1-1 correspondence". See also injection.

1-1 correspondence: A mapping between two sets A and B is a 1-1 correspondence if it uniquely associates each value of A with a value of B, and each value of B with a value of A. Synonym: bijection.

accumulation point: syn. for limit point.

adjoint: (1) In inner products — hard to define simply (see text). Crudely, the adjoint of an operator is the operator which preserves the inner product of two vectors as ⟨v|(O|w⟩) = (O†|v⟩)† |w⟩. When viewing matrices as operators, the adjoint of a matrix is the hermitian conjugate. This has nothing to do with matrix adjoints (below). (2) In matrices, the transpose of the cofactor matrix is called the adjoint of a matrix. This has nothing to do with linear operator adjoints (above).

analytic: A function is analytic in some domain iff it has continuous derivatives to all orders, i.e. is infinitely differentiable. For complex functions of complex variables, if a function has a continuous first derivative in some region, then it has continuous derivatives to all orders, and is therefore analytic.

analytic geometry: the use of coordinate systems along with algebra and calculus to study geometry. Aka coordinate geometry.

bijection: Both an injection and a surjection, i.e. 1-1 and onto. A mapping between sets A and B is a bijection iff it uniquely associates a value of A with every value of B. Synonym: 1-1 correspondence.

BLUE: In statistics, Best Linear Unbiased Estimator.

branch point: A branch point is a point in the domain of a complex function f(z), z also complex, with this property: when you traverse a closed path around the branch point, following continuous values of f(z), f(z) has a different value at the end point of the path than at the beginning point, even though the beginning and end point are the same point in the domain. Example TBS: square root around the origin.

boundary point: (math) see limit point.

C or ℂ: the set of complex numbers.

closed: (math) contains all its limit points. For finite regions, a closed region includes its boundary. Note that in math talk, a set can be both open and closed! The surface of a sphere is open (every point has a neighborhood in the surface) and closed (no excluded limit points; in fact, no limit points).

cofactor: The ij-th minor of an n×n matrix is the determinant of the (n−1)×(n−1) matrix formed by crossing out the i-th row and j-th column. A cofactor is just a minor with a plus or minus sign affixed, according to whether (i, j) is an even or odd number of steps away from (1, 1): $C_{ij} = (-1)^{i+j} M_{ij}$.

compact: (math) for our purposes, closed and bounded [Tay thm 2-6I p66]. A compact region may comprise multiple (an infinite number of??) disjoint closed and bounded regions.

congruence: a set of 1D non-intersecting curves that cover every point of a manifold. Equivalently, a foliation of a manifold with 1D curves. Compare to foliation.

contrapositive: The contrapositive of the statement "If A then B" is "If not B then not A". The contrapositive is equivalent to the statement: if the statement is true (or false), the contrapositive is true (or false), and if the contrapositive is true (or false), the statement is true (or false).
convergent: approaches a definite limit.

converse: The converse of the statement "If A then B" is "If B then A". In general, if a statement is true, its converse may be either true or false. The converse is the contrapositive of the inverse, and hence the converse and inverse are equivalent statements.

connected: There exists a continuous path between any two points in the set (region). See also: simply connected. [One p178]

coordinate geometry: the use of coordinate systems along with algebra and calculus to study geometry. Aka analytic geometry.

diffeomorphism: a C∞ map with a C∞ inverse, from one manifold onto another. "Onto" implies the mapping covers the whole range manifold. Two diffeomorphic manifolds are topologically identical, but may have different geometries.

divergent: not convergent: a sequence is divergent iff it is not convergent.

domain: of a function: the set of numbers (usually real or complex) on which the function is defined.

entire: A complex function is entire iff it is analytic over the entire complex plane. An entire function is also called an "integral function".

essential singularity: a pole of infinite order, i.e. a singularity around which the function is unbounded, and which cannot be made finite by multiplication by any power of (z − z₀) [Det p165].

factor: a number (or more general object) that is multiplied with others. E.g., in (a + b)(x + y), there are two factors: (a + b) and (x + y).

finite: a non-zero number. In other words, not zero, and not infinity.

foliation: a set of non-intersecting submanifolds that cover every point of a manifold. E.g., 3D real space can be foliated into 2D sheets stacked on top of each other, or 1D curves packed around each other. Compare to congruence.

holomorphic: syn. for analytic. Other synonyms are regular and differentiable. Also, a holomorphic map is just an analytic function.

homomorphic: something from abstract categories that should not be confused with homeomorphism.

homeomorphism: a continuous 1-1 map, with a continuous inverse, from one manifold onto another. "Onto" implies the mapping covers the whole range manifold. A homeomorphism that preserves distance is an isometry.

identify: to establish a 1-1 and onto relationship. If we identify two mathematical things, they are essentially the same thing.

iff: if, and only if.

injection: A mapping from a set A to a set B is an injection if it is 1-1, that is, if given a value of B in the mapping, we can uniquely find the value of A which maps to it. Note that every value of A is included by the definition of mapping [CRC 30th]. The mapping does not have to cover all the elements of B.

integral function: syn. for entire function: a function that is analytic over the entire complex plane.

inverse: The inverse of the statement "If A then B" is "If not A then not B". In general, if a statement is true, its inverse may be either true or false. The inverse is the contrapositive of the converse, and hence the converse and inverse are equivalent statements.

invertible: A map (or function) from a set A to a set B is invertible iff for every value in B, there exists a unique value in A which maps to it. In other words, a map is invertible iff it is a bijection.

isolated singularity: a singularity at a point, which has a surrounding neighborhood of analyticity [Det p165].
isometry: a homeomorphism that preserves distance, i.e. a continuous, invertible (1-1) map from one manifold onto another that preserves distance ("onto" in the mathematical sense).

isomorphic: "same structure". A widely used general term, with no single precise definition.

limit point: of a domain: a boundary of a region of the domain. For example, the open interval (0, 1) on the number line and the closed interval [0, 1] both have limit points of 0 and 1. In this case, the open interval excludes its limit points; the closed interval includes them (definition of closed). Some definitions count all points in the domain as limit points as well. Formally, a point p is a limit point of domain D iff every open subset containing p also contains a point in D other than p.

mapping: syn. function. A mapping from a set A to a set B defines a value of B for every value of A [CRC 30th].

meromorphic: A function is meromorphic on a domain iff it is analytic except at a set of isolated poles of finite order (i.e., non-essential poles). Note that some branch points are not poles (such as that of √z at zero), so a function including such a branch point is not meromorphic.

minor: The ij-th minor of an n×n matrix is the determinant of the (n−1)×(n−1) matrix formed by crossing out the i-th row and j-th column.

ℕ: the set of natural numbers (positive integers).

oblique: non-orthogonal and not parallel.

one-to-one: see 1-1.

onto: covering every possible value. A mapping from a set A onto the set B covers every possible value of B, i.e. the mapping is a surjection.

open: (math) A region is open iff every point in the region has a finite neighborhood of points around it that are also all in the region. In other words, every point is an interior point. Note that "open" is not the same as "not closed"; a region can be both open and closed.

pole: a singularity near which a function is unbounded, but which becomes finite by multiplication by (z − z₀)^k for some finite k [Det p165]. The value k is called the order of the pole.

positive definite: a matrix or operator which is > 0 for all non-zero operands. It may be 0 when acting on a zero operand, such as the zero vector. This implies that all eigenvalues > 0.

positive semidefinite: a matrix or operator which is ≥ 0 for all non-zero operands. It may be 0 when acting on some non-zero operands. This implies that all eigenvalues ≥ 0.

PT: perturbation theory.

Q or ℚ: the set of rational numbers. Q⁺: the set of positive rationals.

R or ℝ: the set of real numbers.

removable singularity: an isolated singularity that can be made analytic by simply defining a value for the function at that point. For example, f(x) = sin(x)/x has a singularity at x = 0. You can remove it by defining f(0) = 1. Then f is everywhere analytic. [Det p165]

residue: The residue of a complex function at a complex point z₀ is the a₋₁ coefficient of the Laurent expansion about the point z₀.

simply connected: There are no holes in the set (region), not even point holes. I.e., you can shrink any closed curve in the region down to a point, the curve staying always within the region (including at the point).

singularity: of a function: a point on a boundary (i.e. a limit point) of the domain of analyticity, where the function is not analytic [Det def 4.5.2 p156]. Note that the function may be defined at the singularity, but it is not analytic there. E.g., √z is continuous at 0, but not differentiable there.
smooth: for most authors, "smooth" means infinitely differentiable, i.e. C∞. For some authors, though, "smooth" means at least one continuous derivative, i.e. C¹, with first derivative continuous. This latter definition looks smooth to our eye (no kinks, or sharp points).

surjection: A mapping from a set A onto the set B, i.e. one that covers every possible value of B. Note that every value of A is included by the definition of mapping [CRC 30th]; however, multiple values of A may map to the same value of B.

term: a number (or more general object) that is added to others. E.g., in ax + by + cz, there are three terms: ax, by, and cz.

uniform convergence: a series of functions f_n(z) is uniformly convergent in an open (or partly open) region iff its error after the Nth function can be made arbitrarily small (< ε) with a single value of N (dependent only on ε) for every point in the region. I.e., given ε, a single N works for all points z in the region [Chu p156].

voilà: French for "see there!"

WLOG or WOLOG: without loss of generality.

Z or ℤ: the set of integers. Z⁺ or ℤ⁺: the set of positive integers (natural numbers).

Formulas

Completing the square:

$$ax^2 + bx = a\left(x + \frac{b}{2a}\right)^2 - \frac{b^2}{4a} \qquad \left(x\text{-shift} = \frac{b}{2a}\right)$$

Integrals:

$$\int_{-\infty}^{\infty} dx\; e^{-ax^2} = \sqrt{\frac{\pi}{a}}, \qquad \int_{-\infty}^{\infty} dx\; x^2 e^{-ax^2} = \frac12\sqrt{\frac{\pi}{a^3}}, \qquad \int_0^{\infty} dr\; r\, e^{-ar^2} = \frac1{2a}$$

Statistical distributions:

$$\chi^2:\ \ \text{avg} = \nu, \quad \sigma^2 = 2\nu \qquad\qquad \text{exponential}:\ \ \text{avg} = \tau, \quad \sigma = \tau$$

Error function [A&S]:

$$\mathrm{erf}(x) \equiv \frac{2}{\sqrt\pi}\int_0^x e^{-t^2}\,dt$$

Gaussian included probability between −z and +z:

$$\int_{-z}^{+z} \mathrm{pdf}_{gaussian}(u)\,du = \frac{1}{\sqrt{2\pi}}\int_{-z}^{+z} e^{-u^2/2}\,du = \mathrm{erf}\!\left(\frac{z}{\sqrt2}\right) \qquad \left(\text{let } t = u/\sqrt2,\ du = \sqrt2\,dt\right)$$

Special Functions:

$$\Gamma(a) = \int_0^\infty dx\; x^{a-1} e^{-x}, \qquad \Gamma(n+1) = n!, \qquad \Gamma(a+1) = a\,\Gamma(a), \qquad \Gamma(1/2) = \sqrt\pi$$

Index

The index is not yet developed, so go to the web page on the front cover, and text-search in this document.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387505650520325, "perplexity": 2363.9613915609525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00419.warc.gz"}
https://www.examcopilot.com/subjects/radio-navigation/dme/d-m-e-hold-entries
# DME Hold Entries

RNAV

An unusual question for Radio Navigation, as holds and hold entries are covered in Air Law. Sector one is a parallel join and sector three is a direct join. These are the only two possibilities, because you are flying either directly along, or directly back along, the hold track when the DME (Distance Measuring Equipment) arc is completed. Sector two, an offset join, cannot be achieved in this fashion. If you have not studied these sector joins yet, do not worry. The answer is sectors 1 and 3, and that answer only.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579484224319458, "perplexity": 2503.726041113305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586239.2/warc/CC-MAIN-20210612162957-20210612192957-00477.warc.gz"}
https://arxiv-export-lb.library.cornell.edu/abs/2108.02453v1
math.AP

# Title: Optimal decay of compressible Navier-Stokes equations with or without potential force

Abstract: In this paper, we investigate the optimal decay rate for the higher order spatial derivative of global solution to the compressible Navier-Stokes (CNS) equations with or without potential force in three-dimensional whole space. First of all, it has been shown in \cite{guo2012} that the $N$-th order spatial derivative of global small solution of the CNS equations without potential force tends to zero with the $L^2$-rate $(1+t)^{-(s+N-1)}$ when the initial perturbation around the constant equilibrium state belongs to $H^N(\mathbb{R}^3)\cap \dot H^{-s}(\mathbb{R}^3)$ $(N \ge 3 \text{ and } s\in [0, \frac32))$. Thus, our first result improves this decay rate to $(1+t)^{-(s+N)}$. Secondly, we establish the optimal decay rate for the global small solution of the CNS equations with potential force as time tends to infinity. These decay rates for the solution itself and its spatial derivatives are really optimal, since the upper bounds of the decay rates coincide with the lower ones.

Subjects: Analysis of PDEs (math.AP)
MSC classes: 35Q30, 35Q35, 35B40
Cite as: arXiv:2108.02453 [math.AP] (or arXiv:2108.02453v1 [math.AP] for this version)

## Submission history

From: Minling Li
[v1] Thu, 5 Aug 2021 08:40:19 GMT (34kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363115429878235, "perplexity": 1041.940522259886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103620968.33/warc/CC-MAIN-20220629024217-20220629054217-00614.warc.gz"}
https://www.physicsforums.com/threads/can-this-4-bit-adder-be-further-simplified.639533/
# Can this 4 bit adder be further simplified?

1. Sep 28, 2012

### Psinter

Hello, I'm new to logic circuits and I was making a 4-bit binary adder, and I thought I could maybe simplify it. However, I can't find anything simpler than what I already have. What I wanted to know is: is my circuit already in its simplest form, or am I doing something wrong in my attempts to simplify it (meaning it can still be simplified)? The circuit works as follows: We have the 1st 4-bit number: ABCD. We have the 2nd 4-bit number: EFGH. The total sum, one bit at a time, would be: (D + H) and (C + G) and (B + F) and (A + E). In other words:

ABCD
EFGH
---------
XXXX

where (D + H) is the only one that doesn't receive any carry, because it's the initial point. So I need 4 outputs with their respective carries, as illustrated. As you can see, the same pattern of circuits starts at the second sum. Meaning, (C + G), (B + F), and (A + E) are the same circuits. I tried to simplify one of them so I could simplify the rest of the adder. However, I can't find any way to simplify it further. When I look at the Karnaugh map I made out of the truth table of the circuit that repeats, I don't know what to do from there, because I don't have any mapping groups.

Truth Table: $$((C \oplus G) \oplus CARRY) \wedge CARRY$$
\begin{array}{|c|c|c|c|} \hline C & G & CARRY & F1 \\ \hline 0 & 0 & 0 & 0 \\ \hline 0 & 0 & 1 & 1 \\ \hline 0 & 1 & 0 & 0 \\ \hline 0 & 1 & 1 & 0 \\ \hline 1 & 0 & 0 & 0 \\ \hline 1 & 0 & 1 & 0 \\ \hline 1 & 1 & 0 & 0 \\ \hline 1 & 1 & 1 & 1 \\ \hline \end{array}

Karnaugh Map: $$((C \oplus G) \oplus CARRY) \wedge CARRY$$
\begin{array}{|c|c|c|c|c|c|} \hline & G\,CARRY & 00 & 01 & 11 & 10 \\ \hline C & & & & & \\ \hline 0 & & & 1 & & \\ \hline 1 & & & & 1 & \\ \hline \end{array}

Does all this mean that I cannot simplify my adder any further?

2. Oct 4, 2012

### Psinter

This is wrong, never mind: this adder is wrongly designed. This is not an adder. It says that 0 + 0 = 1 with carry 1 if the previous sum has a carry of 1. We know that this is not true, so this is wrong.
3. Oct 7, 2012

### Enthalpy

I confirm it's wrong, because each pair of bits must enter an AND gate. The XOR doesn't suffice. If you like circuitry, you can give a look at carry propagation and anticipation, both for adders and counters.

4. Oct 7, 2012

### skeptic2

How do you want to simplify it — by reducing the number of ICs? One way to simplify it would be to replace all the ICs with a PROM. Use the address lines as inputs and the data lines as outputs. For every combination of inputs (all addresses), program the correct output. You could even have one of the output lines designated as a carry out and one of the inputs as a carry in. That way they could be cascaded.

5. Oct 10, 2012

### Enthalpy

http://www.physics.wisc.edu/undergrads/courses/fall2012/623/ds/74LS283.pdf
http://en.wikipedia.org/wiki/Adder_(electronics)
http://www.google.de/search?q="4+bi...DsbZsgbDpYHoCQ&ved=0CB4QsAQ&biw=1200&bih=1728

Beware that what was a "simple" design with TTL gates isn't "simple" with present-day CMOS, and a "simple" breadboard isn't a "simple" chip design — a simple breadboard would buy a 4-bit full adder chip... Especially, CMOS can make good XOR gates with 3 inputs, as well as good majority logic gates (= at least 2 inputs of 3 are logical 1).

http://book.huihoo.com/design-of-vlsi-systems/ch06/ch06.html

Interesting: the LS283 computed the carry over 4 bits directly to save time, despite taking more gates. Again, many-input NANDs fit LS very well, and heavy output loading was a small penalty; different story with CMOS. Carry lookahead saves time by splitting the 1-bit adder carry signal into a "generate carry" and a "propagate carry" (if a carry is input to this bit). These two signals can enter the lookahead unit which, for wide words, is a tree. Depicted here as a binary tree:
http://net.pku.edu.cn/~course/cs101/2007/resource/Intro2Algorithm/book6/chap29.htm
but normally it would be a radix-four tree:
http://pages.cs.wisc.edu/~jsong/CS352/Readings/CLAs.pdf

Carry lookaheads exist for counters as well, but are easier, because a counter passes through FC and FE before attaining FF, giving more time.
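For reference, a minimal Python sketch of the correct per-bit logic the thread converges on (sum = a ⊕ b ⊕ carry; carry-out = majority, built from AND/OR), rippled over 4 bits; the bit-list interface is my own choice, not any poster's design:

```python
def full_adder(a, b, carry_in):
    """One full-adder bit: XOR for the sum, AND/OR for the carry."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))   # majority of the three inputs
    return s, carry_out

def add4(x, y):
    """Ripple-carry addition of two 4-bit numbers as bit lists (MSB first)."""
    carry, out = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return list(reversed(out)), carry

print(add4([0, 1, 1, 0], [0, 1, 1, 1]))   # 6 + 7 = 13: ([1, 1, 0, 1], 0)
print(full_adder(0, 0, 1))                # (1, 0): 0 + 0 + carry 1 = 1, carry 0
```

Note the last line: 0 + 0 with a carry-in of 1 gives sum 1 and carry-out 0, which is exactly the case the original circuit got wrong.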
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9800403714179993, "perplexity": 696.7682274338797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155817.29/warc/CC-MAIN-20180919024323-20180919044323-00217.warc.gz"}
https://math.stackexchange.com/questions/81583/how-do-i-prove-that-xp-xa-is-irreducible-in-a-field-with-p-elements-when/81637
# How do I prove that $x^p-x+a$ is irreducible in a field with $p$ elements when $a\neq 0$?

Let $p$ be a prime. How do I prove that $x^p-x+a$ is irreducible in a field with $p$ elements when $a\neq 0$?

Right now I'm able to prove that it has no roots and that it is separable, but I don't have a clue as to how to prove it is irreducible. Any ideas?

• When recently reading an article about the Artin-Schreier theorem, some properties of the so-called Artin extensions were used, and, if I am not mistaken, those are intimately related to the polynomials of the form $x^p-x+a$. Is there indeed an error here? And is there any reference to learn more in this direction? Thanks in advance. – awllower Nov 13 '11 at 14:05
• @awllower: This question may get you started? – Jyrki Lahtonen Nov 13 '11 at 16:39
• @JyrkiLahtonen: Thanks for the question. I really appreciate this. – awllower Nov 13 '11 at 16:46
• The original version of the question (which I've now edited) omitted the requirement that $p$ be a prime. This requirement was necessary: the polynomial $x^4-x+a$ over $\mathbb{F}_4$ is reducible for every $a \in \mathbb{F}_4$! – darij grinberg Sep 18 '16 at 3:20
• This is also a particular case of math.stackexchange.com/questions/136164. – darij grinberg Sep 18 '16 at 3:34

Greg Martin and zyx have given you IMHO very good answers, but they rely on a few basic facts from Galois theory and/or group actions. Here is a more elementary but also longer approach.

Because we are in a field with $p$ elements, we know that $p$ is the characteristic of our field. Hence, the polynomial $g(x)=x^p-x$ has the property $$g(x_1+x_2)=g(x_1)+g(x_2)$$ whenever $x_1$ and $x_2$ are two elements of an extension field of $\mathbb{F}_p$. By little Fermat we know that $g(k)=k^p-k=0$ for all $k\in \Bbb{F}_p$. Therefore, if $r$ is one of the roots of $f(x)=x^p-x+a$, then $$f(r+k)=g(r+k)+a=g(r)+g(k)+a=f(r)+g(k)=0,$$ so all the elements $r+k$ with $k \in \Bbb{F}_p$ are roots of $f(x)$, and as there are $p$ of them, they must be all the roots. It sounds like you have already shown that $r$ cannot be an element of $\Bbb{F}_p$.

Now assume that $f(x)=f_1(x)f_2(x)$, where both factors $f_1(x),f_2(x)\in \Bbb{F}_p[x]$. From the above consideration we can deduce that $$f_1(x)=\prod_{k\in S}(x-(r+k)),$$ where $S$ is some subset of the field $\Bbb{F}_p$. Write $\ell=|S|=\deg f_1(x)$. Expanding the product we see that $$f_1(x)=x^\ell-x^{\ell-1}\sum_{k\in S}(r+k)+\text{lower degree terms}.$$ This polynomial was assumed to have coefficients in the field $\Bbb{F}_p$. From the above expansion we read that the coefficient of degree $\ell-1$ is $-\left(|S|\cdot r+\sum_{k\in S}k\right)$. This is an element of $\Bbb{F}_p$ if and only if the term $|S|\cdot r\in\Bbb{F}_p$. Because $r\notin \Bbb{F}_p$, this can only happen if $|S|\cdot1_{\Bbb{F}_p}=0_{\Bbb{F}_p}$. In other words, $f_1(x)$ must be either of degree zero or of degree $p$.

• Well done, it is a good proof. – awllower Nov 13 '11 at 14:01
• I love your proof. One thing is bothering me, though. And it's probably really obvious. How do you know that if $|S|\cdot r\in F_p$, then $|S|$ must be a multiple of $p$ in order for $|S|\cdot r$ to be in $F_p$? I know it has to do with the fact that $r\notin F_p$, and I have some intuition for it, but I don't know how to prove it. – MathTeacher Nov 14 '11 at 4:21
• @MathMastersStudent: If $|S|$ is not a multiple of $p$, then $|S|\cdot 1$ is an invertible element of $F_p$.
So if $|S|\cdot r= b$ with $b\in F_p$, then $r=b|S|^{-1}$ would be in the prime field $F_p$ as well, contradicting known facts. – Jyrki Lahtonen Nov 14 '11 at 7:20
• Great answer as usual, Jyrki: +1. Just to show you how pathologically nit-picking some guys are, I would write $|S|\cdot 1_{F_p}=0_{F_p}$ rather than $|S|=0_{F_p}$, since $|S|$ is an integer and the integers are not included in a finite field... – Georges Elencwajg Jul 24 '13 at 8:05
• This argument also applies to any field of characteristic $p$. – Lao-tzu Dec 31 '14 at 12:41

$x \to x^p$ is an automorphism sending $r$ to $r-a$ for any root $r$ of the polynomial. This operation is cyclic of order $p$, so that one can get from any root to any other by applying the automorphism several times. The Galois group thus acts transitively on the roots, which is equivalent to irreducibility.

• Impressively elegant, zyx: +1 – Georges Elencwajg Jul 24 '13 at 8:12

$f(x)$ is separable since its derivative is $f'(x) = -1 \ne 0$.

Suppose $\theta$ is a root of $f(x) = x^p - x + a$. Using the Frobenius automorphism, we have: \begin{align} f(\theta + 1) &= (\theta + 1)^p - (\theta + 1) + a\\ &= \theta^p + 1^p - \theta - 1 + a\\ &= \theta^p - \theta + a\\ &= f(\theta) = 0 \end{align}

Thus, by induction, if $\theta$ is a root of $f(x)$, then $\theta + j$ is also a root for all $j \in \mathbb F_p$.

By the above, if $f(x)$ were to have a root in $\mathbb F_p$, then $0$ would be a root too, but this contradicts $a \ne 0$. Thus, $f(x)$ has no roots in $\mathbb F_p$. (This can also be shown using Fermat's little theorem.)

Suppose $\theta$ is a root of $f(x)$ in some extension of $\mathbb F_p$. We know that $\theta + j$ is also a root for all $j \in \mathbb F_p$. Since $f(x)$ is of degree $p$, these are all of the roots of $f(x)$. Clearly, $\mathbb F_p(\theta) = \mathbb F_p(\theta + j)$ for all $j \in \mathbb F_p$. Thus, all $\{\theta + j\}$ have the same degree over $\mathbb F_p$. Since $f(x)$ is separable, it follows that $f(x)$ must be the product of all minimal polynomials of $\{\theta + j\}$. Suppose the minimal polynomials have degree $m$. We have $p = km$ for some $k$. Since $p$ is prime, either $m = 1$, whence $\theta \in \mathbb F_p$, a contradiction; or $k = 1$, whence $f(x)$ is irreducible because it's the minimal polynomial.

• Ah, I already accepted it. I want to accept it another time but it will get as unaccepted :P Thank you for this wonderful proof :) :) :) – user87543 Aug 3 '13 at 10:03
• A good one! – Jyrki Lahtonen Aug 3 '13 at 10:06
• Happy to help! I have it in my notes. I don't actually remember if I came up with it myself or found it somewhere. – Ayman Hourieh Aug 3 '13 at 10:11
• Whatever the source may be, your answer is good :) – user87543 Aug 3 '13 at 10:16

I think the following idea works. Let $f(x) = x^p-x+a$. The key observation is that $f(x+1)=f(x)$ in the field of $p$ elements. Now factor $f(x) = g_1(x) \cdots g_k(x)$ as a product of irreducibles. Sending $x$ to $x+1$ must therefore permute the factors $\{ g_1(x), \dots, g_k(x) \}$. But sending $x$ to $x+1$ $p$ times in a row comes back to the original polynomial, so this permutation of the $k$ factors has order dividing $p$. It follows that either every $g_j(x)$ is fixed by sending $x$ to $x+1$ - which I think is a property that no nonconstant polynomial of degree less than $p$ can have, but that needs proof - or else there are $k=p$ factors, which can only happen in the case $a=0$.
• This probably needs the separability of $f$, because if some of the $g_1,g_2,\ldots,g_k$ could be equal, the concept of a permutation and its order would be somewhat muddled. – darij grinberg Sep 18 '16 at 3:25

$x^p-x+a$ divides $x^{p^p}-x$. If $f$ is an irreducible divisor of $x^p-x+a$ of degree $d$, then $\mathbf{Z}_p[x]/f$ will be a subfield of the field with $p^p$ elements, so $p^p = (p^d)^e$, i.e. $de = p$, and so $d=1$ or $e=1$. Since $x^p-x+a$ has no roots, $d \neq 1$; hence $e=1$ and $d=p$, i.e. the polynomial is irreducible.

One more proof, similar to Greg Martin's: Suppose $\alpha$ is a root of $f(x)=x^p-x+a$ in some splitting field; then \begin{equation*} (\alpha+1)^p - (\alpha+1) + a = \alpha^p + 1 - \alpha - 1 + a = \alpha^p - \alpha + a = 0, \end{equation*} so that $\alpha+1$ is also a root. It follows that the roots of $f$ are $\alpha+i$ for $0\le i < p$. If $f$ factors in $\mathbb{F}_p[x]$, say $f = gh$, then the sum of the roots of $g$ is $k\alpha + r$ where $\deg g = k$ and $k, r\in\mathbb{F}_p$. Since $g\in \mathbb{F}_p[x]$, and $0 < k < p$ makes $k$ invertible in $\mathbb{F}_p$, it follows that $\alpha\in \mathbb{F}_p$. But that implies that $f$ splits in $\mathbb{F}_p$, which is not the case (for example, neither $0$ nor $1$ is a root). Thus $f$ is irreducible.

• This is a very helpful and simple answer. Can I ask why it is not the case that f splits in Fp? – P-S.D Apr 1 '17 at 23:20
• Also, by f splits in Fp, do you mean that Fp is the splitting field of f? Why would Fp being the splitting field of f imply that 0 or 1 is a root? – P-S.D Apr 1 '17 at 23:26
• @P-S.D If $\alpha\in\mathbb{F}_p$, then the $p$ elements $\alpha+i$, $0\le i<p$, are all in $\mathbb{F}_p$ and are all roots of $f$. But $f$ has degree $p$, so it clearly splits in $\mathbb{F}_p$ and thus each of the $p$ elements of that field, including $0$ and $1$, is a root of $f$. – rogerl Apr 2 '17 at 14:53

The supposition of Greg Martin is true: if a polynomial $f$ with $\deg(f)=n$ satisfies the property, then $n\ge p$. By a contradiction argument, just write the expansion with Newton's binomial formula and analyse the coefficient of the $x^{n-1}$ term; you get $\binom n 1 a_{n}+a_{n-1}=a_{n-1}$, i.e. $na_n=0$, and if $0<n\lt p$ this equation is absurd.
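A quick numerical sanity check of the statement for small primes – not part of any of the answers above – assuming SymPy's `factor_list` accepts the `modulus` keyword for factorization over $\mathbb{F}_p$:

```python
# Assumes SymPy's factor_list(..., modulus=p) API for factoring over GF(p).
from sympy import symbols, factor_list

x = symbols('x')

for p in [2, 3, 5, 7]:
    for a in range(1, p):                     # a != 0 in F_p
        _, factors = factor_list(x**p - x + a, modulus=p)
        # Irreducibility of the degree-p polynomial means a single
        # factor appearing with multiplicity 1.
        assert len(factors) == 1 and factors[0][1] == 1, (p, a, factors)
print("x^p - x + a was irreducible mod p in all tested cases")
```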
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9597019553184509, "perplexity": 125.76826470674644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998844.16/warc/CC-MAIN-20190618223541-20190619005541-00025.warc.gz"}
https://motls.blogspot.com/2014/09/
## Tuesday, September 30, 2014 ... /////

### Glimpsed second Higgs at $137\GeV$ OK with BLSSM, not MSSM

Fifteen months ago, I discussed a very interesting paper by CMS that saw a 2.73-sigma or 2.93-sigma (depending on details) excess suggesting the existence of a second CP-even neutral Higgs boson at mass $m_{h'}=136.5\GeV$. Three months later, I mentioned some weak dilepton evidence in favor of this new particle.

Today, W. Abdallah, S. Khalil, and S. Moretti released a hep-ph preprint that tests the incorporation of this hypothetical second Higgs boson into supersymmetric models:

Double Higgs peak in the minimal SUSY $B-L$ model

It may be the most interesting paper on the arXiv today.

### Hong Kong's homogenization within China is unstoppable

"Pro-democratic" protests in Hong Kong continue to cause havoc in that province – and in the global markets. I find most of the chaos and the attention given to these protesters incomprehensible. Apparently, they are protesting the planned non-democratic character of some 2017 elections (or perhaps I should say "appointments"). Some sources say that they just want to express sympathies with the protesters in Ferguson, too. ;-) These two explanations seem very different but no one is quite sure which of them is more accurate.

I couldn't learn how to walk on such a street, let alone how to learn the 3,000 Kung Fu characters. At least, when two Chinese Germans asked me how to get to a river in Ejpovice, I could teach them that here in Czechia, it's not a "livel" but a "rrrriverrrrrr". :-)

First, some basic background. I would isolate three "more capitalist provinces" of Greater China – places that we would know from the "MADE IN ***" on lots of electronics products since the 1980s. First, there is Singapore. 75% of its 5 million people are Chinese but Singapore is the tip of the Malay Peninsula (the peninsula consisting mostly of Malaysia and connected to the thicker peninsula of Indochina – with Vietnam, Thailand etc.). Singapore isn't "too close" to mainland China but it's heavily Chinese because China has used it for trade.

## Monday, September 29, 2014 ... /////

### Peacefully redrawing Middle East borders

Several years ago, Dr Sheldon Cooper FTW outlined a clever plan to solve the Israel-Palestinian conflict. Dr Gablehauser said that Dr Cooper was a nut but everyone knows that Sheldon's IQ is 187 which means something.

This map is perverse but it also shows that these loons have some visions, unlike most of the civilized world.

Here, we are going to solve not just one problem of the Middle East but all of them. ;-) A general problem is that the existing borders don't really reflect the conditions on the ground too well, which is why there are civil wars and why the current borders are being questioned. Despite the constant interventions, the U.S. and others have no clue how they want the region to evolve and what the supposed political map will be in 2030, for example. On the other hand, the Islamic State has drawn ambitious maps – see the example above.

What I am going to present is less ambitious but much more reasonable and could actually win the support of a large fraction of the people in the region as well as the international community, thus reducing the desire of everyone to fight. Of course, the regional and/or international armies would have to co-operate to bring the planned rearrangement into reality. The plan was prepared from the viewpoint of maximum pragmatism, which isn't something that many people in the region (and even in the West!)
are used to but they may get used to it, anyway.

### A surge of attacks against classical GR

A decade ago, an organized movement of hardcore crackpots began its public assaults against string theory. Because most of the central results of the string theory paradigm have been established with as much rigor as you may ever get in natural science, it was pretty clear to me that given someone's semi-successful efforts to obfuscate some of the pillars of state-of-the-art science, it would sooner or later become possible to attack any result in science and team up with journalists who will do the job. The deterioration of the public discourse was faster than I could imagine.

To quantify some of these processes, note that the resistance to string theory requires one to reject some of the most important results of science of the 1990s and sometimes the 1980s. You could say that they're still "recent enough" so that the people interested in science haven't had enough time to absorb them yet – 20 or 30 years may be considered "too short a period". However, what we saw later was the escalating questioning of the very basic postulates of quantum mechanics, which is really a framework of modern physics almost fully uncovered in the 1920s. That's almost 90 years ago. We have been seeing totally idiotic articles about the foundations of quantum mechanics in the media that call themselves "science media", pretty much on a daily basis.

### Influence of parity violation on biochemistry measured

Parity violation could matter in biology, after all

Up to the 1950s, people would believe that the laws of physics were invariant under the simple left-right mirror reflection. However, neutrinos were found to be left-handed and other processes linked to the weak nuclear force that violate the left-right symmetry were found in the 1950s. You know, the direction of the electron that leaves a nucleus after it beta-radiates is correlated with the nucleon's spin even though the velocity is a polar vector and the spin is an axial vector – such a correlation couldn't exist in a left-right-symmetric world. A prejudice has been falsified.

Ten years later, the CP symmetry was shown to be invalid as well, although its violation in Nature is even tinier. The combined CPT symmetry has to hold and does hold (at least so far), as implied by Pauli's CPT theorem.

Bromocamphor, to become a player below

With the help of chiral spinors (and perhaps self-dual or anti-self-dual middle ($2k+1$)-forms, i.e. antisymmetric tensors, if the spacetime dimension is $4k+2$), particle physicists learned to build left-right-asymmetric theories and it became a mundane business. The electroweak theory included in the Standard Model is the most tangible real-world example of a left-right-asymmetric theory.

We also observe some left-right asymmetry in the world around us. Most of us have a heart on the left side – which could be an accident. But such left-right asymmetries exist at a more elementary level. Amino acids and other molecules look different than their images in the mirror – and all the life we know seems to use only one of the two images, typically a "left-handed-screwed" version of such molecules. Is there a relationship to the violation of parity at the fundamental level?

## Saturday, September 27, 2014 ...
/////

### RT: Dawkins on biology, psychology of ISIS, religion

I've liked some of the intellectual videos about atheism (like Jonathan Miller's Atheism Tapes with Steve Weinberg and others) but it just happens that these sufficiently profound intellectual debates began to be moved to places such as the Russian media in recent years. Russia Today has just posted this video of its program "Worlds Apart" hosted by Oxana Bojko.

I am confident that she is so advanced that she will be able to peacefully read my comment that there are hotter babes on Russia Today – both those from the post-Soviet realm as well as those imported from the U.S. But holy cow, she is intelligent, indeed. To say the least, she looked like a peer of Dawkins'.

### Hartle, Srednicki on foundations of QM

In August, there was a workshop on the foundations of quantum mechanics (see 12 videos) where lots of nonsense was said but there were smart and reasonable people, too. Santa Barbara is a nice and sunny place in California but it's also a source of almost reliably sensible comments on quantum mechanics.

Because we discussed Murray Gell-Mann's comments about quantum mechanics a few days ago, I think it's natural to start with Gell-Mann's co-father of the Consistent Histories, Jim Hartle. He gave a talk about the emergence of classical physics. I could obviously describe the talk in the usual detailed TRF way but let me avoid it this time. The main point is that the default behavior of physical systems is the quantum behavior, classical physics is just a limit, and whether this limit is relevant must be judged by the approximate validity of the classical dynamical laws as extracted from the probabilities of some coarse-grained histories. The degree of "classicality" may be quantified and Hartle was discussing some relationships between classicality and complexity.

## Thursday, September 25, 2014 ... /////

I have often emphasized that Linux is an example of non-commercial, communism-based software architecture where no one is financially motivated to take his responsibility for the quality and safety of his products seriously. Today happens to be a day which makes my words much more important than they have been for many previous years.

Yesterday, a serious bug that the media often declare to be worse than Heartbleed was found in Bash, the world's most widespread command line shell for Unix (a Unix counterpart of DOS, you could say). The bug affects all versions of Unix and Linux released between 1994 and 2014 (yesterday) and everything that is sufficiently Unix-like so that it incorporates "Bash" in some form – in particular, lots of "things" on the Internet of things, routers, Apple's MacOS X system, but in principle also Android and iOS – and applications that may call "Bash" during their routine tasks.

Everything with "*nix" and "*nux" in it is in danger; the only exception is Richard Nixon who is already safe. He died in 1994, exactly when the flaw was introduced to the Unix system by Brian Fox (not Cox) who was ordered to create "Bash" by the hardcore communist named Richard Stallman. Thank you, comrade.

So much for the claims that open-source software is safe because everyone can look into it. The problem is that almost no one does because this extra work creates no profit.

The mobile devices with Android and iOS – like the newest bendable iPhone 6 Plus – are less likely to be targets because "Bash" isn't used that often. However, MacOS X is a full-fledged target.
Attacks against the vulnerabilities have already been detected and because "Bash" is comprehensible to millions of people, the creativity of the attacks is likely to grow exponentially in coming weeks.

### Czech ex-PM Nečas: stop hostility towards Russia

Related: see also "The lies Europe tells about Russia are monstrous" by Czech ex-president Klaus in The Spectator (where they called him – generously yet deeply pessimistically – the last outspoken leader in the West)

There are folks in Czech politics who enjoy repeating the mindless Obama-style, and sometimes even Bandera-style if not Hitler-style, hateful proclamations against Russia and designing silly methods intended to harm Russia (which is often the goal even if they also harm themselves or their citizens or allies). But I think that it is getting increasingly clear every day that the majority of the Czech political representatives do view Russia as a partner. It's true of President Zeman, Ex-president Klaus, PM Sobotka, the billionaire and deputy PM Babiš, and, as we were just reminded by the Czech and Slovak media and by RIA, among others, former PM Nečas (2010-2013, see the picture).

Petr Nečas, a guy with a PhD in plasma physics, was the boss of ODS, the Klaus-founded conservative party (for decades after 1989, the main Czech right-wing party) that became much less conservative a decade ago when it diverged away from Klaus and that turned into a small party a year or two ago. I met him at Klaus' birthday party in June; he was smiling and shaven, having reverted from his temporary image from the times when he was harassed by some malicious and frivolous Czech investigators.

## Wednesday, September 24, 2014 ... /////

### Murray Gell-Mann on foundations of quantum mechanics

Last Monday, Murray Gell-Mann celebrated his 85th birthday: congratulations! As far as I remember, I have never published a blog post that would be primarily dedicated to Gell-Mann's comments about the foundations of quantum mechanics. So this is the first time. The 17-minute-long video monologue above was taken from multi-hour interviews with him (and analogously, many others) on the "Web of Stories".

### Smug condescension, other signatures of Sagan, Tyson eras

Honza U. sent me a link to a wonderful text by Robert Tracinski,

Neil deGrasse Tyson and the Science of Smug Condescension (thefederalist.com)

That website seems to be all about Neil deGrasse Tyson, whom I don't consider too important in one way or another (perhaps just because I have never been exposed to his recent TV program: I have no idea whether I would like most of it!), but he plays the role of the "symbol" of some undesirable trends in the attitude to science that the laymen are being led to in this world.

Neil deGrasse Tyson – and before him, to some extent, even Carl Sagan – often create the picture that it's important for the viewers to get familiar with the scientific thinking, have appreciation for it, and ignore the invalidity or inaccuracy of the facts that are sometimes sold to make a "bigger point". But as Tracinski rightly mentions, the respect for the facts – and for accuracy – is really a cornerstone of the actual scientific method, so there's no way for the Tyson "facts-ignoring" and "facts-twisting" style to educate people to think in a way that is actually scientific. Instead, these methods only strengthen the point that "it is the show that matters" and create low-brow cults of personality and not really any understanding of science.
If a presenter is as smart and as educated as an average undergrad, he is also likely to err equally frequently – but the cult of an "infallible science guy" prevents some people from understanding this simple point.

## Tuesday, September 23, 2014 ... /////

### An interview with Edward Witten at a bizarre place

Most events in the "science journalism" of recent years have been really strange, to put it extremely mildly. So the following thing is probably just another example of the rule. But listen.

John Horgan is a loud, violent, and obnoxious critic of science who believes that science has ended. In fact, he has also written extensive texts about the end of mathematics. The oppressive numbers, functions, and groups have collapsed and all this fantasy called mathematics is over, Horgan has informed his readers. Before he published a loony "book" titled "The End of Science" sometime in the mid 1990s, he would also interview Edward Witten (in 1991). Well, the word "interview" is too strong. Horgan himself had to admit that it was a childish yet brutally malicious assault on theoretical physics in general, string theory in particular, and Witten as a person.

Now, in the wake of the Kyoto Prize that Witten won – congratulations but no surprise – we may read another interview with Witten in the Scientific American's blogs hosted by... John Horgan.

Physics Titan Edward Witten Still Thinks String Theory "On the Right Track"

I don't actually know whether Witten knew that he was being interviewed by e-mail but the text surely makes you believe that he did, and we're told that some "publicist" behind the Kyoto Prize had to choose Horgan as the "interviewer". Oh my God.

## Monday, September 22, 2014 ... /////

### Why I remain a BICEP2 believer

Because I can see the non-dust pattern with naked eyes...

Just three days ago, I wrote a blog post about BICEP2. So I wasn't terribly excited to write down another blog post once the Planck Collaboration published a paper claiming that the BICEP2 signal could be due to dust, especially because I don't find the Planck paper to be terribly new, interesting, insightful, or game-changing.

A random image, taken from Perceiving randomness: egalitarian bias

They've been saying similar things since the spring and the arguments they presented today don't seem stronger than the previous ones. Their fits don't seem to be too good without the error margins and, as far as I can say, they are inflating the errors by inventing various kinds of "extra errors" (such as the "conversion error") in order to dilute and obfuscate the signal that they may have failed to discover, despite their superior gadgets and huge funding. This production of spurious errors sort of reminds me of Gerhard Schröder's invention of new taxes such as the environmental tax, the beverage tax, the bad weather tax, and others (Schröder wasn't a sufficiently arrogant hardcore thief to propose a carbon tax, however!). Much of this tension is a clash of personalities.

I think that what BICEP2 has shown is experimental science of the best kind and unless some embarrassing error emerges (I really mean something like a loosened OPERA cable: it hasn't emerged so far), I will continue to think of them highly even if their discovery is ultimately reduced to dust (or another background).
Like proper stereotypical experimenters, they didn't really believe a word that the theorists like to say (all proper experimenters think that gravity is actually caused by leprechauns and GR is just a theorists' fairy-tale for babies to sleep smoothly; but if a theorist needs the experimenters to empirically determine something, the good experimenters are as reliable as a vacuum cleaner). However, after they spent a very long time on efforts to show that their signal is due to something else, they published a paper with the discovery claim and it was undoubtedly right that they did so. Science couldn't operate if the publication of a discovery were viewed as a blasphemy.

### A simple explanation behind AMS' electron+positron flux power law?

Aside from tweets about the latest, not so interesting, and inconclusive Planck paper on the dust and polarized CMB, Francis Emulenews Villatoro tweeted the following suggestive graphs to his 7,000+ Twitter followers:

The newest data from the Alpha Magnetic Spectrometer are fully compatible with the positron flux curve resulting from an annihilating dark matter particle lighter than $1\TeV$. But the steep drop itself hasn't been seen yet (the AMS' dark matter discovery is one seminar away but it may always be so in the future LOL) and the power-law description seems really accurate and attractive.

What if neither dirty pulsars nor dark matter is the cause of these curves? All of those who claim to love simple explanations and who sometimes feel annoyed that physics has gotten too complicated are invited to think about the question.

### NYPD was only able to arrest 100 "climate" communist criminals so far

...along with 1 polar bear...

Off-topic: the 8th season of The Big Bang Theory started yesterday. If you have missed those episodes, don't miss the next ones.

On Sunday, "The People's Climate March" took place in Manhattan. According to their counts, it was attended by 310,000 communist and socialist hecklers; the figure includes about 100,000 hecklers who were actually observable in the visible spectrum.

These individuals brought the world another clear piece of evidence that the global warming movement has nothing whatever to do with science, despite the often repeated outrageous lies that it has; it's all about the extremist ideology and some people's straightforward strategy to make a profit out of this political junk that controls the hearts of a not quite negligible fraction of the bottom of the contemporary human society. According to their yelling and banners, most of the demonstrators wanted to undermine and burn capitalism and corporations and establish communism or socialism or the same inhuman regime described by other words.

It's very clear that the extremist communist fringe of the political spectrum is what represents the "grassroots movement" behind the delusions about "climate change". These people have been loud, obnoxious, and manageable – but they may potentially become dangerous. Meanwhile, Avaaz and 350.org, two fraudulent organizations promoting the climate hysteria, have abused the large amount of rabble that has accumulated on the streets of New York to get some extra funding. The immorality of all these mechanisms couldn't be clearer.

## Sunday, September 21, 2014 ... /////

### A conversation with Nima Arkani-Hamed

On behalf of the Science Museum in London, science historian Graham Farmelo hosted a conversation with a top particle physicist of his generation, Nima Arkani-Hamed, on November 14th, 2013.
A 55-minute video of excerpts from the event was posted just two months ago. You may speed the video up by a factor of 1.25 or 1.5, if you wish ("options" wheel).

Nima has said lots of interesting and important things about theoretical physics of the 20th century (it's easy to highlight the breakthroughs of the 20th century in 3 minutes: relativity, quanta, and their cooperative applications: as a team, relativity and QM are hugely constraining), the recent past, the present, and the future; the LHC and the Higgs boson, and lots of related things. What can the fundamental laws explain, and what can't they explain (the theories are effective and hierarchical)? We're in a rather special era because we're beginning to ask a new type of questions that are deeper and more structured, Nima said.

### Kaggle: quantifying the African soil

If the most important task for mankind and the computerkind (the fourth best friend of man's, after puppies, books, and women) was to recognize tau-tau semileptonic decays of the Higgs boson, the second most important task was to predict the properties of the African soil.

Currently it's the only open Kaggle contest whose data aren't huge – in gigabytes – and that also offers the winners a few bucks. You download a 13 MB training file and an 8 MB test file – files an order of magnitude smaller than what one needed in the Higgs contest.

## Saturday, September 20, 2014 ... /////

### Antarctica is a climate denier, too

Absolute record high in sea ice area improved by a few percent

These days, the climate alarmist media are full of tirades against the penguins. Children should only be allowed to play with the polar bears and not the penguins, we often read, because penguins are evil conservatives, anti-Gore and anti-Stalinist mavericks and contrarians paid by the Koch brothers and the dirty Big Oil.

Why did the climate alarmist whackos begin to hate these cute animals (and even erase Linux from their hard disks and stop rooting for the ice-hockey team in Pittsburgh)? If you open the Cryosphere Today and especially a graph of the absolute Southern Hemisphere sea ice area, a graph from the places where the penguins live, you will see that these days, the sea ice area is surpassing the all-time record high. Well, "all-time" only means the recent 35 years but this period of time is still longer than most people's memory.

It's apparently politically incorrect for ice to accumulate and to say "f*ck you, you f*cked up alarmist aßes". Well-behaved ice with civil awareness should melt away and scream "help to save the planet and introduce a new tax and new bans!".

## Friday, September 19, 2014 ... /////

### A pro-BICEP2 paper

Update Sep 22nd: a Planck paper on polarization is out, suggesting dust could explain the BICEP2 signal – or just 1/2 of it – but lacking the resolution to settle anything. A joint Planck-BICEP2 paper should be out in November but it seems predetermined that they only want to impose an upper bound on $r$, so it won't be too strong or interesting, either.

It's generally expected that the Planck collaboration should present their new results on the CMB polarization data within days, weeks, or a month. Will they be capable of confirming the BICEP2 discovery – or refuting it with convincing data?

Ten days ago, Planck published a paper on dust modelling:

Planck intermediate results. XXIX. All-sky dust modelling with Planck, IRAS, and WISE observations

I am not able to decide whether this paper has anything to say about the discovery of the primordial gravitational waves.
It could be relevant but note that the paper doesn't discuss the polarization of the radiation at all.

Perhaps more interestingly, Wesley Colley and Richard Gott released their preprint

Genus Topology and Cross-Correlation of BICEP2 and Planck 353 GHz B-Modes: Further Evidence Favoring Gravity Wave Detection

that seems to claim that the data are powerful enough to confirm some influence of the dust yet defend the notion that the primordial gravitational waves have to represent a big part of the BICEP2 observation, too.

## Thursday, September 18, 2014 ... /////

### AMS in PRL: the positrons do stop increasing

...but the evidence for an actual drop remains underwhelming...

In April 2013, the Alpha Magnetic Spectrometer (AMS-02), a gadget carried by the International Space Station that looks for dark matter and other things and whose data are being evaluated by Nobel prize winner Sam Ting (MIT) and his folks, reported intriguing observations that were supposed to grow into a smoking gun proving that dark matter exists and is composed of heavy elementary particles:

AMS-02 seems to overcautiously censor solid evidence for dark matter
AMS: the steep drop is very likely there

I had various reasons for these speculative optimistic prophecies – including Sam Ting's body language. It just seemed that he knew more than he was saying and was only presenting a very small, underwhelming part of the observations.

### Poroshenko in the U.S. Congress

The Americans' stupidity is staggering

I have always belonged to the top 1% of my nation and other environments (including the Harvard environment) when it came to the defense of America, Americans, their record, their views, and their values. Of course, I often had to point out some other, bigger advantages that compensate for most of the Americans' complete misunderstanding of the world geography and the reality outside the U.S. borders in general. After watching this 43-minute speech of Ukrainian president Petro Poroshenko in front of the U.S. Congress, you will probably believe me that in my whole life so far, I have never considered the overwhelming majority of the Americans to be this breathtakingly stupid – dangerously stupid and pretty much uniformly stupid, in a very bipartisan way.

### A top 2% Kaggle Higgs solution

Guest blog by Hongliang Liu (UC Riverside)

This blog is for describing my selected top submission sent to the Kaggle Higgs competition. It has a public score of 3.75+ and a private score of 3.73+, which ranked 26th. This solution uses a single classifier with some feature work from basic high-school physics plus a few advanced but calculable physical features.

1. The model

I chose XGBoost, which is a parallel implementation of gradient boosting trees. It is a great piece of work: parallel, fast, efficient, and with tunable parameters. Tuning the parameters is simple. I know GBM can reach a good learning result when provided with a small shrinkage (eta) value. According to a simple brute-force grid search over the parameters and the cross-validation score, I chose:
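The chosen values themselves are not included in this excerpt. Purely for illustration – this is not the author's actual configuration – a generic setup of the kind described above (small eta, grid-searched tree parameters, cross-validated) might look like this with the xgboost Python package, assuming the standard Kaggle `training.csv` layout (EventId, 30 features, Weight, Label):

```python
import numpy as np
import xgboost as xgb

# Column 0 is EventId, 1-30 are features, 31 is the event weight,
# 32 is the label ('s' for signal, 'b' for background).
data = np.loadtxt("training.csv", delimiter=",", skiprows=1,
                  converters={32: lambda v: int(v in (b"s", "s"))})
X, w, y = data[:, 1:31], data[:, 31], data[:, 32]

dtrain = xgb.DMatrix(X, label=y, weight=w)
params = {
    "objective": "binary:logitraw",  # raw scores, thresholded afterwards
    "eval_metric": "ams@0.15",       # AMS at a 15% selection threshold
    "eta": 0.01,                     # small shrinkage, as described above
    "max_depth": 9,                  # hypothetical grid-search value
    "subsample": 0.9,                # hypothetical
    "nthread": 8,                    # parallel tree construction
}
# A small eta needs many boosting rounds; cross-validation picks how many.
history = xgb.cv(params, dtrain, num_boost_round=3000, nfold=5)
booster = xgb.train(params, dtrain, num_boost_round=3000)
```

The `binary:logitraw` objective and the `ams@0.15` metric are the ones used in xgboost's own Higgs demo; every numeric value above is a placeholder, not a tuned result.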
## Wednesday, September 17, 2014 ... /////

### Ambulance-chasing Large Hadron Collider collisions

Guest blog by Ben Allanach on the impure fun of rapid-response physics

B.A. is a professor of theoretical physics at the University of Cambridge. He is a supersymmetry enthusiast, and is always looking for ways to interpret data using it. You can watch his TEDx talk giving some background to the LHC, supersymmetry and dark matter, or (for experts) look at the paper that this blog refers to.

“Ambulance chasing” refers to the morally dubious practice of lawyers chasing down accident victims in order to help them sue. In a physics context, when some recent data disagree with the Standard Model of particle physics and researchers come up with an interpretation in terms of new physics, they are called ambulance chasers too. This is probably because some view the practice as a little glory-grabbing and somehow impure: you’re not solving problems purely using your mind (you’re using data as well), and even worse than that, you’ve had to be quick or other researchers might have been able to produce something similar before you. It’s not that the complainers get really upset, more that they can be a bit sniffy (and others are just taking the piss in a fun way).

I’ve been ambulance chasing some data just recently with collaborators, and we’ve been having a great time. These projects are short, snappy and intense. You work long hours for a short period, playing ping-pong with the draft in the final stages while you quickly write the work up as a short scientific paper.

A couple of weeks ago, the CMS experiment released an analysis of some data (TRF) that piqued our interest because it had a small disagreement with Standard Model predictions. In order to look for interesting effects, CMS sieved the data in the following way: they required either an electron and an anti-electron or a muon and an anti-muon. Electrons and muons are called ‘leptons’ collectively. They also required two jets (sprays of strongly interacting particles) and some apparent missing energy.

We’ve known for years that maybe you could find supersymmetry with this kind of sieving. The jets and leptons could come from the production of supersymmetric particles which decay into them and a supersymmetric dark matter particle. So if you find too many of this type of collision compared to Standard Model predictions, it could be due to supersymmetric particle production.

### Global billionaire political power index

The Czech and Slovak media were somewhat excited about a fun new "index" by the U.S. sociologist Darrell West:

The global billionaire political power index

The world's 15 politically most influential dollar billionaires are said to possess these faces:

You see that the fifth one is a Slovak-born food industry mogul and Czech vice-PM Andrej Babiš, the leader of a somewhat kitschy "apolitical" Czech movement ANO (=YES).

## Tuesday, September 16, 2014 ... /////

### Does only NATO, and not Russia, enjoy the right to send weapons to Ukraine?

In the last week, after a decision from the previous Friday, a ceasefire regime was established in Ukraine. People are still killing each other in the Donetsk Region but the shooting may be interpreted as "sporadic", which is why both sides of the civil war continue to officially respect the ceasefire.

While these advances towards peace are promising and all ethical people in the world should want the de-escalation of the situation to continue, some warmongers and lunatics in the EU and Washington D.C. have introduced new sanctions against Russia exactly during these promising days – sanctions against a country that they labeled the culprit of all the evil in the world. They openly said that the sanctions may depend on their satisfaction with the developments in Ukraine.
This is incredible because this comment implicitly means that Russia (which is not really a party to the conflict at all, as a country) is automatically classified as the culprit of all failures in these people's lives and their friends' lives. Not even the Nazi party was treating the Jews in this simple way.

On one hand, fuzzy satellite photographs from wrong times and wrong places produced by a Google-Earth-serving photoshop company – photographs with speculative probabilistic interpretations that have already been taken down from the web – are the only "evidence" that the Russian Federation as a country is helping the anti-Kiev militias, and this seems like enough for the warmongers to cripple the international trade and to treat Russia as a criminal country.

On the other hand, the standards are very different. The defense minister in Kiev has literally boasted that his side of the civil war in Ukraine is already receiving arms shipments from NATO member states. It's exactly the same thing on the other side of the conflict – except that in this case, the deliveries are obviously real because they have been officially confirmed. However, the Western press doesn't urge everyone to cut the U.S. and other countries that are doing that off from the international trade, to isolate the country, and similar things. A bit of double standards, right?

### Kaggle Higgs: lessons from the critical development

Vaguely related: Mathematica Online (in the cloud) became available yesterday, see Wolfram's blog. Unfortunately, it's not quite free.

When the Kaggle Higgs contest switched from the 100k "public" (preliminary) collisions to the official 450k "private" (final) collisions a few hours ago, your humble correspondent's team (including Christian Velkeen of CMS, with a 15% share) dropped from the 1st place to the 9th place (correction: it's 8th place now, Friday Sep 19th). This corresponds to the hypothesis that the changes of AMS will be comparable to the "full combined noise" of 0.1 or so.

Fun link: See all of my/our 589 submissions' preliminary scores, final scores, filenames, and comments (with a somewhat funny and esoteric terminology). It's the last, HTML file over there; save it and open it locally.

Because the "random" changes of the score were generally this high, you could say that chance decided about the winners, and the 0.045 gap between the winner and my team was our bad luck. Still, I feel that the three guys at the top – Melis, Salimans, and the marijuana guy – didn't become winners by pure chance. All of them are professional big-data programmers of a sort. It's true of everyone in the top 10 and almost everyone in the top 100, too. I am not aware of anyone who was significantly helped by some physics wisdom.

I still think it's clear that much (well, almost exactly 1/2) of my/our improvements made on top of the public xgboost demo – whose score actually increased from the preliminary 3.60003 to the final 3.64655 – was genuine. After all, those who only used the basic xgboost code ended up at the 357th-450th place, well below my 9th. But instead of an increase to 3.85060 as seen on the preliminary leaderboard, the actual increase was just to 3.76050. The efficiency is lousy because my methods to rate the improvements were amateurish: the preliminary AMS is much better than nothing but it isn't a good enough measure to safely beat the top players.
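For reference, the score being discussed is the competition's "approximate median significance"; the definition is compact enough to restate (a direct transcription of the organizers' formula, with the regularization term fixed to $b_r=10$):

```python
import math

def ams(s, b, b_reg=10.0):
    """Approximate median significance: s and b are the weighted signal
    and background sums passing the selection cut, b_reg the
    regularization term fixed to 10 by the organizers."""
    return math.sqrt(2.0 * ((s + b + b_reg)
                            * math.log(1.0 + s / (b + b_reg)) - s))

# Made-up s, b of a realistic magnitude, just to show the scale:
print(round(ams(700.0, 35000.0), 2))  # 3.73, the order of the final scores
```

With $s$ and $b$ summed over only the 100k public events, this number fluctuates at roughly the 0.1 level quoted above – larger than the 0.045 gap separating the winner from my team.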
## Sunday, September 14, 2014 ... /////

### Higgs the mass killer: in defense of Stephen Hawking

Stephen Hawking was brought to deep financial troubles when he lost $100 against TRF guest blogger Gordon Kane after the two men had agreed on a bet that the Higgs boson would or wouldn't ever be discovered, or something like that. (I have won $500 for an analogous bet: yes, the Higgs boson has been discovered.)

So evil tongues could argue that it's the reason why Hawking says that Higgs has made physics less interesting – and now why he accuses the Higgs particle of the plan to destroy the world. Will it destroy the world? Probably not but the threat, while small, has a totally legitimate scientific justification.

Yahoo News was among the tons of sources that have offered the disturbing prophecy by the famous physicist. Matt Strassler and especially Don Lincoln (full) wrote nice texts that try to calm the public and present Hawking's warnings as scientifically misleading. I wouldn't be equally critical.

### George Kukla (1930-2014)

A well-known Czech American climate skeptic

This happened already 3 months ago but wasn't really covered in the blogosphere... I re-learned from Willie Soon that George Kukla died at age 84 of an apparent heart attack. His Columbia University wrote a biography of him with a somewhat insulting title. Kukla received a Medal of Merit from Czech president Václav Klaus in 2011.

Jiří (=George) Kukla was born in Prague in 1930. His journey around the world was colorful. In the 1960s, he would manage to be an adviser to Fidel Castro. But he would also work in China, Chile, Antarctica, and Eastern Europe.

### I. P. Pavlov: 165th birthday

A Russian patriot and anti-communist whom communists had to nurture

Ivan Petrovich Pavlov, a pioneer of physiology, was born in Ryazan (200 km Southeast of Moscow) in September 1849. At Technet.cz, Karel Pacner published a fascinating chapter of his book about the geniuses of the 20th century.

## Friday, September 12, 2014 ... /////

### Extracurricular activities are indeed just "extra"

I am happy to report that I agree with Scott Aaronson – and obviously with Steve Pinker – that the universities should focus on the learning and scholarly work while sports and similar things should be treated as cherries on a pie. Also, I agree with them that the standardized tests are – if you allow me to use a quote we invented along with Winston Churchill – the worst method to "rate" applicants except for all other methods that have been tried. ;-)

## Thursday, September 11, 2014 ... /////

### One-half of CO2 doubling achieved

Detectable impacts on the climate are yet to be seen

When the CO2 level in the atmosphere surpassed 400 ppm a short time ago, many alarmists would celebrate this symbolic achievement. Oh, the CO2 concentration is so high! It's a signal from the heaven, a shot from the Aurora telling us to start another world revolution because our previous one, that of 1917, has already faded away and it wasn't enough for us, anyway. The number is so round, and so on.

Of course, nothing new happens when the CO2 level reaches 400 ppm – it's just another number that only looks special because of an arbitrary decadic numeral system we happen to use today. The Earth has seen concentrations around 6,000 ppm as well and 4,000 ppm would be just fine for all life forms we know today. By far the closest worrisome CO2 concentration is 150 ppm, at which most existing plant species stop growing (ice ages have only forced them to easily withstand 180 ppm or so).
Another numerically special value of the concentration – $280\times\sqrt{2}\approx 396$ ppm, i.e. one-half of a doubling on the logarithmic scale – was achieved two years ago or so but unlike 400 ppm, it wasn't hyped by anyone.

The hypothetical effect of CO2 on the temperatures (well, an almost certainly real effect theoretically; hypothetical from an empirical viewpoint because the effect is so incredibly weak) is often quantified – converted to numbers – when we talk about the "climate sensitivity", i.e. the increase of the global mean temperature caused by a doubling of the CO2 concentration. The doubling defines more natural benchmark values of the concentration because it suggests that we should look at the behavior of the temperature assuming the exponential growth of CO2. That's natural because the temperature increase is approximately (very accurately) proportional to the logarithm of the CO2 concentration increase (thanks, John). Consequently, every time you double the CO2 level, the temperature increases by the same amount (ignoring non-CO2 drivers). The global mean temperature as a function of the CO2 concentration is

$$T(c) = 14.5\,^\circ {\rm C} + {\rm sensitivity} \times \frac{\ln (c/280\text{ ppm})}{\ln 2}.$$

The temperature 14.5 degrees Celsius is the holy "optimum" global mean temperature that the climate alarmists want to see forever (yes, they will also protect our blue, not green planet from future ice ages when the temperature would otherwise drop by 8 °C, as it did many times in the past) because it was how things probably were in 1750 – although no one can really reconstruct the temperature in 1750 with a sub-degree accuracy (and even today's "global mean temperature" depends on so many technicalities that it's fair to say that it isn't defined at a sub-degree accuracy, either). The ratio of logarithms may also be written as the "base two logarithm" but I wanted to use basic functions only. Note that $\ln 2\approx 0.69315$.

The coefficient "sensitivity" is theoretically equal to 1.2 degrees Celsius if we ignore all the feedbacks. The total figure when feedbacks (especially those related to various forms of water in the atmosphere) are included is unknown and it may be higher or lower than 1.2 degrees Celsius. One of the unjustified assumptions of the climate change ideology is that the full figure has to be higher. The higher the value of the climate sensitivity you defend, the greater influence over the climate alarmist paramilitary movement you achieve. If you believe that the value of the total climate sensitivity is below an offensive threshold, you are a heretic. The offensive threshold used to be 3 degrees Celsius but the alarmists were forced to lower the threshold of heresy to 1.5-2.0 degrees Celsius because those values are obviously consistent with all the known data (and are probably still overestimates).
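In numbers – just the formula above evaluated with a few lines of Python; the three sensitivity values are the no-feedback 1.2 °C figure and two illustrative higher ones:

```python
import math

def warming(c_ppm, sensitivity):
    """Temperature increase above the 280 ppm baseline, by the formula above."""
    return sensitivity * math.log(c_ppm / 280.0) / math.log(2.0)

half_doubling = 280.0 * math.sqrt(2.0)     # one-half of a doubling
print(round(half_doubling, 1))             # 396.0 ppm, passed around 2012

for s in [1.2, 2.0, 3.0]:                  # sensitivity in deg C per doubling
    print(s, round(warming(400.0, s), 2))  # 1.2 -> 0.62, 2.0 -> 1.03, 3.0 -> 1.54
```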
## Wednesday, September 10, 2014 ... /////

### Should junior people be equally loud during seminars?

A man named David Chalmers wrote down some norms of the right behavior during philosophical seminars and Sean Carroll responded, in a surprisingly moderate way. The recommendations are things like:

1. be nice
2. allow the junior people to speak more than you do
3. the time one spends by talking during a seminar should be proportional to the product of the number of his or her X chromosomes and the number of men that he or she has sex with

and so on. Moreover, if you want to avoid a sexual harassment lawsuit, you should follow several simple rules:

1. be handsome
2. be attractive
3. don't be unattractive

Well, I admit, this is not directly related to the main topic but I wanted the blog post to be self-sufficient. The funny video under the latest hyperlink shows the actual decisive factors – and hypocrisy – behind most of the similar laws.

## Tuesday, September 09, 2014 ... /////

### Bohr, Heisenberg, Landau wouldn't find QBism new

David Mermin posted the text of his talk

Why QBism is not the Copenhagen interpretation and what John Bell might have thought of it (arXiv + video)

that he gave in Vienna 3 months ago. The conference was dedicated to John Bell and the 50th anniversary of his theorem.

I agree with many statements that Mermin is making (and was making, in recent years) about the foundations of quantum mechanics. But that's not the case for some of the focal points in this talk.

First, I don't think that John Bell was such an important physicist that we should spend too much time with speculations about what he would think about some ideas proposed after his death. Bell didn't discover quantum mechanics, he wasn't even in the top 100 ring of its co-founders and their first-generation followers, and his expectations about the fate of quantum mechanics turned out to be wrong. He didn't coin QBism and related concepts, either. Would Bell like QBism? Yes, no, who cares? It was a conference about Bell which may (at least partially) explain why Mermin cared.

Also, the talk was meant to be nice to Fuchs and Schack, two guys behind the QBism meme, which may explain why Mermin tried to present QBism as a new idea – even though it is not a new idea – and, in fact, as an idea that the notorious critics of proper quantum mechanics such as Bell may have liked – even though Bell and others would almost certainly hate QBism, too. But let me discuss the 14 pages of the arXiv preprint a bit more systematically.

## Monday, September 08, 2014 ... /////

### Vafa: supergroups, non-unitary cousins of CFT, and black hole puzzles

Cumrun Vafa has a very interesting new paper,

Non-Unitary Holography

You may start with a simple question. What happens if you replace gauge groups derived from $U(N)$ by their supergroup counterparts, $U(N+k|k)$?

Well, the supergroup has more degrees of freedom – many of those have a negative norm (anticommuting but spin-one components of the "gauge bosons") and may produce negative probabilities. But Cumrun says that it doesn't affect anything you encounter at any order of the perturbative expansion in $1/N$, i.e. in the string loop expansion. The effect of the extra $k$ bosonic dimensions added to the original $N$ is cancelled against the new $k$ fermionic ones.

### Scottish independence may be a non-event

Next Thursday, the Scottish voters will be asked whether Scotland should become an independent country. Some Scots support the independence efforts. Many Englishmen such as Stephen Hawking and Paul McCartney have urged the Scots to vote to preserve the union. Peter Higgs, an Englishman in Edinburgh, was undecided quite recently. The result of the referendum seems completely uncertain now – the odds are 50-50.

Lyrics of the Czech jingle that I would hear rather often as a kid, despite socialism: The Scot has skirts, pipes, and the lake where a mysterious secret has been hiding for a long time. They say that an evil monster resembling a dragon is living there. Only Matthew and Pauline know what it looks like. In fact, there are numerous monsters there, they are not evil at all, and none of them is certainly monstrous.
Who wouldn't like the Ness Family and who wouldn't like to play with them? They are wonderful friends.

The United Kingdom has been a whole for many centuries and changes of the state borders are so rare that many of us tend to instinctively think that the dissolution of the U.K. would be a big deal.

## Saturday, September 06, 2014 ... /////

### Genesis according to Kerry: you shall save the Muslim world from global warming

What an interesting combination of insanities

As you may have noticed, I am very busy these days and it will continue to be so for 9 more days because several things have overlapped. So it may be a time for easier blog posts.

When I was reading a title by Anthony Watts,

Is John Kerry mentally ill? "Scriptures Commands America To Protect Muslims From Global Warming"

I was thinking that Anthony had to misinterpret Kerry's words or heavily exaggerate, or something like that. But then I pressed "play" on this 90-second video:

In the State Department, they follow the slogan "religion matters". Moreover, all religions are really brothers and all of their scriptures, starting from Genesis, command us as follows: You shall save the Muslim world and the blessed children of God such as the ISIS from the anthropogenic global warming. OK, I added the ISIS but otherwise it's there.

I agree with those who say that if the socialist U.S. Obamacare really works, John Kerry should be immediately assigned several psychiatrists for free.

## Friday, September 05, 2014 ... /////

It's been estimated that the Czech sanctions against Russia would cost us about $1 billion if they continued in 2015. It's a lot of money, which is one reason – and the lack of any good outcomes of the sanctions so far is another – why sensible politicians are skeptical towards sanctions, much like the top Slovak politicians.

Zeman and Putin. Yes, Zeman is currently the tallest head of a country in the world.

On behalf of their countries, Czech and Slovak PMs (Sobotka and Fico) won the right not to join future EU sanctions or their parts. President Zeman said that harder sanctions could be OK but he demands real proof that a Russian invasion into Ukraine is underway. If there is no real evidence, Zeman will oppose the sanctions, too.

(The deputy prime minister Babiš, a former communist agent codenamed "Bureš" and a billionaire who founded an "unideological" party ANO, also tends to be against the sanctions, partly because he is a food industry mogul. The foreign minister Zaorálek, a typical hateful socialist demagogue, insists that "we are always obliged to agree with the majority of the EU". Lots of would-be right-wing politicians – currently in the opposition – are supporting the sanctions.)

Zeman was a target of a verbal assault that shows that typical participants of the NATO summit refuse to discuss these serious issues seriously.

## Thursday, September 04, 2014 ... /////

### Brain-to-brain communication

Science Alert and many others bring us the gospel about the research reported in PLOS in which thoughts were sent directly from one brain to another brain, using no other human organs, over the Internet. The two sides of the communication were located in France and in Spain or India. Except for a couple of extra wires in between, the technology is really nothing else than telepathy.
They were sending binary messages (they have used "Ciao vs Hola" instead of "zero vs one", probably because they confused France with Italy LOL) and the input side has typed the words by the power of her will – by moving a ball on the screen via electromagnetic fields detected around her skull. Correct me if I am wrong but I think that the whole Internet middle part of the experiment is pure marketing – they just sent the information over the Internet in the most ordinary way, so of course there may be thousands of miles in between. The receiving side obtained the information by "phosphenes". Electrical pulses near various parts of the skull are interpreted by the brain as flashes (I mentioned this fact 10 days ago when I talked about the Russian guy who used a collider beam instead of Botox). Bacon's cipher was used, too.

### Kaggle Higgs: view from Mt Everest

Update Sep 16: ninth place, people couldn't compete against the machine learning gurus who knew what they were doing from the beginning. I am / we are ninth at the end. Also, the winner has 3.805 (although everyone else is below 3.8) so I apparently lose a "below 3.8" $100 bet. Heikki is very lucky, isn't he? ;-)

A minor update Sep 15: I just wanted to experience the fleeting feeling of our team's look from the top of the preliminary leaderboard where we (shortly?) stand on the shoulders of 1,791 giants. You see the safe gap of 0.00001 between us and the Hungarian competition. ;-) Today, the "public" dataset of 100,000 events will be replaced by a completely disjoint (but statistically equivalent) dataset of 450,000 "private" ATLAS collisions and our team may – but is far from guaranteed to – drop like a stone. And even if it doesn't drop like a stone, there will be a huge hassle to get convinced that the code has all the characteristics it should have. I am actually not 100% sure whether I want to remain in the top 3 because I dislike paperwork and lots of "small rules".

Text below was originally posted on September 4th as "Kaggle Higgs: back to K2". The ATLAS Kaggle Higgs contest ends in less than two weeks, on September 15th or so, and I wanted to regain at least the second place among the 1,600 contestants seen in the leaderboard – because I still believe that it is unlikely for me to win a prize. After many, many clever ideas and hundreds of attempts, my team returned to the second place where I had already been for one hour in June. Gábor Melis is ahead of my team by 0.005. I am learning Hungarian in order to reverse this gap.

### Brian Cox's incompetence

Like Sean Carroll, Brian Cox pretends to be a scientist but in reality, he is confused about some very rudimentary facts about modern physics and science in general. It's not just the lunar phases or locality or the exclusion principle that he totally misunderstands (be sure that I haven't discussed every misconception of his that has made me very angry). He actually doesn't build on science; he builds on licking the rectums of the powerful and those who are brainwashed by currently fashionable political deviations. Cox is a kitsch for the least demanding audiences. Yesterday, The Guardian published a diatribe under the title Brian Cox: scientists giving false sense of debate on climate change. I agree with this title.
Genuine science doesn't have significant doubts about the fact that the climate panic is a pile of rubbish and it's pathetic for the media hosts and others to keep on inviting assorted alarmist loons and fraudsters whenever the topic is related to the climate or the energy policy. But you surely know that the message that should have been conveyed by the title was upside down.

## Wednesday, September 03, 2014 ... /////

### China will build a 52-kilometer collider by 2028

Joseph S. has reminded me of this fascinating plan: China pursues 52 km collider project (Physics World) The world's #2 economy's biggest collider right now (one in Beijing) has 240 meters in circumference (about a quarter of a micron per capita) so they plan to improve their national record by a factor of 200+. ;-)

### Poroshenko ceasefire deal is great news

Update: Today, the Czech nation was sort of "bloodily" introduced to the internal conflict in Ukraine when two Czech men were killed by the pro-Kiev troops. These two men are Ivo Stejskal, a teacher from Brno previously mentioned on this blog, and Vojtěch Hlinka, a driver from Žatec who shared the name with the legendary Czech national ice-hockey coach. It's no accident because the anti-Kiev warrior had married Hlinka's ex-wife and adopted her surname. I wonder when the defenders of the Kiev junta will go to sacrifice their comfort and lives as well, instead of sacrificing other people's money. Rest in peace.

One hour ago, Vladimir Putin called his Ukrainian friend Petro Poroshenko and they largely agreed about the procedure needed to stop the fights in Eastern Ukraine – conditions for a "permanent ceasefire" with the local militias. That's how the Ukrainian president's office interpreted the call. The markets welcomed the news from Kiev – the Moscow index jumped by 4 percent or so. A few minutes ago, the ceasefire was officially declared by Poroshenko. However, Putin quickly denied Poroshenko's proposition about a ceasefire. Putin can't agree with a ceasefire because he wasn't a party to the conflict. It's clear that Putin has to remind everyone of this position while Poroshenko wants to say that Russia was a party to the conflict, but otherwise I think that Poroshenko's claims that they have agreed on what a sensible future solution could look like are correct claims. In the video, a pundit claims that Poroshenko's announcement is just a lie, a trick to gain time to reorganize his forces that were badly beaten by the local militias in recent weeks. Let me assume that this prophecy is too pessimistic. The anti-Kiev warriors have previously announced that they would even tolerate their membership in a Ukrainian state assuming certain political concessions.

### First Czech modern capitalist theater opens in Pilsen

Even though the local social democrats were working hard to threaten the project (just because of a few percent of the budget), the new $40 million theater opened here in Pilsen last night – Smetana's Bartered Bride was the first thing that the viewers saw. Most of the fans of the arts seem to be happy with the appearance (initially designed by Portuguese architects) as well as the acoustics. That's good news because the Pilseners are a conservative bunch. Well, there are some Pilseners – like the local ice-hockey guru Marty Straka – for whom the theater is still too modern. The white wall with the bubbles is supposed to represent a "curtain dividing the real world from the virtual one".
The plays take place in that part of the building; the other, more ordinary black part of the building is for maintenance and technical purposes.

## Tuesday, September 02, 2014 ... /////

### Gavin Schmidt's analogy ceasefire

...I agree with him... Shockingly enough, the climate fearmonger Gavin Schmidt of RealClimate.ORG wrote something that I agree with: On arguing by analogy Climate skeptics often like to say that the climate alarmists are analogous to the defenders of Lysenkoism, phlogiston, phrenology, and eugenics, while the climate alarmists themselves love to compare themselves to Galileo Galilei, Isaac Newton, Albert Einstein, Charles Darwin, and maybe even Paul Ehrlich, who predicted that hundreds of millions of Americans would starve to death by the year 2000. Schmidt's main point that I agree with is that these analogies are pretty much worthless for the debate about the substance because these analogies are only relevant for someone who already accepts the basic points about the substance. If one already knows (or at least assumes) that the climate panic is hogwash, he will know that the skeptics' favorite analogies are approximately right while the alarmists' analogies are misleading. If she still believes that the climate is dangerously changing, she will reject the skeptics' favorite analogies and buy the alarmists' ones.

### 50 years of Bell's theorem: watch Zeilinger's talk

Well, I don't really count Bell's theorem among the top discoveries of physics but it was still a result that showed the irreversibility of the quantum revolution more clearly than previous arguments. Anton Zeilinger is a wise guy (not to mention that he is the president of the Austrian Academy of Sciences) and he is just giving a talk at CERN that you may watch live: From John Bell at CERN to Quantum Communication and Quantum Computation (video) The screen under the link above has two parts, one with the speaker and one with the transparencies.

## Monday, September 01, 2014 ... /////

### It's always harmful to attach negative adjectives to quantum mechanics

Sabine Hossenfelder attended a global summit dedicated to the question of whether it's right for science writers to present quantum mechanics as spooky, strange, and weird. Imagine how much fuel would be saved if the participants could have stayed home and responded with the right answer, which is No. There is really no room for long yet intelligent debates and diatribes. Quantum mechanics was surprising to the physicists and is still surprising to the beginners who are learning it today. But as Giotis mentioned in the first comment under Sabine's blog post, it is not really surprising that a deep enough theory is surprising, because humans and their ancestors have been trained by experience where effective, superficial, and conceptually misleading laws are enough to understand what is going on.

### Production of vacuum cleaners above 1600 watts banned in the EU

In November 2013, I reminded everyone that the EU had an incredibly irresponsible plan to simply ban all vacuum cleaners above 1600 watts of the input power. (Expressed in the amps that Americans would quote at their 110-volt mains, 1600 watts is equivalent to 14.55 amps.) Today, on the 75th anniversary of the outbreak of World War II, the ban came into force. I didn't want to believe that it would ever become valid but it really has. Czech media say that it's still OK to sell them and the retailers have huge inventories, indeed.
Some Western European media suggest that it is no longer legal to even sell them – but the sale really seems to continue in Czechia, a country that is telling the EU overlords "screw you, Ken".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37404292821884155, "perplexity": 2431.80446533984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824675.67/warc/CC-MAIN-20171021081004-20171021101004-00755.warc.gz"}
http://peeterjoot.com/tag/multivariable-newtons-method/
## A matrix formulation of the Harmonic Balance method non-linear currents

Because it was simple, a coordinate expansion of the Jacobian of the non-linear currents was good to get a feeling for the structure of the equations. However, a Jacobian of that form is impossibly slow to compute for larger $$N$$. It seems plausible that eliminating the coordinate expansion, expressing both the current and the Jacobian directly in terms of the Harmonic Balance unknowns vector $$\BV$$, would lead to a simpler set of equations that could be implemented in a computationally more effective way. To aid in this discovery, consider the simple RC load diode circuit of fig. 1. It’s not too hard to start from scratch with the time domain nodal equations for this circuit, which are fig. 1. Simple diode and resistor circuit 1. $$0 = i_s – i_d$$ 2. $$Z v^{(2)} + C dv^{(2)}/dt = i_d$$ 3. $$i_d = I_0 \lr{ e^{(v^{(1)} – v^{(2)})/V_T} – 1}$$ To set up for matrix form, let \label{eqn:diodeRLCSample:1240} \Bv(t) = \begin{bmatrix} v^{(1)}(t) \\ v^{(2)}(t) \\ \end{bmatrix} \label{eqn:diodeRLCSample:1140} \BG = \begin{bmatrix} 0 & 0 \\ 0 & Z \\ \end{bmatrix} \label{eqn:diodeRLCSample:1160} \BC = \begin{bmatrix} 0 & 0 \\ 0 & C \\ \end{bmatrix} \label{eqn:diodeRLCSample:1180} \Bd = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \label{eqn:diodeRLCSample:1200} \Bb = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, so that the time domain equations can be written as \label{eqn:diodeRLCSample:1220} \BG \Bv(t) + \BC \Bv'(t) = \Bb i_s(t) + I_0 \Bd \lr{ e^{ (v^{(1)}(t) – v^{(2)}(t))/V_T} – 1 } = \begin{bmatrix} \Bb & -I_0 \Bd \end{bmatrix} \begin{bmatrix} i_s(t) \\ 1 \end{bmatrix} + I_0 \Bd e^{ (v^{(1)}(t) – v^{(2)}(t))/V_T}. Harmonic Balance is essentially the assumption that the input and outputs are bandwidth limited periodic signals, and that the non-linear components can be approximated by the same \label{eqn:diodeRLCSample:1260} i_s(t) = \sum_{n=-N}^N I^{(s)}_n e^{ j \omega_0 n t }, \label{eqn:diodeRLCSample:1280} v^{(k)}(t) = \sum_{n=-N}^N V^{(k)}_n e^{ j \omega_0 n t }, \label{eqn:diodeRLCSample:1300} \epsilon(t) = e^{ (v^{(1)}(t) – v^{(2)}(t))/V_T} \simeq \sum_{n=-N}^N E_n e^{ j \omega_0 n t }. The approximation in \ref{eqn:diodeRLCSample:1300} is an equality only at the Nyquist sampling times $$t_k = T k/(2 N + 1)$$. The Fourier series provides a periodic extension to other times that will approximate the underlying periodic non-linear relation. With all the time dependence locked into the exponentials, the derivatives are really easy to calculate \label{eqn:diodeRLCSample:1281} \frac{d}{dt} v^{(k)}(t) = \sum_{n=-N}^N j \omega_0 n V^{(k)}_n e^{ j \omega_0 n t }. Inserting all of these into \ref{eqn:diodeRLCSample:1220} gives \label{eqn:diodeRLCSample:1320} \sum_{n=-N}^N e^{ j \omega_0 n t} \lr{ \BG + j \omega_0 n \BC } \begin{bmatrix} V^{(1)}_n \\ V^{(2)}_n \\ \end{bmatrix} = \sum_{n=-N}^N e^{ j \omega_0 n t} \lr{ -I_0 \Bd \delta_{n 0} + \Bb I^{(s)}_n + I_0 \Bd E_n }. The periodic assumption requires equality for each $$e^{j \omega_0 n t}$$, or \label{eqn:diodeRLCSample:1340} \lr{ \BG + j \omega_0 n \BC } \begin{bmatrix} V^{(1)}_n \\ V^{(2)}_n \\ \end{bmatrix} = -I_0 \Bd \delta_{n 0} + \Bb I^{(s)}_n + I_0 \Bd E_n.
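To make \ref{eqn:diodeRLCSample:1340} concrete, here is a minimal sketch (mine, not part of the original post) of assembling the per-harmonic blocks into the single block-diagonal system matrix with NumPy; the values of $$Z$$, $$C$$ and $$\omega_0$$ below are placeholder assumptions:

```python
import numpy as np

# Minimal sketch: build the block-diagonal matrix whose n-th 2x2 block is
# G + j*omega0*n*C for n = -N..N, per the per-harmonic equation above.
# Z, Ccap and omega0 are placeholder values, not from the original post.
N = 1
omega0 = 2 * np.pi * 1e3      # assumed fundamental frequency [rad/s]
Z, Ccap = 0.1, 1e-6           # assumed load conductance and capacitance
G = np.array([[0.0, 0.0], [0.0, Z]], dtype=complex)
C = np.array([[0.0, 0.0], [0.0, Ccap]], dtype=complex)

R = 2                         # two nodal unknowns per harmonic
Y = np.zeros((R * (2 * N + 1), R * (2 * N + 1)), dtype=complex)
for k, n in enumerate(range(-N, N + 1)):
    Y[R * k:R * (k + 1), R * k:R * (k + 1)] = G + 1j * omega0 * n * C
```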
For illustration, consider the $$N = 1$$ case, where the block matrix form is \label{eqn:diodeRLCSample:1360} \begin{bmatrix} \BG + j \omega_0 (-1) \BC & 0 & 0 \\ 0 & \BG + j \omega_0 (0) \BC & 0 \\ 0 & 0 & \BG + j \omega_0 (1) \BC \end{bmatrix} \begin{bmatrix} \begin{bmatrix} V^{(1)}_{-1} \\ V^{(2)}_{-1} \\ \end{bmatrix} \\ \begin{bmatrix} V^{(1)}_{0} \\ V^{(2)}_{0} \\ \end{bmatrix} \\ \begin{bmatrix} V^{(1)}_{1} \\ V^{(2)}_{1} \\ \end{bmatrix} \end{bmatrix} = \begin{bmatrix} \Bb I^{(s)}_{-1} \\ \Bb I^{(s)}_{0} – I_0 \Bd \\ \Bb I^{(s)}_{1} \\ \end{bmatrix} + I_0 \begin{bmatrix} \Bd E_{-1} \\ \Bd E_{0} \\ \Bd E_{1} \\ \end{bmatrix}. The structure of this equation is \label{eqn:diodeRLCSample:1380} \BY \BV = \BI + \mathcal{I}(\BV). The non-linear current $$\mathcal{I}(\BV)$$ needs to be examined further. How much of this can be precomputed, and what is the simplest way to compute the Jacobian? With \label{eqn:diodeRLCSample:1400} \BE = \begin{bmatrix} E_{-1} \\ E_{0} \\ E_{1} \\ \end{bmatrix}, \Bepsilon = \begin{bmatrix} \epsilon_{-1} \\ \epsilon_{0} \\ \epsilon_{1} \\ \end{bmatrix}, the non-linear current is \label{eqn:diodeRLCSample:1420} \mathcal{I} = I_0 \begin{bmatrix} \Bd E_{-1} \\ \Bd E_{0} \\ \Bd E_{1} \\ \end{bmatrix} = I_0 \begin{bmatrix} \Bd \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \BE \\ \Bd \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \BE \\ \Bd \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \BE \end{bmatrix} = I_0 \begin{bmatrix} \Bd & 0 & 0 \\ 0 & \Bd & 0 \\ 0 & 0 & \Bd \end{bmatrix} \BF^{-1} \Bepsilon. In the last step $$\BE = \BF^{-1} \Bepsilon$$ has been factored out (in its inverse Fourier form). With \label{eqn:diodeRLCSample:1480} \BD = \begin{bmatrix} \Bd & 0 & 0 \\ 0 & \Bd & 0 \\ 0 & 0 & \Bd \\ \end{bmatrix}, the current is \label{eqn:diodeRLCSample:1540} \boxed{ \mathcal{I}(\BV) = I_0 \BD \BF^{-1} \Bepsilon(\BV). } The next step is finding an appropriate form for $$\Bepsilon$$ \label{eqn:diodeRLCSample:1440} \begin{aligned} \Bepsilon &= \begin{bmatrix} \epsilon(t_{-1}) \\ \epsilon(t_{0}) \\ \epsilon(t_{1}) \\ \end{bmatrix} \\ &= \begin{bmatrix} \exp\lr{ \lr{ v^{(1)}_{-1} – v^{(2)}_{-1} }/V_T } \\ \exp\lr{ \lr{ v^{(1)}_{0} – v^{(2)}_{0} }/V_T } \\ \exp\lr{ \lr{ v^{(1)}_{1} – v^{(2)}_{1} }/V_T } \end{bmatrix} \\ &= \begin{bmatrix} \exp\lr{ \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \lr{ \Bv^{(1)} – \Bv^{(2)} }/V_T } \\ \exp\lr{ \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \lr{ \Bv^{(1)} – \Bv^{(2)} }/V_T } \\ \exp\lr{ \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \lr{ \Bv^{(1)} – \Bv^{(2)} }/V_T } \\ \end{bmatrix} \\ &= \begin{bmatrix} \exp\lr{ \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \BF \lr{ \BV^{(1)} – \BV^{(2)} }/V_T } \\ \exp\lr{ \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \BF \lr{ \BV^{(1)} – \BV^{(2)} }/V_T } \\ \exp\lr{ \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \BF \lr{ \BV^{(1)} – \BV^{(2)} }/V_T } \\ \end{bmatrix}.
\end{aligned} It would be nice to have the difference of frequency domain vectors expressed in terms of $$\BV$$, which can be done with a bit of rearrangement \label{eqn:diodeRLCSample:1460} \begin{aligned} \BV^{(1)} – \BV^{(2)} &= \begin{bmatrix} V^{(1)}_{-1} – V^{(2)}_{-1} \\ V^{(1)}_{0} – V^{(2)}_{0} \\ V^{(1)}_{1} – V^{(2)}_{1} \\ \end{bmatrix} \\ &= \begin{bmatrix} 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 \\ \end{bmatrix} \begin{bmatrix} V_{-1}^{(1)} \\ V_{-1}^{(2)} \\ V_{0}^{(1)} \\ V_{0}^{(2)} \\ V_{1}^{(1)} \\ V_{1}^{(2)} \\ \end{bmatrix} \\ &= \begin{bmatrix} \Bd^\T & 0 & 0 \\ 0 & \Bd^\T & 0 \\ 0 & 0 & \Bd^\T \\ \end{bmatrix} \BV \\ &= \BD^\T \BV, \end{aligned} \label{eqn:diodeRLCSample:1520} \BH = \BF \BD^\T /V_T = \begin{bmatrix} \Bh_1^\T \\ \Bh_2^\T \\ \Bh_3^\T \end{bmatrix}, which allows the non-linear current to be completely expressed in terms of $$\BV$$. \label{eqn:diodeRLCSample:1560} \boxed{ \Bepsilon(\BV) = \begin{bmatrix} e^{\Bh_1^\T \BV} \\ e^{\Bh_2^\T \BV} \\ e^{\Bh_3^\T \BV} \\ \end{bmatrix}. }

## Jacobian

With a compact matrix representation of the non-linear current, attention can now be turned to the Jacobian of the non-linear current. Let $$\BA = I_0 \BD \BF^{-1} = [ a_{ij} ]_{ij}$$, so that the current (with summation implied) is \label{eqn:diodeRLCSample:1580} \mathcal{I} = \begin{bmatrix} a_{ik} \epsilon_k \end{bmatrix} with coordinates \label{eqn:diodeRLCSample:1600} \mathcal{I}_i = a_{ik} \epsilon_k = a_{ik} \exp\lr{ \Bh_k^\T \BV }, so the Jacobian components are \label{eqn:diodeRLCSample:1620} [\BJ^{\mathcal{I}}]_{ij} = \PD{V_j}{\mathcal{I}_i} = a_{ik} \PD{V_j}{} \exp\lr{ \Bh_k^\T \BV } = a_{ik} h_{kj} \exp\lr{ \Bh_k^\T \BV }. Factoring out $$\BU = [h_{ij} \exp\lr{ \Bh_i^\T \BV }]_{ij}$$, \label{eqn:diodeRLCSample:1640} \BJ^{\mathcal{I}} = \BA \BU = \BA \begin{bmatrix} \begin{bmatrix} h_{11} & h_{12} & \cdots h_{1, R(2 N + 1)}\end{bmatrix} \exp\lr{ \Bh_1^\T \BV } \\ \begin{bmatrix} h_{21} & h_{22} & \cdots h_{2, R(2 N + 1)}\end{bmatrix} \exp\lr{ \Bh_2^\T \BV } \\ \begin{bmatrix} h_{31} & h_{32} & \cdots h_{3, R(2 N + 1)}\end{bmatrix} \exp\lr{ \Bh_3^\T \BV } \\ \end{bmatrix} = \BA \begin{bmatrix} \Bh_1^\T \exp\lr{ \Bh_1^\T \BV } \\ \Bh_2^\T \exp\lr{ \Bh_2^\T \BV } \\ \Bh_3^\T \exp\lr{ \Bh_3^\T \BV } \\ \end{bmatrix}. A quick sanity check of dimensions seems worthwhile, and shows that all is well • $$\BA$$ : $$R(2 N + 1) \times (2 N + 1)$$ • $$\BU$$ : $$(2 N + 1) \times R(2 N + 1)$$ • $$\BJ^{\mathcal{I}}$$ : $$R(2 N + 1) \times R(2 N + 1)$$ The Jacobian of the non-linear current is now completely determined \label{eqn:diodeRLCSample:1660} \boxed{ \BJ^{\mathcal{I}}( \BV ) = I_0 \BD \BF^{-1} \begin{bmatrix} \Bh_1^\T \exp\lr{ \Bh_1^\T \BV } \\ \Bh_2^\T \exp\lr{ \Bh_2^\T \BV } \\ \Bh_3^\T \exp\lr{ \Bh_3^\T \BV } \\ \end{bmatrix}. }

## Newton’s method solution

All the pieces required for a Newton’s method solution are now in place. The goal is to find a value of $$\BV$$ that provides the zero \label{eqn:diodeRLCSample:1680} f(\BV) = \BY \BV – \BI – \mathcal{I}(\BV). Expansion to first order around an initial guess $$\BV^0$$ gives \label{eqn:diodeRLCSample:1700} f( \BV^0 + \Delta \BV ) = f(\BV^0) + \BJ(\BV^0) \Delta \BV \approx 0, where the full Jacobian of $$f(\BV)$$ is \label{eqn:diodeRLCSample:1720} \BJ(\BV) = \BY – \BJ^{\mathcal{I}}(\BV). The Newton’s method refinement of the initial guess follows by inversion \label{eqn:diodeRLCSample:1740} \Delta \BV = -\lr{ \BY – \BJ^{\mathcal{I}}(\BV^0) }^{-1} f(\BV^0).
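Pulling the boxed results together, a minimal sketch of the resulting Newton iteration (my illustration, not from the original post), assuming $$\BY$$, $$\BA = I_0 \BD \BF^{-1}$$, $$\BH$$ (rows $$\Bh_k^\T$$) and the source vector $$\BI$$ have already been assembled as NumPy arrays:

```python
import numpy as np

def solve_harmonic_balance(Y, A, H, I, V0, tol=1e-10, max_iter=50):
    """Newton iteration for f(V) = Y V - I - A exp(H V) = 0.

    A stands in for I0 * D * F^{-1}, and the rows of H are the h_k vectors,
    so exp(H V) is epsilon(V) evaluated at all sample points at once.
    """
    V = V0.astype(complex)
    for _ in range(max_iter):
        eps = np.exp(H @ V)                 # epsilon(V)
        f = Y @ V - I - A @ eps             # residual f(V)
        JI = A @ (eps[:, None] * H)         # Jacobian of the nonlinear current
        dV = np.linalg.solve(Y - JI, -f)    # Delta V = -(Y - J^I)^{-1} f(V)
        V = V + dV
        if np.linalg.norm(dV) < tol:
            break
    return V
```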
## ECE1254H Modeling of Multiphysics Systems. Lecture 11: Nonlinear equations. Taught by Prof. Piero Triverio

## Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

## Solution of N nonlinear equations in N unknowns

We’d now like to move from solutions of nonlinear functions in one variable: \label{eqn:multiphysicsL11:200} f(x^\conj) = 0, to multivariable systems of the form \label{eqn:multiphysicsL11:20} \begin{aligned} f_1(x_1, x_2, \cdots, x_N) &= 0 \\ \vdots & \\ f_N(x_1, x_2, \cdots, x_N) &= 0 \\ \end{aligned}, where our unknowns are \label{eqn:multiphysicsL11:40} \Bx = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \\ \end{bmatrix}. Form the vector $$F$$ \label{eqn:multiphysicsL11:60} F(\Bx) = \begin{bmatrix} f_1(x_1, x_2, \cdots, x_N) \\ \vdots \\ f_N(x_1, x_2, \cdots, x_N) \\ \end{bmatrix}, so that the equation to solve is \label{eqn:multiphysicsL11:80} \boxed{ F(\Bx) = 0. } The Taylor expansion of $$F$$ around point $$\Bx_0$$ is \label{eqn:multiphysicsL11:100} F(\Bx) = F(\Bx_0) + \underbrace{ J_F(\Bx_0) }_{Jacobian} \lr{ \Bx – \Bx_0}, where the Jacobian is \label{eqn:multiphysicsL11:120} J_F(\Bx_0) = \begin{bmatrix} \PD{x_1}{f_1} & \cdots & \PD{x_N}{f_1} \\ & \ddots & \\ \PD{x_1}{f_N} & \cdots & \PD{x_N}{f_N} \end{bmatrix}

## Multivariable Newton’s iteration

Given $$\Bx^k$$, expand $$F(\Bx)$$ around $$\Bx^k$$ \label{eqn:multiphysicsL11:140} F(\Bx) \approx F(\Bx^k) + J_F(\Bx^k) \lr{ \Bx – \Bx^k } With the approximation \label{eqn:multiphysicsL11:160} 0 = F(\Bx^k) + J_F(\Bx^k) \lr{ \Bx^{k + 1} – \Bx^k }, then multiplying by the inverse Jacobian, and rearranging, we have \label{eqn:multiphysicsL11:220} \boxed{ \Bx^{k+1} = \Bx^k – J_F^{-1}(\Bx^k) F(\Bx^k). } Our algorithm, implemented as a sketch after the stamp discussion below, is • Guess $$\Bx^0, k = 0$$. • REPEAT • Compute $$F$$ and $$J_F$$ at $$\Bx^k$$ • Solve linear system $$J_F(\Bx^k) \Delta \Bx^k = – F(\Bx^k)$$ • $$\Bx^{k+1} = \Bx^k + \Delta \Bx^k$$ • $$k = k + 1$$ • UNTIL converged As with one variable, convergence is declared once all of the convergence conditions are satisfied \label{eqn:multiphysicsL11:240} \begin{aligned} \Norm{ \Delta \Bx^k } &< \epsilon_1 \\ \Norm{ F(\Bx^{k+1}) } &< \epsilon_2 \\ \frac{\Norm{ \Delta \Bx^k }}{\Norm{\Bx^{k+1}}} &< \epsilon_3 \\ \end{aligned} Typical termination is some multiple of eps, where eps is the machine precision. This may be something like: \label{eqn:multiphysicsL11:260} 4 \times N \times \text{eps}, where $$N$$ is the “size of the problem”. Sometimes we may be able to find meaningful values for the problem. For example, for a voltage problem, we may not be interested in precisions greater than a millivolt.

## Automatic assembly of equations for nonlinear systems

### Nonlinear circuits

We will start off considering a non-linear resistor, designated within a circuit as sketched in fig. 2. fig. 2. Non-linear resistor Example: diode, with $$i = g(v)$$, such as \label{eqn:multiphysicsL11:280} i = I_0 \lr{ e^{v/{\eta V_T}} – 1 }. Consider the example circuit of fig. 3. KCL’s at each of the nodes are fig. 3. Example circuit 1. $$I_A + I_B + I_D – I_s = 0$$ 2. $$– I_B + I_C – I_D = 0$$ Introducing the constitutive equations, this is 1. $$g_A(V_1) + g_B(V_1 – V_2) + g_D (V_1 – V_2) – I_s = 0$$ 2.
$$– g_B(V_1 – V_2) + g_C(V_2) – g_D (V_1 – V_2) = 0$$ In matrix form this is \label{eqn:multiphysicsL11:300} \begin{bmatrix} g_D & -g_D \\ -g_D & g_D \end{bmatrix} \begin{bmatrix} V_1 \\ V_2 \end{bmatrix} + \begin{bmatrix} g_A(V_1) &+ g_B(V_1 – V_2) & & – I_s \\ &- g_B(V_1 – V_2) & + g_C(V_2) & \\ \end{bmatrix} = 0 . We can write the entire system as \label{eqn:multiphysicsL11:320} \boxed{ F(\Bx) = G \Bx + F'(\Bx) = 0. } The first term, a product of a nodal matrix $$G$$ represents the linear subnetwork, and is filled with the stamps we are already familiar with. The second term encodes the relationships of the nonlinear subnetwork. This non-linear component has been marked with a prime to distinguish it from the complete network function that includes both linear and non-linear elements. Observe the similarity with the stamp analysis that we did previously. With $$g_A()$$ connected on one end to ground we have it only once in the resulting vector, whereas the nonlinear elements connected to two non-zero nodes in the network occur once with each sign. ### Stamp for nonlinear resistor For the non-linear circuit element of fig. 4. fig. 4. Non-linear resistor circuit element The stamp is ### Stamp for Jacobian \label{eqn:multiphysicsL11:360} J_F(\Bx^k) = G + J_{F’}(\Bx^k). Here the stamp for the Jacobian, an $$N \times N$$ matrix, is
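To tie the Newton pieces of this lecture together, here is a minimal sketch (my illustration, not part of the lecture) of the generic multivariable Newton iteration with the three convergence tests listed above; F and JF are assumed user-supplied callables returning the residual vector and the Jacobian matrix:

```python
import numpy as np

# Sketch of the Newton algorithm from the bullet list above, with the
# epsilon_1..epsilon_3 convergence conditions. F(x) returns the residual
# vector, JF(x) the Jacobian matrix; both are assumed user-supplied.
def newton(F, JF, x0, e1=1e-9, e2=1e-9, e3=1e-9, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(JF(x), -F(x))   # J_F(x^k) dx = -F(x^k)
        x = x + dx
        converged = (np.linalg.norm(dx) < e1
                     and np.linalg.norm(F(x)) < e2
                     and np.linalg.norm(dx) <= e3 * np.linalg.norm(x))
        if converged:
            return x
    return x
```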
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9290709495544434, "perplexity": 6102.602421442056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388758.12/warc/CC-MAIN-20200525130036-20200525160036-00076.warc.gz"}
https://mathematica.stackexchange.com/questions/175592/how-can-i-see-the-predictor-function-that-mathematica-produces
# How can I see the predictor function that Mathematica produces?

I am trying some basic machine learning with Mathematica. I am wondering if I can see the function that Mathematica produces for a training set. I give the following example: trainingset = {1 -> 1.3, 2 -> 2.4, 3 -> 4.4, 4 -> 5.1, 6 -> 7.3}; p = Predict[trainingset, Method -> "LinearRegression"] If I type p[1.5], it will give me the predicted value. However, could I also somehow see the function used, e.g. in this case in the form y = a*x + b (where x is the variable)? Is there a way to see the predicted function? It turns out that PredictorFunctions are implemented in a nice and transparent way as PredictorFunction[Association[...]]. So execute p[[1]] and everything is laid out nicely for you. • Well, the actual predictor consists of one or more neural nets with pre- and postprocessors. Very complicated. You can try to inspect trace = Trace[p[1.5]]; in order to guess what is going on... The working of FindFit or LinearModelFit might be easier to grasp. – Henrik Schumacher Jun 19 '18 at 11:40
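For comparison only (Python rather than Mathematica): the same five points fit with an explicit degree-1 least-squares polynomial, which makes the y = a*x + b form directly visible. This is a stand-in illustration of what a linear-regression predictor amounts to for this data, not a claim about Predict's internals:

```python
import numpy as np

# The training set from the question, fit with an explicit least-squares line.
x = np.array([1.0, 2.0, 3.0, 4.0, 6.0])
y = np.array([1.3, 2.4, 4.4, 5.1, 7.3])
a, b = np.polyfit(x, y, 1)         # slope a and intercept b of y = a*x + b
print(f"y = {a:.4f}*x + {b:.4f}")  # the fitted line, fully inspectable
```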
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37303194403648376, "perplexity": 1078.2168745596339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547165.98/warc/CC-MAIN-20191212205036-20191212233036-00373.warc.gz"}
https://www.physicsforums.com/threads/solve-diff-eq-using-power-series.908293/
# Homework Help: Solve Diff. Eq. using power series

Tags: 1. Mar 19, 2017 ### JamesonS 1. The problem statement, all variables and given/known data $(1-x)y'' + y = 0$ I am here but do not understand how to combine the two summations: Mod note: Fixed LaTeX in following equation. $$(1-x)\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n+\sum_{n=0}^{\infty}a_nx^n = 0$$ Last edited by a moderator: Apr 1, 2017 2. Mar 19, 2017 ### Ray Vickson Here is your equation using PF-compatible TeX: $$(1-x)\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2} x^n+\sum_{n=0}^{\infty}a_nx^n = 0$$ Just replace the "\begin{equation} ... \end{equation}" by "$$ ... $$" (no spaces between the initial and final $$ delimiters). Also: write "\infty", not "\infinity". As for your question: write out the first 3 or 4 terms, to see what you get. That will give you insight into what you should do next. Last edited by a moderator: Apr 1, 2017 3. Mar 19, 2017 ### JamesonS Thanks for the response and LaTeX help. Writing out the first few terms of each sum: $$(1-x)[2a_2+6a_3x+12a_4x^2+...]+[a_0+a_1x+a_2x^2+...]$$ I am not sure what to do with the (1-x) term outside the first sum... Last edited: Mar 19, 2017 4. Mar 19, 2017 ### Ray Vickson What is preventing you from "distributing out" the product? That is, $(1-x) P(x) = P(x) - x P(x).$ Last edited: Mar 20, 2017 5. Mar 31, 2017 ### MidgetDwarf Been a while since I did DE, but doesn't the OP have to be aware of the singular point in this problem? Hence he has to use the method of Frobenius? 6. Apr 1, 2017 ### LCKurtz But the singular point isn't at $x=0$. 7. Apr 2, 2017 ### MidgetDwarf Thanks for the correction. It's been a while. I remembered that there is no singular point if we take the Taylor expansion about x=0? Correct? 8. Apr 2, 2017 ### epenguin Practically after your first equation, or at any later stage, just multiply it out. You have shown that you know how to write a sum of different powers of x in terms of $x^n$ by changing the subscript appropriately. 9. Apr 2, 2017 ### vela Staff Emeritus That's backwards. If you expand about a regular point, then the solution can be written as a Taylor series. If there's a singular point, then you can use the method of Frobenius and end up with a Laurent series. 10. Apr 2, 2017 ### MidgetDwarf Thanks Vela! It has been a while. I may pop open a differential equations text to bring the memory back.
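For reference, here is the index shift the thread is driving at, written out explicitly (one standard route, not the only one). Distribute the $(1-x)$ and re-index only the piece multiplied by $x$: $$x\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n=\sum_{n=1}^{\infty}(n+1)n\,a_{n+1}x^n,$$ so that matching coefficients of $x^n$ in $(1-x)y''+y=0$ gives $$2a_2+a_0=0, \qquad (n+2)(n+1)a_{n+2}-(n+1)n\,a_{n+1}+a_n=0 \quad (n\ge 1),$$ a recurrence that determines all the $a_n$ from $a_0$ and $a_1$.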
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9047558307647705, "perplexity": 1586.9298928307878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867559.54/warc/CC-MAIN-20180526151207-20180526171207-00117.warc.gz"}
https://codegolf.stackexchange.com/questions/5510/can-you-meta-quine
# Can you Meta Quine?

Similar to other quine puzzles (more specifically, this one), write a program that produces the source for itself. Here's the new twist: The code produced should NOT be identical to the source. Rather, it should output a different program that will create the first. The challenge linked to above achieved that by jumping between two languages. I'm thinking this one would be done in just one language, but the two (or more) versions of the source should be significantly different (see rules below). With this constraint, single character answers would be disallowed, thus requiring a little more thought be put into a final submission.

RULES 1. Your code must be produced in just one language. (Multiple submissions, one for each language is perfectly acceptable.) 2. Your different code versions must be syntactically distinct. In other words, if you were to draw out an abstract syntax tree for your code, there should be at least one node different. • Supplying an AST will not be necessary, but if you feel inclined to provide one for each of your programs, it would help in judging. 3. You may produce as many iterations as you wish, as long as they all remain syntactically distinct. (More will help your score, see below.)

SCORING Your final score will be the mean length of all your programs, divided by the number of programs. Example 1: A (source for B) = 50 characters B (source for A) = 75 characters Final Score = 31.25 Example 2: A (source for B) = 50 characters B (source for C) = 75 characters C (source for A) = 100 characters Final Score = 25

• I meta quine once. Apr 13, 2012 at 13:32 • @mellamokb har har ;-) Apr 13, 2012 at 13:33 • This is actually just a more general version of this quine challenge, and the answers given there will win here, too. Apr 13, 2012 at 17:02 • @leftaroundabout, the requirement for syntactic differences invalidates a 'rotating quine', so this is not more general. Apr 13, 2012 at 18:02 • Never meta quine I didn't like. Aug 14, 2014 at 19:51

## Python, 0 (limit of (68+3n)/16^n)

If two abstract syntax trees count as different when they have different constants, r='r=%r;n=(0x%XL+1)%%0x10...0L;print r%%(r,n)';n=(0xF...FL+1)%0x10...0L;print r%(r,n) there are 16^n programs of length at most 68+3n, giving an asymptotic score of 0. If you want programs with variable structure, we can implement a binary adder on n bits. Here, there are 2^n programs of length O(n^2). Goes in a cycle due to dropped carry bit. s=""" print 's='+'"'+'"'+'"'+s+'"'+'"'+'"' n=lambda m:reduce(lambda (s,c),y:(s+(c^y,),c&y),m,((),1))[0] print s[:112] t=n(t) print "t=(%s,)+(0,)*%s"%(t[0],len(t)-1) for i in range(len(t)-1): print i*' '+'for i in range(2):' print ' '+i*' '+['pass','t=n(t)'][t[i+1]] print s[113:-1] """ print 's='+'"'+'"'+'"'+s+'"'+'"'+'"' n=lambda m:reduce(lambda (s,c),y:(s+(c^y,),c&y),m,((),1))[0] print s[:112] t=(0,)+(0,)*10 for i in range(2): t=n(t) for i in range(2): t=n(t) for i in range(2): t=n(t) for i in range(2): t=n(t) for i in range(2): pass for i in range(2): t=n(t) for i in range(2): pass for i in range(2): pass for i in range(2): pass for i in range(2): t=n(t) t=n(t) print "t=(%s,)+(0,)*%s"%(t[0],len(t)-1) for i in range(len(t)-1): print i*' '+'for i in range(2):' print ' '+i*' '+['pass','t=n(t)'][t[i+1]] print s[113:-1] • Might I be confused? It looks like the output is identical to the source (not the objective of this challenge)? Apr 15, 2012 at 18:10 • Look in the nested block. pass will change to t=n(t) and back, in all 2^n combinations.
Apr 16, 2012 at 0:04 • I do see that now. You confused me with all the repetition! Apr 16, 2012 at 2:07 • for some reason, I like very long golf solutions with tiny scores. Apr 20, 2012 at 4:01 • Wow, you completely owned that! Very nice. Mar 8, 2014 at 5:08

## C++, score of 0.734194

The following source code prints a meta quine of order 999 to the console (explanation below): #define X 1*(1+1) #include<iostream> #include<vector> #define Q(S)auto q=#S;S Q( \ main() \ { \ using namespace std; \ cout<<"#define X 1"; \ int x=X==2?1000:X-1; \ vector<int> factors; \ for ( int p = 2; p <= x; ++p) \ { \ while ( x % p == 0 ) \ { \ factors.push_back( p ); \ x /= p; \ } \ } \ for ( int factor : factors ) \ { \ cout<<"*(1"; \ for ( int i=1;i<factor;++i) \ cout<<"+1"; \ cout<<")"; \ } \ cout<<"\n#include<iostream>\n#include<vector>\n#define Q(S)auto q=#S;S\nQ("<<q<<")"; \ }) The only line that changes is the first line. The value of X will be 1000, 999, 998, ..., 3, 2 and then it will start again. However, in order to get different syntax trees every time, X is represented in terms of its prime factorization, where every prime is written as a sum of 1s. The ASTs are different, because the prime factorization of integers is different for every value. The program will print itself, except that the first line is changed and the backslashes, line breaks and indentations that are within Q(...) will be removed. The following program calculates the score of my answer: #include <iostream> const int n = 1000; int getProgramLength( int n ) { int sum = 442; for ( int p = 2; p*p <= n; ++p ) { while ( n % p == 0 ) { sum += 2 * ( 1 + p ); n /= p; } } if ( n > 1 ) sum += 2 * ( 1 + n ); return sum; } int main() { int sum = 0; for ( int i = 2; i <= n; ++i ) sum += getProgramLength( i ); std::cout << (double)sum/(n-1)/(n-1) << '\n'; } It printed 0.734194 to the console. Obviously, 1000 can be replaced by larger integers and the score will approach 0 as its limit. The mathematical proof involves Riemann's Zeta function and is somewhat convoluted. I leave it as an exercise to the reader. ;)

# Perl, score of 110.25

I have to admit, I'm not very good with quines. I'm 100% certain that there is room for improvement. The solution is based off of the same principle as the Element solution below. The first program is 264 characters. $s='$a=chr(39);print"\$s=$a$s$a;";$s=reverse$s;for(1..87){chop$s}$s=reverse$s;print$s;$f++;if($f==0){$a=chr(39);print"\$s=$a$s$a;$s"}';$a=chr(39);print"\$s=$a$s$a;";$s=reverse$s;for(1..87){chop$s}$s=reverse$s;print$s;$f++;if($f==0){$a=chr(39);print"\$s=$a$s$a;$s"} The second program is 177 characters. $s='$a=chr(39);print"\$s=$a$s$a;";$s=reverse$s;for(1..87){chop$s}$s=reverse$s;print$s;$f++;if($f==0){$a=chr(39);print"\$s=$a$s$a;$s"}';if($f==0){$a=chr(39);print"\$s=$a$s$a;$s"} I'm working on the AST for this entry (and the Element entry).

# Element, score of 47.25

The first program is 105 characters. \ \3\:$\'$\\\\\\(\$\#\2\1\'$\(\#$\\ \3\:$\'$\\\\\\(\$\#\ 3:'[\\(]#21'[(#] 3:'[\\(]# The second program is 84 characters. \ \3\:$\'$\\\\\\(\$\#\2\1\'$\(\#$\\ \3\:\$\'$\\\\\\(\$\#\ 3:$'[\\(]# I'm sure that there is a lot of room for improvement. In the first program there is one string (in which every character is escaped, despite a lot of redundancy) followed by executable parts A and B. Part A does several things: prints the string and escapes out of every character, prints the last half of the string (which is the source for part B), and then prevents the part B that follows it from doing anything.
The second program is the same string followed by part B. Part B is based off of a simple quine; it prints a string preceded by an escaped version of it. This means it prints the string, and both parts A and B. • I think this definitively, beyond any doubt, proves the validity of Element as a programming language. It is so easy to use that I, so inexperienced that I have only managed to write one complete interpreter for Element, have been able to answer this question before any other person on this entire planet of 7,000,000,000 people. Element's "one character, one function, all the time" paradigm means that all code is completely unambiguous. The language is versatile: except for []{}, any command can be placed anywhere in the entire program without causing a syntax error. It is perfect. Apr 14, 2012 at 0:08 • A bit biased, are we? ;-) Apr 15, 2012 at 18:07

## VBA: (251+216)/2/2 = 116.75

251 Sub a() r=vbCrLf:c="If b.Lines(4, 4) = c Then"&r &"b.InsertLines 8, d"&r &"b.DeleteLines 4, 4"&r &"End If":d="b.InsertLines 6, c"&r &"b.DeleteLines 4, 2" Set b=Modules("Q") If b.Lines(4, 4) = c Then b.InsertLines 8, d b.DeleteLines 4, 4 End If End Sub 216 Sub a() r=vbCrLf:c="If b.Lines(4, 4) = c Then"&r &"b.InsertLines 8, d"&r &"b.DeleteLines 4, 4"&r &"End If":d="b.InsertLines 6, c"&r &"b.DeleteLines 4, 2" Set b=Modules("Q") b.InsertLines 6,c b.DeleteLines 4,2 End Sub This is run in MSAccess to make use of the Module object. The module is named "Q" for golfing. The difference in syntax comes from the If ... Then missing from the shorter version. • you can most likely get away with changing vbCrLf to vbCr May 31, 2017 at 16:34

## JavaScript, 61 (previously 84.564)

Two programs, both length 122 (earlier revisions: 169, then 128). (function c(){alert(/* 2/*/1/**/);return ('('+c+')()').replace(/\/([/\*])/,function(m,a){return a=='*'?'/\/':'/\*'}); })() Before I golfed it, for your viewing pleasure: (function c() { var r = /\/([/\*])/; var f = function(m, a) { return a === '*' ? '/\/' : '/\*' }; var p = '(' + c + ')();'; p = p.replace(r, f); /* This is just a comment! console.log('Quine, part two!'); /*/ console.log('Quine, part one!'); /**/ return p; })(); Returns the new program and outputs the current part! I could probably make it shorter without the function regex, but... I don't want to. • No, they're syntactically distinct. Once you add the newlines, that is. – Ry- Apr 16, 2012 at 2:44

# J - (24+30)/2 / 2 = 13.5 pts

Note that strings in J are not backslash-escaped, but quote-escaped à la Pascal: 'I can''t breathe!'. 30$(,5#{.)'''30$(,5#{.)' NB. program 1, 24 char '30$(,5#{.)''''''30$(,5#{.)''' NB. program 2, 30 char Program 1 has AST noun verb hook noun and program 2 has AST noun. Program 2 is a quoted version of program 1, which will just return program 1 when run, so this method can't be extended to three copies that easily :P Program 1 operates by taking a copy of the code portion of the source, with a quote appended at the front, and adding five of those quotes to the end ((,5#{.)). Then, it cyclically takes 30 characters from this 16-character string, which gives exactly Program 2 as a result.
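For readers who want the mechanics in one small example: a hypothetical two-program cycle in Python, in the same constant-toggling spirit as the Python answer above (my toy, not an entry from the thread). Each program prints the other; they differ only in the leading integer constant, which is exactly the AST-level difference that answer leans on.

```python
# Program A; running it prints program B, and running B prints A again.
x=0;s='x=%d;s=%r;print(s%%((x+1)%%2,s))';print(s%((x+1)%2,s))
```

Both programs have the same length, so under the challenge's scoring rule the score is simply half of that common length.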
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4305547773838043, "perplexity": 2953.582200540021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00397.warc.gz"}
http://koreascience.or.kr/search.page?keywords=aged+garlic&lang=en&pageSize=10&pageNo=3
• Title, Summary, Keyword: aged garlic ### Correlation between Pungency and Allicin Content of Pickled Garlic during Aging (마늘장아찌 숙성 중 매운맛과 Allicin량과의 상관관계) • Kim, Mee-Ree;Yun, Jun-Hwa;Sok, Dai-Eun • Journal of the Korean Society of Food Science and Nutrition • / • v.23 no.5 • / • pp.805-810 • / • 1994 • Relationship between pungency and allicin content of pickled garlic during aging was examined . Degree of pungency of pickled garlic during aging at 20 $^{\circ}C$ was determined by the sensory evaluation. A panel of 10 members evaluated seven samples of pickled garlic which were aged for 0, 10 , 20, 40 , 50 or 60 days by using scoring test (seven point scale). The sensory evaluation results showed that pungency of pickled garlic decreased gradually during aging, and scored at 3.07 on the 40 th day of aging. Content of allicin, which was a major pungent component of garlic homogenate, was quantitatively analyzed by HPLC. The level of allicin in homogenate of pickled garlic was found to decrease gradually, and to 5.9% on the 40 th day of aging compared with that of fresh garlic. Relationship between the pungency score results and the content of allicin demonstrated a highly positive correlation (r=0.9648). ### Antioxidants Activity of Aged Red Garlic (숙성 홍마늘의 생리활성) • Lee, Soo-Jung;Shin, Jung-Hye;Kang, Min-Jung;Jung, Woo-Jae;Ryu, Ji-Hyun;Kim, Ra-Jeong;Sung, Nak-Ju • Journal of Life Science • / • v.20 no.5 • / • pp.775-781 • / • 2010 • The antioxidant activities of hot water extracts from fresh, red and black garlic processed in low temperatures were compared. The chromaticity value of browning garlic was between that of fresh and black garlic. Red garlic was similar in browning intensity to fresh garlic. Also, total phenol, flavonoids, total pyruvate and thiosulfate contents were similar between fresh and black garlic. DPPH, ABTs, NO radical scavenging activity and reducing power of red garlic were significantly higher than fresh garlic, but lower than those of black garlic. $\alpha$-Glucosidase inhibitory activity in red garlic was similar to that in black garlic. Antioxidant activities of red garlic were higher than fresh garlic but lower than black garlic, and it was confirmed that antioxidant activity by production of browning material through the thermal process was the main parameter of the biological activity in the aged red garlic. ### Effects of Diaphragm Breathing and Garlic Powder Intake on Body Composition, Heart Rate, Blood Pressure and Immunoglobulin in Middle-aged Male smokers. (횡격막 호흡과 마늘 분말 섭취가 중년 남성 흡연자의 신체조성, 심박수, 혈압 및 면역글로불린에 미치는 영향) • Choi, Seung-Uk;Baek, Yeong-Ho;Kwak, Yi-Sub • Journal of Life Science • / • v.17 no.9 • / • pp.1266-1271 • / • 2007 • The purpose of this study was to investigate combined effects of diaphragm breathing and garlic powder intake on body composition, heart rate, blood press and immunoglobulin levels in middle-aged male smokers from the age 40-49. Diaphragm breath training was 2-5 grade intensity on dyspnea scale for 20 minutes four times a week for 4 weeks and subjects were given garlic at 3 g of powder after breakfast and dinner two times a day during the 4 weeks. The conclusions of this study are as follows; Garlic intake group decreased in percentage of body fat, in the comparison between groups, garlic intake group had a lower percentage of body fat than control group. Heart rate was decreased in Diaphragm breathing group at rest. SBP was decreased in Diaphragm breathing+garlic intake group. 
Garlic intake group and diaphragm breathing+garlic intake group increased in IgG. ### Physicochemical Characteristics and Antioxidant Activities of Freezing Pretreated Black Garlic (동결 전처리한 숙성 흑마늘의 이화학적 특성 및 항산화 활성) • Choi, Hye Jung;Lim, Bo Ram;Ha, Sang-Chul;Kwon, Gi-Seok;Kim, Dong Wan;Joo, Woo Hong • Journal of Life Science • / • v.27 no.4 • / • pp.471-475 • / • 2017 • Freezing pretreatment can destroy the cell membrane of garlic and may improve some food-quality of garlic. Therefore we investigated the effect of freezing pretreatment at $-20^{\circ}C$ and $-70^{\circ}C$ on quality of aged black garlic, compared with traditional processing methods. Our results showed that freezing pretreatment at $-70^{\circ}C$ had the greatest impact on qualities and antioxidant activities of black garlic. Browning degree and pH of black garlic after both the freezing pretreatment and aging process were 3.14 and 3.55, respectively. Furthermore, 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging activity and reducing power of aged black garlic can be enhanced by pre-freezing processing. Reducing sugar and 5-hydroxymethyl-2-furaldehyde (5-HMF) contents of freezing pretreated and aged black garlic were increased by 1.69 and 1.14 fold, respectively, compared with the control samples. The results indicated that freezing pretreatment had improved the overall qualities (such as browning degree, pH, reducing sugar) and functional materials of black garlic. ### Inhibition of Adipocyte Differentiation and Adipogenesis by Aged Black Garlic Extracts in 3T3-L1 Preadipocytes (흑마늘 추출물에 의한 3T3-L1 지방전구세포의 분화 및 adipogenesis 억제에 관한 연구) • Park, Jung-Ae;Park, Cheol;Han, Min-Ho;Kim, Byung-Woo;Chung, Yoon-Ho;Choi, Yung-Hyun • Journal of Life Science • / • v.21 no.5 • / • pp.720-728 • / • 2011 • Garlic (Allium sativum) has been used as a source food as well as a traditional folk medicine ingredient since ancient times. Aged black garlic is a type of fermented garlic and is expected to have stronger anticancer and antioxidant activities than raw garlic. However, the mechanisms of their inhibitory effects on adipocyte differentiation and adipogenesis are poorly understood. In the present study, the effects and mechanisms of water extracts of raw garlic (WERG) and aged black garlic (WEABG) on adipocyte differentiation and adipogenesis in 3T3-L1 preadipocytes were investigated. Treatment with WEABG significantly suppressed terminal differentiation of 3T3-L1 preadipocytes in a dose-dependent manner as confirmed by a decrease in lipid droplet number and lipid content through Oil Red O staining, however WERG had no such effect. In addition, WEABG reduced accumulation of cellular triglyceride, which is associated with a significant inhibition of key pro-adipogenic transcription factors including peroxisome proliferator-activated receptor ${\gamma}$ (PPAR${\gamma}$), cytidine-cytidine-adenosine-adenosine-thymidine (CCAAT)/enhancer binding proteins ${\alpha}$ (C/EBP${\alpha}$) and C/EBP${\beta}$. Taken together, these results provide important new insight that aged black garlic might inhibit adipogenesis by suppressing the pro-adipogenic transcription factors in 3T3-L1 preadipocytes, and further studies will be needed to identify the active compounds that confer the anti-obesity activity of aged black garlic. 
### Hepatoprotective Effect of Aged Black Garlic Extract in Rodents • Shin, Jung Hyu;Lee, Chang Woo;Oh, Soo Jin;Yun, Jieun;Kang, Moo Rim;Han, Sang-Bae;Park, Heungsik;Jung, Jae Chul;Chung, Yoon Hoo;Kang, Jong Soon • Toxicological Research • / • v.30 no.1 • / • pp.49-54 • / • 2014 • In this study, we investigated the hepatoprotective effects of aged black garlic (ABG) in rodent models of liver injury. ABG inhibited carbon tetrachloride-induced elevation of aspartate transaminase (AST) and alanine transaminase (ALT), which are markers of hepatocellular damage, in SD rats. D-galactosamine-induced hepatocellular damage was also suppressed by ABG treatment. However, ABG does not affect the elevation of alkaline phosphatase (ALP), a marker of hepatobilliary damage, in rats treated with carbon tetrachloride or D-galactosamine. We also examined the effect of ABG on high-fat diet (HFD)-induced fatty liver and subsequent liver damage. ABG had no significant effect on body weight increase and plasma lipid profile in HFD-fed mice. However, HFD-induced increase in AST and ALT, but not ALP, was significantly suppressed by ABG treatment. These results demonstrate that ABG has hepatoprotective effects and suggest that ABG supplementation might be a good adjuvant therapy for the management of liver injury. ### Aged Garlic Extract and Its Components Inhibit Platelet Aggregation in Rat (흰쥐에서 흑마늘 추출물과 그 성분들에 의한 혈소판 응집억제 효과) • Choi, You-Hee;Jeong, Hyung-Min;Kyung, Kyu-Hang;Ryu, Beung-Ho;Lee, Kwang-Youl • Journal of Life Science • / • v.21 no.10 • / • pp.1355-1363 • / • 2011 • Many clinical trials have demonstrated the beneficial effects of garlic (Allium sativum) on general cardiovascular health. Aged garlic extract (AGE) is known to display diverse biological activities such as in antioxidant, anti-inflammatory and anticancer activities. However, few studies have been directed on the effect of AGE on cardiovascular function. In this study, we aimed to investigate the effect of AGE and its components on platelet activation, a key contributor in thrombotic diseases. In freshly isolated rat platelets, AGE and its components have shown inhibitory activities on thrombin-induced platelet aggregation. These in vitro results were further confirmed in an in vivo platelet aggregation measurement where tail vein injection of garlic oil and S-Allylmercapto-cysteine (SAMC) significantly reduced thrombin and ADP-induced platelet aggregation. Potential active components for antiplatelet effects of AGE were identified to be SAMC and diallyl sulphide through agonist-induced platelet aggregation assay. These results indicate that aged garlic extract can be a novel dietary supplement for the prevention of cardiovascular risks and the improvement of blood circulation. ### Effects of Aging Temperature and Time on the Conversion of Garlic (Allium sativum L.) Components (온도 및 숙성기간이 마늘의 화학적 성분변화에 미치는 영향) • Cho, Kang-Jin;Cha, Ji-Young;Yim, Joo-Hyuk;Kim, Jae-Hyun • Journal of the Korean Society of Food Science and Nutrition • / • v.40 no.1 • / • pp.84-88 • / • 2011 • Some thermally processed foods have higher biological activities due to their various chemical changes during heat treatment. Especially, 5-hydroxymethylfurfural (HMF) is derived from dehydration of sugars and has been identified in processed garlic. The biological function of HMF have revealed as antisickling agent and thyrosinase inhibitor. 
This study was carried out to examine the formation of HMF and free sugars from the aged garlics when it is treated at 60 and $75^{\circ}C$ and different incubation periods from 7 to 35 days. HMF and free sugars from the hot-water extracts of aged garlics were analyzed with GC/MS, LC/MS, and HPLC. The amount of HMF was higher than at $75^{\circ}C$ and increasing incubation period. Among free sugars, the only fructose except glucose and sucrose was formed and converted to HMF at high temperature and long incubation period. However, fructose formed in low temperature during making of aged garlic was rarely converted to HMF. This result indicates that formation of HMF can be dependent on the temperature and incubation period for making aged garlic. ### The Effects of Dietary Garlic Powder on the Performance, Egg Traits and Blood Serum Cholesterol of Laying Quails • Yalcin, Sakine;Onbasilar, Ilyas;Sehu, Adnan;Yalcin, Suzan • Asian-Australasian Journal of Animal Sciences • / • v.20 no.6 • / • pp.944-947 • / • 2007 • This study was conducted to study the effects of dietary garlic powder on laying performance, egg traits and blood serum cholesterol level of quails. A total of three hundred quails (Coturnix coturnix japonica) aged nine weeks were used. They were allocated to 3 dietary treatments. Each treatment comprised 5 replicates of 20 quails. The diets were supplemented with 0, 5 and 10 g/kg garlic powder. The experimental period lasted 21 weeks. The addition of garlic powder did not significantly affect body weight, egg production, feed consumption, feed efficiency, egg shell thickness, egg albumen index, egg yolk index and egg Haugh unit. Adding 5 and 10 g/kg garlic powder to the laying quail diets increased egg weight (p<0.01). Egg yolk cholesterol and blood serum cholesterol concentration were reduced with garlic powder supplementation. The results of this study demonstrated that garlic powder addition had a significant cholesterol-reducing effect in serum and egg yolk without adverse effects on performance and egg traits of laying quails. ### Analysis of Biological Activity by Time of Black Garlic Ripening in Seosan Yukjok Garlic and Elephant Garlic (서산육쪽·코끼리마늘의 흑마늘 숙성 시기별 생리활성 분석) • Cho, Yong-Koo;Ann, Seoung-Won;Jang, Myoung-Jun;Oh, Tae-Seok;Oh, Min-Gyo;Park, Youn-Jin;Kim, Chang-ho • Journal of Environmental Science International • / • v.29 no.5 • / • pp.469-477 • / • 2020 • This study analyzed the quality characteristics of black garlic made from Seosan Yukjok Garlic and elephant garlic in Seosan, Chungnam province. Of the inorganic components, Mg content was the highest in all treatment groups, and the Ca content was high in each of the 15 day treatments. The content of K was high after 10 days aging in Yukjok garlic and after 15 days in the elephant garlic. The Fe, Na, K, and Mg content was high in Yukjok black garlic after 15 days, and Na, K, Ca, and Mg were high in the elephant black garlic aged for 15 days. The crude fat content was high in both Yukjok black garlic and elephant black garlic after 15 days. Vitamin C content was highest in both types of garlic after aging for 15 days. An analysis of four kinds of organic acids showed that citric acid was the only organic acid to appear in raw garlic of Yukjok garlic and elephant garlic. Black Yukjok garlic and elephant black garlic had a greater total amino acid content than the raw garlic of either type. 
However, among the tested amino acids, 13 kinds of amino acids were at their highest after five days of ripening in Yukjok black garlic, while 15 kinds of amino acids were abundant in elephant garlic after the same period. Eight kinds of amino acids were high after aging for 15 days. Through this study, it was confirmed that, in the process of making black garlic, changes in the main components of the garlic occur through different routes, and these changes vary depending on the garlic species. Therefore, this study provided basic data for the processing of Seosan's Yukjok black garlic and elephant black garlic.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49111008644104004, "perplexity": 23436.050851668104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400197946.27/warc/CC-MAIN-20200920094130-20200920124130-00201.warc.gz"}
https://www.physicsoverflow.org/36282/question-about-notation-writing-moduli-space-string-theory
# Question about notation used in writing the moduli space in string theory + 2 like - 0 dislike 118 views In physics papers, particularly those by Aspinwall, or textbooks, I encounter things like $$\mathcal{M} \simeq O(\Gamma_{4,20})\setminus O(4,20)/(O(4)\times O(20))$$ For instance, this is from https://arxiv.org/abs/hep-th/9707014. Am I correct in understanding that the numerator is $O(\Gamma_{4,20})$ followed by a set theoretic subtraction of $O(4,20)$? If I wanted to compute the dimension of $\mathcal{M}$, I know I'd have to subtract the dimension of the numerator from the dimension of the denominator (which is just $(4\times 3/2) + (20 \times 19/2)$). What is the dimension of the numerator? This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user leastaction asked Jun 6, 2016 That rather looks like a double coset to me, i.e. you're quotienting a left action of $\mathrm{O}(\Gamma)$ and a right action of $\mathrm{O}(4)\times\mathrm{O}(20)$ out of $\mathrm{O}(4,20)$ This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user ACuriousMind Ah, thank you! So does that mean that if one writes $G_1\setminus H / G_2$, then the dimension will be $\dim(H) - \dim(G_1) - \dim(G_2)$? This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user leastaction The group acting from the left is a discrete group (dimension 0) consisting of transformations of a lattice, similar to a matrix group with integer entries. As for the dimension, it will be at least that, but may be higher if the actions are not faithful/effective. This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user doetoe Thank you @doetoe! This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user leastaction It is useful to remember that the dimension of $\mathcal{M} \simeq O(\Gamma_{p,q})\setminus O(p,q)/(O(p)\times O(q))$ is just $pq$. This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user user40085 ## 2 Answers + 2 like - 0 dislike This is not really an answer (the answer is ACuriousMind's comment: this is a double coset space), but it may help to consider the construction of the moduli space of elliptic curves, as this can be done in the same way but is very easy. Every complex elliptic curve is obtained as $\Bbb C$ modulo a lattice. Scaling the lattice by a complex number gives an isomorphic curve, so you can scale in such a way that the lattice is generated by 1 and by $\tau\in\Bbb H$, the complex upper half plane (this is similar to gauge fixing). Not all $\tau$ give different lattices: two sets of generators give the same lattice if they are related by an element of $GL_2(\Bbb Z)$ (there is some residual gauge freedom). Since we work with oriented bases, we can restrict to $SL_2(\Bbb Z)$. It is not hard to see that the action of $SL_2(\Bbb Z)$ on a basis $1,\tau$ corresponds to an action on $\Bbb H$ by Möbius transformations $$\begin{pmatrix}a & b\\ c & d\end{pmatrix}\tau = \frac{a\tau + b}{c\tau + d}$$ This gives us the moduli space as a quotient $$\mathcal M \cong SL_2(\Bbb Z)\backslash \Bbb H$$ In general, if you have a space (possibly with some extra structure like a Riemannian metric) on which some group of automorphisms acts transitively (i.e. 
the subgroup fixing a given point) of any point. This is the orbit-stabilizer theorem. In our example, the complex upper half plane $\Bbb H$ has an obvious complex structure as a subset of $\Bbb C$, and its group of holomorphic automorphisms is $SL_2(\Bbb R)$ acting by Möbius transformations, except that $+I$ and $-I$ do the same thing, and the automorphisms are really $SL_2(\Bbb R)/\langle-I\rangle = PSL_2(\Bbb R)$. The stabilizer of the point $i$ is $SO_2(\Bbb R)\subset PSL_2(\Bbb R)$, so that $$\mathcal M \cong SL_2(\Bbb Z)\backslash PSL_2(\Bbb R)/SO_2(\Bbb R)$$ This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user doetoe answered Jun 6, 2016 by (125 points) Thank you @doetoe for a comprehensive answer! This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user leastaction + 2 like - 3 dislike There are two things going on. One is modulo, that is, the forward slash /, and the other is set-minus $\setminus$, the backwards slash. The $$G_4(20) = \frac{O(4,20)}{O(4)\times O(20)}$$ is the Grassmannian space defined by $4$-planes. The group $O(\Gamma_{4,20})$ is an orthogonal group over the unimodular transformations, a bit like saying $O(n,\mathbb Z)$, where Aspinwall introduces this on the two-torus earlier in the paper. The moduli space is this orthogonal group "minus" these Grassmannians. This post imported from StackExchange Physics at 2016-06-08 08:49 (UTC), posted by SE-user Lawrence B. Crowell answered Jun 6, 2016 by (590 points) This is wrong. There is no set-minus but one left quotient and one right quotient, as explained in the comments to the question and in doetoe's answer.
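As a quick check of the dimension count discussed in the comments above (a sketch of my own, not part of the imported thread; it only assumes that the arithmetic factor $O(\Gamma_{p,q})$ is discrete and so contributes dimension zero):

```python
def dim_O(n):
    # dim O(n) = n(n-1)/2, the dimension of the space of antisymmetric matrices
    return n * (n - 1) // 2

def moduli_dim(p, q):
    # dim O(p,q) equals dim O(p+q); the lattice group O(Gamma_{p,q}) is
    # discrete (dimension 0), so the double coset has dimension
    # dim O(p,q) - dim O(p) - dim O(q)
    return dim_O(p + q) - dim_O(p) - dim_O(q)

assert moduli_dim(4, 20) == 4 * 20  # = 80, i.e. dim M = pq
print(moduli_dim(4, 20))
```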
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9156833291053772, "perplexity": 696.6913728701355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509326.21/warc/CC-MAIN-20181015142752-20181015164252-00527.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2020_AMC_12A_Problems/Problem_25&curid=18141&oldid=151749
# 2020 AMC 12A Problems/Problem 25

## Problem The number $a=\frac{p}{q}$, where $p$ and $q$ are relatively prime positive integers, has the property that the sum of all real numbers $x$ satisfying $$\lfloor x \rfloor \cdot \{x\} = a \cdot x^2$$ is $420$, where $\lfloor x \rfloor$ denotes the greatest integer less than or equal to $x$ and $\{x\}=x-\lfloor x \rfloor$ denotes the fractional part of $x$. What is $p+q$?

## Solution 1 Let $x_1$ be the unique solution in the range $1\le x<2$. Note that $kx_1$ is also a solution as long as $kx_1<k+1$, hence all our solutions are $x=kx_1$ for some positive integer $k\le N$. This sum $\frac{N(N+1)}{2}\,x_1$ must be between $\frac{N(N+1)}{2}$ and $\frac{(N+1)(N+2)}{2}$, which gives $N=28$ and $x_1=\frac{30}{29}$. Plugging this back in gives $a=\frac{1/29}{(30/29)^2}=\frac{29}{900}$, so $p+q=929$.

## Solution 2 First note that $\lfloor x\rfloor\cdot\{x\}\le 0$ when $x<0$ while $a\cdot x^2\ge 0$. Thus we only need to look at positive solutions ($x=0$ doesn't affect the sum of the solutions). Next, we break down $\lfloor x\rfloor\cdot\{x\}$ for each interval $[n,n+1)$, where $n$ is a positive integer. Assume $\lfloor x\rfloor=n$, then $\{x\}=x-n$. This means that when $n\le x<n+1$, $\lfloor x\rfloor\cdot\{x\}=n(x-n)$. Setting this equal to $a\cdot x^2$ gives $$ax^2-nx+n^2=0 \implies x=\frac{n\pm n\sqrt{1-4a}}{2a}.$$ We're looking at the root lying in $[n,n+1)$, which is $x=\frac{n\left(1-\sqrt{1-4a}\right)}{2a}$. Note that if $N$ is the greatest $n$ such that this root lies in $[n,n+1)$, the sum of all these solutions is slightly over $\frac{N(N+1)}{2}$, which is $406$ when $N=28$, just under $420$. Checking this gives $$\sum_{n=1}^{28}\frac{n\left(1-\sqrt{1-4a}\right)}{2a}=406\cdot\frac{1-\sqrt{1-4a}}{2a}=420\implies\frac{1-\sqrt{1-4a}}{2a}=\frac{30}{29}\implies a=\frac{29}{900}\implies p+q=929.$$ ~ktong

## Video Solution 1 (Geometry) This video shows how things like The Pythagorean Theorem and The Law of Sines work together to solve this seemingly algebraic problem: https://www.youtube.com/watch?v=6IJ7Jxa98zw&feature=youtu.be

## Video Solution 3 (by Art of Problem Solving) Created by Richard Rusczyk

## Remarks ### Graph We make a table of values and graph by branches. ~MRENTHUSIASM (Graph by Desmos: https://www.desmos.com/calculator/ouvaiqjdzj)

### Extensions Visit the Discussion Page for the underlying arguments and additional questions. ~MRENTHUSIASM
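A numerical sanity check of the value $a=\frac{29}{900}$ above (a sketch of my own, not part of the original wiki page): solve the quadratic on each interval $[n,n+1)$ and sum the roots.

```python
import math

a = 29 / 900

total, n = 0.0, 1
while True:
    # On [n, n+1): floor(x) = n and {x} = x - n, so the equation
    # floor(x) * {x} = a x^2 becomes a x^2 - n x + n^2 = 0.
    disc = n * n * (1 - 4 * a)
    x = (n - math.sqrt(disc)) / (2 * a)  # the root lying in [n, n+1)
    if x >= n + 1:
        break  # no solution in this interval or beyond
    total += x
    n += 1

print(total)  # 420.0, up to floating-point error
```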
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9867926836013794, "perplexity": 621.1007132794549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00619.warc.gz"}
https://math.dartmouth.edu/~ahb/thesis_html/node134.html
Next: Convergence with number of Up: Dissipation in Deforming Chaotic Previous: Appendix F: Cross correlations

# Appendix G: Numerical evaluation of wavefunction boundary integrals

Boundary methods are a central component of this thesis. Closed integrals of a function over the boundary coordinate $s$ are ubiquitous. Generally a square matrix of integrals $$M_{ij} = \oint ds\, f_i(s)\, g_j(s) \qquad \mathrm{(G.1)}$$ is required, where the indices $i,j$ label multiple functions. For evaluation of the quantum band profile (Chapters 2 and 3), the local density of states (Chapter 6), the tension and area-norm matrices (Chapter 5), and the Vergini matrix and its derivative (Chapter 6), $f_i$ and $g_j$ are basis functions or eigenstates which oscillate about zero on the length scale $\lambda$, the quantum (de Broglie) free-space wavelength. The deformation function boundary integrals (Chapter 4) do not involve any quantum scale, but are also evaluated using the method below. I will present only the case $d=2$, where boundary integrals become line integrals over $s$; the generalization to higher $d$ is simple. My tool for evaluation of an integral on a closed curve is the discretization $$\oint ds\, h(s) \approx \frac{L}{N}\sum_{n=1}^{N} h(s_n), \qquad \mathrm{(G.2)}$$ where $L$ is the range of $s$, that is, the length of the line integral (billiard perimeter). The $N$ points $s_n$ are spread uniformly (equidistant in $s$) along the closed curve. Because no point is special, no special quadrature [161] weights arise near any endpoints: all weights are equal. More sophisticated and accurate approximations exist for closed line integral evaluation [58], however this is sufficient for my needs and is very simple to code. Its errors will be discussed and tested below. A single integral (G.2) requires $N$ function evaluations. Naively one might guess that filling the $M\times M$ matrix using (G.1) requires $O(NM^2)$ evaluations. However, the correct way to compute (G.1) requires only $O(NM)$ such evaluations: first fill the rectangular $M\times N$ matrices $F_{in}=f_i(s_n)$ and $G_{jn}=g_j(s_n)$, from which follows $$M_{ij} = \frac{L}{N}\sum_{n=1}^{N} F_{in}G_{jn}. \qquad \mathrm{(G.3)}$$ This matrix multiplication does require $O(NM^2)$ operations, but being simple adds and multiplications (and using optimized library code e.g. BLAS), it is very fast and does not affect the scaling. If you like, the matrix multiply `performs' the integration over $s$. In the case where $f$ and $g$ are the same function, only half the evaluations are required. Note that if a general weighting function $w(s)$ is required in the integrand (G.1), it can easily be incorporated into $F$ or $G$, or equivalently be included as a diagonal matrix inserted between $F$ and $G^{\mathrm{T}}$ in (G.3). Alex Barnett 2001-10-03
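A minimal numerical sketch of the scheme (G.1)-(G.3) (my own illustration, not from the thesis; the trigonometric test functions and the circular boundary of perimeter $L=2\pi$ are made up for the demonstration):

```python
import numpy as np

L, N, M = 2 * np.pi, 512, 6          # perimeter, quadrature points, basis size
s = np.arange(N) * L / N             # equidistant points, all weights equal L/N

# Hypothetical boundary functions f_i(s) and g_j(s), each sampled once.
F = np.array([np.cos(i * s) for i in range(M)])        # F[i, n] = f_i(s_n)
G = np.array([np.sin((j + 1) * s) for j in range(M)])  # G[j, n] = g_j(s_n)

# (G.3): the matrix multiply "performs" the integration over s.
Mmat = (L / N) * F @ G.T             # Mmat[i, j] approximates (G.1)

# Spot-check one entry against direct quadrature of (G.1) via (G.2).
i, j = 2, 1
direct = (L / N) * np.sum(F[i] * G[j])
assert np.isclose(Mmat[i, j], direct)
print(Mmat.shape)
```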
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9679244756698608, "perplexity": 1282.3157587384399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509845.17/warc/CC-MAIN-20181015205152-20181015230652-00199.warc.gz"}
https://forums.powerarchiver.com/topic/3278/crc-error-when-out-of-space/5?lang=en-US
# CRC error when out of space • When I try to extract a multi-part RAR file (i.e. r01, r02) to a drive that runs out of space during extraction, PowerArchiver reports a CRC error in the part that it is extracting from when the drive runs out of space… freeing up space and re-attempting extraction works fine with no CRC errors. This has been going on for a while because I’ve had it happen before with earlier versions. Is it a known problem? Thanks. • So what’s the problem? PowerArchiver reports an error because there is not enough space and so it is not possible to restore the original file there. • But it doesn’t report an error saying that there’s no space left. It says there’s a CRC error in the file it’s extracting when space runs out, when that’s not the real problem. • it is a general error that says many things, including crc and lack of space, right? :-). we will see if we can be more specific, thanks. • No, the error explicitly mentions a CRC error and only a CRC error. The exact text is as follows, in a dialog with a red X icon: “There is a CRC error on file: <file being extracted from multi-part RAR> Volume: <part being extracted when out of space> Continue?” …with Yes/No buttons. If I then try and extract the same file to a drive that has enough space, it succeeds without any problems. • @mbg: No, the error explicitly mentions a CRC error and only a CRC error. The exact text is as follows, in a dialog with a red X icon: “There is a CRC error on file: <file being extracted from multi-part RAR> Volume: <part being extracted when out of space> Continue?” …with Yes/No buttons. If I then try and extract the same file to a drive that has enough space, it succeeds without any problems. great, we will see if we can improve it… thanks! • This problem has been bugging me for a long time now. CRC error implies that the archive is corrupt and not “not enough space”. To get rid of this problem just check if there’s enough space on the drive for all the files in the archive…and warn the user if there’s not enough space. Since not every archive type contains extracted file sizes, you can do a “free space” check right before popping up the error message and decide on what message to display. That’s VERY easy to code… :cool: • the wrong error was shown, it will be fixed, i think it is in internal versions. • This post is deleted! • please check with official beta 1, and let us know if this has been fixed… thank you!
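The pre-extraction check proposed in the thread really is a few lines. A rough sketch in Python (illustrative only; PowerArchiver itself is not written in Python, and the required size would come from the archive headers when the format stores it):

```python
import shutil

def ensure_free_space(dest_dir: str, required_bytes: int) -> None:
    """Raise a clear 'out of space' error instead of a misleading CRC error."""
    free = shutil.disk_usage(dest_dir).free
    if free < required_bytes:
        raise OSError(
            f"Not enough free space on the destination drive: "
            f"need {required_bytes} bytes, only {free} available"
        )

# Usage (hypothetical): call before extraction starts.
# ensure_free_space("D:/unpack", total_unpacked_size_from_headers)
```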
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8153259754180908, "perplexity": 2360.348164522575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735916.91/warc/CC-MAIN-20200805065524-20200805095524-00324.warc.gz"}
https://mathematica.stackexchange.com/questions/113883/huge-difference-after-changing-a-fraction-to-decimal
# Huge difference after changing a fraction to decimal I have a limit to calculate. With[{p = 1/2, q = 0.1}, Limit[a^q Integrate[Sin[x]/x^p, {x, a, Infinity}], a -> Infinity]] gives the correct result 0. But changing that $\frac{1}{2}$ to $0.5$, With[{p = 0.5, q = 0.1}, Limit[a^q Integrate[Sin[x]/x^p, {x, a, Infinity}], a -> Infinity]] gives ComplexInfinity, which is very far from the correct answer. Why do machine precision and infinite precision produce this huge difference without any warning? FYI, for any decimal p, the decimal form gives an incorrect result, while the true value is 0. • The difference seems to lie in the integration; when p == 1/2 the integral is expressed as a function of FresnelS; in the numerical case of p == 0.5, instead, the integral is expressed with HypergeometricPFQ. Unfortunately I don't know enough about special functions to go any further than that, but others on this site do, so hopefully they may be able to weigh in on this. – MarcoB Apr 28 '16 at 18:59 • Seems like a possible bug to me. This integral looks convergent to me for any p>q regardless, machine or infinite precision. – LLlAMnYP Apr 28 '16 at 19:32 • @LLlAMnYP Mathematica gives a result for the integral. Then claims the limit diverges for decimal input. No bug there. – Daniel Lichtblau Apr 28 '16 at 20:08 • @DanielLichtblau the integral should have a convergent, though not necessarily 0, result for all p>0. The limit as a whole, though I have not taken the time to rigorously prove this, should reduce to zero for a -> Infinity and p>q. If there were a case where only exactly 1/2 works, while otherwise some important condition is not met, I would agree with you, but IMO here the result should be zero for a range of ps. – LLlAMnYP Apr 28 '16 at 20:36 • Possibly shows a bug in Series handling of HypergeometricPFQ at infinity, when input to PFQ is approximate. – Daniel Lichtblau Apr 28 '16 at 21:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349422812461853, "perplexity": 983.422675113494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529175.83/warc/CC-MAIN-20190723085031-20190723111031-00207.warc.gz"}
https://www.physicsforums.com/threads/strength-of-magnetic-fields.356238/
Strength of magnetic fields

I've been given the question of computing the field strength in the Van Allen belts, given the strength of Earth's magnetic field at the surface. Then, I am supposed to calculate the gyroradius of a 50 MeV proton from the strengths I come up with. I want to know if I am going about this correctly. Is it OK to use B = B(at Earth's surface) * (1 R_Earth / r)^3, where r is the belt radius in Earth radii? For example, if B at the surface is 4*10^-5 T and the inner belt radius = 1.5 (Earth radii), then I will get: B = 4*10^-5 * (1/1.5)^3 = 1.185*10^-5 T. As for the gyroradius, the formula is r = mv/(qB). Can I go ahead and assume the velocity of the proton is equal to the speed of light? The notes that I am using to carry out this problem do not specify any way of getting the velocity of a particle moving through the belts. Please let me know if this looks ok! Any help would be appreciated. Thanks!!
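One common way to run these numbers (a sketch of my own, not from the thread): keep the dipole scaling B ∝ 1/r^3 as in the post, but treat the 50 MeV proton relativistically rather than assuming v = c, since its kinetic energy is well below the 938 MeV rest energy (v comes out near 0.31c). The gyroradius is then r = p/(qB):

```python
import math

# Dipole scaling of the field from the surface value
B_surface = 4e-5                 # T
B = B_surface / 1.5**3           # inner belt at 1.5 Earth radii -> ~1.185e-5 T

# Relativistic kinematics for a 50 MeV proton: E = gamma * m c^2
E_rest = 938.272e6               # proton rest energy, eV
gamma = 1 + 50e6 / E_rest
beta = math.sqrt(1 - 1 / gamma**2)

q = 1.602e-19                    # C
c = 2.998e8                      # m/s
p = gamma * beta * E_rest * q / c  # momentum in kg m/s

r_gyro = p / (q * B)
print(f"v = {beta:.2f} c, gyroradius = {r_gyro / 1e3:.0f} km")  # ~0.31c, ~90 km
```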
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9356184601783752, "perplexity": 721.9879033723948}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882102.31/warc/CC-MAIN-20201024051926-20201024081926-00582.warc.gz"}
https://www.msri.org/workshops/899/schedules/27572
# Mathematical Sciences Research Institute

# Deformation of spherical conical metrics

## Recent developments in microlocal analysis October 14, 2019 - October 18, 2019

October 14, 2019 (11:00 AM PDT - 12:00 PM PDT) Speaker(s): Xuwen Zhu (Northeastern University) Location: MSRI: Simons Auditorium Tags/Keywords • Spherical conical metrics • deformation • uniformization

Abstract The problem of finding and classifying constant curvature metrics with conical singularities has a long history bringing together several different areas of mathematics. This talk will focus on the particularly difficult spherical case where many new phenomena appear. When some of the cone angles are bigger than $2\pi$, uniqueness fails and existence is not guaranteed; smooth deformation is not always possible and the moduli space is expected to have singular strata. I will give a survey of several recent results regarding this singular uniformization problem, connecting microlocal techniques with complex analysis and synthetic geometry. Based on joint works with Rafe Mazzeo and Bin Xu.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48721033334732056, "perplexity": 1586.4536813065552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531762.30/warc/CC-MAIN-20220520061824-20220520091824-00169.warc.gz"}
https://www.physicsforums.com/threads/spinning-pencil-think-top-urgent-help-wanted.158488/
# Spinning Pencil (think top) Urgent help wanted

1. Feb 27, 2007

### ^_^physicist

1. The problem statement, all variables and given/known data

A pencil is set spinning in an upright position. How fast must the spin be for the pencil to remain in the upright position? Assume that the pencil is a uniform cylinder of length a and diameter b. Find the value of the spin in revolutions per second for a=20cm and b=1cm. Express the result in revolutions per second.

2. Relevant equations

Assuming that this is a top problem: $$S^2(I_s)^2 > 4mglI$$ is the only relevant equation I can think of. Where $$I_s$$ = moment of inertia about the symmetry axis, and I = moment about the axes normal to the symmetry axis.

3. The attempt at a solution

Taking the above equation for the necessary speed for "sleeping" to occur (that is, maintaining the vertical), I manipulate the equation into S > [(4*m*g*l*I)/(I_s)^2]^(1/2). Now, noting that this is a cylinder, I come up with I_s = m*(a^2)/2 and I = m*(a^2/4 + b^2/12); however, when I plug these values into the expression I am not getting what the back of the book is getting; in fact, if I plug in numbers I am off by a significant amount (3 orders of magnitude). So, from what I can tell, I am making a mistake somewhere in figuring out the I and I_s values.

The back of the book gives the equation as S > [ (128*g*a)/(b^4)*(a^2/3+b^2/16) ]^(1/2) = (when values are inserted as indicated in the question) 2910 RPS.

Any ideas on how to figure out I and I_s?

Last edited: Feb 27, 2007
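The book's expression can be checked numerically (a sketch of my own, not from the thread): it gives about 2910 rev/s, and the same value follows from $$S^2(I_s)^2 > 4mglI$$ once the standard cylinder moments are used, namely I_s = m*b^2/8 (spin about the symmetry axis, radius b/2) and I = m*(a^2/3 + b^2/16) (transverse axis through the pivot end), with l = a/2; the mass m cancels.

```python
import math

g, a, b = 9.8, 0.20, 0.01   # m/s^2, length (m), diameter (m)

# Book's closed form, in rad/s, converted to rev/s
S_book = math.sqrt(128 * g * a * (a**2 / 3 + b**2 / 16) / b**4)
print(S_book / (2 * math.pi))   # ~2910 rev/s

# Same threshold from S^2 I_s^2 > 4 m g l I (the mass m cancels)
Is_m = b**2 / 8                  # I_s / m, spin about the symmetry axis
I_m = a**2 / 3 + b**2 / 16       # I / m, transverse axis through the tip
S = math.sqrt(4 * g * (a / 2) * I_m) / Is_m
print(S / (2 * math.pi))         # matches the book's value
```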
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8367369174957275, "perplexity": 1051.8200992303207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917126538.54/warc/CC-MAIN-20170423031206-00495-ip-10-145-167-34.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/31639/what-do-the-quantum-numbers-actually-signify
# What do the quantum numbers actually signify? I know how to calculate them and such stuff, but I wanted to know what they actually signify. I have a vague idea that they have something to do with an electron's position in an atom but what do all of them mean? Any help would be greatly appreciated! Quantum numbers give information about the location of an electron or set of electrons. A full set of quantum numbers describes a unique electron for a particular atom. Think about it as the mailing address to your house. It allows one to pinpoint your exact location out of a set of $n$ locations you could possibly be in. We can narrow the scope of this analogy even further. Consider your daily routine. You may begin your day at your home address, but if you have an office job, you can be found at a different address during the work week. Therefore we could say that you can be found in either of these locations depending on the time of day. The same goes for electrons. Electrons reside in atomic orbitals (which are very well defined 'locations'). When an atom is in the ground state, these electrons will reside in the lowest energy orbitals possible (e.g. 1$s^2$ 2$s^2$ and 2$p^2$ for carbon). We can write out the physical 'address' of these electrons in a ground-state configuration using quantum numbers, as well as the location(s) of these electrons when in some non-ground (i.e. excited) state. You could describe your home location any number of ways (GPS coordinates, qualitatively describing your surroundings, etc.) but we've adopted a particular formalism in how we describe it (at least in the case of mailing addresses). The quantum numbers have been laid out in the same way. We could communicate with each other that an electron is "located in the lowest energy, spherical atomic orbital" but it is much easier to say a spin-up electron in the 1$s$ orbital instead. The four quantum numbers allow us to communicate this information numerically without any need for a wordy description. Of course carbon is not always going to be in the ground state. Given a wavelength of light, for example, one can excite carbon in any number of ways. Where will the electron(s) go? Regardless of what wavelength of light we use, we know that we can describe the final location(s) using the four quantum numbers. You can do this by writing out all the possible permutations of the four quantum numbers. Of course, with a little more effort, you could predict the exact location where the electron goes, but in my example above, you know for a fact you could describe it using the quantum number formalism. The quantum numbers also come with a set of restrictions which inherently gives you useful information about where electrons will NOT be. For instance, you could never have the following quantum numbers for an atom:

$n$=1; $l$=0; $m_l$=0; $m_s$=1/2

$n$=1; $l$=0; $m_l$=0; $m_s$=-1/2

$n$=1; $l$=0; $m_l$=0; $m_s$=1/2

This set of quantum numbers indicates that three electrons reside in the 1$s$ orbital, which is impossible! As Jan stated in his post, these quantum numbers are derived from the solutions to the Schrodinger equation for the hydrogen atom (or a 1-e$^-$ system). There are any number of solutions to this equation that relate to the possible energy levels of the hydrogen atom. Remember, energy is QUANTIZED (as postulated by Max Planck). That means that an energy level may exist (arbitrarily) at 0 and 1 but NEVER in between. There is a discrete 'jump' in energy levels and not some gradient between them. 
From these solutions a formalism was constructed to communicate the solutions in a very easy, numerical way, just as mailing addresses are purposefully formatted so that anyone can understand them with minimal effort. In summary, the quantum numbers not only tell you where electrons will be (ground state) and can be (excited state), but also tell you where electrons cannot be in an atom (due to the restrictions for each quantum number).

Principal quantum number ($n$) - indicates the orbital size. Electrons in atoms reside in atomic orbitals. These are referred to as $s,p,d,f...$ type orbitals. A $1s$ orbital is smaller than a $2s$ orbital. A $2p$ orbital is smaller than a $3p$ orbital. This is because orbitals with a larger $n$ value are larger, being further away from the nucleus. The principal quantum number is an integer value where $n$ = 1,2,3... .

Angular quantum number ($l$) - indicates the shape of the orbital. Each type of orbital ($s,p,d,f..$) has a characteristic shape associated with it. $s$-type orbitals are spherical while $p$-type orbitals have 'dumbbell' orientations. The orbitals described by $l$=0,1,2,3... are $s,p,d,f...$ orbitals, respectively. The angular quantum number ranges from 0 to $n$-1. Therefore, if $n$ = 3, then the possible values of $l$ are 0, 1, 2.

Magnetic quantum number ($m_l$) - indicates the orientation of a particular orbital in space. Consider the $p$ orbitals. This is a set of orbitals consisting of three $p$-orbitals that each have a unique orientation in space. In Cartesian space, each orbital would lie along an axis (x, y, or z) and would be centered at the origin. While each orbital is indeed a $p$-orbital, we can describe each orbital uniquely by assigning this third quantum number to indicate its position in space. Therefore, for a set of $p$-orbitals, there would be three $m_l$, each uniquely describing one of these orbitals. The magnetic quantum number can have values of $-l$ to $l$. Therefore, in our example above (where $l$ = 0, 1, 2), $m_l$ for $l=2$ would be -2, -1, 0, 1, 2.

Spin quantum number ($m_s$) - indicates the 'spin' of the electron residing in some atomic orbital. Thus far we have introduced three quantum numbers that localize a position to an orbital of a particular size, shape and orientation. We now introduce the fourth quantum number that describes the type of electron that can be in that orbital. Recall that two electrons can reside inside one atomic orbital. We can define each one uniquely by indicating the electron's spin. According to the Pauli exclusion principle, no two electrons can have the exact same four quantum numbers. This means that two electrons in one atomic orbital cannot have the same 'spin'. We generally denote 'spin-up' as $m_s$=1/2 and spin-down as $m_s$=-1/2.

• This is quite helpful, but do you think a little more on the significance of these numbers might be even more helpful? Such as, the energy levels for the principal quantum number or the bonding implications of the angular quantum number? (I don't know enough about the implications of the last two to generalize that much). Also, I feel somewhat like the OP. I can calculate these numbers and I understand that they give us a way to annotate 3D info for an electron, but what does that enable us to do as a result? Why are quantum numbers important for chemistry? May 19 '15 at 16:19 • @Cohen_the_Librarian I've extensively edited my post to try and address your questions/suggestions. 
May 19 '15 at 17:58 • Consider l = 1 (i.e. p orbitals): do the px, py and pz orbitals correspond to ml = -1, 0 and 1 respectively? Is there any correspondence that can be done for the d and f orbitals as well? I understand that it is a matter of perspective, but is there a particular convention to assign each value of ml to a particular orbital, be it px or dx-y or what not. Feb 17 '18 at 3:28 The Schrödinger equation for most systems has many solutions $\hat{H}\Psi_i=E_i\Psi_i$, where $i=1,2,3,..$. In the case of the hydrogen atom the solutions have a specific notation, which is where the quantum numbers come from. In the case of the H atom the principal quantum number $n$ refers to solutions with different energy. For $n>1$ there are several solutions with the same energy, which come in different shapes ($s$, $p$, etc with different angular quantum numbers $l$) that can point in different directions ($p_x$, $p_y$, etc with different magnetic quantum numbers $m$). These quantum numbers are also applied to multi-electron atoms within the AO approximation. So the quantum numbers are a way to count (label) the solutions to the Schrödinger equation.
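The restrictions described in both answers are easy to enumerate. A small sketch (my own illustration) that lists every allowed $(n, l, m_l, m_s)$ combination for a shell and confirms the familiar $2n^2$ count:

```python
from fractions import Fraction

def quantum_states(n):
    # l = 0 .. n-1; m_l = -l .. l; m_s = +1/2 or -1/2
    for l in range(n):
        for m_l in range(-l, l + 1):
            for m_s in (Fraction(1, 2), Fraction(-1, 2)):
                yield (n, l, m_l, m_s)

for n in (1, 2, 3):
    states = list(quantum_states(n))
    assert len(states) == 2 * n * n   # each shell holds 2n^2 electrons
    print(n, len(states))             # 1 2 / 2 8 / 3 18
```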
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7380931377410889, "perplexity": 366.6377119878563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358189.36/warc/CC-MAIN-20211127133237-20211127163237-00085.warc.gz"}
https://query.library.utoronto.ca/index.php/search/q?kw=Author:Tjus,%20J.%20Becker
# Articles

## Search Articles

by Aartsen, M. G and Abraham, K and Ackermann, M and Adams, J and Aguilar, J. 
A and Ahlers, M and Ahrens, M and Altmann, D and Andeen, K and Anderson, T and Ansseau, I and Anton, G and Archinger, M and Argüelles, C and Arlen, T. C and Auffenberg, J and Axani, S and Bai, X and Barwick, S. W and Baum, V and Bay, R and Beatty, J. J and Becker Tjus, J and Becker, K.-H and BenZvi, S and Berghaus, P and Berley, D and Bernardini, E and Bernhard, A and Besson, D. Z and Binder, G and Bindig, D and Blaufuss, E and Blot, S and Boersma, D. J and Bohm, C and Börner, M and Bos, F and Bose, D and Böser, S and Botner, O and Braun, J and Brayeur, L and Bretz, H.-P and Burgman, A and Casey, J and Casier, M and Cheung, E and Chirkin, D and Christov, A and Clark, K and Classen, L and Coenders, S and Collin, G. H and Conrad, J. M and Cowen, D. F and Cruz Silva, A. H and Daughhetee, J and Davis, J. C and Day, M and de André, J. P. A. M and De Clercq, C and del Pino Rosendo, E and Dembinski, H and De Ridder, S and Desiati, P and de Vries, K. D and de Wasseige, G and de With, M and DeYoung, T and Díaz-Vélez, J. C and di Lorenzo, V and Dujmovic, H and Dumm, J. P and Dunkman, M and Eberhardt, B and Ehrhardt, T and Eichmann, B and Euler, S and Evenson, P. A and Fahey, S and Fazely, A. R and Feintzeig, J and Felde, J and Filimonov, K and Finley, C and Flis, S and Fösig, C.-C and Fuchs, T and Gaisser, T. K and Gaior, R and Gallagher, J and Gerhardt, L and Ghorbani, K and Giang, W and Gladstone, L and Glüsenkamp, T and Goldschmidt, A and Golup, G and Gonzalez, J. G and ... and IceCube Collaboration Physical review letters, ISSN 0031-9007, 08/2016, Volume 117, Issue 7, pp. 071801 - 071801 The IceCube neutrino telescope at the South Pole has measured the atmospheric muon neutrino spectrum as a function of zenith angle and energy in the... Physics, Multidisciplinary | Physical Sciences | Physics | Science & Technology | Confidence intervals | Approximation | Searching | Neutrinos | Mathematical models | Zenith | Atmospherics | Muons Journal Article by Aartsen, M G and Abbasi, R and Ackermann, M and Adams, J and Aguilar, J A and Ahlers, M and Altmann, D and Arguelles, C and Auffenberg, J and Bai, X and Baker, M and Barwick, S W and Baum, V and Bay, R and Beatty, J J and Tjus, J Becker and Becker, K -H and BenZvi, S and Berghaus, P and Berley, D and Bernardini, E and Bernhard, A and Besson, D Z and Binder, G and Bindig, D and Bissok, M and Blaufuss, E and Blumenthal, J and Boersma, D J and Bohm, C and Bose, D and Böser, S and Botner, O and Brayeur, L and Bretz, H -P and Brown, A M and Bruijn, R and Casey, J and Casier, M and Chirkin, D and Christov, A and Christy, B and Clark, K and Classen, L and Clevermann, F and Coenders, S and Cohen, S and Cowen, D F and Silva, A H Cruz and Danninger, M and Daughhetee, J and Davis, J C and Day, M and Clercq, C De and Ridder, S De and Desiati, P and de Vries, K D and de With, M and DeYoung, T and Díaz-Vélez, J C and Dunkman, M and Eagan, R and Eberhardt, B and Eichmann, B and Eisch, J and Euler, S and Evenson, P A and Fadiran, O and Fazely, A R and Fedynitch, A and Feintzeig, J and Feusels, T and Filimonov, K and Finley, C and Fischer-Wasels, T and Flis, S and Franckowiak, A and Frantzen, K and Fuchs, T and Gaisser, T K and Gallagher, J and Gerhardt, L and Gladstone, L and Glüsenkamp, T and Goldschmidt, A and Golup, G and Gonzalez, J G and Goodman, J A and Góra, D and Grandmont, D T and Grant, D and Gretskov, P and Groh, J C and Groß, A and Ha, C and Ismail, A Haj and Hallen, P and Hallgren, A and Halzen, F and Hanson, K and ... 
Journal of instrumentation, ISSN 1748-0221, 03/2014, Volume 9, Issue 3, pp. P03009 - P03009 Accurate measurement of neutrino energies is essential to many of the scientific goals of large-volume neutrino telescopes. The fundamental observable in such... Cherenkov detectors | Performance of High Energy Physics Detectors | dE/dx detectors | Neutrino detectors | Technology | Instruments & Instrumentation | Science & Technology | Lower bounds | Spectral emittance | Emittance | Neutrinos | Energy measurement | Telescopes | Instrumentation | Channels | Deposition Journal Article by Aartsen, M.G and Abbasi, R and Abdou, Y and Ackermann, M and Adams, J and Aguilar, J.A and Ahlers, M and Altmann, D and Auffenberg, J and Bai, X and Baker, M and Barwick, S.W and Baum, V and Bay, R and Beatty, J.J and Bechet, S and Becker Tjus, J and Becker, K.-H and Benabderrahmane, M.L and BenZvi, S and Berghaus, P and Berley, D and Bernardini, E and Bernhard, A and Bertrand, D and Besson, D.Z and Binder, G and Bindig, D and Bissok, M and Blaufuss, E and Blumenthal, J and Boersma, D.J and Bohaichuk, S and Bohm, C and Bose, D and Böser, S and Botner, O and Brayeur, L and Bretz, H.-P and Brown, A.M and Bruijn, R and Brunner, J and Carson, M and Casey, J and Casier, M and Chirkin, D and Christov, A and Christy, B and Clark, K and Clevermann, F and Coenders, S and Cohen, S and Cowen, D.F and Cruz Silva, A.H and Danninger, M and Daughhetee, J and Davis, J.C and Day, M and De Clercq, C and De Ridder, S and Desiati, P and De Vries, K.D and De With, M and DeYoung, T and Díaz-Vélez, J.C and Dunkman, M and Eagan, R and Eberhardt, B and Eichmann, B and Eisch, J and Ellsworth, R.W and Euler, S and Evenson, P.A and Fadiran, O and Fazely, A.R and Fedynitch, A and Feintzeig, J and Feusels, T and Filimonov, K and Finley, C and Fischer-Wasels, T and Flis, S and Franckowiak, A and Frantzen, K and Fuchs, T and Gaisser, T.K and Gallagher, J and Gerhardt, L and Gladstone, L and Glüsenkamp, T and Goldschmidt, A and Golup, G and Gonzalez, J.G and Goodman, J.A and Góra, D and Grandmont, D.T and Grant, D and Groß, A and Ha, C and Haj Ismail, A and ... and unav and IceCube Collaboration Science (American Association for the Advancement of Science), ISSN 0036-8075, 11/2013, Volume 342, Issue 6161, pp. 
1242856 - 1242856 Journal Article by Aartsen, M.G and Ackermann, M and Adams, J and Aguilar, J.A and Ahlers, M and Ahrens, M and Altmann, D and Anderson, T and Arguelles, C and Arlen, T.C and Auffenberg, J and Bai, X and Barwick, S.W and Baum, V and Bay, R and Beatty, J.J and Becker Tjus, J and Becker, K.-H and Benzvi, S and Berghaus, P and Berley, D and Bernardini, E and Bernhard, A and Besson, D.Z and Binder, G and Bindig, D and Bissok, M and Blaufuss, E and Blumenthal, J and Boersma, D.J and Bohm, C and Bos, F and Bose, D and Böser, S and Botner, O and Brayeur, L and Bretz, H.-P and Brown, A.M and Buzinsky, N and Casey, J and Casier, M and Cheung, E and Chirkin, D and Christov, A and Christy, B and Clark, K and Classen, L and Clevermann, F and Coenders, S and Cowen, D.F and Cruz Silva, A.H and Daughhetee, J and Davis, J.C and Day, M and De André, J.P.A.M and De Clercq, C and Dembinski, H and De Ridder, S and Desiati, P and De Vries, K.D and De With, M and Deyoung, T and Díaz-Vélez, J.C and Dumm, J.P and Dunkman, M and Eagan, R and Eberhardt, B and Ehrhardt, T and Eichmann, B and Eisch, J and Euler, S and Evenson, P.A and Fadiran, O and Fazely, A.R and Fedynitch, A and Feintzeig, J and Felde, J and Filimonov, K and Finley, C and Fischer-Wasels, T and Flis, S and Frantzen, K and Fuchs, T and Gaisser, T.K and Gaior, R and Gallagher, J and Gerhardt, L and Gier, D and Gladstone, L and Glüsenkamp, T and Goldschmidt, A and Golup, G and Gonzalez, J.G and Goodman, J.A and Góra, D and Grant, D and Gretskov, P and Groh, J.C and Groß, A and Ha, C and ... and IceCube Collaboration Physical review letters, ISSN 0031-9007, 04/2015, Volume 114, Issue 17, pp. 171102 - 171102 A diffuse flux of astrophysical neutrinos above 100 TeV has been observed at the IceCube Neutrino Observatory. Here we extend this analysis to probe the... Physics, Multidisciplinary | Physical Sciences | Physics | Science & Technology | Flavor (particle physics) | Neutrinos | Classification | Consistency | Showers | Oscillations | Flux | Flavours Journal Article by Aartsen, M. G and Abraham, K and Ackermann, M and Adams, J and Aguilar, J. A and Ahlers, M and Ahrens, M and Altmann, D and Andeen, K and Anderson, T and Ansseau, I and Anton, G and Archinger, M and Argüelles, C and Auffenberg, J and Axani, S and Bai, X and Barwick, S. W and Baum, V and Bay, R and Beatty, J. J and Becker Tjus, J and Becker, K.-H and BenZvi, S and Berghaus, P and Berley, D and Bernardini, E and Bernhard, A and Besson, D. Z and Binder, G and Bindig, D and Bissok, M and Blaufuss, E and Blot, S and Bohm, C and Börner, M and Bos, F and Bose, D and Böser, S and Botner, O and Braun, J and Brayeur, L and Bretz, H.-P and Burgman, A and Carver, T and Casier, M and Cheung, E and Chirkin, D and Christov, A and Clark, K and Classen, L and Coenders, S and Collin, G. H and Conrad, J. M and Cowen, D. F and Cross, R and Day, M and de André, J. P. A. M and De Clercq, C and del Pino Rosendo, E and Dembinski, H and De Ridder, S and Desiati, P and de Vries, K. D and de Wasseige, G and de With, M and DeYoung, T and Díaz-Vélez, J. C and di Lorenzo, V and Dujmovic, H and Dumm, J. P and Dunkman, M and Eberhardt, B and Ehrhardt, T and Eichmann, B and Eller, P and Euler, S and Evenson, P. A and Fahey, S and Fazely, A. R and Feintzeig, J and Felde, J and Filimonov, K and Finley, C and Flis, S and Fösig, C.-C and Franckowiak, A and Friedman, E and Fuchs, T and Gaisser, T. 
K and Gallagher, J and Gerhardt, L and Ghorbani, K and Giang, W and Gladstone, L and Glagla, M and Glüsenkamp, T and Goldschmidt, A and Golup, G and Gonzalez, J. G and ... and IceCube Collaboration Physical review letters, ISSN 1079-7114, 12/2016, Volume 117, Issue 24, pp. 241101 - 241101 We report constraints on the sources of ultrahigh-energy cosmic rays (UHECRs) above 10(9) GeV, based on an analysis of seven years of IceCube data. This... Physics, Multidisciplinary | Physical Sciences | Physics | Science & Technology | Origins | Active galactic nuclei | Searching | Neutrinos | Pulsars | Fluxes | Star formation rate | Cosmic rays Journal Article by Aartsen, M G and Ackermann, M and Adams, J and Aguilar, J A and Ahlers, M and Ahrens, M and Al Samarai, I and Altmann, D and Andeen, K and Anderson, T and Ansseau, I and Anton, G and Argüelles, C and Auffenberg, J and Axani, S and Bagherpour, H and Bai, X and Barron, J P and Barwick, S W and Baum, V and Bay, R and Beatty, J J and Tjus, J Becker and Becker, K -H and BenZvi, S and Berley, D and Bernardini, E and Besson, D Z and Binder, G and Bindig, D and Blaufuss, E and Blot, S and Bohm, C and Börner, M and Bos, F and Bose, D and Böser, S and Botner, O and Bourbeau, J and Bradascio, F and Braun, J and Brayeur, L and Brenzke, M and Bretz, H -P and Bron, S and Burgman, A and Carver, T and Casey, J and Casier, M and Cheung, E and Chirkin, D and Christov, A and Clark, K and Classen, L and Coenders, S and Collin, G H and Conrad, J M and Cowen, D F and Cross, R and Day, M and de André, J P. A. M and De Clercq, C and DeLaunay, J J and Dembinski, H and De Ridder, S and Desiati, P and de Vries, K D and de Wasseige, G and de With, M and DeYoung, T and Díaz-Vélez, J C and di Lorenzo, V and Dujmovic, H and Dumm, J P and Dunkman, M and Eberhardt, B and Ehrhardt, T and Eichmann, B and Eller, P and Evenson, P A and Fahey, S and Fazely, A R and Felde, J and Filimonov, K and Finley, C and Flis, S and Franckowiak, A and Friedman, E and Fuchs, T and Gaisser, T K and Gallagher, J and Gerhardt, L and Ghorbani, K and Giang, W and Glauch, T and Glüsenkamp, T and Goldschmidt, A and Gonzalez, J G and Grant, D and Griffith, Z and ... and Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States) The European physical journal. C, Particles and fields, ISSN 1434-6044, 9/2017, Volume 77, Issue 9, pp. 1 - 11 ... , S. Axani 14 , H. Bagherpour 16 ,X .B a i 41 , J. P. Barron 23 ,S .W .B a r w i c k 27 , V . Baum 32 ,R .B a y 8 , J. J. Beatty 18,19 , J. Becker Tjus 11 , K.-H. Becker... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | ASTRONOMY AND ASTROPHYSICS Journal Article by Aartsen, M. G and Abbasi, R and Abdou, Y and Ackermann, M and Adams, J and Aguilar, J. A and Ahlers, M and Altmann, D and Auffenberg, J and Bai, X and Baker, M and Barwick, S. W and Baum, V and Bay, R and Beattie, K and Beatty, J. J and Bechet, S and Becker Tjus, J and Becker, K.-H and Bell, M and Benabderrahmane, M. L and BenZvi, S and Berdermann, J and Berghaus, P and Berley, D and Bernardini, E and Bernhard, A and Bertrand, D and Besson, D. Z and Bindig, D and Bissok, M and Blaufuss, E and Blumenthal, J and Boersma, D. J and Bohaichuk, S and Bohm, C and Bose, D and Böser, S and Botner, O and Brayeur, L and Brown, A. 
M and Bruijn, R and Brunner, J and Buitink, S and Carson, M and Casey, J and Casier, M and Chirkin, D and Christy, B and Clark, K and Clevermann, F and Cohen, S and Cowen, D. F and Cruz Silva, A. H and Danninger, M and Daughhetee, J and Davis, J. C and De Clercq, C and De Ridder, S and Desiati, P and de Vries-Uiterweerd, G and de With, M and DeYoung, T and Díaz-Vélez, J. C and Dreyer, J and Dunkman, M and Eagan, R and Eberhardt, B and Eisch, J and Ellsworth, R. W and Engdegård, O and Euler, S and Evenson, P. A and Fadiran, O and Fazely, A. R and Fedynitch, A and Feintzeig, J and Feusels, T and Filimonov, K and Finley, C and Fischer-Wasels, T and Flis, S and Franckowiak, A and Franke, R and Frantzen, K and Fuchs, T and Gaisser, T. K and Gallagher, J and Gerhardt, L and Gladstone, L and Glüsenkamp, T and Goldschmidt, A and Golup, G and Goodman, J. A and Góra, D and Grant, D and Groß, A and Gurtner, M and Ha, C and Haj Ismail, A and ... and IceCube Collaboration Physical review letters, ISSN 1079-7114, 03/2013, Volume 110, Issue 13, pp. 131302 - 131302 We have performed a search for muon neutrinos from dark matter annihilation in the center of the Sun with the 79-string configuration of the IceCube neutrino... Physics, Multidisciplinary | Physical Sciences | Physics | Science & Technology Journal Article by Aartsen, M G and Ackermann, M and Adams, J and Aguilar, J A and Ahlers, M and Ahrens, M and Samarai, I Al and Altmann, D and Andeen, K and Anderson, T and Ansseau, I and Anton, G and Argüelles, C and Auffenberg, J and Axani, S and Backes, P and Bagherpour, H and Bai, X and Barron, J P and Barwick, S W and Baum, V and Bay, R and Beatty, J J and Becker Tjus, J and Becker, K.-H and BenZvi, S and Berley, D and Bernardini, E and Besson, D Z and Binder, G and Bindig, D and Blaufuss, E and Blot, S and Bohm, C and Börner, M and Bos, F and Böser, S and Botner, O and Bourbeau, E and Bourbeau, J and Bradascio, F and Braun, J and Brenzke, M and Bretz, H.-P and Bron, S and Brostean-Kaiser, J and Burgman, A and Busse, R S and Carver, T and Cheung, E and Chirkin, D and Christov, A and Clark, K and Classen, L and Collin, G H and Conrad, J M and Coppin, P and Correa, P and Cowen, D F and Cross, R and Dave, P and Day, M and de André, J P. A. M and De Clercq, C and DeLaunay, J J and Dembinski, H and De Ridder, S and Desiati, P and de Vries, K D and de Wasseige, G and de With, M and DeYoung, T and Díaz-Vélez, J C and di Lorenzo, V and Dujmovic, H and Dumm, J P and Dunkman, M and Dvorak, E and Eberhardt, B and Ehrhardt, T and Eichmann, B and Eller, P and Evenson, P A and Fahey, S and Fazely, A R and Felde, J and Filimonov, K and Finley, C and Flis, S and Franckowiak, A and Friedman, E and Fritz, A and Gaisser, T K and Gallagher, J and Ganster, E and Gerhardt, L and Ghorbani, K and Giang, W and Glauch, T and Glüsenkamp, T and ... and IceCube Collaboration and Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States) The European physical journal. C, Particles and fields, ISSN 1434-6044, 10/2018, Volume 78, Issue 10, pp. 1 - 9 Journal Article
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8447086811065674, "perplexity": 25500.367333183796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878633.8/warc/CC-MAIN-20201021205955-20201021235955-00260.warc.gz"}
https://www.ias.ac.in/listing/bibliography/boms/Zhou_Chang-Rong
• Zhou Chang-Rong

Articles written in Bulletin of Materials Science

• Effect of substitution of titanium by magnesium and niobium on structure and piezoelectric properties in (Bi1/2Na1/2)TiO3 ceramics

To develop new (Bi1/2Na1/2)TiO3-based ceramics with excellent piezoelectric properties, the similarities and the differences between PZT and (Bi1/2Na1/2)TiO3 ceramics were analysed. Based on the analysis, a new (Bi1/2Na1/2)TiO3-based piezoelectric ceramic with B-site substitution of complex ions (Mg1/3Nb2/3)4+ for Ti4+ was prepared by a conventional ceramic technique, and the effect of the (Mg1/3Nb2/3)4+ addition on the microstructure, dielectric and piezoelectric properties was investigated. The results show that all compositions are mono-perovskite phase and the grain size increases with increasing content of (Mg1/3Nb2/3)4+. The piezoelectric constant $d_{33}$ first increases and then decreases, and the electromechanical coupling factor $k_{p}$ varies insignificantly with increasing content of (Mg1/3Nb2/3)4+.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48602014780044556, "perplexity": 7635.970332659618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00743.warc.gz"}
https://topologicalmusings.wordpress.com/tag/cassinis-identity/
You are currently browsing the tag archive for the 'cassini's identity' tag.

I thought I would share with our chess-loving readers the following interesting (and somewhat well-known) mathematical chess paradox, apparently proving that $64=65$, and the accompanying explanation offered by Prof. Christian Hesse, University of Stuttgart (Germany). It shows a curious connection between the well-known Cassini's identity (related to Fibonacci numbers) and the $8 \times 8$ chessboard ($8$ being a Fibonacci number!). The connection can be exploited further to come up with similar paradoxes wherein any $F_n \times F_n$ square can always be "rearranged" to form a $F_{n-1} \times F_{n+1}$ rectangle such that the difference between their areas is either $+1$ or $-1$. Of course, for the curious reader there are plenty of such dissection problems listed in Prof. David Eppstein's Dissection page.
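For reference, the identity behind the trick is Cassini's identity, $F_{n-1}F_{n+1} - F_n^2 = (-1)^n$. With $n = 6$ (so $F_5 = 5$, $F_6 = 8$, $F_7 = 13$) it gives $5 \cdot 13 - 8^2 = 1$: the rearranged $5 \times 13$ rectangle has exactly one unit of area more than the $8 \times 8$ square, which is the whole "$64=65$" paradox.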
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7088322639465332, "perplexity": 1618.2227894176342}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826842.56/warc/CC-MAIN-20181215083318-20181215105318-00506.warc.gz"}
https://math.stackexchange.com/questions/5231/how-can-i-remove-rotations-from-points-defining-a-plane
How can I remove rotations from points defining a plane?

I have coordinates for 4 vertices/points that define a plane, and the plane's normal/perpendicular. The plane has an arbitrary rotation applied to it. How can I 'un-rotate'/translate the points so that the plane has rotation 0 on x, y and z?

I've tried to get the plane rotation from the plane's normal:

rotationX = atan2(normal.z,normal.y);
rotationY = atan2(normal.z,normal.x);
rotationZ = atan2(normal.y,normal.x);

Is this correct? How do I apply the inverse rotation to the position vectors? I've tried to create a matrix with those rotations and multiply it with the vertices, but it doesn't look right. At the moment, I've written a simple test using Processing, which can be seen here:

float s = 50.0f; //scale/unit
PVector[] face = {new PVector(1.08335042,0.351914703846,0.839020013809),
                  new PVector(-0.886264681816,0.69921118021,0.839020371437),
                  new PVector(-1.05991327763,-0.285596489906,-0.893030643463),
                  new PVector(0.909702301025,-0.63289296627,-0.893030762672)};
PVector n = new PVector(0.150384, -0.500000, 0.852869);
PVector[] clone;

void setup(){
  size(400,400,P3D);
  smooth();
  clone = unRotate(face,n,true);
}

void draw(){
  background(255);
  translate(width*.5,height*.5);
  if(mousePressed){
    rotateX(map(mouseY,0,height,0,TWO_PI));
    rotateY(map(mouseX,0,width,0,TWO_PI));
  }
  stroke(128,0,0);
  beginShape(QUADS);
  for(int i = 0 ; i < 4; i++) vertex(face[i].x*s,face[i].y*s,face[i].z*s);
  endShape();
  stroke(0,128,0);
  beginShape(QUADS);
  for(int i = 0 ; i < 4; i++) vertex(clone[i].x*s,clone[i].y*s,clone[i].z*s);
  endShape();
}

//get rotation from normal
PVector getRot(PVector loc, Boolean asRadians){
  loc.normalize();
  float rz = asRadians ? atan2(loc.y,loc.x) : degrees(atan2(loc.y,loc.x));
  float ry = asRadians ? atan2(loc.z,loc.x) : degrees(atan2(loc.z,loc.x));
  float rx = asRadians ? atan2(loc.z,loc.y) : degrees(atan2(loc.z,loc.y));
  return new PVector(rx,ry,rz);
}

//translate vertices
PVector[] unRotate(PVector[] verts, PVector no, Boolean doClone){
  int vl = verts.length;
  PVector[] clone;
  if(doClone) {
    clone = new PVector[vl];
    for(int i = 0; i<vl;i++) clone[i] = PVector.add(verts[i],new PVector());
  } else clone = verts;
  PVector rot = getRot(no,false);
  PMatrix3D rMat = new PMatrix3D();
  rMat.rotateX(-rot.x); rMat.rotateY(-rot.y); rMat.rotateZ(-rot.z);
  for(int i = 0; i<vl;i++) rMat.mult(clone[i],clone[i]);
  return clone;
}

Any syntax/pseudo code or explanation is useful. What I'm trying to achieve is this: if I have a rotated plane (first screenshot in the original post), how can I move the vertices to get something that has no rotation (second screenshot)?

Thanks!

UPDATE: @muad I'm not sure I understand. I thought I was using matrices for rotations. PMatrix3D's rotateX, rotateY, rotateZ calls should do the rotations for me. Doing it manually would be declaring 3D matrices and multiplying them.
Here's a little snippet to illustrate this:

PMatrix3D rx = new PMatrix3D(1, 0,          0,           0,
                             0, cos(rot.x), -sin(rot.x), 0,
                             0, sin(rot.x), cos(rot.x),  0,
                             0, 0,          0,           1);
PMatrix3D ry = new PMatrix3D(cos(rot.y),  0, sin(rot.y), 0,
                             0,           1, 0,          0,
                             -sin(rot.y), 0, cos(rot.y), 0,
                             0,           0, 0,          1);
PMatrix3D rz = new PMatrix3D(cos(rot.z), -sin(rot.z), 0, 0,
                             sin(rot.z), cos(rot.z),  0, 0,
                             0,          0,           1, 0,
                             0,          0,           0, 1);
PMatrix3D r = new PMatrix3D();
r.apply(rx); r.apply(ry); r.apply(rz);

//test
PMatrix rmat = new PMatrix3D();
rmat.rotateX(rot.x); rmat.rotateY(rot.y); rmat.rotateZ(rot.z);
float[] frmat = new float[16]; rmat.get(frmat);
float[] fr = new float[16]; r.get(fr);
println(frmat);
println(fr);
/*
Outputs:
[0] 0.059300933 [1] 0.09312407 [2] -0.99388695 [3] 0.0
[4] 0.90466285 [5] 0.41586864 [6] 0.09294289 [7] 0.0
[8] 0.42198166 [9] -0.9046442 [10] -0.059584484 [11] 0.0
[12] 0.0 [13] 0.0 [14] 0.0 [15] 1.0
[0] 0.059300933 [1] 0.09312407 [2] -0.99388695 [3] 0.0
[4] 0.90466285 [5] 0.41586864 [6] 0.09294289 [7] 0.0
[8] 0.42198166 [9] -0.9046442 [10] -0.059584484 [11] 0.0
[12] 0.0 [13] 0.0 [14] 0.0 [15] 1.0
*/

• Is it OK if the points end up on the xy plane but in an arbitrary orientation, or do you want them to also line up with the x and y axes? The answer will be different in the two cases. – Rahul Sep 22 '10 at 19:16
• @Rahul The end goal is to get the rotation of the plane, then get the 3d coordinates as if they were on a 2d plane (one dimension can be dropped, as it should be 0). If the points end up on xy, xz or yz, it shouldn't matter. – George Profenza Sep 22 '10 at 22:16
• I think you didn't get my meaning. Let's say your quadrilateral is the red one in this crudely-drawn diagram: imgur.com/wwDx2.png . The smallest rotation that brings them into the xy plane gets you the blue quadrilateral. Is that what you want, or do you want the rotation that gets you the axis-aligned green quadrilateral instead? – Rahul Sep 23 '10 at 5:14
• @Rahul Thank you for the explanation. The axis-aligned green quadrilateral is what I want. – George Profenza Sep 23 '10 at 8:04

3 Answers

From your comments, what I understand of your problem is that you have the coordinates of an arbitrarily oriented rectangle centred on the origin, and you want to find the rotation that will bring it to an axis-aligned rectangle on the $xy$ plane.

Let $u$ and $v$ be the unit vectors that should be mapped to the axis-oriented unit vectors $e_x$ and $e_y$ respectively. You can get these by subtracting adjacent points of the rectangle and normalizing. Then you want a rotation $R$ which satisfies $Ru = e_x$, $Rv = e_y$, and $Rn = e_z$. You can express this as $R[u\;v\;n] = [e_x\;e_y\;e_z] = I$. Then $R$ equals $[u\;v\;n]^{-1}$, which is simply $[u\;v\;n]^T$ since $u$, $v$, and $n$ are an orthonormal set. To be more explicit, the rotation matrix you want is: $$R = \begin{bmatrix}u_x & u_y & u_z \\ v_x & v_y & v_z \\ n_x & n_y & n_z\end{bmatrix}.$$ If you really like Euler angles (rotateX, rotateY, rotateZ), there are ways to convert a rotation matrix like above to Euler angles, but they're ugly. You're best off using the rotation matrix explicitly.

By the way, if your rectangle is not centred on the origin, and you want to perform the rotation keeping its centre (say $c$) fixed, you'll have to get the rotated coordinates of a point $p$ not simply as $Rp$ but as $R(p-c) + c$.
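To make this construction concrete, here is a minimal sketch (added here, not from the thread; written in Python/NumPy rather than Processing, and deriving $u$, $v$, $n$ from the quad's edges is one reasonable choice, not the only one):

import numpy as np

def unrotate_quad(verts):
    # verts: 4x3 array of quad corners (centred on the origin), listed in
    # order around the quad. Builds the orthonormal frame u, v, n from the
    # edges and applies R = [u v n]^T, mapping the quad into the xy plane.
    verts = np.asarray(verts, dtype=float)
    u = verts[1] - verts[0]          # one edge direction -> mapped to e_x
    u /= np.linalg.norm(u)
    w = verts[3] - verts[0]          # an adjacent edge
    n = np.cross(u, w)               # plane normal -> mapped to e_z
    n /= np.linalg.norm(n)
    v = np.cross(n, u)               # completes the right-handed frame -> e_y
    R = np.vstack([u, v, n])         # rows are u, v, n, i.e. [u v n]^T
    return verts @ R.T               # each vertex p becomes R p

Because the rows of $R$ form an orthonormal set, its inverse is just its transpose, which is why no matrix inversion (and no Euler-angle extraction) is needed.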
• Unfortunately I can barely 'read' math, so here's what I do and do not understand: 'coordinates of an arbitrarily oriented rectangle centred on the origin' - it is an arbitrarily oriented plane centred on the origin. it can be any shape(parallelogram for instance, not just rectangle). Yes, I want to find the rotation that will bring it to an axis aligned rectangle on the xy plane. – George Profenza Oct 5 '10 at 21:13 • ex and ey describe the xy plane, and u v describe the unit vectors of my plane, right ? shouldn't there be a w(a third dimension) ? 'subtracting adjacent points of the rectangle and normalizing' - I didn't understand. do I subtract a value from u so it matches with ex, and similarly for v ? This would probably take care of 2 axes, not 3. Then I would create R which uses ex,ey and ez as the rotations. Rn ? is this linked to the normal ? – George Profenza Oct 5 '10 at 21:19 • Once I know R, I invert/transpose it and multiple it to the original vectors and should get the vectors in the xy plane now, correct ? Related to the last note, the plane will always be centred on it's origin/centre – George Profenza Oct 5 '10 at 21:21 • @George Profenza: I think by "plane" you mean "quadrilateral"; the word "plane" is usually used to refer to an infinite plane, while a planar shape connecting four points is called a quadrilateral. But I'm still not sure I understand, as it's not possible to simply rotate a parallelogram and turn it into a rectangle -- you would also need to shear it in some way. And if you have an arbitrary quadrilateral then the problem becomes much more complicated. – Rahul Oct 5 '10 at 21:31 • @George Profenza: Sorry, I didn't see your other two comments when I replied. The third dimension is $n$, the normal to the plane, so you have a set of three vectors ($u$, $v$, and $n$) in space which are all perpendicular to each other. The rotation matrix is $R$, while the notation $Rx$ for some vector $x$ denotes multiplying $R$ with the vector $x$. You don't have to invert or transpose it; what I have written above is the actual value of $R$. – Rahul Oct 5 '10 at 21:38 Try to represent rotations using matrices instead of angles - then finding the inverse is easy. • PMatrix3D rMat = new PMatrix3D(); rMat.rotateX(-rot.x);rMat.rotateY(-rot.y);rMat.rotateZ(-rot.z); should create a matrix and set rotation X,Y and Z for the matrix. If I multiply it to the position vertices, the result looks wrong(distorted). I also looked at Spherical Coordinates(en.wikipedia.org/wiki/…), but I didn't understand what theta and phi would be in my case, and what would I do with the 3rd rotation. – George Profenza Sep 22 '10 at 20:07 • I am suggesting that you use rotation matrices instead of angles. – anon Sep 22 '10 at 20:08 • @George: To make muad's comment explicit: Euler angles is what you need here. The actual rotation matrix is decomposable as a product of three orthogonals; inverting is as easy as transposing all three and multiplying in the reverse order. – J. M. is a poor mathematician Sep 22 '10 at 22:22 • @J. M. I thought I was using Euler angles. I've added an update. – George Profenza Sep 22 '10 at 23:13 • No I am not suggesting to use Euler angles. 
– anon Oct 3 '10 at 20:11

At the moment, I went with a somewhat simple solution that allows me to draw a plane/face with 4 vertices with arbitrary rotations, in 2D (screenshot in the original post). Here's how it works:

PVector[] unRotateVerts(PVector[] verts, PVector n){
  //get the angle between the face4 normal and Y
  angle = PVector.angleBetween(n,y); //acos(n.dot(y));
  //get the axis of rotation, by getting the perpendicular
  axis = n.cross(y);
  axis.normalize();
  //clone vertices
  int vl = verts.length;
  PVector[] clone = new PVector[vl];
  for(int i = 0; i<vl;i++) clone[i] = verts[i].get();
  //make a rotation matrix and rotate it
  PMatrix3D rMat = new PMatrix3D();
  rMat.rotate(angle,axis.x,axis.y,axis.z);
  //multiply each vertex with the rotation matrix
  PVector[] dst = new PVector[vl];
  for(int i = 0; i<vl;i++) {
    dst[i] = new PVector();
    rMat.mult(clone[i],dst[i]);
  }
  return dst;
}

This works perfectly for aligning with XZ, but rotations on the Z axis are still there. Any hints on how to remove that would be handy.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5333104729652405, "perplexity": 1192.524348380084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572439.21/warc/CC-MAIN-20190915235555-20190916021555-00090.warc.gz"}
https://elteoremadecuales.com/nachbins-theorem/?lang=it
Nachbin's theorem

In mathematics, in the area of complex analysis, Nachbin's theorem (named after Leopoldo Nachbin) is commonly used to establish a bound on the growth rates of an analytic function. This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type helps provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, discussed below.

Contents: 1 Exponential type 2 Ψ type 3 Borel transform 4 Nachbin resummation 5 Fréchet space 6 See also 7 References

Exponential type

Main article: Exponential type

A function $f(z)$ defined on the complex plane is said to be of exponential type if there exist constants $M$ and $\alpha$ such that $|f(re^{i\theta})| \leq M e^{\alpha r}$ in the limit $r \to \infty$. Here, the complex variable $z$ was written as $z = re^{i\theta}$ to emphasize that the limit must hold in all directions $\theta$. Letting $\alpha$ stand for the infimum of all such $\alpha$, one then says that the function $f$ is of exponential type $\alpha$.

For example, let $f(z) = \sin(\pi z)$. Then one says that $\sin(\pi z)$ is of exponential type $\pi$, since $\pi$ is the smallest number that bounds the growth of $\sin(\pi z)$ along the imaginary axis. Thus, for this example, Carlson's theorem cannot apply, as it requires functions of exponential type less than $\pi$.

Ψ type

Bounding may be defined for other functions besides the exponential function. In general, a function $\Psi(t)$ is a comparison function if it has a series $\Psi(t) = \sum_{n=0}^{\infty} \Psi_n t^n$ with $\Psi_n > 0$ for all $n$, and $\lim_{n \to \infty} \frac{\Psi_{n+1}}{\Psi_n} = 0$. Comparison functions are necessarily entire, which follows from the ratio test. If $\Psi(t)$ is such a comparison function, one then says that $f$ is of Ψ-type if there exist constants $M$ and $\tau$ such that $\left|f\left(re^{i\theta}\right)\right| \leq M \Psi(\tau r)$ as $r \to \infty$. If $\tau$ is the infimum of all such $\tau$, one says that $f$ is of Ψ-type $\tau$.

Nachbin's theorem states that a function $f(z)$ with the series $f(z) = \sum_{n=0}^{\infty} f_n z^n$ is of Ψ-type $\tau$ if and only if $$\limsup_{n \to \infty} \left|\frac{f_n}{\Psi_n}\right|^{1/n} = \tau.$$

Borel transform

Nachbin's theorem has immediate applications in Cauchy theorem-like situations, and for integral transforms. For example, the generalized Borel transform is given by $$F(w) = \sum_{n=0}^{\infty} \frac{f_n}{\Psi_n w^{n+1}}.$$ If $f$ is of Ψ-type $\tau$, then the exterior of the domain of convergence of $F(w)$, and all of its singular points, are contained within the disk $|w| \leq \tau$. Furthermore, one has $$f(z) = \frac{1}{2\pi i} \oint_\gamma \Psi(zw) F(w)\, dw,$$ where the contour of integration $\gamma$ encircles the disk $|w| \leq \tau$. This generalizes the usual Borel transform for exponential type, where $\Psi(t) = e^t$. The integral form for the generalized Borel transform follows as well.
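(A quick illustration, added here: taking $\Psi(t) = e^t$ gives $\Psi_n = 1/n!$, so the generalized Borel transform above reduces to the classical one, $F(w) = \sum_{n=0}^{\infty} n!\, f_n / w^{n+1}$, and Nachbin's criterion reduces to the classical formula for the exponential type, $\tau = \limsup_{n\to\infty} |n!\, f_n|^{1/n}$, which agrees with $\limsup_{n\to\infty} n|f_n|^{1/n}/e$ by Stirling's approximation.)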
Let $\alpha(t)$ be a function whose first derivative is bounded on the interval $[0, \infty)$, so that $$\frac{1}{\Psi_n} = \int_0^\infty t^n\, d\alpha(t),$$ where $d\alpha(t) = \alpha'(t)\, dt$. Then the integral form of the generalized Borel transform is $$F(w) = \frac{1}{w} \int_0^\infty f\left(\frac{t}{w}\right) d\alpha(t).$$ The ordinary Borel transform is regained by setting $\alpha(t) = e^{-t}$. Note that the integral form of the Borel transform is just the Laplace transform.

Nachbin resummation

Nachbin resummation (the generalized Borel transform) can be used to sum divergent series that escape the usual Borel summation, or even to solve (asymptotically) integral equations of the form $$g(s) = s \int_0^\infty K(st) f(t)\, dt,$$ where $f(t)$ may or may not be of exponential growth and the kernel $K(u)$ has a Mellin transform. The solution can be obtained as $$f(x) = \sum_{n=0}^\infty \frac{a_n}{M(n+1)} x^n \quad \text{with} \quad g(s) = \sum_{n=0}^\infty a_n s^{-n},$$ where $M(n)$ is the Mellin transform of $K(u)$. An example of this is the Gram series $$\pi(x) \approx 1 + \sum_{n=1}^\infty \frac{\log^n(x)}{n \cdot n!\, \zeta(n+1)}.$$ In some cases, as an extra condition, we require $\int_0^\infty K(t)\, t^n\, dt$ to be finite for $n = 0, 1, 2, 3, \ldots$ and different from 0.

Fréchet space

Collections of functions of exponential type $\tau$ can form a complete uniform space, namely a Fréchet space, with the topology induced by the countable family of norms $$\|f\|_n = \sup_{z \in \mathbb{C}} \exp\left[-\left(\tau + \frac{1}{n}\right)|z|\right] |f(z)|.$$

See also: Divergent series, Borel summation, Euler summation, Cesàro summation, Lambert summation, Mittag-Leffler summation, Phragmén–Lindelöf principle, Abelian and tauberian theorems, Van Wijngaarden transformation

References:
L. Nachbin, "An extension of the notion of integral functions of the finite exponential type", Anais Acad. Brasil. Ciencias. 16 (1944) 143–147.
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263. (Provides a statement and proof of Nachbin's theorem, as well as a general review of this topic.)
A.F. Leont'ev (2001) [1994], "Function of exponential type", Encyclopedia of Mathematics, EMS Press.
A.F. Leont'ev (2001) [1994], "Borel transform", Encyclopedia of Mathematics, EMS Press.

Categories: Integral transforms | Theorems in complex analysis | Summability methods
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628617763519287, "perplexity": 6241.290668456616}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00789.warc.gz"}
https://physics.stackexchange.com/questions/123263/hamiltonian-with-dirac-delta-function
# Hamiltonian with Dirac Delta function

I have to compute this expression $$\hat{H} ~=~\frac{1}{4}g_2\int d^3R\int d^3r\ \bar{\Psi}(\vec{R}+\frac{\vec{r}}{2})\bar{\Psi}(\vec{R}-\frac{\vec{r}}{2})$$$$\times \left[ \delta(\vec{r})\nabla_{\vec{r}}^2 +\nabla_{\vec{r}}^2\delta(\vec{r}) \right]\Psi(\vec{R}+\frac{\vec{r}}{2}) \Psi(\vec{R}-\frac{\vec{r}}{2}), \tag{15}$$ where $$\bar{\Psi}$$ is the conjugate of $$\Psi$$. Using Dirac delta properties, can I say that $$\left[ \delta(\vec{r})\nabla_{\vec{r}}^2 +\nabla_{\vec{r}}^2\delta(\vec{r}) \right] = 2 \delta(\vec{r})\nabla_{\vec{r}}^2~?$$ If not, how can I calculate this integral? I should obtain $$\hat{H} = \frac{1}{4}g_2\int d^3R\ \bar{\Psi}(\vec{R})\left[ \nabla^2(\bar{\Psi}(\vec{R})\ \Psi(\vec{R}))\right]\Psi(\vec{R}). \tag{16}$$ One method should be expanding $$\Phi = V^{-1/2} \sum_\alpha a_\alpha e^{i\textbf{k}_\alpha\cdot\textbf{r}},$$ but I have no idea how to proceed! This integral (15) comes from the paper Phys. Rev. A 67 053612, and the authors say they integrate by parts and then over $$\textbf{r}$$. Does anyone have ideas how to calculate this integral?

/// Update /// I tried to calculate the integral using your suggestions. I'm near the solution! At the end there is an extra term and an extra $$1/2$$. In the following, the conjugate is $$\phi^*$$ and I indicate $$\phi_+ = \Psi(\vec{R}+\frac{\vec{r}}{2})$$ and $$\phi_- = \Psi(\vec{R}-\frac{\vec{r}}{2})$$. The first and second terms (shown as images in the original post) sum to $$\int d^3\vec{R}\int d^3\vec{r}\nabla^2_{\vec{r}}\left( \phi_+^* \phi^*_-\phi_-\phi_+ \right)\delta(\vec{r})$$ and then, at the end, $$\int d^3\vec{R}\ \frac{1}{2}(\phi^*\phi(\phi\nabla^2\phi^*+\phi^*\nabla^2\phi) - \phi^2|\nabla\phi^*|^2-\phi^{*2}|\nabla\phi|^2)$$ and completing it by adding and subtracting $$2\phi\phi^*\nabla\phi^*\cdot\nabla\phi$$ gives $$\int d^3\vec{R}\ \frac{1}{2}\phi^*\left( \nabla^2(\phi^*\phi)\right)\phi - \int d^3\vec{R}\ \frac{1}{2} (\nabla(\phi\phi^*))^2$$ I did all the calculations by hand and then checked them with Mathematica. Is the term $$\int d^3\vec{R}\ (\nabla(\phi\phi^*))^2 = 0$$ for any reason? I hope so. Why is there the constant $$1/2$$?

• – ACuriousMind Jul 5 '14 at 17:51
• @ACuriousMind thanks. The curiosity is that the result of the integral depends on the Laplacian of the product between the conjugate of $\Psi$ and itself. And, integrating by parts, I don't understand why the conjugate is inside the Laplacian – apt45 Jul 5 '14 at 18:01

Hints to the sought-for formula (16) for $\hat{H}$:

1. Use integration by parts in ${\bf r}$-space to remove derivatives from the Dirac delta distributions, cf. comment by user ACuriousMind.
2. Work on the problem from both ends (15) and (16). Use the Leibniz rule $$\tag{*}\nabla^2 (fg)~=~ g\nabla^2 f + f \nabla^2 g+ 2 \nabla f\cdot\nabla g,$$ so that $\nabla$ only acts on single objects everywhere. Let's call the last term in eq. (*) a 'cross-term'.
3. Change the derivative $\nabla_{\bf r}~$ to $~\pm\frac{1}{2}\nabla_{\bf R}$.
4. Perform the $\bf r$-integration.
5. For cross-terms that act on $\Psi\Psi$ or $\bar{\Psi}\bar{\Psi}$, integrate by parts in ${\bf R}$-space, so that there are only cross-terms that act on $\bar{\Psi}\Psi$.
6. Compare!

• Thanks Qmechanic. I've just edited my question with a few calculations. I'm near the solution... can you help me? – apt45 Jul 5 '14 at 23:01
• I see that you edited your answer, thank you! Are my calculations correct?
When you say "For cross-terms that act on $\Psi\Psi$ or $\bar{\Psi}\bar{\Psi}$, integrate by parts in ${\bf R}$-space, so that there are only cross-terms that act on $\bar{\Psi}\Psi$", does it mean that $\int \nabla\Psi \cdot \nabla\Psi = 0$ under the assumption that $\Psi(\infty) \rightarrow 0$? – apt45 Jul 6 '14 at 9:05
• Concerning your last sentence: No, only total divergence terms are omitted. – Qmechanic Jul 6 '14 at 11:09
• I solved it using the fact that $\int d^3\vec{R}\ \phi\nabla^2\phi = -\int d^3\vec{R}\ \nabla\phi\cdot\nabla\phi$, thank you!! – apt45 Jul 6 '14 at 11:22
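(For reference, an addition not part of the exchange: the identity in the last comment is just integration by parts with a vanishing boundary term. Since $\nabla\cdot(\phi\nabla\phi) = \nabla\phi\cdot\nabla\phi + \phi\nabla^2\phi$, the divergence theorem gives $$\int d^3\vec{R}\ \phi\nabla^2\phi = \oint dS\ \hat{n}\cdot(\phi\nabla\phi) - \int d^3\vec{R}\ \nabla\phi\cdot\nabla\phi,$$ and the surface term vanishes for fields that decay at infinity.)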
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9353086352348328, "perplexity": 563.0291633686679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318952.90/warc/CC-MAIN-20190823172507-20190823194507-00519.warc.gz"}
https://forums.obsidian.net/topic/94795-obsidian-will-you-provide-xml-documentation-or-other-editing-tools-on-release/?tab=comments
# Obsidian - Will you provide xml documentation or other editing tools on release?

## Recommended Posts

Hello,

1. Are you going to write up a how-to guide on the exported xml files and how to do things like add dialogue/quests/items/spells? I've done all of those and I wonder if something like that is planned. I know whatever I write can't be nearly as extensive as anything you make, having a lot more experience.

2. Are there any other neat releasable tools to add content to the game with? I'm sure you're well aware how much well-supported modding tools can add to a game's community, having worked on many with them. Right now I can't seem to find a way to actually edit enemy spawns efficiently (I can think of one hack-y way to do it), but I know you're using Unity, which really wasn't meant for that. Another big modding hurdle is dialogue. Can you add lines purely through xml? Yeah, if you hate your life. If the fabled Obsidian dialogue tool can't be released, maybe just some documentation on how things are laid out would be great.

3. Maybe most importantly, are you considering adding an overwrite folder (like Baldur's Gate / Witcher 3)? A folder you can add custom files to (icons, textures, etc.) and have them able to be pulled into the xml. An overwrite folder would also allow you to edit the xml without actually touching your base xml data itself. Imagine if two people want to add weapons: if you both have to edit bb_items, it's going to be madness trying to get things to work together.

I've probably put more time into messing with the xml files than playing the beta at this point, haha. I've asked these questions in other places but having them here makes the most sense, probably. I am so insanely hyped for poe2.

-mort

• 3

## Share on other sites

It's Thanksgiving. You'll have to wait a bit before you can expect any answer from Obsidian. They left for the holidays.

## Share on other sites

I hope so. Really looking forward to the community on various mods. Hopefully they don't lock down and restrict assets so that the only available options for modding are just xmls.

## Share on other sites

It's Thanksgiving. You'll have to wait a bit before you can expect any answer from Obsidian. They left for the holidays.

Oh yeah! Uhhh, I wanted to give them time to think of a good answer?
The good news for modders:
-Your life is going to be a lot easier than I thought
-A modding tool is as simple as a fillable json page

The good news for everyone:
-Mods are going to be very easy to install and uninstall.
-I can't seem to add or change custom strings in the same way. All entries in the data don't have their display names or descriptions in the json; they reference a number which pulls from the localized stringtable depending on what language you choose.
-I can add custom strings by editing the original stringtables, but that will get sloppy.
-I also cannot edit original strings in the same way that I can edit the gamedatabundle entries by making a new file and overwriting.
-I still don't know if custom assets can be loaded or things can be placed in world by modders.

tl;dr modding is good, I assumed the worst but we're in good shape folks, it's very similar to BG modding

• 4

## Share on other sites

[quotes mort's findings post above in full]

I was just messing with those files. I was worried also that you would have to re-edit those files every time an update came out, but nice to know you can create your own mod file. Edited by draego
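(Editorial aside, not a post from the thread: a minimal sketch of the "latest file wins" behaviour mort describes. The real gamedatabundle schema is richer; purely for illustration, each bundle is assumed here to be a flat JSON object mapping entry IDs to entry data.)

import json
from pathlib import Path

def merge_bundles(folder):
    # Merge all bundles in load (alphabetical) order; entries in later
    # files override entries with the same ID from earlier files.
    merged = {}
    for path in sorted(Path(folder).glob("*.gamedatabundle")):
        merged.update(json.loads(path.read_text()))
    return merged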
-No overwrite folder is necessary, it's built in. If you don't want to rely on the order of the files for overriding, there is an override folder you can use.  This is probably a slightly better way to do mods. The game's data is in Pillars of Eternity II\PillarsOfEternity2_Data\exported\design\gamedata, which is what you're using.  The override folder is Pillars of Eternity II\PillarsOfEternity2_Data\override\gamedata (you'll have to create it).  Files in the override folder will always take priority over ones in the exported folder. Note that the override folder is broken in the current release, but it will work in the next. -I can't seem to add or change custom strings in the same way. All entries in the data don't have their display names or descriptions in the json, they reference a number which pulls from the localized stringtable depending on what language you choose. -I can add custom strings by editing the original stringtables but that will get sloppy. -I also cannot edit original strings in the same way that I can edit the gamedatabundle entries by making a new file and overwriting This is a good point.  I will make a note that we should take a look at this. -I still don't know if custom assets can be loaded or things can be placed in world by modders. Unfortunately, it's unlikely that scene modding will be any different than Pillars 1. • 10 ##### Share on other sites Amazing, thank you! I should also note that nexusmods finally added their POE2 page which is great news! https://old.nexusmods.com/pillarsofeternity2/mods/1/? ##### Share on other sites The newest backer beta has added support for overriding string table entries. You can both override existing strings and add entirely new strings. To do this, you create a .stringtable file in the override directory where the path matches that of the table you want to override. For example, if you want to override the English for \Pillars of Eternity II\PillarsOfEternity2_Data\exported\localized\en\text\game\abilities.stringtable, you need to create the file \Pillars of Eternity II\PillarsOfEternity2_Data\override\[YOUR MOD]\localized\en\text\game\abilities.stringtable.  The file should look the same, except that you only need to include blocks for the entries you want to change or add. Example abilities.stringtable overriding the Holy Radiance ability name and adding a new ability name: <?xml version="1.0" encoding="utf-8"?> <StringTableFile xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Name>game\abilities</Name> <Entries> <Entry> <ID>2</ID> <DefaultText>Unholy Space Jam</DefaultText> <FemaleText /> </Entry> <Entry> <ID>10000</ID> <DefaultText>Reap the Rappers</DefaultText> <FemaleText /> </Entry> </Entries> </StringTableFile> You don't have to put override gamedatabundles in a [YOUR MOD] folder like this, but I'd recommend that to keep your mods organized. • 10 ##### Share on other sites BMac, can we also have some possibility to set the load order of our mods? 
For example, I have:

\Pillars of Eternity II\PillarsOfEternity2_Data\override\[MOD_A]
\Pillars of Eternity II\PillarsOfEternity2_Data\override\[MOD_B]
\Pillars of Eternity II\PillarsOfEternity2_Data\override\[MOD_C]

and would like to add an (optional) file \Pillars of Eternity II\PillarsOfEternity2_Data\override\mod.order.json:

{
  "mods":[
    {"name":"MOD_C", "enabled": 1, "params": {}},
    {"name":"MOD_A", "enabled": 1, "params": {}},
    {"name":"MOD_B", "enabled": 0, "params": {}}
  ]
}

This will really help in the future, if there are some partially conflicting mods. Also, can you give at least some general, vague pointers on how to create a custom, sidekick-like hireling, and attach custom barks and on-event banter?

• 2

## Share on other sites

That's a good idea and it's on my list, though we'll be pretty busy with the game until release, so it may not come until after that.

I think in order to create your own sidekick, these are the major steps you'd have to do:

- Make CharacterStatsGameData and SpeakerGameData objects for your character in an override gamedatabundle. Make a name string for the character in an override stringtable. Hook up the name and Speaker to the CharacterStats.
- Make a chatter file for the character, which defines which lines they'll say, and when. The different chatter setups for each character go in chatterbundles, which can be overridden exactly like gamedatabundles. The actual game files, for reference, are in PillarsOfEternity2_Data\exported\design\chatter. Hook up the chatter file to the Speaker you made.
- Make the audio files that are referenced by the chatter file. They are packed in a Wwise format and they're at PillarsOfEternity2_Data\StreamingAssets\Audio\GeneratedSoundBanks\Windows\Voices. You can probably get the free version of Wwise to produce these. We don't support an override folder for them, though.
- Actually creating a character prefab is probably the hardest. This will be the actual object in the game, where you'd attach the CharacterStatsGameData you created and set up the character's appearance. They live in the characters assetbundle (PillarsOfEternity2_Data\assetbundles). I think there are ways to unpack and repack these but I haven't messed with them myself.

You could then use console commands to instantiate the character and add it to your party.

• 4

## Share on other sites

That's a good idea and it's on my list, though we'll be pretty busy with the game until release, so it may not come until after that.

That's great! And it's not like there will be many pre-release mods anyway.

I think in order to create your own sidekick, these are the major steps you'd have to do: [..list..]

Thank you a lot for the pointers! Will definitely attempt to make a sidekick at some point.

Actually creating a character prefab is probably the hardest. This will be the actual object in the game, where you'd attach the CharacterStatsGameData you created and set up the character's appearance. They live in the characters assetbundle (PillarsOfEternity2_Data\assetbundles). I think there are ways to unpack and repack these but I haven't messed with them myself.

*scratching head* while looking at the characters.unity3d filesize of 0.98GB... is there a specific reason why you guys decided to bundle into such big packs, instead of, let's say:

/assetbundles/characters/character_a.unity3d
/assetbundles/characters/character_b.unity3d
/assetbundles/characters/character_c.unity3d
...
because atm in order to add/edit at least one character prefab, modder would have to unpack/do_stuff/repack the whole characters.unity3d and if he's successful to somehow also redistribute it. Am not even sure if nexus even permits such big sizes; plus repeating all this on each update :shiver: Also, with smaller .unity3d files modder can tinker with the help of AssetsBundleExtractor. While if the file is big, you are literally getting lost there ^^ (having to literally check all monobehaviours when you want to find a specific one). Sure there is also DevXUnity-Unpacker Studio (which looks more user-friendly, and supports repacking), but it is far from free. Was the copying speed somehow of a problem due to the big amount of files? ##### Share on other sites Is there any way to get character position in-game? (Stand next to Vektor, open console, type in some code, get "X" and "Y" position/location of one selected character or center between selected characters, and use that information to summon custom sidekick/NPC/quest giver with mods). Furthermore, does AI NPC's have "Patrolling"-, "Facing direction"-, "Tasks"-scripts that you can call for and is that scripts you can use in making a custom NPC appear in the world? Would it then also be possible to add "overhead text" that you can see on NPC's. I.E. "Psst! Over here!" or something. "Approach/MoveTowards Watcher/Player/Selected/%NAME%" script into "Start Dialogue" script? Edited by Osvir • 1 ##### Share on other sites because atm in order to add/edit at least one character prefab, modder would have to unpack/do_stuff/repack the whole characters.unity3d and if he's successful to somehow also redistribute it. Am not even sure if nexus even permits such big sizes; plus repeating all this on each update :shiver: Yeah, this is definitely not an ideal setup for modding. Hopefully we'll be able to look at override support for assetbundles. Is there any way to get character position in-game? (Stand next to Vektor, open console, type in some code, get "X" and "Y" position/location of one selected character or center between selected characters, and use that information to summon custom sidekick/NPC/quest giver with mods). Try "GetTransform Furthermore, does AI NPC's have "Patrolling"-, "Facing direction"-, "Tasks"-scripts that you can call for and is that scripts you can use in making a custom NPC appear in the world? Would it then also be possible to add "overhead text" that you can see on NPC's. I.E. "Psst! Over here!" or something. "Approach/MoveTowards Watcher/Player/Selected/%NAME%" script into "Start Dialogue" script? The NPC AI system might be a bit opaque and I don't know too much about it myself - but for anyone who wants to dig into it, you want to check out ScheduleGameData in the AI gamedatabundle and the AIController component on the character prefabs (assetbundle).  There are also some scripts you can use for one-off stuff ('find ai'). The overhead text is referred to as "barks" or "bark strings".  They're set up as Conversations and played with the startconversation script. • 3
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15796300768852234, "perplexity": 2656.354792931596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146187.93/warc/CC-MAIN-20200226054316-20200226084316-00377.warc.gz"}
https://wiki.wina.be/examens/index.php?title=Advanced_Statistical_Mechanics&diff=17701&oldid=17699
# Advanced Statistical Mechanics: difference between versions

## Exam questions

#### 1 Sept 2017

Theory:

1) Consider a discrete Markov process in continuous time. Write down the master equation. Determine sufficient conditions for the existence of a stationary probability distribution. Derive the form of the stationary distribution function under these conditions. Is a uniform PDF possible?

2) Consider the Landau-Ginzburg Hamiltonian in dimension d and for an n-component order parameter m. Discuss the critical behavior (t<0 and t>0) of the (ferromagnetic) order parameter and of the specific heat in mean-field theory. Consider subsequently small fluctuations of the order parameter components and treat them in a Gaussian approximation (take n=2). Calculate and discuss the fluctuation corrections to the specific heat and derive the Gaussian approximation for the critical exponent of the specific heat. Interpret your result.

Exercises:

1) Consider diffusion in one dimension in a finite region (-a<x<a) with impenetrable and fully reflecting endpoints x=+/-a. Use separation of variables to find the PDF solutions p(x,t) of the diffusion equation for the PDF on (-a,a), respecting these boundary conditions, as well as the initial condition ${\displaystyle p(x,0)=\delta (x)}$ and normalized to unity on (-a,a). A numerical sketch of the resulting series solution is given after this list. Hints:

-The boundary conditions imply that the current at the endpoints vanishes at all times.

-On the interval (-a,a) the Dirac delta can be represented as ${\displaystyle \delta (x)={\frac {1}{2a}}+{\frac {1}{a}}\sum _{n=1}^{\infty }\cos \left({\frac {n\pi x}{a}}\right)}$

2) Real-space RG. Consider the Migdal-Kadanoff (bond-moving) transformation of the square Ising lattice (with nearest-neighbour coupling K) with rescaling factor b (b an integer ${\displaystyle \geq 2}$).
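A quick numerical sketch (an editorial addition, not part of the exam archive) of the series solution the hints for exercise 1 point to; the diffusion constant D, the half-width a, and the truncation order are arbitrary illustrative choices:

import numpy as np

# p(x,t) = 1/(2a) + (1/a) * sum_{n>=1} cos(n*pi*x/a) * exp(-D*(n*pi/a)^2 * t),
# which satisfies p_t = D p_xx, has zero flux at x = +/-a, and reduces to the
# delta-function representation from the hint at t = 0.
def p(x, t, D=1.0, a=1.0, nmax=200):
    n = np.arange(1, nmax + 1)[:, None]
    modes = np.cos(n * np.pi * x / a) * np.exp(-D * (n * np.pi / a) ** 2 * t)
    return 1.0 / (2 * a) + modes.sum(axis=0) / a

x = np.linspace(-1.0, 1.0, 201)
print(np.trapz(p(x, 0.1), x))  # ~1.0: the reflecting walls conserve probability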
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9821851849555969, "perplexity": 1186.837820421824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366477.52/warc/CC-MAIN-20210303073439-20210303103439-00068.warc.gz"}
https://homework.cpm.org/category/CC/textbook/CCA2/chapter/Ch3/lesson/3.2.5/problem/3-114
3-114. Examine the graph of $f(x)=|x-3|+1$ at right (the graph is not reproduced in this extract). Use the graph to find the values listed below.

a. $f(3)$

What $y$-value corresponds with $x=3$?

$1$

b. $f(0)$

$4$

c. $f(4)$

Refer to parts (a) and (b).

d. $f(-1)$

Refer to parts (a) and (b).
{"extraction_info": {"found_math": true, "script_math_tex": 9, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5459012389183044, "perplexity": 8230.52476171457}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145260.40/warc/CC-MAIN-20200220162309-20200220192309-00399.warc.gz"}
http://mathhelpforum.com/advanced-algebra/120145-finding-polar-decomposition-print.html
# finding polar decomposition! • December 12th 2009, 07:25 PM mancillaj3 finding polar decomposition! I need help finding the polar decomposition of $ \left( \begin{array}{cc} 11 & -5 \\ -2 & 10\end{array} \right) $ • December 14th 2009, 12:51 PM Opalg If $A = \begin{bmatrix}11 & -5 \\ -2 & 10\end{bmatrix}$ then the polar decomposition of A is a factorisation $A = UR$, where U is unitary and R is positive. To find R use the fact that $R^2 = A^*A = \begin{bmatrix}125 & -75 \\ -75 & 125\end{bmatrix}$. So R is the positive square root of that matrix, which you can compute by diagonalising it. You should find that the eigenvalues of $R^2$ are 50 and 200, with corresponding normalised eigenvectors $\frac1{\sqrt2}\begin{bmatrix}1 \\ 1\end{bmatrix}$ and $\frac1{\sqrt2}\begin{bmatrix}1 \\ -1\end{bmatrix}$. So $R^2 = PDP^{-1}$, where $P = P^{-1} = \frac1{\sqrt2}\begin{bmatrix}1 &1 \\ 1 & -1\end{bmatrix}$ and $D = \begin{bmatrix}50 & 0 \\ 0 & 200\end{bmatrix}$. The square root is given by $R = PEP^{-1}$, where $E = D^{1/2} = \begin{bmatrix}5\sqrt2 & 0 \\ 0 & 10\sqrt2\end{bmatrix}$. Having found R, you can then get U as $U = AR^{-1}$.
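As a quick numerical check of the computation above (an addition, not part of the thread; plain NumPy, following the same eigendecomposition route):

import numpy as np

A = np.array([[11.0, -5.0], [-2.0, 10.0]])

# R is the positive square root of A^T A, built by diagonalising it.
evals, P = np.linalg.eigh(A.T @ A)      # eigenvalues 50 and 200, as above
R = P @ np.diag(np.sqrt(evals)) @ P.T
U = A @ np.linalg.inv(R)

print(np.allclose(U @ R, A))            # True: A = U R
print(np.allclose(U.T @ U, np.eye(2)))  # True: U is orthogonal (unitary)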
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994670569896698, "perplexity": 270.97829995398246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658116.80/warc/CC-MAIN-20150417045738-00054-ip-10-235-10-82.ec2.internal.warc.gz"}
http://primes.utm.edu/references/refs.cgi?raw=Beeger50
Reference Database (references for the Prime Pages)

This is the Prime Pages' interface to our BibTeX database. Rather than being an exhaustive database, it just lists the references we cite on these pages. Please let me know of any errors you notice.

#### Item(s) in original BibTeX format

@article{Beeger50,
  author={N. G. W. H. Beeger},
  title={On composite numbers $n$ for which $a^{n-1} \equiv 1 \pmod{n}$ for every $a$ prime to $n$},
  journal= SM,
  volume= 16,
  year= 1950,
  pages={133--135},
  mrnumber={12,159e}
}

Another prime page by Chris K. Caldwell
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3374098241329193, "perplexity": 11378.701267548895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162648.4/warc/CC-MAIN-20160205193922-00324-ip-10-236-182-209.ec2.internal.warc.gz"}
https://online.stat.psu.edu/stat200/lesson/4/4.3
# 4.3 - Introduction to Bootstrapping

In order to construct a confidence interval we need information about the sampling distribution. In Lesson 4.1 we saw how we could construct a sampling distribution when population values were known. What if population values are not known? This is usually the case. If we have sample data, then we can use bootstrapping methods to construct a bootstrap sampling distribution, from which we can build a confidence interval. Bootstrapping is a resampling procedure that uses data from one sample to generate a sampling distribution by repeatedly taking random samples from the known sample.

Bootstrapping: a resampling procedure for constructing a sampling distribution using data from a sample.

## Example: Bootstrap Distribution for Mean Height

We have data concerning the heights of individuals in a random sample of $$n=15$$. To construct a bootstrap distribution for the mean height we would first randomly select one individual from that sample and record their height. Then, with that individual placed back into the sample, we would randomly select a second individual and record their height. This is known as "sampling with replacement" because we are putting each case back into the sample after recording their height. We would repeat this process until we have selected 15 values. Because we are sampling with replacement, some individuals may appear in the bootstrap sample more than once. We would use those 15 selected values to compute a bootstrapped sample mean. This process is repeated many times. The distribution of many bootstrapped sample means is known as the bootstrap distribution or bootstrap sampling distribution. The following pages include additional video examples that use StatKey to demonstrate the construction of bootstrap sampling distributions.
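To make the procedure concrete, here is a short Python sketch of the resampling loop described above (my own illustration; the heights are made-up data, and the lesson itself uses StatKey rather than code):

```python
# Bootstrap distribution for the mean height of a sample of n = 15.
import numpy as np

rng = np.random.default_rng(0)
heights = np.array([64, 66, 70, 62, 68, 71, 65, 67,
                    69, 63, 72, 66, 68, 64, 70])     # n = 15, made-up data

n_boot = 10_000
boot_means = np.empty(n_boot)
for i in range(n_boot):
    # Sampling WITH replacement: an individual may appear more than once.
    resample = rng.choice(heights, size=heights.size, replace=True)
    boot_means[i] = resample.mean()

# A 95% percentile bootstrap confidence interval for the mean height:
print(np.percentile(boot_means, [2.5, 97.5]))
```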
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7048962116241455, "perplexity": 511.4551459371124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143815.23/warc/CC-MAIN-20200218210853-20200219000853-00220.warc.gz"}
https://socratic.org/questions/how-do-you-use-the-distributive-property-to-simplify-0-25-6q-32
# How do you use the distributive property to simplify 0.25 (6q + 32)?

Mar 13, 2018

See a solution process below:

#### Explanation:

Multiply each term within the parentheses by the term outside the parentheses:

$\textcolor{red}{0.25} \left(6 q + 32\right) \implies$

$\left(\textcolor{red}{0.25} \times 6 q\right) + \left(\textcolor{red}{0.25} \times 32\right) \implies$

$1.5 q + 8$
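A quick symbolic check of the distribution step (my own addition, assuming sympy is installed):

```python
# Verify the distributive step symbolically with sympy.
import sympy as sp

q = sp.symbols('q')
print(sp.expand(sp.Rational(1, 4) * (6*q + 32)))   # 3*q/2 + 8, i.e. 1.5q + 8
```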
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.442505806684494, "perplexity": 3024.371039978424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348521325.84/warc/CC-MAIN-20200606222233-20200607012233-00551.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-critical-numbers-for-g-t-abs-3t-4-to-determine-the-maximum-a
How do you find the critical numbers for g(t)=abs(3t-4) to determine the maximum and minimum?

Nov 4, 2016

See below.

Explanation:

$g(t) = \left|3t - 4\right| = \begin{cases} 3t - 4 & \text{if } t \ge \frac{4}{3} \\ -3t + 4 & \text{if } t < \frac{4}{3} \end{cases}$

$g'(t) = \begin{cases} 3 & \text{if } t > \frac{4}{3} \\ -3 & \text{if } t < \frac{4}{3} \end{cases}$

$g'$ is never $0$ and is undefined (fails to exist) at $t = \frac{4}{3}$. The only critical number is $\frac{4}{3}$. We see that $g$ is decreasing left of $\frac{4}{3}$ and increasing on the right. So $g\left(\frac{4}{3}\right) = 0$ is a local minimum.
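A small sympy check of the critical number (my own addition, not part of the original answer):

```python
# g(t) = |3t - 4|: the derivative fails to exist where 3t - 4 = 0.
import sympy as sp

t = sp.symbols('t', real=True)
print(sp.solve(sp.Eq(3*t - 4, 0), t))              # [4/3], the critical number
print(sp.Abs(3*t - 4).subs(t, sp.Rational(4, 3)))  # 0, the local minimum value
```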
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087880849838257, "perplexity": 1400.0845149854333}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659944.3/warc/CC-MAIN-20190118070121-20190118092121-00466.warc.gz"}
https://tioj.ck.tp.edu.tw/problems/1866
92.3% (12/13) 23.9% (50/209)

# Input Format

$3 \leq n \leq 100$

$1 \leq T \leq 20$ $3 \leq n \leq 10^5$ $1 \leq a_i \leq 10^9$ $1 \leq u,v \leq n$

1 5 1 0 5 4 2 1 2 2 3 3 4 4 5 1 1 5 5 6

2015 Chien Kuo High School team entrance exam - second round
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16993582248687744, "perplexity": 11362.085913791965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247487595.4/warc/CC-MAIN-20190218155520-20190218181520-00072.warc.gz"}
https://www.physicsforums.com/threads/small-limits-question.6843/
# Small Limits Question

1. Oct 7, 2003

### Zargawee

[SOLVED] Small Limits Question

Hi there, I have this simple question in limits: If lim(x→3) (f(x) - 2) / (x - 3) = 7, then lim(x→3) (x^2 f(x) - 18) / (x - 3) = ?? I solved the question this way: since the denominator equals zero and the limit exists, the numerator must equal zero, so f(x) - 2 = 0 ---> f(x) = 2.

lim(x→3) (x^2 f(x) - 18) / (x - 3) = lim(x→3) (2x^2 - 18) / (x - 3) = lim(x→3) 2(x^2 - 9) / (x - 3) = lim(x→3) 2(x - 3)(x + 3) / (x - 3) = lim(x→3) 2(x + 3) = 2(3 + 3) = 12

But I also solved it this way:

lim(x→3) (x^2 f(x) - 18) / (x - 3) = lim(x→3) (x^2 f(x) - 2x^2 + 2x^2 - 18) / (x - 3) = lim(x→3) [ x^2 (f(x) - 2) / (x - 3) + 2(x^2 - 9) / (x - 3) ] = lim(x→3) x^2 * 7 + lim(x→3) 2(x - 3)(x + 3) / (x - 3) = 3^2 * 7 + lim(x→3) 2(x + 3) = (9 * 7) + 2(3 + 3) = 63 + 12 = 75

What's wrong with the first one?

2. Oct 7, 2003

### StephenPrivitera

I'm not sure about this reasoning. Your limit looks very similar to a difference quotient for derivatives. You could write lim(x→a) (f(x) - f(a))/(x - a) = f'(a). In this case, f'(3) = 7, f(3) = 2. You want to find lim(x→a) x^2 * f'(a) = a^2 f'(a) = 9 * 7 = 63

3. Oct 7, 2003

### Soroban

Hello, Zargawee! We are given: lim(x→3) [f(x) - 2]/[x - 3] = 7. We're asked to find: lim(x→3) [x^2 f(x) - 18]/[x - 3]. In the numerator, add 2x^2 and subtract 2x^2: x^2 f(x) - 18 + 2x^2 - 2x^2 = x^2[f(x) - 2] + 2(x^2 - 9). We have: x^2[f(x) - 2]/(x - 3) + 2(x^2 - 9)/(x - 3). The second term reduces: 2(x - 3)(x + 3)/(x - 3) = 2(x + 3). Then we have: x^2[f(x) - 2]/(x - 3) + 2(x + 3). Taking limits, we have: lim(x→3)(x^2) * lim(x→3) [f(x) - 2]/(x - 3) + lim(x→3) 2(x + 3). We are given that the middle limit is 7. Therefore, the answer is: (3^2)(7) + 2(6) = 75

Last edited by a moderator: Oct 7, 2003

4. Oct 7, 2003

### Hurkyl

Staff Emeritus

That's not quite right; what is true is that lim(x→3) f(x) = 2

5. Oct 7, 2003

### StephenPrivitera

<--- ashamed

6. Oct 10, 2003

### Zargawee

Thanks all, but I think that I solved the question the same way as Soroban did (look at my first post). Thanks again.
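The abstract argument can be sanity-checked with any concrete f consistent with the hypothesis. A sketch of my own (not from the thread), using f(x) = 2 + 7(x - 3), for which the given limit is exactly 7:

```python
# Sanity check of the thread's conclusion with a concrete f.
import sympy as sp

x = sp.symbols('x')
f = 2 + 7*(x - 3)   # satisfies lim (f(x) - 2)/(x - 3) = 7 as x -> 3

print(sp.limit((f - 2) / (x - 3), x, 3))         # 7, the hypothesis
print(sp.limit((x**2*f - 18) / (x - 3), x, 3))   # 75, not 12
```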
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8485459685325623, "perplexity": 8335.110019488817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948519776.34/warc/CC-MAIN-20171212212152-20171212232152-00069.warc.gz"}
http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-8/r19/section/12.15/
# Evaluating Quadratic Functions

Have you ever thought about the speed of a baseball? Take a look at this dilemma. When passing the baseball field, Mr. Travis handed the students the following problem written on a piece of paper that looked like a baseball. Here is what it said. When an object is thrown into the air with a starting velocity of $r$ feet per second, its distance $d$, in feet, above its starting point $t$ seconds after it is thrown is about $d=rt-16t^2$. Use a table of values to show the distance of an object from its starting point that has an initial velocity of 80 feet per second. Then graph the distance of the ball. To figure out this problem, you will need to know about quadratic functions and their graphs. Pay close attention because you will need to work with this problem again at the end of the Concept.

### Guidance

A function is a relation that assigns exactly one value of the range to each value of the domain. When we say quadratic function, we are referring to any function that can be written in the form $y=ax^2+bx+c$, where $a, b,$ and $c$ are constants and $a \neq 0$. This we defined as the standard form. Why can't $a$ equal zero? What happens if it does? If the $a$ value is zero, you might notice that it would make the first term $ax^2$ disappear because anything times zero is zero. You would be left with simply $y=bx+c$. Although this is still a function, it is no longer quadratic. This is a linear function. All quadratic functions are to the $2^{nd}$ degree. Let's look at quadratic functions in more detail.

You know that the word domain refers to input values and the word range refers to output values. Recall that a function is a relation that assigns exactly one value of the range to each value of the domain. That means that for every $x$ value, there is only one $y$ value. We can find $y$ values by substituting $x$ values in the function. We organize the information using a table of values or a t-table. In most cases, the input values could be any numbers. However, for our convenience, we will use negative numbers, zero, and positive numbers. Complete a table of values for the function $y=x^2+3x+2$.

| $x$ | $y$ |
| --- | --- |
| $-3$ | |
| $-2$ | |
| $-1$ | |
| $0$ | |
| $1$ | |
| $2$ | |
| $3$ | |

To find the $y$ values, we will substitute the $x$ values in the equation. The completed t-table should look like this.

| $x$ | $y$ |
| --- | --- |
| $-3$ | $2$ |
| $-2$ | $0$ |
| $-1$ | $0$ |
| $0$ | $2$ |
| $1$ | $6$ |
| $2$ | $12$ |
| $3$ | $20$ |

Evaluating a quadratic function is always the same. You substitute the $x$ values into the equation and solve for the $y$ values. The values of $a, b,$ and $c$ have an effect on the graphs of quadratic equations. Now we are going to use this information when we look at a quadratic function. What we know about the values of $a, b,$ and $c$ helps us to understand the opening of a parabola.
| Value | What it tells you | Example: $y = -3x^2 + x - 2$ |
| --- | --- | --- |
| $a$ | if $a > 0$, graph opens upward; if $a < 0$, graph opens downward; if $a$ is close to zero, wider graph; if $a$ is far from zero, narrower graph | $a = -3$; $a$ is less than zero so the graph opens downward; $a$ is farther from zero so the graph will be narrow |
| $b$ | helps predict the axis of symmetry | $b = 1$; axis of symmetry of the parabola |
| $c$ | $y$-intercept | $c = -2$; graph crosses the $y$-axis at $-2$ |

We know that the graph of a quadratic function will always be a parabola. A parabola is a kind of "U" shape that is always symmetrical on both sides. It can go either upward or downward. Also, a parabola is not linear: no part of the parabola is actually a straight line. Thus, it cannot be vertical, either. If we wanted to predict the shape of the parabola, we would need to look at the value of $a$. We know that $a$ helps us to determine a parabola's shape. Take a look.

- $y = x^2$: $a = 1$, so $a > 0$; the graph opens upward and is neither wide nor narrow.
- $y = 3x^2$: $a = 3$, so $a > 0$; the graph opens upward and is narrow.
- $y = \frac{1}{3}x^2$: $a = \frac{1}{3}$, so $a > 0$; the graph opens upward and is wide.

Now that you understand how these graphs look and how the equation of the graph affects its appearance, it is time to make some predictions. What would you predict about the graph of $y = 7x^2$? Because the $a$ value is 7, it would be very narrow. Also, because $a > 0$, it would open upward. Answer the following questions by making predictions.

#### Example A

Predict the opening of $-3x^2+4$. Solution: It will open downwards because the $a$ value is negative.

#### Example B

For the quadratic function in Example A, where will the vertex be? Solution: At $4$

#### Example C

Which graph will have a wider opening: one with a vertex at 0 or one with a vertex at 8? Solution: Vertex at 0

Now let's go back to the dilemma from the beginning of the Concept. First, think about the information that you have and the equation that you can write: $r = 80$, so $d = 80t - 16t^2$. Next, we can make a table of values.

| $t$ (seconds) | 0 | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- |
| $d$ distance (ft) | 0 | 64 | 96 | 96 | 64 | 0 |

Finally we can take those values, insert them into a graphing calculator and create the following graph.

### Vocabulary

Domain: input value, independent value. Range: output value, dependent value. Function: relation that assigns one value of the range to each value of the domain. Quadratic Function: a function to the $2^{nd}$ degree in standard form; a parabola is created by a quadratic function.

### Guided Practice

Here is one for you to try on your own. What would you predict about the graph of $y = - \frac{1}{4}x^2$? Solution: Because the $a$ value is $-\frac{1}{4}$, it would be very wide. Also, because $a < 0$, it would open downward.

### Practice

Use your tables to graph the following functions.

1. $y = x^2 - 8$
2. $y = 3x^2 - x + 4$
3. $y = 2x^2 + 4$
4. $2y = 4x^2 + 4$
5. $3y = 6x^2 + 12$
6. $4y = 2x^2 - 12$
7. $3y-1 = 6x^2 + 11$
8. $2y+2 = 2x^2 + 4$
9. $y = - 2x^2 + 5x$
10. $y = -x^2 + 3x - 7$
11. $y = \frac{2}{3}x^2 + 2x - 1$
12. $y = x^2 + 8$
13. $y = - 2x^2 + 5x$
14. $y = -x^2 + 3x - 1$
15. $y = 3x^2 + 2x + 1$
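Since the whole Concept is about evaluating a quadratic at a list of inputs, here is a small Python sketch of the t-table computation (my own illustration, not part of the CK-12 lesson):

```python
# Build the t-table for y = x^2 + 3x + 2 by substituting x values.
def quadratic(x, a=1, b=3, c=2):
    """Evaluate y = a*x^2 + b*x + c in standard form."""
    return a*x**2 + b*x + c

for x in range(-3, 4):
    print(f"x = {x:2d}  ->  y = {quadratic(x)}")
# Prints the completed table above: 2, 0, 0, 2, 6, 12, 20
```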
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 97, "texerror": 0, "math_score": 0.5768324732780457, "perplexity": 378.0052595187898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500800767.23/warc/CC-MAIN-20140820021320-00077-ip-10-180-136-8.ec2.internal.warc.gz"}
https://scienceready.com.au/pages/bromine-water-test
# Bromine Water Test

This is part of the HSC Chemistry course under the topic Analysis of Organic Substances. This section explores a range of chemical tests that can be conducted in a school laboratory to identify and distinguish between various organic functional groups.

### Identifying C=C Bonds: Bromine Water Test

The bromine water test is a chemical test to differentiate between alkenes and alkanes. It can also be used to identify the presence of a carbon-carbon double bond. This video outlines the bromine water test and explains why organic molecules give different observations when reacted with bromine water.

### Bromine Water Test

• Reaction with bromine water distinguishes an alkene from an alkane. Alkanes and alkenes are nonpolar molecules which can dissolve bromine, which is also nonpolar.
• Safety Considerations
• Cyclohexane and cyclohexene are typically used in schools as they are less volatile than smaller alkanes and alkenes due to their stronger dispersion forces.
• Bromine water is also safer to use than bromine gas.
• Method and experimental conditions:
• A few drops of orange/brown coloured bromine water are added to a solution of cyclohexane and cyclohexene in the absence of UV light
• Record changes in the solution's appearance
• Observation: the reactive C=C bond in the alkene undergoes an addition reaction with bromine (Br2) to form a dibromoalkane. This decreases [Br2] and hence decolorises the solution. For example, bromination of ethene:

$$C_2H_4(aq) + Br_2(aq) \rightarrow C_2H_4Br_2(aq)$$

• If the bromine water containing alkane is exposed to UV light, the alkane will undergo a substitution reaction to produce a haloalkane. This will also decolorise the solution, but at a much slower rate.

$$C_2H_6 + Br_2 \rightarrow C_2H_5Br + HBr$$

$$C_2H_5Br + Br_2 \rightarrow C_2H_4Br_2 + HBr$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6726409792900085, "perplexity": 6897.29903831131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00605.warc.gz"}
http://www.mun.ca/math/graduate/grad-thesis/
# Thesis Help

Obtaining a PDF/A-1b thesis with LaTeX, XeLaTeX and pdfLaTeX

Our institution requires that the final version of the thesis is submitted as a PDF/A-1b ISO-19005-1:2005 document. If your document has a good deal of equations and you would also like some fancy features in the document, like bookmarks and so on, then the task of obtaining such a document may not be straightforward. There is a general pdfLaTeX package named pdfx that is supposed to do this job as one of its options. However, there are many instances where the package fails to produce a PDF/A-1b file. Here an alternative to 'pdfx' that is somewhat more robust is introduced. The package is called pdfathesis. It can be used with pdfLaTeX, XeLaTeX, LaTeX and some other LaTeX processors. In addition to the regular files generated when processing the document, the package creates two additional files to be post-processed by the latest version of Ghostscript in order to obtain the PDF/A-1b file. If you do not know how to install it, please read the documentation included. If you have already formatted your thesis, just include the package near the beginning of your preamble with the command \usepackage{pdfathesis} and remove all references to the packages 'hyperref' or 'bookmarks'. If you are lucky enough, after running LaTeX, XeLaTeX or pdfLaTeX and post-processing with Ghostscript, you will have a PDF/A-1b document ready. It may also happen that the fancy features that you want in your document are not compatible with the PDF/A specification and, sadly, you might need to get rid of them if you want to graduate. The package 'pdfathesis' can also be called as a LaTeX class, and in such a case it will do the basic formatting of your document. Enclosed in the package, there is a self-explanatory template called pdfathesis_template.tex that may help you to start this job.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512251019477844, "perplexity": 1223.9328303800312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00068-ip-10-164-35-72.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/272258/how-to-solve-pde-using-techniques-of-separation-variables-in-this-question-solv
# How to Solve a PDE Using Separation of Variables [SOLVED]

Hi guys, my name is Maxwell. This is my first time asking a question in this forum. I hope someone can help me with this problem. The question says: $U_{xx} + U_{yy} = U$. Solve this PDE for a product solution using separation of variables. What I did is $X''(x)Y(y) + X(x)Y''(y)=X(x)Y(y)$, so $X''(x)Y(y)=X(x)[Y(y)-Y''(y)]$ and $\frac{X''(x)}{X(x)} = \frac{Y(y)-Y''(y)}{Y(y)} = k$, where k is a constant. Then I split into 3 cases: $k>0$, $k<0$ and $k=0$. I already got the answers for $k<0$ and $k=0$, which my teacher says are correct, but for $k>0$ my teacher says I am wrong because, he said, for the $k>0$ case we need to divide into another 3 sub-cases. For $k>0$ (let $k=p^2$), I got $X(x)=Ae^{-px}+Be^{px}$ and $Y(y)=Ce^{-\sqrt{1-p^2}y}+De^{\sqrt{1-p^2}y}$. For this part, could someone please solve it for me? Please don't just give tips and hints; I need some worked steps from you so that I can understand better. This is my first time in this forum. If someone could solve it, I would really appreciate it. Thanks in advance.

- Look at this. – JohnD Jan 7 '13 at 17:10

The three cases are $1-p^2 > 0$, $=0$, $< 0$.

- Why?? Could you please show the final step for the 3 sub-cases? I'm not good at PDE and I'm quite weak. I live in Malaysia and I don't have a proper teacher to guide me. If you could show it, then I can analyse it myself and try by myself later. – maxwell Jan 7 '13 at 17:17
- My X(x) is correct; just the Y(y) is wrong. So please help me to proceed with the Y(y) only. – maxwell Jan 7 '13 at 17:24
- This should be just like what you did for the $X$. Exponentials in one case, sine and cosine in another, $1$ and $y$ in the third. – Robert Israel Jan 7 '13 at 17:27
- I really can't get it, Sir. Really sorry, Sir, I'm totally blur about this. I am still unable to get what you said, Sir. – maxwell Jan 7 '13 at 17:31
- Is it like this now: for $1-p^2>0$, $Y(y)=Ce^{-\sqrt{1-p^2}y}+De^{\sqrt{1-p^2}y}$ and for $1-p^2<0$, $Y(y)=C\cos{\sqrt{1-p^2}y}+D\sin{\sqrt{1-p^2}y}$. Could you recheck my answer whether it is right or not, Sir?? So that I can show it to my teacher. – maxwell Jan 7 '13 at 17:41
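The three sub-cases being hinted at come from the sign of $1-p^2$ in $Y'' = (1-p^2)Y$. A sketch of my own (not from the thread) that lets sympy solve that ODE for a representative $p$ from each sub-case:

```python
# Solve Y''(y) = (1 - p^2) Y(y) for p with 1 - p^2 > 0, = 0, < 0.
import sympy as sp

yv = sp.symbols('y')
Y = sp.Function('Y')

for p in (sp.Rational(1, 2), 1, 2):
    ode = sp.Eq(Y(yv).diff(yv, 2), (1 - p**2) * Y(yv))
    print(p, sp.dsolve(ode, Y(yv)).rhs)
# p = 1/2: real exponentials in sqrt(1 - p^2) * y
# p = 1:   C1 + C2*y
# p = 2:   sin and cos of sqrt(p^2 - 1) * y
```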
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645902872085571, "perplexity": 804.8098920288033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997901076.42/warc/CC-MAIN-20140722025821-00158-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/44749-applicatios-integration-volumes-please-help-print.html
• July 28th 2008, 09:44 PM eawolbert Find the volume of the solid obtained by rotating the region bounded by the given curves about the specified axis. • July 28th 2008, 10:05 PM Jhevon Quote: Originally Posted by eawolbert $V = \pi \int_{-1}^1 \bigg[ (6 - x^4)^2 - 5^2 \bigg]~dx$
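The thread stops at the setup; here is a sketch evaluating Jhevon's washer-method integral symbolically (my own addition, not from the thread):

```python
# Evaluate V = pi * integral_{-1}^{1} [(6 - x^4)^2 - 5^2] dx.
import sympy as sp

x = sp.symbols('x')
V = sp.pi * sp.integrate((6 - x**4)**2 - 5**2, (x, -1, 1))
print(V, float(V))   # 784*pi/45, about 54.72
```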
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.898021936416626, "perplexity": 567.473457377281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982939917.96/warc/CC-MAIN-20160823200859-00233-ip-10-153-172-175.ec2.internal.warc.gz"}
https://pytorch.org/ignite/v0.4.9/generated/ignite.distributed.auto.DistributedProxySampler.html
# DistributedProxySampler#

class ignite.distributed.auto.DistributedProxySampler(sampler, num_replicas=None, rank=None)[source]#

Distributed sampler proxy to adapt user's sampler for distributed data parallelism configuration.

Parameters

• sampler – Input torch data sampler.
• num_replicas – Number of processes participating in distributed training.
• rank – Rank of the current process within num_replicas.

Note: Input sampler is assumed to have a constant size.
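A minimal usage sketch consistent with the signature documented above (my own illustration; the dataset, weights, and the hard-coded rank/world size are placeholders, not part of the documented API):

```python
# Illustrative only: wrap a user-defined sampler for distributed training.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler
from ignite.distributed.auto import DistributedProxySampler

dataset = TensorDataset(torch.randn(1000, 4))   # placeholder data
weights = torch.ones(len(dataset))              # placeholder sample weights
sampler = WeightedRandomSampler(weights, num_samples=100)

# Adapt the sampler for a 2-process run; rank would differ per process.
dist_sampler = DistributedProxySampler(sampler, num_replicas=2, rank=0)
loader = DataLoader(dataset, batch_size=32, sampler=dist_sampler)
```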
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29960137605667114, "perplexity": 19411.193595651104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00129.warc.gz"}
https://researchportal.hw.ac.uk/en/publications/boundary-value-problems-for-the-elliptic-sine-gordon-equation-in-
# Boundary value problems for the elliptic sine-Gordon equation in a semi-strip

A. S. Fokas, J. Lenells, Beatrice Pelloni

Research output: Contribution to journal › Article

7 Citations (Scopus)

### Abstract

We study boundary value problems posed in a semistrip for the elliptic sine-Gordon equation, which is the paradigm of an elliptic integrable PDE in two variables. We use the method introduced by one of the authors, which provides a substantial generalization of the inverse scattering transform and can be used for the analysis of boundary as opposed to initial-value problems. We first express the solution in terms of a 2 by 2 matrix Riemann-Hilbert problem whose "jump matrix" depends on both the Dirichlet and the Neumann boundary values. For a well posed problem one of these boundary values is an unknown function. This unknown function is characterised in terms of the so-called global relation, but in general this characterisation is nonlinear. We then concentrate on the case that the prescribed boundary conditions are zero along the unbounded sides of a semistrip and constant along the bounded side. This corresponds to a case of the so-called linearisable boundary conditions; however, a major difficulty for this problem is the existence of non-integrable singularities of the function $q_y$ at the two corners of the semistrip; these singularities are generated by the discontinuities of the boundary condition at these corners. Motivated by the recent solution of the analogous problem for the modified Helmholtz equation, we introduce an appropriate regularisation which overcomes this difficulty. Furthermore, by mapping the basic Riemann-Hilbert problem to an equivalent modified Riemann-Hilbert problem, we show that the solution can be expressed in terms of a 2 by 2 matrix Riemann-Hilbert problem whose jump matrix depends explicitly on the width of the semistrip L, on the constant value d of the solution along the bounded side, and on the residues at the given poles of a certain spectral function denoted by h. The determination of the function h remains open.

Journal of Nonlinear Science, volume 23, issue 2, pages 241-282 (April 2013). https://doi.org/10.1007/s00332-012-9150-5
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285698533058167, "perplexity": 311.76716804329595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400221980.49/warc/CC-MAIN-20200925021647-20200925051647-00245.warc.gz"}
http://www.payne.org/index.php/Calculating_Voronoi_Nodes
# Calculating Voronoi Nodes

This note documents a general approach to calculating the location of Voronoi nodes for point, segment, and arc geometric entities. This is a part of my work for developing CAM software for my CNC machine.

In a Voronoi diagram, nodes (or vertices) are points that are equidistant from three or more entities (points, lines, arcs, etc.). In most Voronoi literature, these entities are typically called "sites" or "generators". A bisector edge connecting two nodes separates two sites, and all points on that bisector are equidistant from the two sites.

In implementations, nodes may be restricted to 3 bisector edges. In cases where the node actually has more bisectors, the diagram may be represented by nodes connected with zero-length segments. At output time, the implementation may then collapse these coincident nodes into nodes with an arbitrary number of edges.

For best numerical stability, Voronoi nodes are calculated based on the three defining geometric sites, not by attempting to intersect bisectors. For implementing a node solver, there are four cases to consider:

• Line, line, line
• Line, line, arc
• Line, arc, arc
• Arc, arc, arc

(Note that points are merely zero-radius arcs). However, there are a number of degenerate cases to consider. When each degenerate case is considered for each of the four cases above, the implementation combinatorics can get daunting. This note describes a general node-solving approach that handles all degenerate cases.

## Line and Arc Equations

For segments, we use the approach of first adding all segment end-points; then the supporting line is added and used to calculate node distances. In this case, a line can be defined as:

$a_i x + b_i y + c_i = 0$

And an offset of the line may be defined as:

$a_i x + b_i y + c_i + k_i t = 0$

Where $k_i$ is the offset direction (1 or -1) and $t$ is the offset distance. Note that the line coefficients must be normalized such that $a_i^2 + b_i^2 = 1$ or $k_i t$ will not represent the correct offset distance.

Similarly, an arc (circle) of radius $r$, centered on $(x_i, y_i)$, is defined by:

$\left(x - x_i\right)^2 + \left( y - y_i \right)^2 - r^2 = 0$

Note that a point is just a zero-radius arc. Given this equation, an offset arc can be defined as:

$\left(x - x_i\right)^2 + \left( y - y_i \right)^2 - (r + k_i t)^2 = 0$

Where $k_i$ is the offset direction (1 or -1) and $t$ is the offset distance.

## Generalized Equation System

Given the line and circle equations above, any site (segment, arc, or point) can be represented in a general form:

$q_0(x^2 + y^2 - t^2) + a_0 x + b_0 y + k_0 t + c_0 = 0$

Where $q_0$ is 0 or 1. For lines, the generalized coefficients are simply the line coefficients:

\begin{align} q_0 &= 0 \\ a_0 &= a_i \\ b_0 &= b_i \\ k_0 &= k_i \\ c_0 &= c_i \\ \end{align}

For arcs, the generalized coefficients are:

\begin{align} q_0 &= 1 \\ a_0 &= -2x_k \\ b_0 &= -2y_k \\ k_0 &= -2 k_i r_k \\ c_0 &= x_k^2 + y_k^2 - r_k^2 \\ \end{align}

And points are merely zero-radius arcs:

\begin{align} q_0 &= 1 \\ a_0 &= -2x_k \\ b_0 &= -2y_k \\ k_0 &= 0 \\ c_0 &= x_k^2 + y_k^2 \\ \end{align}

Given this, a Voronoi node is found by solving a three-equation quadratic system of the form:

\begin{align} q_0 (x^2 + y^2 - t^2) + a_0 x + b_0 y + k_0 t + c_0 &= 0 \\ q_1 (x^2 + y^2 - t^2) + a_1 x + b_1 y + k_1 t + c_1 &= 0 \\ q_2 (x^2 + y^2 - t^2) + a_2 x + b_2 y + k_2 t + c_2 &= 0 \\ \end{align}

Where $q_0$, $q_1$, and $q_2$ are each 0 or 1. This system can be stored in a 3x5 numeric array.

## Solving the System

The generalized system can be solved as follows:

1. If the system is linear (all q values are zero), solve the 3x3 system using linear methods
2. Otherwise, reduce the system to a three-equation system with one quadratic equation and two linear equations
3. Check determinants of that system, and select a substitution
4. Evaluate the resulting system (2 linear, one quadratic) for the solution

The general three-equation system above can be transformed to a system with one quadratic equation and two linear equations. In one case, the system is already in that form. If there are two or three quadratic equations, one can be subtracted from the other one or two to yield a quadratic-linear-linear system.

At this point, the two linear equations form a 2-equation system over 3 variables, and we can solve the system for any two variables in terms of the third, using one of three possible cases:

• x and y in terms of t
• y and t in terms of x
• t and x in terms of y

However, this is where we need to consider the degeneracies: any degeneracies in the original system will cause degeneracies in the 2-equation linear system. We need to check the determinants of the 2-equation linear system to pick one of the three substitutions, above.

Now, we can substitute the linear variables and combine that with the quadratic equation to form a system of three variables (here, u, v and w), with u and v solved for in terms of w:

\begin{align} a_0 u^2 + b_0 u + c_0 v^2 + d_0 v + e_0 w^2 + f_0 w + g_0 = 0 \\ u = a_1 w + b_1 \\ v = a_2 w + b_2 \\ \end{align}

This system has a closed-form solution:

\begin{align} a &= a_0 a_1^2 + c_0 a_2^2 + e_0 \\ b &= 2 a_0 a_1 b_1 + 2 a_2 b_2 c_0 + a_1 b_0 + a_2 d_0 + f_0 \\ c &= a_0 b_1^2 + c_0 b_2^2 + b_0 b_1 + b_2 d_0 + g_0 \\ \end{align}

where w can be found as the roots of this quadratic equation:

$a w^2 + b w + c = 0$

Finally, u and v can be calculated from w. Substituting back to x, y and t yields the one or two solutions.
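A short sketch of the final step (my own illustration of the closed form above; the earlier reduction and the degeneracy checks are assumed to have been done already):

```python
# Solve  a0*u^2 + b0*u + c0*v^2 + d0*v + e0*w^2 + f0*w + g0 = 0
# with   u = a1*w + b1  and  v = a2*w + b2  substituted in.
import math

def solve_reduced(a0, b0, c0, d0, e0, f0, g0, a1, b1, a2, b2):
    a = a0*a1**2 + c0*a2**2 + e0
    b = 2*a0*a1*b1 + 2*a2*b2*c0 + a1*b0 + a2*d0 + f0
    c = a0*b1**2 + c0*b2**2 + b0*b1 + b2*d0 + g0
    # Assumes a != 0; the degenerate (linear) case needs separate handling.
    disc = b*b - 4*a*c
    if disc < 0:
        return []                          # no real nodes
    ws = [(-b + s*math.sqrt(disc)) / (2*a) for s in (1.0, -1.0)]
    # Back-substitute to recover (u, v) for each root w.
    return [(a1*w + b1, a2*w + b2, w) for w in ws]
```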
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999830722808838, "perplexity": 745.1458063244631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115861027.55/warc/CC-MAIN-20150124161101-00207-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/cauchy-schwarz-equality-implies-parallel.842107/
# Cauchy Schwarz equality implies parallel

1. Nov 8, 2015

### Bipolarity

I'm learning about Support Vector Machines and would like to recap on some basic linear algebra. More specifically, I'm trying to prove the following, which I'm pretty sure is true: Let $v_1$ and $v_2$ be two vectors in an inner product space over $\mathbb{C}$. Suppose that $\langle v_1 , v_2 \rangle = ||v_1|| \cdot ||v_2||$, i.e. the special case of Cauchy Schwarz when it is an equality. Then prove that $v_1$ is a scalar multiple of $v_2$, assuming neither vector is $0$. I've tried using the triangle inequality and some other random stuff to no avail. I believe there's some algebra trick involved, could someone help me out? I really want to prove this and get on with my machine learning. Thanks! BiP

2. Nov 8, 2015

### Staff: Mentor

How is $\langle v_1, v_2 \rangle$ defined?

3. Nov 8, 2015

### Bipolarity

Proving this should not require the definition of the inner product, only the properties.

4. Nov 8, 2015

### Staff: Mentor

What's the difference? Which properties do you mean?

5. Nov 8, 2015

### Bipolarity

Conjugate symmetry, linearity in the first argument, and positive-definiteness.

6. Nov 8, 2015

### Staff: Mentor

Looks to me like another version of the cosine formula if applied to $v_1+v_2$

7. Nov 9, 2015

### rs1n

By definition, $\langle v_1, v_2 \rangle = \| v_1 \| \cdot \| v_2 \| \cdot \cos(\theta)$ where $\theta$ is the angle between vectors $v_1$ and $v_2$. If you also additionally know that $\langle v_1, v_2 \rangle = \| v_1 \| \cdot \| v_2 \|$, then the angle between the two vectors must either be 0 or 180 degrees. So they are parallel; hence one is a scalar multiple of the other.

8. Nov 9, 2015

### zinq

That's the definition? It would be true in a real inner product space, but this one is over ℂ.

9. Nov 9, 2015

### rs1n

You are absolutely right! My eyes failed me, somehow.

10. Nov 9, 2015

### PeroK

One way to do it is to consider the vector $u = v_2 - \frac{\langle v_1, v_2 \rangle}{\langle v_1, v_1 \rangle} v_1$ Look at $\langle u, u \rangle$ and show that it's zero when you have C-S equality. This also leads to a proof of the C-S inequality.

11. Nov 9, 2015

### rs1n

To get back to the problem, though... over the complex numbers, the inner product is presumably a Hermitian inner product. So \begin{align*} \| u + v \|^2 & = \langle u + v, u+v \rangle = \langle u,u \rangle + \langle u,v \rangle + \langle v,u \rangle + \langle v, v \rangle\\ & = \langle u,u \rangle + \langle u,v \rangle + \overline{\langle u,v \rangle} + \langle v, v \rangle \\ & = \langle u,u \rangle + 2 \mathrm{Re}(\langle u,v \rangle) + \langle v, v \rangle\\ & = \| u\|^2 + 2 \mathrm{Re}(\langle u,v \rangle) + \| v\|^2 \end{align*} Similarly, $0 \le \| u + \lambda v \|^2 = \| u\|^2 + 2 \mathrm{Re}(\overline{\lambda} \langle u,v \rangle) + |\lambda|^2 \| v\|^2$ Let $$\lambda = -\frac{\langle u, v\rangle }{\|v \|^2}$$ and the right hand side (above) will simplify to the C.S. inequality. Equality occurs if $$\| u + \lambda v \| = 0$$

12. Nov 9, 2015

### Hawkeye18

There are a few possible ways of doing that. The first one is just to follow the proof of the Cauchy--Schwarz. Namely, for real $t$ consider $$\|v_1 - t v_2\|^2 = \|v_1\|^2 +t^2\|v_2\|^2 - 2t (v_1, v_2) = \|v_1\|^2 +t^2\|v_2\|^2 - 2t \|v_1\|\cdot \|v_2\| = (\|v_1\|-t\|v_2\|)^2.$$ The right hand side of this chain of equations is $0$ when $t=\|v_1\|/\|v_2\|$. So for this $t$ you get that $v_1-tv_2=0$, which is exactly what you need. Another way is more geometric and probably more intuitive.
You define $w$ to be the orthogonal projection of $v_2$ onto the one dimensional subspace spanned by $v_1$, $w= \|v_1\|^{-2} (v_2, v_1) v_1$. Then $(v_1, v_2)= (v_1, w)$ (checked by direct calculation) and $v_2-w$ is orthogonal to $v_1$ (and so to $w$). Therefore $\|v_2\|^2 =\|w\|^2+\|v_2-w\|^2$. By Cauchy--Schwarz $(v_1, w) \le \|v_1\|\cdot \|w\|$, but on the other hand $(v_1, w) = (v_1, v_2) = \|v_1\|\cdot \|v_2\|$, so $\|v_1\|\cdot \|v_2\| \le \|v_1\|\cdot \|w\|$ and therefore $\|v_2\|\le \|w\|$. Comparing this with $\|v_2\|^2 =\|w\|^2+\|v_2-w\|^2$ we conclude that $v_2-w=0$. The second proof is a bit longer, but it is more intuitive, in the sense that it is a pretty standard reasoning used when one works with orthogonal projections.

13. Nov 9, 2015

### PeroK

The second method is what I suggested in post #10. And, in fact, you can prove Cauchy Schwarz more intuitively this way.

14. Nov 9, 2015

### Bipolarity

I see! Thank you all for your replies! I knew I had seen it somewhere, little did I know it was right there in the proof of the C-S inequality itself! BiP
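A quick numerical illustration of the equality case (my own addition, not from the thread):

```python
# When v2 is a positive multiple of v1, Cauchy-Schwarz holds with equality
# and the projection residual u = v2 - (<v1,v2>/<v1,v1>) v1 vanishes.
import numpy as np

v1 = np.array([1.0 + 2.0j, -0.5j, 3.0])
v2 = 2.5 * v1                              # parallel, positive multiple

inner = np.vdot(v1, v2)                    # Hermitian inner product
print(np.isclose(abs(inner), np.linalg.norm(v1) * np.linalg.norm(v2)))  # True

u = v2 - (np.vdot(v1, v2) / np.vdot(v1, v1)) * v1
print(np.linalg.norm(u))                   # 0.0: v2 is a multiple of v1
```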
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9837754368782043, "perplexity": 367.4714451189513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589470.9/warc/CC-MAIN-20180716213101-20180716233101-00620.warc.gz"}
https://socratic.org/questions/what-is-23pi-12-radians-in-degrees-1
# What is (-23pi)/12 radians in degrees?

Dec 30, 2015

$= - {345}^{o}$

#### Explanation:

To convert radians to degrees, you need to multiply the number with ${180}^{o} / \pi$ So, we get: $\frac{- 23 \cancel{\pi}}{12} \cdot {180}^{o} / \cancel{\pi}$ $= \frac{- 23 \cdot 180}{12}$ $= - {345}^{o}$
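A two-line check of the conversion (my own addition):

```python
import math

rad = -23 * math.pi / 12
print(rad * 180 / math.pi)   # -345.0, multiplying by 180/pi
print(math.degrees(rad))     # -345.0, same thing via the stdlib
```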
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536821842193604, "perplexity": 6647.8824934977865}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487660269.75/warc/CC-MAIN-20210620084505-20210620114505-00388.warc.gz"}
http://mathhelpforum.com/discrete-math/151354-pigeonhole-theorem.html
1. ## Pigeonhole Theorem

A form of the pigeonhole theorem is stated as "If f is a function from a finite set X to a finite set Y with |X| > |Y| then $f(x_1) = f(x_2)$ for some $x_1, x_2 \in X$, $x_1 \neq x_2$". Now a question says: An inventory consists of a list of 89 items, each marked "available" or "unavailable". There are 45 available items; show that there are at least two available items in the list exactly 9 items apart. Now I just don't know how to apply the pigeonhole theorem to this question. What are the sets X and Y in this case? And what is the function? Many thanks!

2. Please check the wording of this problem. I think it must be 80 items and not 89. PS. That is a typo. Otherwise I think there is a counterexample.

3. Can someone please translate this question to Hebrew (especially the last part...)?

4. Thanks guys, I have rechecked the question; that is exactly what it says :S

5. If it is 80, here is a hint: split all the numbers into 9 buckets (using modulo 9). At most how many numbers can you select from each bucket? Note: I have been thinking about how to explicitly express this in terms of the statement of the pigeon-hole principle - but couldn't make much progress. Maybe someone on this forum can help.

6. Thanks again, actually the question also said (For example, items at positions 13 and 22 satisfy the condition) and I assume by 89 items they are listed in positions from 1-89. Does this help?

7. Originally Posted by usagi_killer [...] So the source really says 89? Counterexample {1, 2, 3, 4, 5, 6, 7, 8, 9, 19, 20, 21, 22, 23, 24, 25, 26, 27, 37, 38, 39, 40, 41, 42, 43, 44, 45, 55, 56, 57, 58, 59, 60, 61, 62, 63, 73, 74, 75, 76, 77, 78, 79, 80, 81}

8. Yeah I got that counterexample too. The solutions do it like this (it makes sense, so I don't know where the fallacy is): Let $a_i$ denote the position of the $i$th available item. We must show that $a_i-a_j = 9$ for some i and j. Consider the numbers: $a_1, a_2, \cdots, a_{45}$ ...[1] and $a_1+9, a_2+9, \cdots, a_{45}+9$ ...[2] The 90 numbers from [1] and [2] have possible values from 1-89. So by the form of the pigeonhole theorem I stated in the OP, two numbers must coincide. We cannot have two of [1] or two of [2] identical, thus some number in [1] is equal to some number in [2]. Therefore $a_i = a_j+9 \implies a_i-a_j = 9$ for some i and j. Now I understand their working, but obviously there exists a counterexample, so where is the fallacy in their working? Many thanks again!

9. Originally Posted by usagi_killer [...] That wording could be better, but anyway here's an obvious flaw: the numbers in [1] must be in {1, ..., 89} and the numbers in [2] must be in {10, ... , 98}. So, a function from a set with cardinality 90 to a set with cardinality 98 need not have a duplicate. That makes me realize how to apply pigeonhole to the real problem when we have 80 items in our inventory. You do the same steps but you get that you're trying to fit 90 elements into a set with cardinality 89.

10. Oh right, that makes sense! Thanks very much!!

11. Originally Posted by undefined [...] @undefined - sorry but can you elaborate the proof for "That makes me realize how to apply pigeonhole to the real problem when we have 80 items in our inventory. You do the same steps but you get that you're trying to fit 90 elements into a set with cardinality 89"? I haven't been able to follow it, please.

12. I knew that I had seen this question before. It is worked out as an example on page 250 of the fourth edition of Discrete Mathematics by Johnsonbaugh. And it is 80 items and not 89.

13. Originally Posted by aman_cc [...] Sure. So this is just post #8 rewritten slightly. Label the items {1, 2, ... , 80}. Let $\displaystyle A = \{a_1, a_2,\,\dots\,,a_{45}\}$ be the set of available items. The elements of A are distinct and in the range {1, 2, ... , 80}. Let $B = \displaystyle \{a_1+9, a_2+9,\,\dots\,,a_{45}+9\}$. Likewise the elements of B are distinct and in the range {10, 11, ... , 89}. Let $\displaystyle X = A \cup B$. So X is composed of elements in the range {1, 2, ... , 89}. But we started with 90 elements (45 from A and 45 from B), so by the pigeonhole principle, at least two of those 90 elements are identical. Since the elements of A and B are distinct, we must have some $\displaystyle a_i \in A$ that is identical to some $\displaystyle a_j + 9 \in B$. QED.

14. Originally Posted by Plato [...] Yeah, I have the 7th international edition; it says 89 for some reason haha
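Both claims in the thread are easy to check by machine; a brute-force sketch of my own (the exhaustive 45-subset search for 80 items is infeasible, so the 80-item statement is only spot-checked at random):

```python
# Check post #7's 89-item counterexample and spot-check the 80-item claim.
import random

def has_pair_9_apart(avail):
    s = set(avail)
    return any(a + 9 in s for a in s)

# Post #7's counterexample: blocks 1-9, 19-27, 37-45, 55-63, 73-81.
counter = [b + i for b in (1, 19, 37, 55, 73) for i in range(9)]
print(len(counter), has_pair_9_apart(counter))   # 45 False

# Random spot-check of the 80-item statement.
for _ in range(10_000):
    assert has_pair_9_apart(random.sample(range(1, 81), 45))
print("no 45-subset of 1..80 without a pair exactly 9 apart was found")
```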
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7580821514129639, "perplexity": 310.1143782788639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824293.62/warc/CC-MAIN-20171020173404-20171020193404-00548.warc.gz"}
https://proofwiki.org/wiki/Subtract_Half_is_Replicative_Function
# Subtract Half is Replicative Function

## Theorem

Let $f: \R \to \R$ be the real function defined as: $\forall x \in \R: f \left({x}\right) = x - \dfrac 1 2$ Then $f$ is a replicative function.

## Proof

\begin{align} \sum_{k \mathop = 0}^{n - 1} f \left({x + \frac k n}\right) &= \sum_{k \mathop = 0}^{n - 1} \left({x - \frac 1 2 + \frac k n}\right) \\ &= n x - \frac n 2 + \frac 1 n \sum_{k \mathop = 0}^{n - 1} k \\ &= n x - \frac n 2 + \frac 1 n \frac {n \left({n - 1}\right)} 2 && \text{Closed Form for Triangular Numbers} \\ &= n x - \frac n 2 + \frac n 2 - \frac 1 2 \\ &= n x - \frac 1 2 \\ &= f \left({n x}\right) \end{align}

Hence the result by definition of replicative function. $\blacksquare$
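A numerical spot-check of the identity (my own addition, not part of the ProofWiki page):

```python
# Check sum_{k=0}^{n-1} f(x + k/n) = f(n x) for f(x) = x - 1/2, exactly.
from fractions import Fraction

def f(x):
    return x - Fraction(1, 2)

x = Fraction(5, 4)
for n in (2, 3, 7):
    lhs = sum(f(x + Fraction(k, n)) for k in range(n))
    print(n, lhs == f(n * x))   # True for each n
```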
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9520912766456604, "perplexity": 179.75378319663162}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660067.26/warc/CC-MAIN-20191015155056-20191015182556-00293.warc.gz"}
http://www.science.gov/topicpages/i/indirect+detection+experiments.html
#### Sample records for indirect detection experiments

1. Indirect microbial detection
NASA Technical Reports Server (NTRS) Wilkins, J. R. 1980-01-01 Indirect method for detection of microbial growth utilizes flow of charged particles across barrier that physically separates growing cells from electrodes and measures resulting difference in potential between two platinum electrodes. Technique allows simplified noncontact monitoring of all growth in highly infectious cultures or in critical biochemical studies.

2. Indirect microbial detection
NASA Technical Reports Server (NTRS) Wilkins, J. R. (Inventor) 1981-01-01 The growth of microorganisms in a sample is detected and monitored by culturing microorganisms in a growth medium and detecting a change in potential between two electrodes, separated from the microbial growth by a barrier which is permeable to charged particles but impermeable to microorganisms.

3. Indirect electroanalytical detection of phenols.
PubMed Kolliopoulos, Athanasios V; Kampouris, Dimitrios K; Banks, Craig E 2015-05-01 A novel indirect electrochemical protocol for the electroanalytical detection of phenols is presented for the first time. This methodology is demonstrated with the indirect determination of the target analytes phenol, 2-chlorophenol, 4-chlorophenol and 2,4-dichlorophenol through an electrochemically adapted optical protocol. This electrochemical adaptation allows the determination of the above mentioned phenols without the use of any oxidising agents, as is the case in the optical method, where pyrazoline compounds (mediators) chemically react with the target phenols forming a quinoneimine product which is electrochemically active, providing an indirect analytical signal to measure the target phenol(s). A range of commercially available pyrazoline substitution products, namely 4-dimethylaminoantipyrine, antipyrine, 3-methyl-1-(2-phenylethyl)-2-pyrazolin-5-one, 3-amino-1-(1-naphthylmethyl)-2-pyrazolin-5-one, 4-amino-1,2-dimethyl-3-pentadecyl-3-pyrazolin-5-one hydrochloride, 3-amino-1-(2-amino-4-methylsulfonylphenyl)-2-pyrazolin-5-one hydrochloride and 4-aminoantipyrine, are evaluated as mediators for the indirect detection of phenols. Phenol, 2-chlorophenol, 4-chlorophenol and 2,4-dichlorophenol are successfully determined indirectly in drinking water samples at analytically useful levels, using 4-aminoantipyrine as the mediator. Finally, a comparison of the direct (no mediator) and the proposed indirect (with 4-aminoantipyrine) determination of the target phenols in drinking water is presented. The limitation of the proposed electroanalytical protocol is quantified for all four target phenols. PMID:25771897

4. Cosmic Ray Experiments and the Implications for Indirect Detection of Dark Matter
NASA Technical Reports Server (NTRS) Mitchell, John W.; Ormes, Jonathan F.; Streitmatter, Robert E. 2013-01-01 Detection of cosmic-ray antiprotons was first reported by Golden et al. in 1979 and their existence was firmly established by the BESS and IMAX collaborations in the early 1990s. Increasingly precise measurements of the antiproton spectrum, most recently from BESS-Polar and PAMELA, have made it an important tool for investigating cosmic-ray transport in the galaxy and heliosphere and for constraining dark-matter models. The history of antiproton measurements will be briefly reviewed.
The current status will be discussed, focusing on the results of BESS-Polar II and their implications for the possibility of antiprotons from primordial black hole evaporation. The current results of the BESS-Polar II antihelium search are also presented.

5. Indirect Reciprocity; A Field Experiment
PubMed Central van Apeldoorn, Jacobien; Schram, Arthur 2016-01-01 Indirect reciprocity involves cooperative acts towards strangers, either in response to their kindness to third parties (downstream) or after receiving kindness from others oneself (upstream). It is considered to be important for the evolution of cooperative behavior amongst humans. Though it has been widely studied theoretically, the empirical evidence of indirect reciprocity has thus far been limited and based solely on behavior in laboratory experiments. We provide evidence from an online environment where members can repeatedly ask and offer services to each other, free of charge. For the purpose of this study we created several new member profiles, which differ only in terms of their serving history. We then sent out a large number of service requests to different members from all over the world. We observe that a service request is more likely to be rewarded for those with a profile history of offering the service (to third parties) in the past. This provides clear evidence of (downstream) indirect reciprocity. We find no support for upstream indirect reciprocity (in this case, rewarding the service request after having previously received the service from third parties), however. PMID:27043712

6. Scalar dark matter: direct vs. indirect detection
NASA Astrophysics Data System (ADS) Duerr, Michael; Pérez, Pavel Fileviez; Smirnov, Juri 2016-06-01 We revisit the simplest model for dark matter. In this context the dark matter candidate is a real scalar field which interacts with the Standard Model particles through the Higgs portal. We discuss the relic density constraints as well as the predictions for direct and indirect detection. The final state radiation processes are investigated in order to understand the visibility of the gamma lines from dark matter annihilation. We find two regions where one could observe the gamma lines at gamma-ray telescopes. We point out that the region where the dark matter mass is between 92 and 300 GeV can be tested in the near future at direct and indirect detection experiments.

7. Deducing the nature of dark matter from direct and indirect detection experiments in the absence of collider signatures of new physics
SciTech Connect Beltran, Maria; Hooper, Dan; Kolb, Edward W.; Krusberg, Zosia A. C. 2009-08-15 Despite compelling arguments that significant discoveries of physics beyond the standard model are likely to be made at the Large Hadron Collider, it remains possible that this machine will make no such discoveries, or will make no discoveries directly relevant to the dark matter problem. In this article, we study the ability of astrophysical experiments to deduce the nature of dark matter in such a scenario. In most dark matter studies, the relic abundance and detection prospects are evaluated within the context of some specific particle physics model or models (e.g., supersymmetry).
Here, assuming a single weakly interacting massive particle constitutes the Universe's dark matter, we attempt to develop a model-independent approach toward the phenomenology of such particles in the absence of any discoveries at the Large Hadron Collider. In particular, we consider generic fermionic or scalar dark matter particles with a variety of interaction forms, and calculate the corresponding constraints from and sensitivity of direct and indirect detection experiments. The results may provide some guidance in disentangling information from future direct and indirect detection experiments.

8. Remote Leak Detection: Indirect Thermal Technique
NASA Technical Reports Server (NTRS) Clements, Sandra 2002-01-01 Remote sensing technologies are being considered for efficient, low-cost gas leak detection. Eleven specific techniques have been identified for further study, and evaluation of several of these is underway. The Indirect Thermal Technique is one of the techniques being explored. For this technique, an infrared camera is used to detect the temperature change of a pipe or fitting at the site of a gas leak. This temperature change is caused by the change in temperature of the gas expanding from the leak site. During the 10-week NFFP program, the theory behind the technique was further developed, experiments were performed to determine the conditions for which the technique might be viable, and a proof-of-concept system was developed and tested in the laboratory.

9. Do infants detect indirect reciprocity?
PubMed Meristo, Marek; Surian, Luca 2013-10-01 In social interactions involving indirect reciprocity, agent A acts prosocially towards B and this prompts C to act prosocially towards A. This happens because A's actions enhanced its reputation in the eyes of third parties. Indirect reciprocity may have been of central importance in the evolution of morality as one of the major mechanisms leading to the selection of helping and fair attitudes. Here we show that 10-month-old infants expect third parties to act positively towards fair donors who have distributed attractive resources equally between two recipients, rather than toward unfair donors who made unequal distributions. Infants' responses were dependent on the reciprocator's perceptual exposure to previous relevant events: they expected the reciprocator to reward the fair donor only when it had seen the distributive actions performed by the donors. We propose that infants were able to generate evaluations of agents that were based on the fairness of their distributive actions and to generate expectations about the social preferences of informed third parties. PMID:23887149

10. Dark matter dynamics and indirect detection
SciTech Connect Bertone, Gianfranco; Merritt, David; /Rochester Inst. Tech. 2005-04-01 Non-baryonic, or "dark", matter is believed to be a major component of the total mass budget of the universe. We review the candidates for particle dark matter and discuss the prospects for direct detection (via interaction of dark matter particles with laboratory detectors) and indirect detection (via observations of the products of dark matter self-annihilations), focusing in particular on the Galactic center, which is among the most promising targets for indirect detection studies. The gravitational potential at the Galactic center is dominated by stars and by the supermassive black hole, and the dark matter distribution is expected to evolve on sub-parsec scales due to interaction with these components.
We discuss the dominant interaction mechanisms and show how they can be used to rule out certain extreme models for the dark matter distribution, thus increasing the information that can be gleaned from indirect detection searches.

11. (In)direct detection of boosted dark matter
NASA Astrophysics Data System (ADS) Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse 2014-10-01 We initiate the study of novel thermal dark matter (DM) scenarios where present-day annihilation of DM in the galactic center produces boosted stable particles in the dark sector. These stable particles are typically a subdominant DM component, but because they are produced with a large Lorentz boost in this process, they can be detected in large volume terrestrial experiments via neutral-current-like interactions with electrons or nuclei. This novel DM signal thus combines the production mechanism associated with indirect detection experiments (i.e. galactic DM annihilation) with the detection mechanism associated with direct detection experiments (i.e. DM scattering off terrestrial targets). Such processes are generically present in multi-component DM scenarios or those with non-minimal DM stabilization symmetries. As a proof of concept, we present a model of two-component thermal relic DM, where the dominant heavy DM species has no tree-level interactions with the standard model and thus largely evades direct and indirect DM bounds. Instead, its thermal relic abundance is set by annihilation into a subdominant lighter DM species, and the latter can be detected in the boosted channel via the same annihilation process occurring today. Especially for dark sector masses in the 10 MeV-10 GeV range, the most promising signals are electron scattering events pointing toward the galactic center. These can be detected in experiments designed for neutrino physics or proton decay, in particular Super-K and its upgrade Hyper-K, as well as the PINGU/MICA extensions of IceCube. This boosted DM phenomenon highlights the distinctive signatures possible from non-minimal dark sectors.
13. (In)Direct detection of boosted dark matter
NASA Astrophysics Data System (ADS) Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse 2016-05-01 We present a new multi-component dark matter model with a novel experimental signature that mimics neutral current interactions at neutrino detectors. In our model, the dark matter is composed of two particles, a heavier dominant component that annihilates to produce a boosted lighter component that we refer to as boosted dark matter. The lighter component is relativistic and scatters off electrons in neutrino experiments to produce Cherenkov light. This model combines the indirect detection of the dominant component with the direct detection of the boosted dark matter. Directionality can be used to distinguish the dark matter signal from the atmospheric neutrino background. We discuss the viable region of parameter space in current and future experiments.

14. Neutralino Dark Matter: Update on Direct and Indirect Detection
SciTech Connect Scopel, S. 2005-12-02 Neutralinos represent a viable solution to the Dark Matter problem. In particular, while I discuss here a wide range for their masses, I will devote special attention to light neutralinos, which arise in supersymmetric models without unification conditions on gaugino masses at the GUT scale. They have sizeable direct and indirect detection signals, which are bounded from below by the cosmological constraint on their relic abundance, but are not yet excluded by present direct and indirect searches, including limits coming from the BR(Bs → μ+μ−) decay rate. They thus represent an interesting experimental challenge. An intriguing aspect of light neutralinos is also that they could explain the DAMA modulation effect in a still-existing compatibility window with other direct search experiments. I also discuss the gamma-ray signal from dark matter annihilation in our Galaxy and give some examples of external objects, namely the Andromeda Galaxy (M31) and M87. Predictions for the fluxes turn out to be below the level required to explain the possible indications of a γ-ray excess shown by EGRET, CANGAROO-II and HESS (toward the Galactic Center) and HEGRA (from M87). As far as future experiments are concerned, only the signal from the galactic center could be accessible to both satellite-borne experiments and to ACTs, even though this requires very steep dark matter density profiles.

15. Direct and indirect detection of dissipative dark matter
SciTech Connect Fan, JiJi; Katz, Andrey; Shelton, Jessie 2014-06-01 We study the constraints from direct detection and solar capture on dark matter scenarios with a subdominant dissipative component. This dissipative dark matter component in general has both a symmetric and asymmetric relic abundance. Dissipative dynamics allow this subdominant dark matter component to cool, resulting in its partial or total collapse into a smaller volume inside the halo (e.g., a dark disk) as well as a reduced thermal velocity dispersion compared to that of normal cold dark matter.
We first show that these features considerably relax the limits from direct detection experiments on the couplings between standard model (SM) particles and dissipative dark matter. On the other hand, indirect detection of the annihilation of the symmetric dissipative dark matter component inside the Sun sets stringent and robust constraints on the properties of the dissipative dark matter. In particular, IceCube observations force dissipative dark matter particles with mass above 50 GeV to either have a small coupling to the SM or a low local density in the solar system, or to have a nearly asymmetric relic abundance. Possible helioseismology signals associated with purely asymmetric dissipative dark matter are discussed, with no present constraints.

16. Indirect detections and analyses of GRBs by ionospheric response
SciTech Connect Slosiar, R.; Hudec, R. 2009-05-25 We report on the independent and indirect detection of GRBs by their ionospheric response (SID, Sudden Ionospheric Disturbance) observed at VLF (Very Low Frequency), and discuss its possible impact on GRB science and investigations in general. Although a few such detections have already been reported in the past, the capability of such alternative and indirect investigations of GRBs still remains to be investigated in more detail. We present and discuss examples of such VLF/SID detections of GRB 060124A, GRB 080319D and GRB 080320A. A network of SID monitors has been created and is operated to detect more GRBs. The importance of these detections for GRB analyses and for GRB science in general remains to be exploited in full detail. Some possible outcomes in this direction will be outlined and discussed.

17. The focal account: Indirect lie detection need not access unconscious, implicit knowledge.
PubMed Street, Chris N H; Richardson, Daniel C 2015-12-01 People are poor lie detectors, but accuracy can be improved by making the judgment indirectly. In a typical demonstration, participants are not told that the experiment is about deception at all. Instead, they judge whether the speaker appears, say, tense or not. Surprisingly, these indirect judgments better reflect the speaker's veracity. A common explanation is that participants have an implicit awareness of deceptive behavior, even when they cannot explicitly identify it. We propose an alternative explanation. Attending to a range of behaviors, as explicit raters do, can lead to conflict: A speaker may be thinking hard (indicating deception) but not tense (indicating honesty). In 2 experiments, we show that the judgment (and in turn the correct classification rate) is the result of attending to a single behavior, as indirect raters are instructed to do. Indirect lie detection does not access implicit knowledge, but simply focuses the perceiver on more useful cues. PMID:26301728

18. Indirect detection analysis: wino dark matter case study
SciTech Connect Hryczuk, Andrzej; Cholis, Ilias; Iengo, Roberto; Ullio, Piero; Tavakoli, Maryam 2014-07-01 We perform a multichannel analysis of the indirect signals for the Wino Dark Matter, including one-loop electroweak and Sommerfeld enhancement corrections. We derive limits from cosmic ray antiprotons and positrons, from continuum galactic and extragalactic diffuse γ-ray spectra, from the absence of γ-ray line features at the galactic center above 500 GeV in energy, from γ-rays toward nearby dwarf spheroidal galaxies and galaxy clusters, and from CMB power-spectra.
Additionally, we show the future prospects for neutrino observations toward the inner Galaxy and from antideuteron searches. For each of these indirect detection probes we include and discuss the relevance of the most important astrophysical uncertainties that can impact the strength of the derived limits. We find that the Wino as a dark matter candidate is excluded in the mass range below ≃800 GeV from antiprotons and between 1.8 and 3.5 TeV from the absence of a γ-ray line feature toward the galactic center. Limits from other indirect detection probes confirm the main bulk of the excluded mass ranges.

19. The Indirect Detection of GRBs by Ionospheric Response - Detection of GRB060124A
SciTech Connect Slosiar, Rudolf; Hudec, Rene 2008-05-22 We report on the independent and indirect detection of GRBs by their ionospheric response (SID, Sudden Ionospheric Disturbance) observed at VLF (Very Low Frequency). Although a few such detections have already been reported in the past, the capability of such alternative and indirect investigations of GRBs still remains to be investigated in more detail. We present and discuss an example of such VLF/SID detection of the GRB 060124A.

20. Indirect detection of radiation sources through direct detection of radiolysis products
DOEpatents Farmer, Joseph C.; Fischer, Larry E.; Felter, Thomas E. 2010-04-20

1. Indirect fluorometric detection techniques on thin layer chromatography and effect of ultrasound on gel electrophoresis
SciTech Connect Yinfa, Ma. 1990-12-10 Thin-layer chromatography (TLC) is a broadly applicable separation technique. It offers many advantages over high performance liquid chromatography (HPLC), such as being easily adapted for two-dimensional separation, for "whole-column" detection, and for handling multiple samples. However, because its detection techniques have developed slowly compared with those of HPLC, TLC has not received the attention it deserves. Exploring new detection techniques is therefore very important to the development of TLC. The principal aim of this dissertation is to present a new detection method for TLC: indirect fluorometric detection. This detection technique is universal, sensitive, nondestructive, and simple. It is described in detail in Sections 1 through 5. Sections 1 and 3 describe the indirect fluorometric detection of anions and nonelectrolytes in TLC. In Section 2, a detection method for cations based on fluorescence quenching of ethidium bromide is presented. In Section 4, a simple and interesting TLC experiment is designed, and three different fluorescence detection principles are used for the determination of caffeine, saccharin and sodium benzoate in beverages. A laser-based indirect fluorometric detection technique in TLC is developed in Section 5. Section 6 is totally different from Sections 1 through 5: an ultrasonic effect on the separation of DNA fragments in agarose gel electrophoresis is investigated. 262 refs.

2. Direct and Indirect Dark Matter Detection in Gauge Theories
SciTech Connect Queiroz, Farinaldo 2013-01-01 The Dark Matter (DM) problem constitutes a key question at the interface among Particle Physics, Astrophysics and Cosmology. The observational data which have been accumulated in the last years point to the existence of a non-baryonic component of DM. Since the Standard Model (SM) does not provide any candidate for such non-baryonic DM, the evidence for DM is a major indication of new physics beyond the SM.
We will study in this work one of the most popular DM candidates, the so-called WIMPs (Weakly Interacting Massive Particles), from a direct and indirect detection perspective. In order to approach the direct and indirect detection of DM in the context of Particle Physics in a more pedagogic way, we will begin our discussion talking about a minimal extension of the SM. Later we will work on the subject in a 3-3-1 model. Next, we will study the role of WIMPs in Big Bang Nucleosynthesis. Lastly, we will look for indirect DM signals in the center of our galaxy using NASA's Fermi-LAT satellite. Through a comprehensive analysis of the events observed by Fermi-LAT and some background models, we will constrain the dark matter annihilation cross section for several annihilation channels and dark matter halo profiles.

3. Indirect target detection method in FLIR image sequences
NASA Astrophysics Data System (ADS) Zhu, Hu; Zhang, Tianxu; Deng, Lizhen 2013-09-01 Due to the complexity of the scene, target detection in forward-looking infrared (FLIR) imagery is a challenging problem, especially for occluded targets. The main contribution of this paper is to propose an indirect detection method for improving the recognition probability and effectiveness of target detection in FLIR image sequences under complex conditions. The proposed method mainly includes four steps: preparation of a forward-looking reference image of a landmark, extraction of the real-time scene image, template matching, and target location, in which some key technologies are proposed, such as perspective transformation used to solve projective problems, position prediction for improving real-time performance, and target location used for identifying the target's position. Experimental results are shown to demonstrate the robustness and efficiency of the proposed method in FLIR image sequences.

4. Indirect detection of dark matter with γ rays.
PubMed Funk, Stefan 2015-10-01 The details of what constitutes the majority of the mass that makes up dark matter in the Universe remain one of the prime puzzles of cosmology and particle physics today, 80 y after the first observational indications. Today, it is widely accepted that dark matter exists and that it is very likely composed of elementary particles, which are weakly interacting and massive [weakly interacting massive particles (WIMPs)]. As important as dark matter is in our understanding of cosmology, the detection of these particles has thus far been elusive. Their primary properties such as mass and interaction cross sections are still unknown. Indirect detection searches for the products of WIMP annihilation or decay. This is generally done through observations of γ-ray photons or cosmic rays. Instruments such as the Fermi large-area telescope, high-energy stereoscopic system, major atmospheric gamma-ray imaging Cherenkov, and very energetic radiation imaging telescope array, combined with the future Cherenkov telescope array, will provide important complementarity to other search techniques. Given the expected sensitivities of all search techniques, we are at a stage where the WIMP scenario is facing stringent tests, and it can be expected that WIMPs will either be detected or the scenario will be so severely constrained that it will have to be rethought. In this sense, we are on the threshold of discovery. In this article, I will give a general overview of the current status and future expectations for indirect searches of dark matter (WIMP) particles. PMID:24821791
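Several of the dark-matter records above and below rest on the same back-of-envelope estimate: for a self-conjugate WIMP, the integrated γ-ray flux from annihilation toward a target scales as Φ = ⟨σv⟩ N_γ J / (8π m_χ²). The Python sketch below is purely illustrative; the photon yield and J-factor are assumed round numbers, not values taken from any record here:

```python
import math

# Order-of-magnitude WIMP annihilation flux toward a dwarf spheroidal.
sigma_v = 3e-26   # cm^3 s^-1, canonical thermal-relic cross-section
m_chi   = 100.0   # GeV, assumed WIMP mass
n_gamma = 20.0    # photons per annihilation above threshold (assumed yield)
J       = 1e19    # GeV^2 cm^-5, typical dwarf-spheroidal J-factor (assumed)

phi = sigma_v * n_gamma * J / (8 * math.pi * m_chi**2)
print(f"integrated flux ~ {phi:.1e} photons cm^-2 s^-1")  # ~2.4e-11
```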
6. Direct/indirect detection signatures of nonthermally produced dark matter
SciTech Connect Nagai, Minoru; Nakayama, Kazunori 2008-09-15 We study direct and indirect detection possibilities of neutralino dark matter produced nonthermally by, e.g., the decay of long-lived particles, as is easily implemented in the case of anomaly or mirage-mediation models. In this scenario, large self-annihilation cross sections are required to account for the present dark matter abundance, and it leads to significant enhancement of the gamma-ray signature from the galactic center and the positron flux from the dark matter annihilation. It is found that GLAST and PAMELA will find the signal or give tight constraints on such nonthermal production scenarios of neutralino dark matter.

7. Beryllium ignition target design for indirect drive NIF experiments
NASA Astrophysics Data System (ADS) Simakov, A. N.; Wilson, D. C.; Yi, S. A.; Kline, J. L.; Salmonson, J. D.; Clark, D. S.; Milovich, J. L.; Marinak, M. M. 2016-03-01 Beryllium (Be) ablator offers multiple advantages over carbon based ablators for indirectly driven NIF ICF ignition targets. These are higher mass ablation rate, ablation pressure and ablation velocity, lower capsule albedo, and higher thermal conductivity at cryogenic temperatures. Such advantages can be used to improve the target robustness and performance. While previous NIF Be target designs exist, they were obtained a long time ago and do not incorporate the latest improved physical understanding and models based upon NIF experiments. Herein, we propose a new NIF Be ignition target design at 1.45 MJ, 430 TW that takes all this knowledge into account.

8. Indirect detection of dryout in simulated LMFBR fuel assemblies
SciTech Connect Levin, A.E.
1981-01-01 The method of indirect dryout detection was developed by an examination of the data from THORS Bundle 6A. This was a 19-pin bundle of FFTF configuration. The pin size, wire-wrap size and axial pitch were identical to those in Bundle 9; the major difference between the Bundle 6A and Bundle 9 FPSs was the length of the upper unheated zone, which simulated, in Bundle 6A, the reflector and fission gas plenum in FFTF (1.19 m) and, in Bundle 9, the upper axial blanket and fission gas plenum in CRBR (1.54 m). In addition, Bundle 6A had half-size (0.71 mm) edge channel wire-wraps and a low thermal inertia (0.51 mm thick) duct wall surrounded by calcium silicate insulation in an attempt to flatten the bundle temperature profile.

9. High convergence, indirect drive inertial confinement fusion experiments at Nova
SciTech Connect Lerche, R.A.; Cable, M.D.; Hatchett, S.P. 1995-06-02 High convergence, indirect drive implosion experiments have been done at the Nova Laser Facility. The targets were deuterium- and deuterium/tritium-filled glass microballoons driven symmetrically by x rays produced in a surrounding uranium hohlraum. Implosions achieved convergence ratios of 24:1 with fuel densities of 19 g/cm³; this is equivalent to the range required for the hot spot of ignition scale capsules. The implosions used a shaped drive and were well characterized by a variety of laser and target measurements. The primary measurement was the fuel density using the secondary neutron technique (neutrons from the reaction ²H(³H,n)⁴He in initially pure deuterium fuel). Laser measurements include power, energy and pointing. Simultaneous measurements of neutron yield, fusion reaction rate, and x-ray images provide additional information about the implosion process. Computer models are in good agreement with measured results.

10. Indirect electrochemical detection for total bile acids in human serum.
PubMed Zhang, Xiaoqing; Zhu, Mingsong; Xu, Biao; Cui, Yue; Tian, Gang; Shi, Zhenghu; Ding, Min 2016-11-15 The bile acid level in serum is a useful index for screening and diagnosis of hepatobiliary diseases. As the bile acid concentration is closely related to the severity of hepatobiliary disease, detecting it is vital for understanding the stage of the disease. The prevalent determination for bile acids is the enzymatic cycling method, which has low sensitivity and is reagent-consuming. It is desirable to develop a new method with lower cost and higher sensitivity. An indirect electrochemical detection (IED) method for bile acids in human serum was established using a screen-printed carbon electrode (SPCE). Since bile acids do not show electrochemical signals, they were converted to 3-ketosteroids by 3-α-hydroxysteroid dehydrogenase (3α-HSD) in the presence of nicotinamide adenine dinucleotide (NAD(+)), which was reduced to NADH. NADH could then be oxidized on the surface of the SPCE, generating a signal that was used to calculate the total bile acids (TBA) concentration. A good linear calibration for TBA was obtained over the concentration range from 5.00 μM to 400 μM in human serum. Both the precisions and recoveries were sufficient for use in a clinical setting. The TBA concentrations in 35 human serum samples obtained by our IED method did not differ significantly from those obtained by the enzymatic cycling method, using the paired t-test. Moreover, our IED method is reagent-saving, sensitive and cost-effective. PMID:27236139
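The bile-acid record above compares two assays with a paired t-test. For readers unfamiliar with that comparison, here is a minimal sketch of such a method agreement check; the six paired measurements are invented placeholders, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical paired TBA measurements (umol/L) on the same six sera.
ied     = np.array([12.4, 35.0, 8.1, 150.2, 62.3, 27.8])  # proposed IED method
cycling = np.array([12.9, 34.1, 8.5, 148.7, 63.0, 28.4])  # enzymatic cycling

t, p = stats.ttest_rel(ied, cycling)
print(f"paired t = {t:.3f}, p = {p:.3f}")
# A large p-value is consistent with no systematic difference between methods.
```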
11. Male and Female University Students' Experiences of Indirect Aggression
ERIC Educational Resources Information Center Leenaars, Lindsey; Rinaldi, Christina M. 2010-01-01 This study examines the role of sex, gender role orientation, social representations of indirect aggression, and indicators of psychosocial adjustment in indirect aggression and victimization in an emerging adult sample. A total of 42 participants (19 men, 23 women) were recruited and required to complete the questionnaires, along with 18 participants…

12. A micro-capture ELISA for detecting Mycoplasma pneumoniae IgM: comparison with indirect immunofluorescence and indirect ELISA.
PubMed Central Wreghitt, T. G.; Sillis, M. 1985-01-01 A μ-capture ELISA was developed for detecting Mycoplasma pneumoniae-specific IgM, and compared with an indirect immunofluorescent antibody (IFA) technique and an indirect ELISA. The μ-capture ELISA and IFA compared well and were found to be the most sensitive assays. The IFA test can be completed in 2 h, whilst the results of the μ-capture ELISA can be available in 24 h. Both tests are amenable to routine diagnostic use and have similar sensitivity. The indirect ELISA was found to be less sensitive and less specific, giving high assay values with several sera having undetectable M. pneumoniae CF antibody or CF antibody in low titre. Serum samples obtained from 11 patients at various times after M. pneumoniae infection showed maximum antibody levels within the first month by all assays, with a gradual fall in the amount of IgM with time when assayed by μ-capture ELISA, a more gradual decline by IFA and hardly any decline with the indirect ELISA. It was concluded that the indirect ELISA is unsuitable for the investigation of possible M. pneumoniae infection because the sustained high assay values with serum samples taken many months after infection make interpretation of the test results very difficult. PMID:3921607

13. Impact of dark matter microhalos on signatures for direct and indirect detection
SciTech Connect Schneider, Aurel; Moore, Ben; Krauss, Lawrence 2010-09-15 Detecting dark matter as it streams through detectors on Earth relies on knowledge of its phase space density on a scale comparable to the size of our Solar System. Numerical simulations predict that our galactic halo contains an enormous hierarchy of substructures, streams and caustics, the remnants of the merging hierarchy that began with tiny Earth-mass microhalos. If these bound or coherent structures persist until the present time, they could dramatically alter signatures for the detection of weakly interacting elementary particle dark matter. Using numerical simulations that follow the coarse-grained tidal disruption within the Galactic potential and fine-grained heating from stellar encounters, we find that microhalos, streams, and caustics have a negligible likelihood of impacting direct detection signatures, implying that dark matter constraints derived using simple smooth halo models are relatively robust. We also find that many dense central cusps survive, yielding a small enhancement in the signal for indirect detection experiments.

14. Means and method for capillary zone electrophoresis with laser-induced indirect fluorescence detection
DOEpatents Yeung, Edwards; Kuhr, Werner G. 1991-04-09 A means and method for capillary zone electrophoresis with laser-induced indirect fluorescence detection. A detector is positioned on the capillary tube of a capillary zone electrophoresis system.
The detector includes a laser which generates a laser beam which is imposed upon a small portion of the capillary tube. Fluorescence of the elutant electromigrating through the capillary tube is indirectly detected and recorded.

15. Means and method for capillary zone electrophoresis with laser-induced indirect fluorescence detection
DOEpatents Yeung, Edward S.; Kuhr, Werner G. 1996-02-20 A means and method for capillary zone electrophoresis with laser-induced indirect fluorescence detection. A detector is positioned on the capillary tube of a capillary zone electrophoresis system. The detector includes a laser which generates a laser beam which is imposed upon a small portion of the capillary tube. Fluorescence of the elutant electromigrating through the capillary tube is indirectly detected and recorded.

16. Indirect chiral separation of tryptophan enantiomers by high performance liquid chromatography with indirect chemiluminescence detection.
PubMed Zhou, Jie; Chen, Shanshan; Sun, Fang; Luo, Pei; Du, Qiuzheng; Zhao, Suzhen 2015-12-01 In recent years, the study of chiral compounds in vivo has received much attention. In this study, a novel method based on high performance liquid chromatography (HPLC) coupled with chemiluminescence (CL) detection was developed for the separation of tryptophan (Trp) enantiomers. o-Phthalaldehyde and N-acetyl-L-cysteine were used as chiral derivatization reagents for Trp before detection by the HPLC-CL method. The separation was carried out on an ODS column using a mobile phase composed of methanol-0.01 mol/L phosphate buffer (40/60, v/v). Under the optimum conditions, satisfactory results were obtained, including complete separation, good relative standard deviations and low detection limits. The applicability of the proposed method has been validated by determining Trp in biological samples. Linear responses (r > 0.9990) were observed over the range of 2.5×10⁻⁷ to 1.2×10⁻⁵ g/mL of Trp enantiomers, with a quantitation limit of 2.5×10⁻⁷ g/mL. The assay shows good specificity for Trp enantiomers, and thus it has great potential application in clinical diagnosis. The mean extraction efficiencies of Trp enantiomers in mouse plasma samples were 98.48% and 97.40%, respectively. The mean relative standard deviation (RSD) of the Trp enantiomers was <3%. PMID:26523665

17. Characterization of cationic copolymers by capillary electrophoresis using indirect UV detection and contactless conductivity detection.
PubMed Anik, Nadia; Airiau, Marc; Labeau, Marie-Pierre; Vuong, Chi-Thanh; Cottet, Hervé 2012-01-01 For many industrial applications, the combination of two different monomers in statistical or diblock copolymers enhances the properties of the corresponding polymer. However, during the polymerization reaction, homopolymers might be formed and can influence the properties for the applications. Consequently, the separation and the quantification of the homopolymers contained in copolymer samples are crucial. In addition, the charge density distribution of the statistical copolymer is an important characteristic for the applications. The purpose of this work was to study the characterization of a statistical copolymer of acrylic acid (AA) and diallyldimethyl ammonium chloride (DADMAC) by capillary electrophoresis (CE) in acidic conditions (cationic copolymers). For that purpose, a free solution electrophoretic separation was carried out according to the charge rate (chemical composition) independently of the molar mass.
The second objective was to compare contactless conductivity detection and indirect UV absorbance modes for the quantification of DADMAC homopolymers present in copolymer samples. Different coated capillaries based on neutral or positively charged modification were also compared. The comparison of indirect UV absorbance and contactless conductimetric detection demonstrated that both detection modes can be used for a complete CE characterization of non-UV-absorbing PAA-DADMAC copolymers. PMID:22169192

18. A rapid method to improve protein detection by indirect ELISA
Technology Transfer Automated Retrieval System (TEKTRAN) The enzyme-linked immunosorbent assay (ELISA) is a rapid, high-throughput, quantitative immunoassay for the selective detection of target antigens. The general principle behind an ELISA is antibody-mediated capture and detection of an antigen with a measurable substrate. Numerous incarnations of th...

19. Flow biosensing and sampling in indirect electrochemical detection
PubMed Central Lamberti, Francesco; Luni, Camilla; Zambon, Alessandro; Andrea Serra, Pier; Giomo, Monica; Elvassore, Nicola 2012-01-01 Miniaturization in biological analyses has several advantages, such as sample volume reduction and fast response time. The integration of miniaturized biosensors within lab-on-a-chip setups under flow conditions is highly desirable, not only because it simplifies process handling but also because measurements become more robust and operator-independent. In this work, we study the integration of flow amperometric biosensors within a microfluidic platform when analyte concentration is indirectly measured. As a case study, we used a platinum miniaturized glucose biosensor, where glucose is enzymatically converted to H2O2 that is oxidized at the electrode. The experimental results produced are strongly coupled to a theoretical analysis of fluid dynamic conditions affecting the electrochemical response of the sensor. We verified that the choice of the inlet flow rate is a critical parameter in flow biosensors, because it affects both glucose and H2O2 transport, to and from the electrode. We identify optimal flow rate conditions for accurate sensing at high time resolution. A dimensionless theoretical analysis allows the extension of the results to other sensing systems according to fluid dynamic similarity principles. Furthermore, we developed a microfluidic design that connects a sampling unit to the biosensor, in order to decouple the sampling flow rate from that of the actual measurement. PMID:22655022

20. Indirect Charged Particle Detection: Concepts and a Classroom Demonstration
NASA Astrophysics Data System (ADS) Childs, Nicholas B.; Horányi, Mihály; Collette, Andrew 2013-11-01 We describe the principles of macroscopic charged particle detection in the laboratory and their connections to concepts taught in the physics classroom. Electrostatic dust accelerator systems, capable of launching charged dust grains at hypervelocities (1-100 km/s), are a critical tool for space exploration. Dust grains in space typically have large speeds relative to the probes or satellites that encounter them. Development and testing of instruments that look for dust in space therefore depends critically on the availability of fast, well-characterized dust grains in the laboratory. One challenge for the experimentalist is to precisely measure the speed and mass of laboratory dust particles without disturbing them.
Detection systems currently in use exploit the well-known effect of image charge to register the passage of dust grains without changing their speed or mass. We describe the principles of image charge detection and provide a simple classroom demonstration of the technique using soup cans and pith balls.

1. Indirect detection of light neutralino dark matter in the next-to-minimal supersymmetric standard model
SciTech Connect Ferrer, Francesc; Krauss, Lawrence M.; Profumo, Stefano 2006-12-01 We explore the prospects for indirect detection of neutralino dark matter in supersymmetric models with an extended Higgs sector (next-to-minimal supersymmetric standard model, or NMSSM). We compute, for the first time, one-loop amplitudes for NMSSM neutralino pair annihilation into two photons and two gluons, and point out that extra diagrams (with respect to the minimal supersymmetric standard model, or MSSM), featuring a potentially light CP-odd Higgs boson exchange, can strongly enhance these radiative modes. Expected signals in neutrino telescopes due to the annihilation of relic neutralinos in the Sun and in the Earth are evaluated, as well as the prospects of detection of a neutralino annihilation signal in space-based gamma-ray, antiproton and positron search experiments, and at low-energy antideuteron searches. We find that in the low mass regime the signals from capture in the Earth are enhanced compared to the MSSM, and that NMSSM neutralinos have a remote possibility of affecting solar dynamics. Also, antimatter experiments are an excellent probe of galactic NMSSM dark matter. We also find enhanced two-photon decay modes that make the possibility of the detection of a monochromatic gamma-ray line within the NMSSM more promising than in the MSSM, although likely below the sensitivity of next generation gamma-ray telescopes.

2. Indirectly detected chemical shift correlation NMR spectroscopy in solids under fast magic angle spinning
SciTech Connect Mao, Kanmi 2011-01-01 The development of fast magic angle spinning (MAS) opened up an opportunity for the indirect detection of insensitive low-γ nuclei (e.g., 13C and 15N) via the sensitive high-γ nuclei (e.g., 1H and 19F) in solid-state NMR, with advanced sensitivity and resolution. In this thesis, new methodology utilizing fast MAS is presented, including through-bond indirectly detected heteronuclear correlation (HETCOR) spectroscopy, which is assisted by multiple RF pulse sequences for 1H-1H homonuclear decoupling. Also presented is a simple new strategy for optimization of 1H-1H homonuclear decoupling. As applications, various classes of materials, such as catalytic nanoscale materials, biomolecules, and organic complexes, are studied by combining indirect detection and other one-dimensional (1D) and two-dimensional (2D) NMR techniques. Indirectly detected through-bond HETCOR spectroscopy utilizing refocused INEPT (INEPTR) mixing was developed under fast MAS (Chapter 2). The time performance of this approach in 1H-detected 2D 1H{13C} spectra was significantly improved, by a factor of almost 10, compared to the traditional 13C-detected experiments, as demonstrated by measuring naturally abundant organic-inorganic mesoporous hybrid materials. The through-bond scheme was demonstrated as a new analytical tool, which provides complementary structural information in solid-state systems in addition to through-space correlation.
To further benefit the sensitivity of the INEPT transfer in rigid solids, the combined rotation and multiple-pulse spectroscopy (CRAMPS) was implemented for homonuclear 1H decoupling under fast MAS (Chapter 3). Several decoupling schemes (PMLG5m$\bar{x}$, PMLG5mm$\bar{x}$x and SAM3) were analyzed to maximize the performance of through-bond transfer based…

3. Indirect Charged Particle Detection: Concepts and a Classroom Demonstration
ERIC Educational Resources Information Center Childs, Nicholas B.; Horányi, Mihály; Collette, Andrew 2013-01-01 We describe the principles of macroscopic charged particle detection in the laboratory and their connections to concepts taught in the physics classroom. Electrostatic dust accelerator systems, capable of launching charged dust grains at hypervelocities (1-100 km/s), are a critical tool for space exploration. Dust grains in space typically have…

4. Indirect detection imprint of a CP violating dark sector
NASA Astrophysics Data System (ADS) Chao, Wei; Ramsey-Musolf, Michael J.; Yu, Jiang-Hao 2016-05-01 We introduce a simple scenario involving fermionic dark matter (χ) and singlet scalar mediators that may account for the Galactic center GeV γ-ray excess while satisfying present direct detection constraints. CP violation in the scalar potential leads to a mixing between the Standard Model Higgs boson and the scalar singlet, resulting in three scalars, h1, h2, h3, of indefinite CP-transformation properties. This mixing enables s-wave χχ̄ annihilation into di-scalar states, followed by decays into four-fermion final states. The observed γ-ray spectrum can be fitted while respecting present direct detection bounds and Higgs boson properties for mχ = 60-80 GeV and mh3 ~ mχ. Searches for the Higgs exotic decay channel h1 → h3h3 at the 14 TeV LHC should be able to further probe the parameter region favored by the γ-ray excess.

5. Indirect Detection of Forming Protoplanets via Chemical Asymmetries in Disks
NASA Astrophysics Data System (ADS) Cleeves, L. Ilsedore; Bergin, Edwin A.; Harries, Tim J. 2015-07-01 We examine changes in the molecular abundances resulting from increased heating due to a self-luminous planetary companion embedded within a narrow circumstellar disk gap. Using 3D models that include stellar and planetary irradiation, we find that luminous young planets locally heat up the parent circumstellar disk by many tens of Kelvin, resulting in efficient thermal desorption of molecular species that are otherwise locally frozen out. Furthermore, the heating is deposited over large regions of the disk, ±5 AU radially and spanning ≲60° azimuthally. From the 3D chemical models, we compute rotational line emission models and full Atacama Large Millimeter Array simulations, and find that the chemical signatures of the young planet are detectable as chemical asymmetries in ~10 h observations. HCN and its isotopologues are particularly clear tracers of planetary heating for the models considered here, and emission from multiple transitions of the same species is detectable, which encodes temperature information in addition to possible velocity information from the spectra itself. We find submillimeter molecular emission will be a useful tool to study gas giant planet formation in situ, especially beyond R ≳ 10 AU.
6. The WIMP Forest: Indirect Detection of a Chiral Square
SciTech Connect Bertone, Gianfranco; Jackson, C.B.; Shaughnessy, Gabe; Tait, Tim M.P.; Vallinotto, Alberto 2009-04-01 The spectrum of photons arising from WIMP annihilation carries a detailed imprint of the structure of the dark sector. In particular, loop-level annihilations into a photon and another boson can in principle lead to a series of lines (a WIMP forest) at energies up to the WIMP mass. A specific model which illustrates this feature nicely is a theory of two universal extra dimensions compactified on a chiral square. Aside from the continuum emission, which is a generic prediction of most dark matter candidates, we find a 'forest' of prominent annihilation lines that, after convolution with the angular resolution of current experiments, leads to a distinctive (2-bump plus continuum) spectrum, which may be visible in the near future with the Fermi Gamma-Ray Space Telescope (formerly known as GLAST).

7. Indirect detection of subsurface outflow from a rift valley lake
NASA Astrophysics Data System (ADS) Darling, W. G.; Allen, D. J.; Armannsson, H. 1990-02-01 Naivasha, highest of the Kenya (Gregory) Rift Valley lakes, has no surface outlet. However, unlike other Rift lakes it has not become saline despite high potential evaporation rates, which indicates that there must be some subsurface drainage. The fate of this outflow has been the subject of speculation for many years, especially during the general decline in lake water level during the 1980's. Particularly to the south of the lake, there are few opportunities to obtain information from direct groundwater sampling. However, the stable isotopic composition of fumarole steam from late Quaternary volcanic centres in the area has been used to infer groundwater composition. Using a simple mixing model between Rift-flank groundwater and highly-evaporated lakewater, this has enabled subsurface water flow to be contoured by its lakewater content. By this method, outflow can still be detected some 30 km to the south of the lake. Stable isotope data also confirm that much of the steam used by the local Olkaria geothermal power station is derived from lakewater, though simple balance considerations show that steam use cannot alone be responsible for the fall in lake level observed during the 1980's.

8. The impact of the phase-space density on the indirect detection of dark matter
SciTech Connect Ferrer, Francesc; Hunter, Daniel R. 2013-09-01 We study the indirect detection of dark matter when the local dark matter velocity distribution depends upon position, as expected for the Milky Way and its dwarf spheroidal satellites, and the annihilation cross-section is not purely s-wave. Using a phase-space distribution consistent with the dark matter density profile, we present estimates of cosmic and gamma-ray fluxes from dark matter annihilations. The expectations for the indirect detection of dark matter can differ significantly from the usual calculation that assumes that the velocity of the dark matter particles follows a Maxwell-Boltzmann distribution.

9. Liquid chromatographic analysis of glucosamine in commercial dietary supplements using indirect fluorescence detection.
PubMed Shen, Xiaoxuan; Yang, Min; Tomellini, Sterling A 2007-02-01 A method of using indirect fluorescence detection is evaluated for the analysis of glucosamine in commercial dietary supplements following chromatographic separation.
In this method, the eluting analyte, glucosamine, was detected by monitoring an increase in the fluorescence signal for L-tryptophan (L-Trp) or DL-5-methoxytryptophan (5-MTP) after glucosamine complexed with a copper(II) ion and released either L-Trp or 5-MTP from a copper(II) complex, which is introduced postcolumn. The fluorescence of L-Trp and 5-MTP is quenched when complexed with the copper(II) ion. The results obtained using indirect fluorescence detection are compared with the results obtained for precolumn derivatization with phenylisothiocyanate. Statistical analysis is performed to compare the results obtained for the two postcolumn interaction components, Cu(L-Trp)2 and Cu(5-MTP)2, as well as the results obtained using the indirect fluorescence detection method and a precolumn derivatization method. The indirect fluorescence detection method provides an alternative to precolumn derivatization for determining the concentration of glucosamine in commercial dietary supplements. The stability of the glucosamine-o-phthalaldehyde-3-mercaptopropionic acid derivative is also evaluated to investigate the applicability of the popular precolumn derivatization reagent, o-phthalaldehyde-3-mercaptopropionic acid, for this analysis. PMID:17425135

10. Light neutralino dark matter: direct/indirect detection and collider searches
NASA Astrophysics Data System (ADS) Han, Tao; Liu, Zhen; Su, Shufang 2014-08-01 We study the neutralino being the Lightest Supersymmetric Particle (LSP) as a cold Dark Matter (DM) candidate with a mass less than 40 GeV in the framework of the Next-to-Minimal Supersymmetric Standard Model (NMSSM). We find that with the current collider constraints from LEP, the Tevatron and the LHC, there are three types of light DM solutions consistent with the direct/indirect searches as well as the relic abundance considerations: (i) A1/H1 funnels, (ii) stau coannihilation and (iii) sbottom coannihilation. Type (i) may take place in any theory with a light scalar (or pseudo-scalar) near the LSP pair threshold, while Types (ii) and (iii) could occur in the framework of the Minimal Supersymmetric Standard Model (MSSM) as well. We present a comprehensive study of the properties of these solutions and point out their immediate relevance to the experiments of underground direct detection, such as SuperCDMS and LUX/LZ, and astrophysical indirect searches, such as Fermi-LAT. We also find that the decays of the SM-like Higgs boson may be modified appreciably and the new decay channels to the light SUSY particles may be sizable. The new light CP-even and CP-odd Higgs bosons will decay to a pair of LSPs as well as other observable final states, leading to interesting new Higgs phenomenology at colliders. For the light sfermion searches, the signals would be very challenging to observe at the LHC given the current bounds. However, a high energy and high luminosity lepton collider, such as the ILC, would be able to fully cover these scenarios by searching for events with large missing energy plus charged tracks or displaced vertices.

11. PRECONCENTRATION OF ALIPHATIC AMINES FROM WATER DETERMINED BY CAPILLARY ELECTROPHORESIS WITH INDIRECT UV DETECTION
EPA Science Inventory Preconcentration methodology based on adsorption chromatographies for enriching aliphatic amines (C1 to C4 substituted primary, secondary, and tertiary) and alkanolamines in water was studied by free zone capillary electrophoresis (CZE) with indirect UV detection. The solid-phase ...
Using Indirect Questions to Detect Intimate Partner Violence: The SAFE-T Questionnaire ERIC Educational Resources Information Center Fulfer, Jamie L.; Tyler, Jillian J.; Choi, Natalie J. S.; Young, Jill A.; Verhulst, Steven J.; Kovach, Regina; Dorsey, J. Kevin 2007-01-01 A screening instrument for detecting intimate partner violence (IPV) was developed using indirect questions. The authors identified 5 of 18 items studied that clearly distinguished victims of IPV from a random group of health conference attendees with a sensitivity of 85% and a specificity of 87%. This 5-item instrument (SAFE-T) was then tested on… 13. Powering up with indirect reciprocity in a large-scale field experiment PubMed Central Yoeli, Erez; Hoffman, Moshe; Rand, David G.; Nowak, Martin A. 2013-01-01 A defining aspect of human cooperation is the use of sophisticated indirect reciprocity. We observe others, talk about others, and act accordingly. We help those who help others, and we cooperate expecting that others will cooperate in return. Indirect reciprocity is based on reputation, which spreads by communication. A crucial aspect of indirect reciprocity is observability: reputation effects can support cooperation as long as people's actions can be observed by others. In evolutionary models of indirect reciprocity, natural selection favors cooperation when observability is sufficiently high. Complementing this theoretical work are experiments where observability promotes cooperation among small groups playing games in the laboratory. Until now, however, there has been little evidence of observability's power to promote large-scale cooperation in real-world settings. Here we provide such evidence using a field study involving 2413 subjects. We collaborated with a utility company to study participation in a program designed to prevent blackouts. We show that observability triples participation in this public goods game. The effect is over four times larger than offering a $25 monetary incentive, the company's previous policy. Furthermore, as predicted by indirect reciprocity, we provide evidence that reputational concerns are driving our observability effect. In sum, we show how indirect reciprocity can be harnessed to increase cooperation in a relevant, real-world public goods game. PMID:23754399 14. Dark matter direct-detection experiments NASA Astrophysics Data System (ADS) Marrodán Undagoitia, Teresa; Rauch, Ludwig 2016-01-01 In recent decades, several detector technologies have been developed with the quest to directly detect dark matter interactions and to test one of the most important unsolved questions in modern physics. The sensitivity of these experiments has improved at a tremendous speed due to constant development of the detectors and analysis methods, proving them uniquely suited devices for solving the dark matter puzzle, as all other discovery strategies can only indirectly infer its existence. Despite the overwhelming evidence for dark matter from cosmological indications at small and large scales, clear evidence for a particle explaining these observations remains absent. This review summarises the status of direct dark matter searches, focusing on the detector technologies used to directly detect a dark matter particle producing recoil energies in the keV energy scale. The phenomenological signal expectations, main background sources, statistical treatment of data and calibration strategies are discussed. 15.
Combined Direct and Indirect CT Venography (Combined CTV) in Detecting Lower Extremity Deep Vein Thrombosis PubMed Central Shi, Wan-Yin; Wang, Li-Wei; Wang, Shao-Juan; Yin, Xin-Dao; Gu, Jian-Ping 2016-01-01 This study aimed to evaluate the diagnostic accuracy of combined direct and indirect CT venography (combined CTV) in the detection of lower extremity deep vein thrombosis (LEDVT). The institutional review board approved the study protocol, and patients or qualifying family members provided informed consent. A total of 96 consecutive patients undergoing combined CTV were prospectively enrolled. A combined examination with digital subtraction angiography (DSA) plus duplex ultrasonography (US) was used as the criterion standard. Three observers were blinded to clinical, DSA, and US results, and they independently analyzed all combined CTV datasets. Interobserver agreement was expressed in terms of the Cohen kappa value for categorical variables. Accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of combined CTV in the detection of LEDVT were determined by using patient- and location-based evaluations. Of the 96 patients, DSA plus US revealed LEDVT in 125 segmental veins in 63 patients. Patient-based evaluation with combined CTV yielded an accuracy of 96.9% to 97.9%, a sensitivity of 95.2% to 96.8%, a specificity of 100% to 100%, a PPV of 100% to 100%, and an NPV of 91.7% to 94.3% in the detection of LEDVT. Location-based evaluation yielded similar results. Through combined direct and indirect CTV, patients obtained a combined CT angiogram on the diseased limb and an indirect CT angiogram on the opposite side. The image quality of combined CTV was superior to an indirect venogram. Combined CTV shows promising diagnostic accuracy in the detection of LEDVT with 3-dimensional modeling of the lower limb venous system. PMID:26986113
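The accuracy, sensitivity, specificity, PPV, and NPV figures reported in the CT venography record above all follow from a standard 2x2 confusion matrix. A minimal sketch in Python; the counts below are hypothetical, chosen only because they reproduce the upper ends of the reported ranges (63 of 96 patients LEDVT-positive, with two missed cases), and are not the study's raw data:

    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard screening-test metrics from a 2x2 confusion matrix."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }

    # Hypothetical illustration: 61 true positives, 2 false negatives,
    # 0 false positives, 33 true negatives out of 96 patients.
    print(diagnostic_metrics(tp=61, fp=0, fn=2, tn=33))
    # -> sensitivity 0.968, specificity 1.0, ppv 1.0, npv 0.943, accuracy 0.979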
17. Comparison of various NMR methods for the indirect detection of nitrogen-14 nuclei via protons in solids. PubMed Shen, Ming; Trébosc, Julien; O'Dell, Luke A; Lafon, Olivier; Pourpoint, Frédérique; Hu, Bingwen; Chen, Qun; Amoureux, Jean-Paul 2015-09-01 We present an experimental comparison of several through-space Hetero-nuclear Multiple-Quantum Correlation experiments, which allow the indirect observation of homo-nuclear single- (SQ) or double-quantum (DQ) (14)N coherences via spy (1)H nuclei. These (1)H-{(14)N} D-HMQC sequences differ not only by the order of (14)N coherences evolving during the indirect evolution, t1, but also by the radio-frequency (rf) scheme used to excite and reconvert these coherences under Magic-Angle Spinning (MAS). Here, the SQ coherences are created by the application of center-band frequency-selective pulses, i.e. long and low-power rectangular pulses at the (14)N Larmor frequency, ν0((14)N), whereas the DQ coherences are excited and reconverted using rf irradiation either at ν0((14)N) or at the (14)N overtone frequency, 2ν0((14)N). The overtone excitation is achieved either by constant frequency rectangular pulses or by frequency-swept pulses, specifically Wide-band, Uniform-Rate, and Smooth-Truncation (WURST) pulse shapes. The present article compares the performances of four different (1)H-{(14)N} D-HMQC sequences, including those with (14)N rectangular pulses at ν0((14)N) for the indirect detection of homo-nuclear (i) (14)N SQ or (ii) DQ coherences, as well as their overtone variants using (iii) rectangular or (iv) WURST pulses. The compared properties include: (i) the sensitivity, (ii) the spectral resolution in the (14)N dimension, (iii) the rf requirements (power and pulse length), as well as the robustness to (iv) rf offset and (v) MAS frequency instabilities. Such experimental comparisons are carried out for γ-glycine and L-histidine.HCl monohydrate, which contain (14)N sites subject to moderate quadrupole interactions. We demonstrate that the optimum choice of the (1)H-{(14)N} D-HMQC method depends on the experimental goal. When the sensitivity and/or the robustness to offset are the major concerns, the D-HMQC sequence allowing the indirect detection of (14)N SQ coherences should be employed.
19. An indirect conductimetric screening method for the detection of antibiotic residues in bovine kidneys. PubMed Myllyniemi, Anna-Liisa; Sipilä, Hannu; Nuotio, Lasse; Niemi, Anneli; Honkanen-Buzalski, Tuula 2002-09-01 An indirect conductimetric screening method using three test bacterium-medium combinations was developed for rapid detection of antibiotic residues in bovine carcasses. The detection time (DT), i.e. the point when the growth of the test bacterium was detected, was determined by observing the rate of change in the conductance plotted against time. This detection time averaged half of the reference time recorded by the instrument software. Total change in conductance (TC) was used as a further measure of growth. Threshold values for DT and TC were determined with inhibitor-free kidney samples. The presence of a residue was indicated if the DT exceeded the respective threshold value and was confirmed if the TC remained below the TC threshold value. The limits of detection (LODs) determined with fortified samples were at about or below the MRLs for cephalexin, chlortetracycline, ciprofloxacin, dihydrostreptomycin, enrofloxacin, oxytetracycline and penicillin G. The LODs for penicillin G, oxytetracycline and the sum of enrofloxacin and ciprofloxacin were also estimated with incurred samples; these samples were also analysed using liquid chromatography. The LODs determined with fortified and incurred samples were in close agreement. Given its rapid detection, good sensitivity to a wide range of antibiotics and ease of performance, the indirect conductimetric method developed here would seem to offer an appealing alternative to agar diffusion tests. PMID:12375852 20. Determination of chloride, chlorate and perchlorate by PDMS microchip electrophoresis with indirect amperometric detection. PubMed Li, Xin-Ai; Zhou, Dong-Mei; Xu, Jing-Juan; Chen, Hong-Yuan 2008-03-15 In this work, chloride, chlorate and perchlorate are rapidly separated on a PDMS microchip and detected via an in-channel indirect amperometric detection mode.
With a PDMS/PDMS microchip treated by oxygen plasma, the anions chloride (Cl-), chlorate (ClO3-), and perchlorate (ClO4-) are separated within 35 s. Some parameters including buffer salt concentration, buffer pH, separation voltage and detection potential are investigated in detail. The separation conditions using 15 mM (pH 6.12) 2-(N-morpholino)ethanesulfonic acid (MES) + L-histidine (L-His) as running buffer, -2000 V as separation voltage and 0.7 V as detection potential are optimized. Under these conditions, the detection limits of Cl-, ClO3-, and ClO4- are 1.9, 3.6, and 2.8 microM, respectively. PMID:18371861 1. Detection of typhoid fever by Widal and indirect fluorescent antibody (IFA) tests. A comparative study. PubMed Rai, G P; Zachariah, K; Shrivastava, S 1989-01-01 The Widal test is a conventional method for the detection of typhoid fever. However, it takes 18-24 hours to complete the test. In the present study, the indirect fluorescent antibody test was compared with the Widal test using single serum specimens and was found to be rapid, sensitive and specific. Serum specimens from 41 culture-proven cases of typhoid fever, 14 clinically suspected cases and 22 normal individuals were collected. Whereas the Widal test detected 63.41% positive cases, the IFA test detected 87.80% among culture-proven typhoid cases. Among the clinically suspected cases of typhoid fever, the IFA test detected 85.71% (28.57 + 57.14%) while the Widal test detected only 57.13% (35.71 + 21.42%) positive cases out of the above 14 cases. PMID:2478615 2. Detection of Thiobacillus ferrooxidans in acid mine environments by indirect fluorescent antibody staining. PubMed Apel, W A; Dugan, P R; Filppi, J A; Rheins, M S 1976-07-01 An indirect fluorescent antibody (FA) staining technique was developed for the rapid detection of Thiobacillus ferrooxidans. The specificity of the FA stain for T. ferrooxidans was demonstrated with both laboratory and environmental samples. Coal refuse examined by scanning electron microscopy exhibited a rough, porous surface, which was characteristically covered by water-soluble crystals. Significant numbers of T. ferrooxidans were detected in the refuse pores. A positive correlation between numbers of T. ferrooxidans and acid production in coal refuse in the laboratory was demonstrated with the FA technique. PMID:61736 3. Indirect laser-induced breakdown of transparent thin gel layer for sensitive trace element detection NASA Astrophysics Data System (ADS) Xiu, Junshan; Bai, Xueshi; Negre, Erwan; Motto-Ros, Vincent; Yu, Jin 2013-06-01 Optical emissions from major and trace elements embodied in a transparent gel prepared from cooking oil were detected when the gel was spread as a thin film on a metallic substrate and a plasma was induced on the substrate surface using a nanosecond infrared pulsed laser. Such emissions are due to indirect breakdown of the coating layer. The generated plasma, a mixture of substances from the substrate, the layer, and the ambient gas, was characterized using emission spectroscopy. A temperature higher than 15 000 K determined in the plasma allows one to consider sensitive detection of trace elements in liquids, gels, biological samples, or thin films. 4. Simple J-factors and D-factors for indirect dark matter detection NASA Astrophysics Data System (ADS) Evans, N. W.; Sanders, J. L.; Geringer-Sameth, Alex
2016-05-01 J-factors (or D-factors) describe the distribution of dark matter in an astrophysical system and determine the strength of the signal provided by annihilating (or decaying) dark matter, respectively. We provide simple analytic formulas to calculate the J-factors for spherical cusps obeying the empirical relationship between enclosed mass, velocity dispersion and half-light radius. We extend the calculation to the spherical Navarro-Frenk-White model, and demonstrate that our new formulas give accurate results in comparison to more elaborate Jeans models driven by Markov chain Monte Carlo methods. Of the known ultrafaint dwarf spheroidals, we show that Ursa Major II, Reticulum II, Tucana II and Horologium I have the largest J-factors and so provide the most promising candidates for indirect dark matter detection experiments. Amongst the classical dwarfs, Draco, Sculptor and Ursa Minor have the highest J-factors. We show that the behavior of the J-factor as a function of integration angle can be inferred for general dark halo models with inner slope γ and outer slope β. The central and asymptotic behavior of the J-factor curves are derived as a function of the dark halo properties. Finally, we show that models obeying the empirical relation on enclosed mass and velocity dispersion have J-factors that are most robust at the integration angle equal to the projected half-light radius of the dwarf spheroidal (dSph) divided by heliocentric distance. For most of our results, we give the extension to the D-factor, which is appropriate for the decaying dark matter picture.
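For reference, the J- and D-factors discussed in the record above are conventionally defined as line-of-sight integrals of the dark matter density ρ over a solid angle ΔΩ; a sketch in LaTeX, quoting the standard Navarro-Frenk-White profile with scale density ρ_s and scale radius r_s:

    \[
    J(\Delta\Omega) = \int_{\Delta\Omega}\! d\Omega \int_{\rm l.o.s.} \rho^2\!\big(r(\ell)\big)\, d\ell ,
    \qquad
    D(\Delta\Omega) = \int_{\Delta\Omega}\! d\Omega \int_{\rm l.o.s.} \rho\big(r(\ell)\big)\, d\ell ,
    \qquad
    \rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2} .
    \]

The annihilation flux scales with J and the decay flux with D, which is why the paper treats the two cases in parallel.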
5. Characterization of an indirect X-ray imaging detector by simulation and experiment. PubMed Doshi, C; van Riessen, G; Balaur, E; de Jonge, M D; Peele, A G 2015-01-01 We describe a comprehensive model of a commercial indirect X-ray imaging detector that accurately predicts the detector point spread function and its dependence on X-ray energy. The model was validated by measurements using monochromatic synchrotron radiation and extended to polychromatic X-ray sources. Our approach can be used to predict the performance of an imaging detector and can be used to optimize imaging experiments with broad-band X-ray sources. PMID:25203971 6. Evaluation of an indirect fluorescent-antibody stain for detection of Pneumocystis carinii in respiratory specimens. PubMed Central Ng, V L; Yajko, D M; McPhaul, L W; Gartner, I; Byford, B; Goodman, C D; Nassos, P S; Sanders, C A; Howes, E L; Leoung, G 1990-01-01 Two prospective studies were undertaken to evaluate a commercial indirect fluorescent-antibody (IFA) stain for the detection of Pneumocystis carinii in respiratory specimens from individuals at risk for or with the acquired immunodeficiency syndrome. The first study compared IFA with Diff-Quik (DQ; a rapid Giemsa-like stain) for detecting P. carinii in 95 induced sputa obtained from 77 asymptomatic patients who had survived one previous episode of P. carinii pneumonia and who were being treated prophylactically with aerosolized pentamidine. Only one induced sputum specimen was found to contain P. carinii; organisms were detected by both stains. The second study compared the performance of the IFA stain versus DQ, modified toluidine blue O, and Gomori methenamine silver stains for detecting P. carinii in symptomatic individuals at risk for or with acquired immunodeficiency syndrome. Of 182 specimens examined, P. carinii was detected in 105 by one or more stains; the DQ stain detected 73 (70%), the modified toluidine blue O stain detected 75 (71%), the Gomori methenamine silver stain detected 76 (72%), and the IFA stain detected 95 (90%). The IFA stain was more sensitive (P less than 0.01) than the other traditional stains for detecting P. carinii; however, a subsequent clinical evaluation revealed that a subset of IFA-positive-only specimens were from patients whose clinical symptoms resolved without specific anti-P. carinii therapy. PMID:1693631 7. Experience of the Indirect Neutron Radiography Method Based on the X-ray Imaging Plate at CARR NASA Astrophysics Data System (ADS) Wei, Guohai; Han, Songbai; Wang, Hongli; He, Linfeng; Wang, Yu; Wu, Meimei; Liu, Yuntao; Chen, Dongfeng Indirect neutron radiography (INR) experiments using an X-ray imaging plate were carried out at the China Advanced Research Reactor (CARR). The key experimental parameters were optimized, especially the exposure times of the neutron converter and imaging plate. The optimized total exposure time is 37.25 min, two-fifths of the time required by the film method under the same experimental conditions. Qualitative and quantitative inspections were tested with dummy nuclear fuel rods and a water temperature sensor from a motor vehicle. The spring in the sensor and the defects of the dummy fuel rod's pellets can be qualitatively detected. The thickness of the tape at one position on the cladding of the dummy nuclear fuel rod was quantitatively calculated to be 9.57 layers with a relative error of ±4.3%. 8. Indirect detection of gravitino dark matter including its three-body decays SciTech Connect Choi, Ki-Young; Restrepo, Diego; Yaguna, Carlos E.; Zapata, Oscar 2010-10-01 It was recently pointed out that in supersymmetric scenarios with gravitino dark matter and bilinear R-parity violation, gravitinos with masses below M_W typically decay with a sizable branching ratio into the 3-body final states W*l and Z*ν. In this paper we study the indirect detection signatures of gravitino dark matter including such final states. First, we obtain the gamma ray spectrum from gravitino decays, which features a monochromatic contribution from the decay into γν and a continuum contribution from the three-body decays. After studying its dependence on supersymmetric parameters, we compute the expected gamma ray fluxes and derive new constraints, from recent FERMI data, on the R-parity breaking parameter and on the gravitino lifetime. Indirect detection via antimatter searches, a new possibility brought about by the three-body final states, is also analyzed. For models compatible with the gamma ray observations, the positron signal is found to be negligible whereas the antiproton one can be significant. 9. An indirect immunofluorescent test for detection of rabies virus antibodies in foxes. PubMed Hostnik, P; Grom, J 1997-01-01 The blood-containing fluids in the thoracic cavity or blood from the heart from 177 red foxes (Vulpes vulpes) in Slovenia were evaluated for rabies antibodies by rapid fluorescent focus inhibition test (RFFIT) and an adapted indirect immunofluorescent test (IIF) in 1994. We evaluated the usefulness of anti-dog fluorescein-isothiocyanate (FITC) conjugate instead of anti-fox FITC conjugate in the detection of antibodies against rabies virus in fox sera.
In the RFFIT test, 92 (52%) of the fox samples were positive and 70 (40%) samples were negative for rabies antibodies; 15 (8.5%) samples were not suitable for examination in this test. In the IIF test, 98 (55%) fox samples were positive and 79 (45%) sera were negative. The IIF test was suitable for the rapid detection of antibodies against rabies virus in foxes, as often required for vaccine efficacy trials. PMID:9027703 10. Progress in indirect and direct-drive planar experiments on hydrodynamic instabilities at the ablation front SciTech Connect Casner, A.; Masse, L.; Huser, G.; Galmiche, D.; Liberatore, S.; Riazuelo, G.; Delorme, B.; Martinez, D.; Remington, B.; Smalyuk, V. A.; Igumenshchev, I.; Michel, D. T.; Froula, D.; Seka, W.; Goncharov, V. N.; Olazabal-Loumé, M.; Nicolaï, Ph.; Breil, J.; Tikhonchuk, V. T.; Fujioka, S.; and others 2014-12-15 Understanding and mitigating hydrodynamic instabilities and the fuel mix are the key elements for achieving ignition in Inertial Confinement Fusion. Cryogenic indirect-drive implosions on the National Ignition Facility have provided evidence that the ablative Rayleigh-Taylor Instability (RTI) is a driver of the hot spot mix. This motivates the switch to a more flexible higher adiabat implosion design [O. A. Hurricane et al., Phys. Plasmas 21, 056313 (2014)]. The shell instability is also the main candidate for performance degradation in low-adiabat direct drive cryogenic implosions [Goncharov et al., Phys. Plasmas 21, 056315 (2014)]. This paper reviews recent results acquired in planar experiments performed on the OMEGA laser facility and devoted to the modeling and mitigation of hydrodynamic instabilities at the ablation front. In application to the indirect-drive scheme, we describe results obtained with a specific ablator composition such as the laminated ablator or a graded-dopant emulator. In application to the direct drive scheme, we discuss experiments devoted to the study of laser-imprinted perturbations with special phase plates. The simulations of the Richtmyer-Meshkov phase reversal during the shock transit phase are challenging, and of crucial interest because this phase sets the seed of the RTI growth. Recent works were dedicated to increasing the accuracy of measurements of the phase inversion. We conclude by presenting a novel imprint mitigation mechanism based on the use of underdense foams. The foams induce laser smoothing by parametric instabilities, thus reducing the laser imprint on the CH foil. 11. Indirect detection of ethylene glycol oligomers using a contactless conductivity detector in capillary liquid chromatography. PubMed Takeuchi, Toyohide; Sedyohutomo, Anang; Lim, Lee Wah 2009-07-01 Ethylene glycol oligomers were visualized by indirect conductimetric detection based on dilution of the mobile phase due to the analytes. A high electrical conductivity background was maintained by the addition of 5 mM sodium nitrate in the mobile phase, and the analytes were visualized by decreases in the background when they eluted. A capacitively coupled contactless conductivity detector was convenient for monitoring effluents from the microcolumn with minimum extra-column band broadening. The signals, recorded as negative peaks, were linear in the concentration of the analytes, and a concentration detection limit of 0.025% was achieved for tetraethylene glycol at S/N=3, corresponding to a mass detection limit of 38 ng for a 0.15 microl injection.
The logarithm of the retention factor of ethylene glycol oligomers was linear in the degree of polymerization (DP) as well as in the acetonitrile composition in the mobile phase. These relationships allowed us to estimate the DP of eluted ethylene glycol oligomers by using a few oligomers with known DP. The dynamic reserve, defined as the ratio of the background to its noise level achieved under the present conditions, was 2.3 × 10^5, which was much larger than that achieved by UV absorption detection. The present method was applied to profile ethylene glycol oligomers contained in commercially available PEG reagents. PMID:19609021 12. Indirect-drive ablative Rayleigh-Taylor growth experiments on the Shenguang-II laser facility SciTech Connect Wu, J. F.; Fan, Z. F.; Zheng, W. D.; Wang, M.; Pei, W. B.; Zhu, S. P.; Zhang, W. Y.; Miao, W. Y.; Yuan, Y. T.; Cao, Z. R.; Deng, B.; Jiang, S. E.; Liu, S. Y.; Ding, Y. K.; Wang, L. F.; Ye, W. H.; He, X. T. 2014-04-15 In this research, a series of single-mode, indirect-drive, ablative Rayleigh-Taylor (RT) instability experiments performed on the Shenguang-II laser facility [X. T. He and W. Y. Zhang, Eur. Phys. J. D 44, 227 (2007)] using planar targets is reported. The simulation results from the one-dimensional hydrocode for the planar foil trajectory experiment indicate that the energy flux at the hohlraum wall is obviously less than that at the laser entrance hole. Furthermore, the non-Planckian spectrum of the x-ray source can strikingly affect the dynamics of the foil flight and the perturbation growth. Clear images recorded by an x-ray framing camera for the RT growth initiated by small- and large-amplitude perturbations are obtained. The observed onset of harmonic generation and the transition from the linear to the nonlinear growth regime is well predicted by two-dimensional hydrocode simulations. 13. Electro-Weak Dark Matter: Non-perturbative effect confronting indirect detections NASA Astrophysics Data System (ADS) Chun, Eung Jin; Park, Jong-Chul 2015-11-01 We update indirect constraints on Electro-Weak Dark Matter (EWDM) considering the Sommerfeld-Ramsauer-Townsend (SRT) effect for its annihilations into a pair of standard model gauge bosons assuming that EWDM accounts for the observed dark matter (DM) relic density for a given DM mass and mass gaps among the multiplet components. For the radiative or smaller mass splitting, the hypercharged triplet and higher multiplet EWDMs are ruled out up to the DM mass ≈ 10-20 TeV by the combination of the most recent data from AMS-02 (antiproton), Fermi-LAT (gamma-ray), and HESS (gamma-line). The Majorana triplet (wino-like) EWDM can evade all the indirect constraints only around Ramsauer-Townsend dips which can occur for a tiny mass splitting of order 10 MeV or less. In the case of the doublet (Higgsino-like) EWDM, a wide range of its mass ≳ 500 GeV is allowed except in Sommerfeld peak regions. Such a stringent limit on the triplet DM can be evaded by employing a larger mass gap of the order of 10 GeV, which allows a mass larger than about 1 TeV. However, the future CTA experiment will be able to cover most of the unconstrained parameter space. 14. Use of capillary electrophoresis and indirect detection to quantitate in-capillary enzyme-catalyzed microreactions. PubMed Zhang, Y; el-Maghrabi, M R; Gomez, F A 2000-04-01 The use of capillary electrophoresis and indirect detection to quantify reaction products of in-capillary enzyme-catalyzed microreactions is described.
Migrating in a capillary under conditions of electrophoresis, plugs of enzyme and substrate are injected and allowed to react. Capillary electrophoresis is subsequently used to measure the extent of reaction. This technique is demonstrated using two model systems: the conversion of fructose-1,6-bisphosphate to dihydroxyacetone phosphate and glyceraldehyde-3-phosphate by fructose-bisphosphate aldolase (ALD, EC 4.1.2.13), and the conversion of fructose-1,6-bisphosphate to fructose-6-phosphate by fructose-1,6-bisphosphatase (FBPase, EC 3.1.3.11). These procedures expand the use of the capillary as a microreactor and offer a new approach to analyzing enzyme-mediated reactions. PMID:10892022 15. Indirect detection of NMR via geometry-dependent dipolar fields, revisited. PubMed Dong, Wei; Meriles, C A 2007-06-01 We explore the dipolar interactions between two separate nuclear spin ensembles in a mixture containing oil and water. Here we expand initial results [C. A. Meriles and W. Dong, J. Magn. Reson. 181 (2006) 331] to the case in which both systems have the shape of flat, stacked disks. We find that, in spite of the strong inhomogeneity of the coupling dipolar field, the signal encoded in one of the components can be made approximately proportional to the magnetization in the other. This allows us to use one of these systems as a 'sensor' to indirectly reconstruct the resonance spectrum or to determine the relaxation time of the 'sample' system. In the regime in which dipolar interactions are sufficiently strong, our method can be set to scale up weaker signals in a non-linear fashion, which, potentially, could allow one to introduce contrast or to improve detection sensitivity of less magnetized samples. PMID:17363306 16. Design and synthesis of immunoconjugates and development of an indirect ELISA for rapid detection of 3,5-dinitrosalicylic acid hydrazide. PubMed Shen, Yu-Dong; Zhang, Shi-Wei; Lei, Hong-Tao; Wang, Hong; Xiao, Zhi-Li; Jiang, Yue-Ming; Sun, Yuan-Ming 2008-01-01 In this study novel immunoconjugates were designed, synthesized and then used to develop a rapid, specific and sensitive indirect ELISA method to directly detect residues of 3,5-dinitrosalicylic acid hydrazide (DNSH), a toxic metabolite of nifursol present in chicken tissues. The hapten DNSHA was first designed and used to covalently couple to BSA to form an immunogen, which was used to immunize rabbits to produce a polyclonal antibody against DNSH. Furthermore, a novel 3,5-dinitrosalicylic acid-ovalbumin (DNSA-OVA) immunoconjugate structurally different from DNSHA-OVA was designed and used as a "substructural coating antigen" to improve the sensitivity of an indirect ELISA analysis for a direct DNSH detection. Based on the "substructural coating antigen" concept, an optimized indirect ELISA method was established that exhibited good specificity and high sensitivity for detecting DNSH, with a cross-reactivity of less than 0.1% (excluding the parent compound nifursol), an IC50 of 0.217 nmol/mL and a detection limit of 0.018 nmol/mL. Finally, a simple and efficient analysis of DNSH samples in chicken tissues showed that the average recovery rate of the indirect ELISA analysis was 82.3%, with an average coefficient of variation of 15.9%. Thus, the developed indirect ELISA method exhibited the potential for rapid detection of DNSH residues in tissue. PMID:18830153 17. Indirect monitoring shot-to-shot shock waves strength reproducibility during pump-probe experiments NASA Astrophysics Data System (ADS) Pikuz, T. A.; Faenov, A. Ya.;
Ozaki, N.; Hartley, N. J.; Albertazzi, B.; Matsuoka, T.; Takahashi, K.; Habara, H.; Tange, Y.; Matsuyama, S.; Yamauchi, K.; Ochante, R.; Sueda, K.; Sakata, O.; Sekine, T.; Sato, T.; Umeda, Y.; Inubushi, Y.; Yabuuchi, T.; Togashi, T.; Katayama, T.; Yabashi, M.; Harmand, M.; Morard, G.; Koenig, M.; Zhakhovsky, V.; Inogamov, N.; Safronova, A. S.; Stafford, A.; Skobelev, I. Yu.; Pikuz, S. A.; Okuchi, T.; Seto, Y.; Tanaka, K. A.; Ishikawa, T.; Kodama, R. 2016-07-01 We present an indirect method of estimating the strength of a shock wave, allowing on-line monitoring of its reproducibility in each laser shot. This method is based on a shot-to-shot measurement of the X-ray emission from the ablated plasma by a high resolution, spatially resolved focusing spectrometer. An optical pump laser with an energy of 1.0 J and a pulse duration of ~660 ps was used to irradiate solid targets or foils of various thicknesses containing oxygen, aluminum, iron, and tantalum. The high sensitivity and resolving power of the X-ray spectrometer allowed spectra to be obtained on each laser shot and fluctuations of the spectral intensity emitted by different plasmas to be controlled with an accuracy of ~2%, implying an accuracy in the derived electron plasma temperature of 5%-10% in pump-probe high energy density science experiments. At nano- and sub-nanosecond laser pulse durations with relatively low laser intensities and ratio Z/A ~ 0.5, the electron temperature follows T_e ~ I_las^(2/3). Thus, measurements of the electron plasma temperature allow indirect estimation of the laser flux on the target and control of its shot-to-shot fluctuation. Knowing the laser flux intensity and its fluctuation gives us the possibility of monitoring the shot-to-shot reproducibility of shock wave strength generation with high accuracy.
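The error propagation implicit in the record above follows directly from the quoted power law; a one-line derivation in LaTeX, assuming the scaling holds exactly:

    \[
    T_e \propto I_{\rm las}^{2/3}
    \quad\Longrightarrow\quad
    \frac{\delta T_e}{T_e} = \frac{2}{3}\,\frac{\delta I_{\rm las}}{I_{\rm las}} ,
    \qquad
    \frac{\delta I_{\rm las}}{I_{\rm las}} = \frac{3}{2}\,\frac{\delta T_e}{T_e} ,
    \]

so the quoted 5%-10% temperature accuracy corresponds to roughly a 7.5%-15% accuracy on the inferred laser flux.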
18. Exercise Experiences and Changes in Affective Attitude: Direct and Indirect Effects of In Situ Measurements of Experiences PubMed Central Sudeck, Gorden; Schmid, Julia; Conzelmann, Achim 2016-01-01 Objectives: The purpose of this study was to examine the relationship between exercise experiences (perceptions of competence, perceived exertion, acute affective responses to exercise) and affective attitudes toward exercise. This relationship was analyzed in a non-laboratory setting during a 13-week exercise program. Materials and Methods: 56 women and 49 men (aged 35-65 years; M_age = 50.0 years; SD = 8.2 years) took part in the longitudinal study. Affective responses to exercise (affective valence, positive activation, calmness) as well as perceptions of competence and perceived exertion were measured at the beginning, during, and end of three exercise sessions within the 13-week exercise program. Affective attitudes toward exercise were measured before and at the end of the exercise program. A two-level path analysis was conducted. The direct and indirect effects of exercise experiences on changes in affective attitude were analyzed at the between-person level: firstly, it was tested whether perceptions of competence and perceived exertion directly relate to changes in affective attitude. Secondly, it was assessed whether perceptions of competence and perceived exertion indirectly relate to changes in affective attitudes, imparted via the affective response during exercise. Results and Conclusion: At the between-person level, a direct effect on changes in affective attitude was found for perceptions of competence (β = 0.24, p < 0.05). The model revealed one significant indirect pathway between perceived exertion and changes in affective attitude via positive activation: on average, the less strenuous people perceive physical exercise to be, the more awake they will feel during exercise (β = -0.57, p < 0.05). Those people with higher average levels of positive activation during exercise exhibit more improvements in affective attitudes toward exercise from the beginning to the end of the 13-week exercise program (β = 0.24, p < 0.05). Main study results 19. An indirect measurement of the width of the W boson at the D0 experiment SciTech Connect Telford, Paul; Manchester U. 2006-08-01 This thesis presents an indirect measurement of the width of the W boson using data collected at the D0 experiment, a multipurpose particle detector utilizing the Fermilab Tevatron. The W width was determined from the ratio of W → μν to Z → μ+μ- cross sections to be Γ_W = 2168 ± 22 (stat) ± 62 (syst) +24/-16 (pdf) ± 4 (other) MeV, in good agreement with the Standard Model prediction and other experimental measurements. In addition there is a description of how work done towards this measurement has been used to improve the parameterized detector simulation, a vital tool in obtaining physics results from signals observed in the detector, and in estimating the uncertainty due to the choice of PDF, which is of interest for all measurements made at hadron colliders.
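The indirect extraction described in the thesis abstract above follows the standard ratio method; the abstract does not spell out the inputs, but schematically, in LaTeX (the cross-section ratio is taken from theory and B(Z→μ+μ-) from LEP in such analyses):

    \[
    R \equiv \frac{\sigma_W\, B(W\to\mu\nu)}{\sigma_Z\, B(Z\to\mu^+\mu^-)}
    \;\Longrightarrow\;
    B(W\to\mu\nu) = R\,\frac{\sigma_Z}{\sigma_W}\, B(Z\to\mu^+\mu^-),
    \qquad
    \Gamma_W = \frac{\Gamma(W\to\mu\nu)}{B(W\to\mu\nu)} ,
    \]

with the partial width Γ(W→μν) taken from Standard Model theory.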
20. Indirect versus direct detection methods of Trichinella spp. infection in wild boar (Sus scrofa) PubMed Central 2014-01-01 Background Trichinella spp. infections in wild boar (Sus scrofa), one of the main sources of human trichinellosis, continue to represent a public health problem. The detection of Trichinella spp. larvae in muscles of wild boar by digestion can prevent the occurrence of clinical trichinellosis in humans. However, the analytical sensitivity of digestion in the detection process is dependent on the quantity of tested muscle. Consequently, large quantities of muscle have to be digested to warrant surveillance programs, or more sensitive tests need to be employed. The use of indirect detection methods, such as the ELISA, to detect Trichinella spp. infections in wild boar has limitations due to its low specificity. The aim of the study was to implement serological detection of anti-Trichinella spp. antibodies in meat juices from hunted wild boar for the surveillance of Trichinella spp. infections. Methods Two tests were used: ELISA for the initial screening, and a specific and sensitive Western blot (Wb) as a confirmatory test. The circulation of anti-Trichinella IgG was determined in hunted wild boar muscle juice samples in 9 provinces of 5 Italian regions. Results From 1,462 muscle fluid samples, 315 (21.5%, 95% C.I. 19.51-23.73) tested positive by ELISA. The 315 ELISA-positive muscle fluid samples were further tested by Wb and 32 (10.1%, 95% C.I. 7.29-13.99) of these were positive, giving a final seroprevalence of 2.2% (95% C.I. 1.55-3.07; 32/1,462). Trichinella britovi larvae were detected by artificial digestion in muscle tissues of one (0.07%, 95% C.I. 0.01-0.39) out of the 1,462 hunted wild boars. No Trichinella spp. larvae were detected in Wb-negative wild boar. From 2006 to 2012, a prevalence of 0.017% was detected by muscle digestion in wild boar hunted in the whole Italian territory. Conclusions The combined use of both serological methods had a sensitivity 31.4 times higher than that of the digestion (32/1,462 versus 1/1,462), suggesting their potential
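The 95% confidence interval quoted for the seroprevalence above (1.55-3.07% from 32 positives in 1,462 samples) is reproduced by a Wilson score interval; a minimal check in Python:

    import math

    def wilson_ci(k, n, z=1.959964):
        """Wilson score interval for a binomial proportion k/n (default z: 95%)."""
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    lo, hi = wilson_ci(32, 1462)
    print(f"{100 * lo:.2f}%-{100 * hi:.2f}%")  # -> 1.55%-3.07%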
1. Exchange facilitated indirect detection of hyperpolarized 15ND2-amido-glutamine NASA Astrophysics Data System (ADS) Barb, A. W.; Hekmatyar, S. K.; Glushka, J. N.; Prestegard, J. H. 2011-10-01 Hyperpolarization greatly enhances opportunities to observe in vivo metabolic processes in real time. Accessible timescales are, however, limited by nuclear spin relaxation times, and sensitivity is limited by magnetogyric ratios of observed nuclei. The majority of applications to date have involved direct 13C observation of metabolites with non-protonated carbons at sites of interest (13C-enriched carbonyls, for example), a choice that extends relaxation times and yields moderate sensitivity. Interest in 15N-containing metabolites is equally high but non-protonated sites are rare and direct 15N observation insensitive. Here an approach is demonstrated that extends applications to protonated 15N sites with high sensitivity. The normally short relaxation times are lengthened by initially replacing protons (H) with deuterons (D) and low-sensitivity detection of 15N is avoided by indirect detection through protons reintroduced by H/D exchange. A pulse sequence is presented that periodically samples 15N polarization at newly protonated sites by INEPT transfer to protons while returning 15N magnetization of deuterated sites to the +Z axis to preserve polarization for subsequent samplings. Applications to 15ND2-amido-glutamine are chosen for illustration. Glutamine is an important regulator and a direct donor of nitrogen in cellular metabolism. Potential application to in vivo observation is discussed. 2. Indirect immunofluorescence detection of E. coli O157:H7 with fluorescent silica nanoparticles. PubMed Chen, Ze-Zhong; Cai, Li; Chen, Min-Yan; Lin, Yi; Pang, Dai-Wen; Tang, Hong-Wu 2015-04-15 A method of fluorescent nanoparticle-based indirect immunofluorescence assay using either fluorescence microscopy or flow cytometry for the rapid detection of pathogenic Escherichia coli O157:H7 was developed. The dye-doped silica nanoparticles (NPs) were synthesized using W/O microemulsion methods with the combination of 3-aminopropyltriethoxysilane (APTES) and fluorescein isothiocyanate (FITC) and polymerization reaction with carboxyethylsilanetriol sodium salt (CEOS). Protein A was immobilized at the surface of the NPs by covalent binding to the carboxyl linkers and the surface coverage of Protein A on NPs was determined by the Bradford method. Rabbit anti-E. coli O157:H7 antibody was used as the primary antibody to recognize E. coli O157:H7 and then antibody binding protein (Protein A) labeled with FITC-doped silica NPs (FSiNPs) was used to generate the fluorescent signal. With this method, E. coli O157:H7 in buffer and in a bacterial mixture was detected. In addition, E. coli O157:H7 in several spiked background beef samples was measured with satisfactory results. Therefore, the FSiNPs are applicable in signal-amplified bioassay of pathogens due to their excellent capabilities such as brighter fluorescence and higher photostability than the direct use of conventional fluorescent dyes. PMID:25460888 3. Development of an indirect competitive ELISA for simultaneous detection of enrofloxacin and ciprofloxacin* PubMed Central Zhang, Hai-tang; Jiang, Jin-qing; Wang, Zi-liang; Chang, Xin-yao; Liu, Xing-you; Wang, San-hu; Zhao, Kun; Chen, Jin-shan 2011-01-01 A modified 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC) method was employed to synthesize the artificial antigen of enrofloxacin (ENR), and New Zealand rabbits were used to produce an anti-ENR polyclonal antibody (pAb). Based on checkerboard titration, an indirect competitive enzyme-linked immunosorbent assay (ELISA) standard curve was established. This assay was sensitive and had a linear range from 0.6 to 148.0 μg/kg (R^2 = 0.9567), with the half maximal inhibitory concentration (IC50) and limit of detection (LOD) values of 9.4 μg/kg and 0.2 μg/kg, respectively. Of all the competitive analogues, the produced pAb exhibited a high cross-reactivity to ciprofloxacin (CIP) (87%), the main metabolite of ENR in tissues. After optimization, the matrix effects can be ignored using a 10-fold dilution in beef and a 20-fold dilution in pork. The overall recoveries and coefficients of variation (CVs) were in the ranges of 86%–109% and 6.8%–13.1%, respectively. It can be concluded that the established ELISA method is suitable for simultaneous detection of ENR and CIP in animal tissues. PMID:22042652
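Competitive ELISA standard curves of the kind described in the record above are conventionally fitted with a four-parameter logistic (4PL) function, from which the IC50 is read off as the midpoint; a minimal sketch in Python, where only the IC50 of 9.4 μg/kg and the 0.6-148.0 μg/kg range come from the abstract, and the top, bottom and slope values are illustrative assumptions:

    def four_pl(x, top, bottom, ic50, slope):
        """Four-parameter logistic: competitive-ELISA signal vs. analyte concentration x."""
        return bottom + (top - bottom) / (1.0 + (x / ic50) ** slope)

    # Signal falls toward `bottom` as the competing analyte displaces
    # antibody from the coating antigen; x = ic50 gives the midpoint.
    for x in (0.6, 9.4, 148.0):  # ends and midpoint of the reported range
        print(x, round(four_pl(x, top=1.6, bottom=0.1, ic50=9.4, slope=1.0), 3))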
4. Indirect Reciprocity, Resource Sharing, and Environmental Risk: Evidence from Field Experiments in Siberia PubMed Central Howe, E. Lance; Murphy, James J.; Gerkey, Drew; West, Colin Thor 2016-01-01 Integrating information from existing research, qualitative ethnographic interviews, and participant observation, we designed a field experiment that introduces idiosyncratic environmental risk and a voluntary sharing decision into a standard public goods game. Conducted with subsistence resource users in rural villages on the Kamchatka Peninsula in Northeast Siberia, we find evidence consistent with a model of indirect reciprocity and local social norms of helping the needy. When participants are allowed to develop reputations in the experiments, as is the case in most small-scale societies, we find that sharing is increasingly directed toward individuals experiencing hardship, good reputations increase aid, and the pooling of resources through voluntary sharing becomes more effective. We also find high levels of voluntary sharing without a strong commitment device; however, this form of cooperation does not increase contributions to the public good. Our results are consistent with previous experiments and theoretical models, suggesting strategic risks tied to rewards, punishments, and reputations are important. However, unlike studies that focus solely on strategic risks, we find the effects of rewards, punishments, and reputations are altered by the presence of environmental factors. Unexpected changes in resource abundance increase interdependence and may alter the costs and benefits of cooperation, relative to defection. We suggest environmental factors that increase interdependence are critically important to consider when developing and testing theories of cooperation. PMID:27442434 5. Evaluation of laser light specularly reflected by the hohlraum surface on OMEGA indirect implosion experiments NASA Astrophysics Data System (ADS) Izumi, Nobuhiko; Turner, R. E.; Landen, O. L.; Wallace, R. J.; Koch, R. A. 2003-10-01 Due to the cylindrical shape of hohlraums typically used in indirect implosion experiments, the laser beams specularly reflected by the inner hohlraum surface are focused onto the capsule surface. This effect, which is known as the glint light effect, is important during the early stages of laser irradiation (~200 ps), and might seed undesirable hydrodynamic instabilities which could grow during the implosion. We performed ray-trace calculations to evaluate this effect, and found that with a typical laser configuration the peak intensity of glint light can be up to 4 × 10^14 W/cm^2. We also performed experiments to measure the glint light effect at OMEGA using a time-resolved x-ray re-emission technique, and evaluated the effect of rough hohlraum walls on the glint light intensity and spatial distribution. The results of the calculations and experiments will be presented. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. 7. [Determination of organic acids in cane vinasse by micellar electrokinetic capillary chromatography with indirect ultraviolet detection]. PubMed Xu, Yuanjin; Xu, Guiping; Wei, Yuanan 2006-01-01 A micellar electrokinetic capillary chromatography (MECC) method with indirect ultraviolet (UV) detection was developed for the separation and determination of several organic acids in cane vinasse, including malonic, formic, tartaric, malic, succinic, glutaric, acetic, lactic and glutamic acids. Electrophoretic conditions were as follows: uncoated fused silica capillary (56 cm/64 cm (effective/total length), 50 microm i.d.),
7.5 mmol/L potassium acid phthalate-1.5 mmol/L cetyltrimethylammonium bromide (CTAB) at pH = 6.50 as buffer solution, applied voltage -25 kV, temperature 25 degrees C, detection wavelength 300 nm, reference wavelength 210 nm. Good linearities were obtained for the nine organic acids, and the detection limits were 0.5 mg/L, 0.3 mg/L, 1.5 mg/L, 1.5 mg/L, 0.3 mg/L, 0.3 mg/L, 0.4 mg/L, 0.4 mg/L, 0.4 mg/L for malonic, formic, tartaric, malic, succinic, glutaric, acetic, lactic and glutamic acid, respectively. The relative standard deviations (RSDs) for migration times and peak areas of the nine organic acids within a day were 0.4%-0.6% and 2.3%-4.8%, respectively. The corresponding data for five days were 0.5%-0.7% and 3.3%-5.2%. The recoveries of acid standards were above 93%. The method can be applied to determine the organic acids in cane vinasse with satisfactory results. PMID:16827307 8. Determination of the detective quantum efficiency of a prototype, megavoltage indirect detection, active matrix flat-panel imager. PubMed El-Mohri, Y; Jee, K W; Antonuk, L E; Maolinbay, M; Zhao, Q 2001-12-01 After years of aggressive development, active matrix flat-panel imagers (AMFPIs) have recently become commercially available for radiotherapy imaging. In this paper we report on a comprehensive evaluation of the signal and noise performance of a large-area prototype AMFPI specifically developed for this application. The imager is based on an array of 512 × 512 pixels incorporating amorphous silicon photodiodes and thin-film transistors offering a 26 × 26 cm2 active area at a pixel pitch of 508 microm. This indirect detection array was coupled to various x-ray converters consisting of a commercial phosphor screen (Lanex Fast B, Lanex Regular, or Lanex Fine) and a 1 mm thick copper plate. Performance of the imager in terms of measured sensitivity, modulation transfer function (MTF), noise power spectra (NPS), and detective quantum efficiency (DQE) is reported at beam energies of 6 and 15 MV and at doses of 1 and 2 monitor units (MU). In addition, calculations of system performance (NPS, DQE) based on cascaded-system formalism were reported and compared to empirical results. In these calculations, the Swank factor and spatial energy distributions of secondary electrons within the converter were modeled by means of EGS4 Monte Carlo simulations. Measured MTFs of the system show a weak dependence on screen type (i.e., thickness), which is partially due to the spreading of secondary radiation. Measured DQE was found to be independent of dose for the Fast B screen, implying that the imager is input-quantum-limited at 1 MU, even at an extended source-to-detector distance of 200 cm. The maximum DQE obtained is around 1%, a limit imposed by the low detection efficiency of the converter. For thinner phosphor screens, the DQE is lower due to their lower detection efficiencies. Finally, for the Fast B screen, good agreement between calculated and measured DQE was observed. PMID:11797959
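The frequency-dependent DQE evaluated in the flat-panel record above is conventionally assembled from exactly the quantities the abstract lists (sensitivity, MTF, NPS); a sketch of the standard relation in LaTeX, with \bar{d} the mean detector signal and \bar{q} the incident photon fluence:

    \[
    \mathrm{DQE}(f) = \frac{\mathrm{SNR}_{\rm out}^2(f)}{\mathrm{SNR}_{\rm in}^2(f)}
                    = \frac{\bar{d}^{\,2}\,\mathrm{MTF}^2(f)}{\bar{q}\,\mathrm{NPS}(f)} ,
    \]

so a low converter detection efficiency caps the DQE regardless of how favorable the MTF and NPS are, which is the ~1% limit the authors report.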
9. [Comparative study of indirect immunofluorescence (IIF) and ELISA techniques in the detection of parvovirus B19]. PubMed González, M; Hassanhi, M; Rivera, S; Bracho, M P 2000-03-01 To determine the seroprevalence of IgG and IgM antibodies against parvovirus B19 (P. B19), we studied the sera of 53 patients with different hematologic disorders and the sera of 15 controls using indirect immunofluorescence (IFI) and the ELISA method. The prevalence of IgG in the control group was 46.6%; in patients with aplastic crisis it was 83.3% (IFI) and 66.7% (ELISA), and in patients without crisis 68.9% (IFI) and 72.4% (ELISA). IgM was negative except for patients with crisis: 8.3% (IFI) and 29.1% (ELISA). The higher seroprevalence (IgG) found in patients in comparison with controls might be due to a greater exposure of patients to the virus. The agreement between the two techniques was 81% (IgG) and 93% (IgM); however, the ELISA technique was more sensitive for detecting IgM of P. B19. In spite of the serologic evidence, and evaluating a single serum sample per patient, we could establish an association between aplastic crisis and viral infection for IgM ELISA, but not for IgG, nor between hematologic disorders and infection with P. B19. PMID:10758696 10. SERS detection of indirect viral DNA capture using colloidal gold and methylene blue as a Raman label Technology Transfer Automated Retrieval System (TEKTRAN) An indirect capture model assay using colloidal Au nanoparticles is demonstrated for surface enhanced Raman scattering (SERS) spectroscopy detection of DNA. The sequence targeted for capture is derived from the West Nile Virus (WNV) RNA genome and was selected on the basis of exhibiting minimal seco... 11. Indirect drive ablative Rayleigh-Taylor experiments with rugby hohlraums on OMEGA SciTech Connect Casner, A.; Galmiche, D.; Huser, G.; Jadaud, J.-P.; Liberatore, S.; Vandenboomgaerde, M. 2009-09-15 Results of ablative Rayleigh-Taylor instability growth experiments performed in indirect drive on the OMEGA laser facility [T. R. Boehly, D. L. Brown, S. Craxton et al., Opt. Commun. 133, 495 (1997)] are reported. These experiments aim at benchmarking hydrocode simulations of ablator instability growth in conditions relevant to ignition in the framework of the Laser MegaJoule [C. Cavailler, Plasma Phys. Controlled Fusion 47, 389 (2005)]. The modulated samples under study were made of germanium-doped plastic (CHGe), which is the nominal ablator for future ignition experiments. The incident x-ray drive was provided using rugby-shaped hohlraums [M. Vandenboomgaerde, J. Bastian, A. Casner et al., Phys. Rev. Lett. 99, 065004 (2007)] and was characterized by means of absolute time-resolved soft x-ray power measurements through a dedicated diagnostic hole, shock breakout data and one-dimensional and two-dimensional (2D) side-on radiographs. All these independent x-ray drive diagnostics lead to an actual on-foil flux that is about 50% smaller than laser-entrance-hole measurements. The experimentally inferred flux is used to simulate experimental optical depths obtained from face-on radiographs for an extensive set of initial conditions: front-side single-mode (wavelength λ = 35, 50, and 70 μm) and two-mode perturbations (wavelength λ = 35 and 70 μm, in phase or in opposite phase). Three-dimensional pattern growth is also compared with the 2D case. Finally the case of the feedthrough mechanism is addressed with rear-side modulated foils. 12. Optimizing the hohlraum gas density for better symmetry control of indirect drive implosion experiments NASA Astrophysics Data System (ADS) Izumi, Nobuhiko; Hall, G. N.; Nagel, S. R.; Khan, S.; Rygg, R. R.; MacKinnon, A. J.; Ho, D. D.; Berzak Hopkins, L.; Jones, O. S.; Town, R. P. J.; Bradley, D. K. 2014-10-01 To achieve a spherically symmetric implosion, control of drive uniformity is essential.
Both the ablation pressure and the mass ablation rate on the capsule surface should be made as uniform as possible for the duration of the drive. For an indirect drive implosion, the drive uniformity changes during the pulse because of: (1) the dynamic movement of the laser spots due to blow-off of the hohlraum wall, and (2) cross-beam energy transfer caused by laser-plasma interaction in the hohlraum. To tamp the wall blow-off, we use gas-filled hohlraums. The cross-beam energy transfer can be controlled by applying a wavelength separation between the cones of the laser beams. However, both of those dynamic effects are sensitive to the initial density of the hohlraum gas fill. To assess this, we performed implosion experiments with different hohlraum gas densities and tested the effect on drive asymmetry. The uniformity of the acceleration was measured by in-flight x-ray backlit imaging of the capsule. The uniformity of the core assembly was observed by imaging the self-emission x rays from the core. We will report on the experimental results and compare them to hydrodynamic simulations. Prepared by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-626372. 13. Experimental demonstration of early time, hohlraum radiation symmetry tuning for indirect drive ignition experiments NASA Astrophysics Data System (ADS) Dewald, E. L.; Milovich, J.; Thomas, C.; Kline, J.; Sorce, C.; Glenn, S.; Landen, O. L. 2011-09-01 Early time radiation symmetry at the capsule for indirect drive ignition on the National Ignition Facility (NIF) [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] will be inferred from the instantaneous soft x-ray re-emission pattern of a high-Z sphere replacing the ignition capsule. This technique was tested on the OMEGA laser facility [J. M. Soures, R. L. McCrory, T. Boehly et al., Laser Part. Beams 11, 317 (1991)] in near full ignition scale vacuum hohlraums using an equivalent experimental setup to the one planned for NIF. Two laser cones entering each laser entrance hole heat the hohlraums to radiation temperatures of 100 eV, mimicking the NIF ignition pulse foot drive. The experiments have demonstrated accuracies of ±1.5% (±2%) in inferred P2/P0 (P4/P0) Legendre mode incident flux asymmetry and consistency between 900 eV and 1200 eV re-emission patterns. We have also demonstrated the expected tuning capability of P2/P0, from positive (pole hot) to negative (waist hot), decreasing linearly with the inner/outer beam power fraction. P4/P0 on the other hand shows very little variation with power fraction. We developed a simple analytical viewfactor model that is in good agreement with both measured P2/P0 and P4/P0 and their dependence on inner beam power fraction. 14. Optimized beryllium target design for indirectly driven inertial confinement fusion experiments on the National Ignition Facility SciTech Connect Simakov, Andrei N.; Wilson, Douglas C.; Yi, Sunghwan A.; Kline, John L.; Batha, Steven H.; Clark, Daniel S.; Milovich, Jose L.; Salmonson, Jay D. 2014-02-15 For indirect drive inertial confinement fusion, Beryllium (Be) ablators offer a number of important advantages as compared with other ablator materials, e.g., plastic and high density carbon.
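The Legendre-mode bookkeeping in the symmetry-tuning record above reduces to a small least-squares fit: the re-emission (or incident-flux) profile is expanded in Legendre polynomials of cos(theta), and the mode amplitudes are read off as coefficient ratios. A minimal Python sketch, with purely illustrative asymmetry values rather than measured data:

import numpy as np

theta = np.linspace(0.0, np.pi, 181)   # polar angle samples
x = np.cos(theta)                      # Legendre argument

# Hypothetical flux profile with a 5% P2 and a 1% P4 asymmetry imposed
flux = (1.0
        + 0.05 * 0.5 * (3 * x**2 - 1)                  # P2(x)
        + 0.01 * 0.125 * (35 * x**4 - 30 * x**2 + 3))  # P4(x)

c = np.polynomial.legendre.legfit(x, flux, deg=4)      # coefficients c[0]..c[4]
print("P2/P0 = %.3f" % (c[2] / c[0]))                  # ~0.050
print("P4/P0 = %.3f" % (c[4] / c[0]))                  # ~0.010

Odd modes fit to essentially zero for a profile symmetric about the equator; only the even ratios P2/P0 and P4/P0 carry the asymmetry information discussed in that record.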
In particular, the low opacity and relatively high density of Be lead to higher rocket efficiencies, giving a higher fuel implosion velocity for a given X-ray drive, and to higher ablation velocities, providing more ablative stabilization and reducing the effect of hydrodynamic instabilities on the implosion performance. Be ablator advantages provide a larger target design optimization space and can significantly improve the National Ignition Facility (NIF) [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)] ignition margin. Herein, we summarize the Be advantages, briefly review NIF Be target history, and present a modern, optimized, low adiabat, Revision 6 NIF Be target design. This design takes advantage of knowledge gained from recent NIF experiments, including more realistic levels of laser-plasma energy backscatter, degraded hohlraum-capsule coupling, and the presence of cross-beam energy transfer. 15. Indirect-detection single-photon-counting x-ray detector for breast tomosynthesis NASA Astrophysics Data System (ADS) Jiang, Hao; Kaercher, Joerg; Durst, Roger 2016-03-01 X-ray mammography is a crucial screening tool for early identification of breast cancer. However, the overlap of anatomical features present in projection images often complicates the task of correctly identifying suspicious masses. As a result, there has been increasing interest in acquisition of volumetric information through digital breast tomosynthesis (DBT) which, compared to mammography, offers the advantage of depth information. Since DBT requires acquisition of many projection images, it is desirable that the noise in each projection image be dominated by the statistical noise of the incident x-ray quanta and not by the additive noise of the imaging system (referred to as quantum-limited imaging) and that the cumulative dose be as low as possible (e.g., no more than for a mammogram). Unfortunately, the electronic noise (~2000 electrons) present in current DBT systems based on active matrix, flat-panel imagers (AMFPIs) is still relatively high compared with the modest x-ray gain of the a-Se and CsI:Tl x-ray converters often used. To overcome the modest signal-to-noise ratio (SNR) limitations of current DBT systems, we have developed a large-area x-ray imaging detector combining an extremely low noise (~20 electrons) active-pixel CMOS with a specially designed high resolution scintillator. The high sensitivity and low noise of such a system provide an SNR at least an order of magnitude better than current state-of-the-art AMFPI systems and enable x-ray indirect-detection single photon counting (SPC) at mammographic energies with the potential of dose reduction. 16. Application of photostable quantum dots for indirect immunofluorescent detection of specific bacterial serotypes on small marine animals NASA Astrophysics Data System (ADS) Decho, Alan W.; Beckman, Erin M.; Chandler, G. Thomas; Kawaguchi, Tomohiro 2008-06-01 An indirect immunofluorescence approach was developed using semiconductor quantum dot nanocrystals to label and detect a specific bacterial serotype of the bacterial human pathogen Vibrio parahaemolyticus, attached to small marine animals (i.e. benthic harpacticoid copepods), which are suspected pathogen carriers. This photostable labeling method using nanotechnology will potentially allow specific serotypes of other bacterial pathogens to be detected with high sensitivity in a range of systems, and can be easily applied for sensitive detection of other Vibrio species such as Vibrio cholerae. 17.
Separation and detection of VX and its methylphosphonic acid degradation products on a microchip using indirect laser-induced fluorescence. PubMed Heleg-Shabtai, Vered; Gratziany, Natzach; Liron, Zvi 2006-05-01 The application of the indirect LIF (IDLIF) technique for on-chip electrophoretic separation and detection of the nerve agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothiolate (VX) and its major phosphonic degradation products, ethyl methylphosphonic acid (EMPA) and methylphosphonic acid (MPA), was demonstrated. Separation and detection of MPA degradation products of VX and the nerve agent isopropyl methylphosphonofluoridate (GB) are presented. The negatively charged dye eosin was found to be a good fluorescent marker for both the negatively charged phosphonic acids and the positively charged VX, and was chosen as the IDLIF visualization fluorescent dye. Separation and detection of VX, EMPA, and MPA in a simple-cross microchip were completed within less than a minute, and consumed only a 50 pL sample volume. A characteristic system peak that appeared in all IDLIF electropherograms served as an internal standard that increased the reliability of peak identification. The negative peak of both VX and the MPAs is in agreement with indirect detection theory and with previous reports in the literature. The LODs of VX and EMPA by IDLIF were 30 and 37 microM, respectively. Although the detection sensitivity is relatively low, the rapid simultaneous on-chip analysis of both VX and its degradation products, as well as the separation and detection of the MPA degradation products of both VX and GB, increases detection reliability and may make this the method of choice when speed and simplicity of the assay matter more than sensitivity. PMID:16703628 18. Signal, noise power spectrum, and detective quantum efficiency of indirect-detection flat-panel imagers for diagnostic radiology. PubMed Siewerdsen, J H; Antonuk, L E; el-Mohri, Y; Yorkston, J; Huang, W; Cunningham, I A 1998-05-01 The performance of an indirect-detection, active matrix flat-panel imager (FPI) at diagnostic energies is reported in terms of measured and theoretical signal size, noise power spectrum (NPS), and detective quantum efficiency (DQE). Based upon a 1536 x 1920 pixel, 127 microns pitch array of a-Si:H thin-film transistors and photodiodes, the FPI was developed as a prototype for examination of the potential of flat-panel technology in diagnostic x-ray imaging. The signal size per unit exposure (x-ray sensitivity) was measured for the FPI incorporating five commercially available Gd2O2S:Tb converting screens at energies 70-120 kVp. One-dimensional and two-dimensional NPS and DQE were measured for the FPI incorporating three such converters and as a function of the incident exposure. The measurements support the hypothesis that FPIs have significant potential for application in diagnostic radiology. A cascaded systems model that has shown good agreement with measured individual pixel signal and noise properties is employed to describe the performance of various FPI designs and configurations under a variety of diagnostic imaging conditions. Theoretical x-ray sensitivity, NPS, and DQE are compared to empirical results, and good agreement is observed in each case. The model is used to describe the potential performance of FPIs incorporating a recently developed, enhanced array that is commercially available and has been proposed for testing and application in diagnostic radiography and fluoroscopy.
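The measured DQE quoted throughout these flat-panel studies is, at each spatial frequency, a one-line combination of measured quantities. A minimal sketch assuming the common form DQE(f) = S^2 MTF(f)^2 / (q NPS(f)), with invented numbers rather than values from the paper:

import numpy as np

f = np.linspace(0.05, 3.0, 60)          # spatial frequency (mm^-1)
mtf = np.exp(-1.2 * f)                  # illustrative measured MTF
S = 5.0e3                               # mean large-area signal (arbitrary units)
q = 2.5e5                               # incident x-ray quanta per mm^2
nps = 1.2e2 * np.exp(-1.8 * f) + 4.0    # illustrative measured NPS (units^2 mm^2)

dqe = S**2 * mtf**2 / (q * nps)
print("DQE at 0.1 mm^-1: %.2f" % np.interp(0.1, f, dqe))

The constant term in the NPS stands in for the additive electronic-noise floor; at low exposure it dominates and drags the DQE down, which is exactly the fluoroscopy limitation noted below.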
Under conditions corresponding to chest radiography, the analysis suggests that such systems can potentially meet or even exceed the DQE performance of existing technology, such as screen-film and storage phosphor systems; however, under conditions corresponding to general fluoroscopy, the typical exposure per frame is such that the DQE is limited by the total system gain and additive electronic noise. The cascaded systems analysis provides a valuable means of identifying the 19. Flight experience with windshear detection NASA Technical Reports Server (NTRS) Zweifel, Terry 1990-01-01 Windshear alerts resulting from the Honeywell Windshear Detection and Guidance System are presented based on data from approximately 248,000 revenue flights at Piedmont Airlines. The data indicate that the detection system provides a significant benefit to the flight crew of the aircraft. In addition, nuisance and false alerts were found to occur at an acceptably low rate to maintain flight crew confidence in the system. Data from a digital flight recorder is also presented which shows the maximum and minimum windshear magnitudes recorded for a representative number of flights in February, 1987. The effect of the boundary layer of a steady state wind is also discussed. 20. Indirect detection of nitrogen spins in ammonia target at superlow temperatures NASA Astrophysics Data System (ADS) Kiselev, Yu.; Doshita, N.; Gautheron, F.; Kondo, K.; Meyer, W. 2013-05-01 The COMPASS polarized target at CERN operates with irradiated ammonia (NH3) as a material having a reasonable content of polarizable nucleons and the highest resistance against radiation damages. We study the magnetic structure of ammonia polarized by the Dynamic Nuclear Polarization (DNP) method at 0.2 K and 2.5 T. In this material, electron spins, induced by ionizing radiation, couple proton and nitrogen nuclear spins by indirect J-interactions. This coupling and the dipole-dipole interactions between nuclear spins produce an asymmetry in the proton NMR line shape depending on the value of nitrogen polarization. We consider the asymmetry as an indirect imaging of the actual nitrogen spectra, useful for research developments and, in practice, for monitoring of nitrogen polarization in the long target, instead of a complicated analysis of NMR nitrogen spectra. 1. Flow cytometry compared with indirect immunofluorescence for rapid detection of dengue virus type 1 after amplification in tissue culture. PubMed Kao, C L; Wu, M C; Chiu, Y H; Lin, J L; Wu, Y C; Yueh, Y Y; Chen, L K; Shaio, M F; King, C C 2001-10-01 Dengue virus (DV) was detected early in infected mosquito C6/36 cells by using indirect immunofluorescence (IF) in conjunction with flow cytometry. Three fixation-permeabilization methods and three DV serotype 1 (DEN-1)-specific monoclonal antibodies, 8-8 (anti-E), 16-4 (anti-NS1), and 15F3-1 (anti-NS1), were evaluated for the detection of DEN-1 in infected C6/36 cells. We found that these three monoclonal antibodies were capable of detecting DV in C6/36 cells as early as 24 h postinoculation by using a conventional indirect IF stain. Both 8-8 and 16-4 detected DV earlier and showed a greater number of DV-positive cells than 15F3-1. In flow cytometry, 3% paraformaldehyde plus 0.1% Triton X-100 with 16-4, the best fixation-permeabilization method for testing DV, showed higher sensitivity (up to 1 PFU) than indirect IF stain. The higher sensitivity of 16-4 in detecting DEN-1 was found with both IF and flow cytometry. 
Flow cytometry, which had a sensitivity similar to that of nested reverse transcription-PCR, could detect DV in the infected mosquito cells 10 h earlier than the conventional IF stain. When clinical specimens were amplified in mosquito C6/36 cells and then assayed for DV using flow cytometry and conventional virus isolation at day 7 postinfection, both methods had 97.22% (35 out of 36) agreement. Moreover, among 12 positive samples which were detected by the conventional culture method, the flow cytometry assay could detect DV in 58.33% (7 out of 12) of samples even at day 3 postinfection. In conclusion, both monoclonal antibodies 8-8 and 16-4 can be used for the early detection of DEN-1-infected C6/36 cells, with 16-4 (anti-NS1) being the best choice for the rapid diagnosis of DV by both the IF staining and flow cytometry methods. PMID:11574589 2. Model-independent indirect detection constraints on hidden sector dark matter NASA Astrophysics Data System (ADS) Elor, Gilly; Rodd, Nicholas L.; Slatyer, Tracy R.; Xue, Wei 2016-06-01 If dark matter inhabits an expanded "hidden sector", annihilations may proceed through sequential decays or multi-body final states. We map out the potential signals and current constraints on such a framework in indirect searches, using a model-independent setup based on multi-step hierarchical cascade decays. While remaining agnostic to the details of the hidden sector model, our framework captures the generic broadening of the spectrum of secondary particles (photons, neutrinos, e+e− and p̄p) relative to the case of direct annihilation to Standard Model particles. We explore how indirect constraints on dark matter annihilation limit the parameter space for such cascade/multi-particle decays. We investigate limits from the cosmic microwave background by Planck, the Fermi measurement of photons from the dwarf galaxies, and positron data from AMS-02. The presence of a hidden sector can change the constraints on the dark matter by up to an order of magnitude in either direction (although the effect can be much smaller). We find that generally the bound from the Fermi dwarfs is most constraining for annihilations to photon-rich final states, while AMS-02 is most constraining for electron and muon final states; however in certain instances the CMB bounds overtake both, due to their approximate independence of the details of the hidden sector cascade. We provide the full set of cascade spectra considered here as publicly available code with examples at http://web.mit.edu/lns/research/CascadeSpectra.html. 3. Use of an indirect enzyme-linked immunosorbent assay for detection of antibodies in sheep naturally infected with Salmonella Abortusovis. PubMed Wirz-Dittus, Sophie; Belloy, Luc; Doherr, Marcus G; Hüssy, Daniela; Sting, Reinhard; Gabioud, Patricia; Waldvogel, Andreas S 2010-07-01 An indirect enzyme-linked immunosorbent assay (ELISA) was modified and validated to detect antibodies against Salmonella Abortusovis in naturally infected sheep. The ELISA was validated with 44 positive and 45 negative control serum samples. Compared with the immunoblot, the sensitivity and specificity of the assay were 98% and 100%, respectively. To follow antibody levels over time, samples from 12 infected ewes were collected at 1, 3, and 10 months after abortion. All animals showed antibody levels above the cutoff value throughout the observation period.
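Validation figures like the 98% sensitivity and 100% specificity just quoted come directly from a 2x2 table against the reference method. A minimal sketch; the individual cell counts are assumed for illustration (the abstract reports only the 44-positive/45-negative panel and the resulting percentages):

def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    agreement = (tp + tn) / (tp + fn + tn + fp)    # overall percent agreement
    return sensitivity, specificity, agreement

# Assumed split of the 44-positive panel: 43 detected, 1 missed.
se, sp, agr = diagnostic_metrics(tp=43, fn=1, tn=45, fp=0)
print("sensitivity %.0f%%, specificity %.0f%%, agreement %.1f%%"
      % (100 * se, 100 * sp, 100 * agr))           # ~98%, 100%, ~98.9%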
One and 3 months after abortion, high antibody levels could be detected in all but one animal, whereas after 10 months, 9 animals had markedly lower but still positive antibody levels. The test characteristics and the evidence for the persistence of detectable antibody levels in all infected animals for up to 10 months indicate that the ELISA can be used for herd surveillance testing. PMID:20622222 4. Indirect readout: detection of optimized subsequences and calculation of relative binding affinities using different DNA elastic potentials PubMed Central Becker, Nils B.; Wolff, Lars; Everaers, Ralf 2006-01-01 Essential biological processes require that proteins bind to a set of specific DNA sites with tuned relative affinities. We focus on the indirect readout mechanism and discuss its theoretical description in relation to the present understanding of DNA elasticity on the rigid base pair level. Combining existing parametrizations of elastic potentials for DNA, we derive elastic free energies directly related to competitive binding experiments, and propose a computationally inexpensive local marker for elastically optimized subsequences in protein–DNA co-crystals. We test our approach in an application to the bacteriophage 434 repressor. In agreement with known results we find that indirect readout dominates at the central, non-contacted bases of the binding site. Elastic optimization involves all deformation modes and is mainly due to the adapted equilibrium structure of the operator, while sequence-dependent elasticity plays a minor role. These qualitative observations are robust with respect to current parametrization uncertainties. Predictions for relative affinities mediated by indirect readout depend sensitively on the chosen parametrization. Their quantitative comparison with experimental data allows for a critical evaluation of DNA elastic potentials and of the correspondence between crystal and solution structures. The software written for the presented analysis is included as Supplementary Data. PMID:17038333 5. Advantages of an indirect semiconductor quantum well system for infrared detection NASA Technical Reports Server (NTRS) Yang, Chan-Lon; Somoano, Robert; Pan, Dee-Son 1989-01-01 The infrared intersubband absorption process in quantum well systems with anisotropic bulk effective masses, which usually occurs in indirect semiconductors, was analyzed. It is found that the anisotropic effective mass can be utilized to provide allowed intersubband transitions at normal incidence to the quantum well growth direction. This transition is known to be forbidden for cases of isotropic effective mass. This property can be exploited for infrared sensor application of quantum well structures by allowing direct illumination of large surface areas without using special waveguide structures. The 10-micron intersubband absorption in quantum wells made of the silicon-based system Si/Si(1-x)Ge(x) was calculated. It is found that it is readily possible to achieve an absorption constant of the order of 10,000/cm in these Si quantum wells with current doping technology. 6. Detection of Ganoderic Acid A in Ganoderma lingzhi by an Indirect Competitive Enzyme-Linked Immunosorbent Assay. PubMed Sakamoto, Seiichi; Kohno, Toshitaka; Shimizu, Kuniyoshi; Tanaka, Hiroyuki; Morimoto, Satoshi 2016-05-01 Ganoderma is a genus of medicinal mushroom traditionally used for treating various diseases. Ganoderic acid A is one of the major bioactive Ganoderma triterpenoids isolated from Ganoderma species.
Herein, we produced a highly specific monoclonal antibody against ganoderic acid A (MAb 12 A) and developed an indirect competitive ELISA for the highly sensitive detection of ganoderic acid A in Ganoderma lingzhi, with a limit of detection of 6.10 ng/mL. Several validation analyses support the accuracy and reliability of the developed indirect competitive ELISA for use in the quality control of Ganoderma based on ganoderic acid A content. Furthermore, quantitative analysis of ganoderic acid A in G. lingzhi revealed that the pileus exhibits the highest ganoderic acid A content compared with the stipe and spore of the fruiting body; the best extraction efficiency was found when 50 % ethanol was used, which suggests the use of a strong liquor to completely harness the potential of Ganoderma triterpenoids in daily life. PMID:27093250 7. Detection of Schistosoma mansoni Antibodies in a Low-Endemicity Area Using Indirect Immunofluorescence and Circumoval Precipitin Test PubMed Central Carvalho do Espírito-Santo, Maria Cristina; Pinto, Pedro Luiz; Gargioni, Cybele; Viviana Alvarado-Mora, Monica; Pagliusi Castilho, Vera Lúcia; Pinho, João Ranato Rebello; de Albuquerque Luna, Expedito José; Borges Gryschek, Ronaldo Cesar 2014-01-01 Parasitological diagnostic methods for schistosomiasis lack sensitivity, especially in regions of low endemicity. The objective of this study was to determine the prevalence of Schistosoma mansoni infections by antibody detection using the indirect immunofluorescence assay (IFA-IgM) and circumoval precipitin test (COPT). Serum samples of 572 individuals were randomly selected. The IFA-IgM and COPT were used to detect anti-S. mansoni antibodies. Of the patients studied, 15.9% (N = 91) were IFA-IgM positive and 5.1% (N = 29) had COPT reactions (P < 0.001 by McNemar's test). Immunodiagnostic techniques showed higher infection prevalence than had been previously estimated. This study suggests that combined use of these diagnostic tools could be useful for the diagnosis of schistosomiasis in epidemiological studies in areas of low endemicity. PMID:24639303 8. Rapid differentiation of commercial juices and blends by using sugar profiles obtained by capillary zone electrophoresis with indirect UV detection. PubMed Navarro-Pascual-Ahuir, María; Lerma-García, María Jesús; Simó-Alfonso, Ernesto F; Herrero-Martínez, José Manuel 2015-03-18 A method for the determination of sugars in several fruit juices and nectars by capillary zone electrophoresis with indirect UV-vis detection has been developed. Under optimal conditions, commercial fruit juices and nectars from several fruits were analyzed, and the sugar and cyclamate contents were quantified in less than 6 min. A study for the detection of blends of high-value juices (orange and pineapple) with cheaper alternatives was also developed. For this purpose, different chemometric techniques, based on sugar content ratios, were applied. Linear discriminant analysis showed that fruit juices can be distinguished according to the fruit type, juice blends also being differentiated. Multiple linear regression models were also constructed to predict the adulteration of orange and pineapple juices with grape juice. This simple and reliable methodology provides a rapid analysis of fruit juices of economic importance, which is relevant for quality control purposes in food industries and regulatory agencies. PMID:25719749 9. The MANX Muon Cooling Experiment Detection System SciTech Connect Kahn, S. A.; Abrams, R. 
J.; Ankenbrandt, C.; Cummings, M. A. C.; Johnson, R. P.; Roberts, T. J.; Yonehara, K. 2010-03-30 The MANX experiment is being proposed to demonstrate the reduction of 6D muon phase space emittance, using a continuous liquid absorber to provide ionization cooling in a helical solenoid magnetic channel. The experiment involves the construction of a two-period-long helical cooling channel (HCC) to reduce the muon invariant emittance by a factor of two. The HCC would replace the current cooling section of the MICE experiment now being set up at the Rutherford Appleton Laboratory. The MANX experiment would use the existing MICE spectrometers and muon beam line. We discuss the placement of detection planes to optimize the muon track resolution. 10. In-channel indirect amperometric detection of heavy metal ions for electrophoresis on a poly(dimethylsiloxane) microchip. PubMed Li, Xin-Ai; Zhou, Dong-Mei; Xu, Jing-Juan; Chen, Hong-Yuan 2007-02-28 The in-channel indirect amperometric detection mode for microchip capillary electrophoresis with a positive separation electric field is successfully applied to some heavy metal ions. The influences of separation voltage, detection potential, and the concentration and pH value of the running buffer on the response of the detector have been investigated. An optimized condition of 1200 V separation voltage, -0.1 V detection potential, and a 20 mM (pH 4.46) running buffer of 2-(N-morpholino)ethanesulfonic acid (MES) + L-histidine (L-His) was selected. The results clearly showed that Pb(2+), Cd(2+), and Cu(2+) were efficiently separated within 80 s in a 3.7 cm long native separation PDMS/PDMS channel and successfully detected at a single carbon fibre electrode. The theoretical plate numbers of Pb(2+), Cd(2+), and Cu(2+) were 1.2x10(5), 2.5x10(5), and 1.9x10(5) m(-1), respectively. The detection limits for Pb(2+), Cd(2+), and Cu(2+) were 1.3, 3.3 and 7.4 muM (S/N = 3). PMID:19071423 11. Change Detection Experiments Using Low Cost UAVs NASA Technical Reports Server (NTRS) Logan, Michael J.; Vranas, Thomas L.; Motter, Mark; Hines, Glenn D.; Rahman, Zia-ur 2005-01-01 This paper presents the progress in the development of a low-cost change-detection system. This system is being developed to provide users with the ability to use a low-cost unmanned aerial vehicle (UAV) and image processing system that can detect changes in specific fixed ground locations using video provided by an autonomous UAV. The results of field experiments conducted with the US Army at Ft. A.P. Hill are presented. 12. Mode-shape-based mass detection scheme using mechanically diverse, indirectly coupled microresonator arrays NASA Astrophysics Data System (ADS) Glean, Aldo A.; Judge, John A.; Vignola, Joseph F.; Ryan, Teresa J. 2015-02-01 We explore vibration localization in arrays of microresonators used for ultrasensitive mass detection and describe an algorithm for identifying the location and amount of added mass using measurements of a vibration mode of the system. For a set of sensing elements coupled through a common shuttle mass, the inter-element coupling is shown to be proportional to the ratio of the element masses to the shuttle mass and to vary with the frequency mistuning between any two sensing elements. When any two elements have sufficiently similar frequencies, mass adsorption on one element can result in measurable changes to multiple modes of the system. We describe the effects on system frequencies and mode shapes due to added mass, in terms of mass ratio and frequency spacing.
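A toy lumped-parameter version of the shuttle-coupled array described in this record makes the coupling and mass-loading effects concrete (a simplified sketch of my own, not the authors' model): mistuned element masses attach through springs to a common shuttle mass, and adding a small mass to one element shifts the generalized eigenfrequencies and redistributes the mode shapes.

import numpy as np
from scipy.linalg import eigh

def modes(m_elems, M=1.0, k=1.0, K=1.0):
    """Natural frequencies/modes of n elements coupled via a shuttle mass M."""
    n = len(m_elems)
    Mass = np.diag(list(m_elems) + [M])
    Kmat = np.zeros((n + 1, n + 1))
    for i in range(n):                  # spring k between element i and shuttle
        Kmat[i, i] += k
        Kmat[n, n] += k
        Kmat[i, n] -= k
        Kmat[n, i] -= k
    Kmat[n, n] += K                     # shuttle anchored to ground
    w2, vecs = eigh(Kmat, Mass)         # generalized eigenproblem K v = w^2 M v
    return np.sqrt(w2), vecs

m = [0.010, 0.011, 0.012]               # slightly mistuned elements, m << M
w_before, v_before = modes(m)
m[1] += 1.0e-4                          # small mass adsorbed on element 2
w_after, v_after = modes(m)
print("frequency shifts:", w_after - w_before)

The mode dominated by the loaded element shows the largest downward shift, while the residual shifts in the other modes reflect the indirect coupling through the shuttle.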
In cases in which modes are not fully localized, frequency-shift-based mass detection methods may give ambiguous results. The mode-shape-based detection algorithm presented uses a single measured mode shape and corresponding natural frequency to identify the location and amount of added mass. Mass detection in the presence of measurement noise is numerically simulated using a ten element sensor array. The accuracy of the detection scheme is shown to depend on the amplitude with which each element vibrates in the chosen mode. 13. Climate and infectious disease: use of remote sensing for detection of Vibrio cholerae by indirect measurement NASA Technical Reports Server (NTRS) Lobitz, B.; Beck, L.; Huq, A.; Wood, B.; Fuchs, G.; Faruque, A. S.; Colwell, R. 2000-01-01 It has long been known that cholera outbreaks can be initiated when Vibrio cholerae, the bacterium that causes cholera, is present in drinking water in sufficient numbers to constitute an infective dose, if ingested by humans. Outbreaks associated with drinking or bathing in unpurified river or brackish water may directly or indirectly depend on such conditions as water temperature, nutrient concentration, and plankton production that may be favorable for growth and reproduction of the bacterium. Although these environmental parameters have routinely been measured by using water samples collected aboard research ships, the available data sets are sparse and infrequent. Furthermore, shipboard data acquisition is both expensive and time-consuming. Interpolation to regional scales can also be problematic. Although the bacterium, V. cholerae, cannot be sensed directly, remotely sensed data can be used to infer its presence. In the study reported here, satellite data were used to monitor the timing and spread of cholera. Public domain remote sensing data for the Bay of Bengal were compared directly with cholera case data collected in Bangladesh from 1992-1995. The remote sensing data included sea surface temperature and sea surface height. It was discovered that sea surface temperature shows an annual cycle similar to the cholera case data. Sea surface height may be an indicator of incursion of plankton-laden water inland, e.g., tidal rivers, because it was also found to be correlated with cholera outbreaks. The extensive studies accomplished during the past 25 years, confirming the hypothesis that V. cholerae is autochthonous to the aquatic environment and is a commensal of zooplankton, i.e., copepods, when combined with the findings of the satellite data analyses, provide strong evidence that cholera epidemics are climate-linked. 14. Guidelines to indirectly measure and enhance detection efficiency of stationary PIT tag interrogation systems in streams USGS Publications Warehouse Connolly, Patrick J. 2010-01-01 With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. 
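The dual-array idea behind such estimators can be paraphrased in a few lines (my paraphrase with hypothetical counts, not necessarily Connolly et al.'s exact formulation): fish detected on a second, downstream array are known to have passed the upstream array, so the upstream efficiency can be estimated with no independent count of passing fish.

def array_efficiency(n_both, n_downstream_only):
    """Upstream-array detection efficiency from dual-array detections."""
    n_known_passers = n_both + n_downstream_only
    p = n_both / n_known_passers
    se = (p * (1 - p) / n_known_passers) ** 0.5    # binomial standard error
    return p, se

p, se = array_efficiency(n_both=180, n_downstream_only=20)
print("upstream efficiency %.2f +/- %.3f" % (p, se))   # 0.90 +/- 0.021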
I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system. 15. Indirect detection of halide ions via fluorescence quenching of quinine sulfate in microcolumn ion chromatography. PubMed Takeuchi, Toyohide; Sumida, Junichi 2004-06-01 Halide ions could be visualized via fluorescence quenching in microcolumn ion chromatography. The fluorescence of quinine sulfate, which was contained in an acidic eluent, was quenched by halide ions. The observed fluorescence quenching values increased in this order: iodide, bromide, and chloride. The present detection system was relatively sensitive to halide ions except for fluoride: other anions gave smaller signals than halide ions. The present detection system provided quantitative information, so it could be applied to the determination of chloride in water samples. PMID:15228124 16. Principle of indirect comparison (PIC): simulation and analysis of PIC-based anomaly detection in multispectral data NASA Astrophysics Data System (ADS) Rosario, Dalton 2006-05-01 The Army has gained a renewed interest in hyperspectral (HS) imagery for military surveillance. As a result, a HS research team has been established at the Army Research Lab (ARL) to focus exclusively on the design of innovative algorithms for target detection in natural clutter. In 2005 at this symposium, we presented performance comparisons between a proposed anomaly detector and existing ones on real HS data. Herein, we present statistical performance analyses of an additional ARL anomaly detector on 1500 simulated realizations of model-specific data to shed some light on its effectiveness. Simulated data of increasing background complexity will be used for the analysis, where highly correlated multivariate Gaussian random samples will model homogeneous backgrounds and mixtures of Gaussians will model non-homogeneous backgrounds. Distinct multivariate random samples will model targets, and targets will be added to backgrounds. The principle that led to the design of our detectors employs an indirect sample comparison to test the likelihood that local HS random samples belong to the same population. Let X and Y denote two random samples, and let Z = X U Y, where U denotes the union. We showed that X can be indirectly compared to Y by comparing, instead, Z to Y (or to X). Mathematical implementations of this simple idea have shown a remarkable ability to preserve performance of meaningful detections (e.g., full-pixel targets), while significantly reducing the number of meaningless detections (e.g., transitions of background regions in the scene).
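The indirect-comparison principle is easy to demonstrate numerically. A sketch using a Kolmogorov-Smirnov two-sample statistic purely for illustration (the ARL detectors use their own test statistics):

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=400)   # Y: local background sample
candidate = rng.normal(2.5, 1.0, size=40)     # X: candidate anomaly sample

Z = np.concatenate([candidate, background])   # Z = X U Y
direct = ks_2samp(candidate, background).statistic    # X vs Y
indirect = ks_2samp(Z, background).statistic          # Z vs Y

print("direct %.3f, indirect %.3f" % (direct, indirect))

Because Z shares most of its samples with Y, the pooled comparison responds more mildly to benign background transitions while genuine anomalies still pull Z away from Y, consistent with the preserved-detection, reduced-false-alarm behavior described above.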
17. Characterization of Antibodies and Development of an Indirect Competitive Immunoassay for Detection of Deamidated Gluten. PubMed Tranquet, Olivier; Lupi, Roberta; Echasserieau-Laporte, Valerie; Pietri, Manon; Larré, Colette; Denery-Papini, Sandra 2015-06-10 Diversification of gluten applications in the food and cosmetics industries was achieved through the production of water-soluble gluten that can be obtained by deamidation. Current analytical methods dedicated to gluten detection failed to detect deamidated gluten. After immunizing mice with the peptide LQPEEPFPE conjugated to keyhole limpet hemocyanin, five mouse monoclonal antibodies (mAbs) were produced and the sequences of the bound epitopes were determined as XPXEPFPE, where X is Q or E. The mAbs exhibited high specificity for deamidated gliadins and low molecular weight glutenin subunits. A competitive enzyme-linked immunosorbent assay (ELISA) based on the INRA-DG1 mAb was developed with an IC50 of 85 ng/mL and a limit of detection of 25 ng/mL. The intra- and interassay coefficients of variation (CV) were <10% except for the interassay CV of the low-level control (40 ng/mL), which was 20%. This assay was capable of detecting three of the four deamidated gluten samples spiked in rice flour at 20 mg/kg. PMID:25980542 18. Development of an indirect competitive assay-based aptasensor for highly sensitive detection of tetracycline residue in honey. PubMed Wang, Sai; Yong, Wei; Liu, Jiahui; Zhang, Liya; Chen, Qilong; Dong, Yiyang 2014-07-15 Tetracycline (TC) is widely used for the prevention and control of animal diseases for its broad spectrum antimicrobial activity and low cost, but its abuse can seriously affect human health and may result in trade loss. Thus there is an imperative need to develop a high-performing analytical technique for TC detection. In this study, we developed a biosensor based on an indirect competitive enzyme-linked aptamer assay (ic-ELAA). A 76-mer single-stranded DNA (ssDNA) aptamer, selected by Systematic Evolution of Ligands by Exponential Enrichment (SELEX), was applied for the recognition and detection of TC in honey. The limit of detection was 9.6×10(-3) ng/mL with a linear working range from 0.01 to 100 ng/mL toward TC in honey, and a mean recovery rate of 93.23% in TC-spiked honey was obtained. This aptasensor can be applied to detect TC residue in food with high sensitivity and simplicity, and it shows promise for the development of useful ELAA kits for TC determination in food. PMID:24583691 19. A novel colorimetric sandwich aptasensor based on an indirect competitive enzyme-free method for ultrasensitive detection of chloramphenicol. PubMed 2016-04-15 Analytical methods for the detection and quantitation of chloramphenicol in blood serum and foodstuffs are highly in demand. In this study, a colorimetric sandwich aptamer-based sensor (aptasensor) was fabricated for sensitive and selective detection of chloramphenicol, based on an indirect competitive enzyme-free assay using gold nanoparticles (AuNPs), biotin and streptavidin. The designed aptasensor exploits the characteristics of AuNPs, including large surface area and unique optical properties, and the strong interaction of biotin with streptavidin. In the absence of chloramphenicol, the sandwich structure of the aptasensor forms, leading to a sharp red color. In the presence of the target, the functionalized AuNPs cannot bind to the 96-well plates, resulting in a faint red color. The fabricated colorimetric aptasensor exhibited high selectivity toward chloramphenicol with a limit of detection as low as 451 pM. Moreover, the developed colorimetric aptasensor was successfully used to detect chloramphenicol in milk and serum with LODs of 697 and 601 pM, respectively. PMID:26599477 20. Indirect effects of bioinsecticides on the nontarget fauna: The Camargue experiment calls for future research NASA Astrophysics Data System (ADS) Poulin, Brigitte 2012-10-01 Owing to its high selectivity and low toxicity to nontarget organisms, Bacillus thuringiensis var.
israelensis (Bti) has become the most commonly used microbial agent to control mosquitoes worldwide. Considered non-toxic to mammals, birds, fish, plants and most aquatic organisms, Bti direct effects on the nontarget fauna are largely limited to non-biting midges (Chironomidae). Studies addressing the indirect effects of Bti through food web perturbations are scanty and showed no significant results. Mosquito control in southern France was implemented in 1965 using various insecticides over 400 km of coast. In spite of a high mosquito nuisance, the Camargue wetlands were excluded from this control programme to preserve biodiversity. The expanding use of Bti has prompted the implementation of an experimental mosquito control in 2006 involving 2500 of the 25,000 ha of larval biotopes of the Camargue, accompanied by impact studies on the nontarget fauna. Using birds from natural and human-inhabited areas as model species, we assessed trophic perturbations caused by three years of Bti applications. The preliminary results of this 5-yr programme revealed significant effects of Bti spraying on the abundance of reed-dwelling invertebrates serving as food to passerines, as well as on the diet and breeding success of house martins nesting in rural estates and small towns. Very few studies (if any) have provided such compelling evidence of an insecticide affecting vertebrate populations, putting into question the environmentally friendly character of Bti, at least in some areas. The significance of these results is discussed within a wider context and completed with an analysis of the current Bti bibliography to highlight and orient priorities for future research on this topic. 1. Indirect Dark Matter Detection Limits from the Ultra-Faint Milky Way Satellite Segue 1 SciTech Connect Essig, Rouven; Sehgal, Neelima; Strigari, Louis E.; Geha, Marla; Simon, Joshua D. (Carnegie Inst. Observ.) 2011-08-11 We use new kinematic data from the ultra-faint Milky Way satellite Segue 1 to model its dark matter distribution and derive upper limits on the dark matter annihilation cross-section. Using gamma-ray flux upper limits from the Fermi satellite and MAGIC, we determine cross-section exclusion regions for dark matter annihilation into a variety of different particles including charged leptons. We show that these exclusion regions are beginning to probe the regions of interest for a dark matter interpretation of the electron and positron fluxes from PAMELA, Fermi, and HESS, and that future observations of Segue 1 have strong prospects for testing such an interpretation. We additionally discuss prospects for detecting annihilation with neutrinos using the IceCube detector, finding that in an optimistic scenario a few neutrino events may be detected. Finally we use the kinematic data to model the Segue 1 dark matter velocity dispersion and constrain Sommerfeld enhanced models. 2. Indirect dark matter detection limits from the ultrafaint Milky Way satellite Segue 1 SciTech Connect Essig, Rouven; Sehgal, Neelima; Strigari, Louis E.; Geha, Marla; Simon, Joshua D. 2010-12-15 We use new kinematic data from the ultrafaint Milky Way satellite Segue 1 to model its dark matter distribution and derive upper limits on the dark matter annihilation cross section. Using gamma-ray flux upper limits from the Fermi satellite and MAGIC, we determine cross section exclusion regions for dark matter annihilation into a variety of different particles including charged leptons.
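The flux-to-cross-section mapping behind such exclusion regions has a standard back-of-envelope form for self-annihilating dark matter: phi = <sigma v> N_gamma J / (8 pi m^2), so a gamma-ray flux upper limit translates into an upper limit on <sigma v>. A sketch with illustrative inputs (not Segue 1 values):

import math

def sigmav_upper_limit(phi_ul, m_gev, n_gamma, J):
    """phi_ul in photons cm^-2 s^-1, m in GeV, J-factor in GeV^2 cm^-5."""
    return 8.0 * math.pi * m_gev**2 * phi_ul / (n_gamma * J)

# 100 GeV candidate, ~20 photons per annihilation above threshold,
# J ~ 1e19 GeV^2 cm^-5, flux limit ~ 1e-10 cm^-2 s^-1 (all illustrative)
print("%.2e cm^3 s^-1" % sigmav_upper_limit(1.0e-10, 100.0, 20.0, 1.0e19))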
We show that these exclusion regions are beginning to probe the regions of interest for a dark matter interpretation of the electron and positron fluxes from PAMELA, Fermi, and HESS, and that future observations of Segue 1 have strong prospects for testing such an interpretation. We additionally discuss prospects for detecting annihilation with neutrinos using the IceCube detector, finding that in an optimistic scenario a few neutrino events may be detected. Finally, we use the kinematic data to model the Segue 1 dark matter velocity dispersion and constrain Sommerfeld enhanced models. 3. A new indirect impedancemeter to detect microbial contaminations in agro-food industry. PubMed Ribeiro, T; Romestant, G; Depoortere, J; Pauss, A 2003-01-01 The impedancemetry method can be used in Microbiology to perform the detection, quantification and even identification of some bacteria. Basic knowledge about this subject can be found in Ur and Brown (1975), Firstenberg-Eden and Eden (1984), the reviews of Silley and Forsythe (1996), and Wawerla et al. (1999). With Escherichia coli, Bacillus subtilis and Saccharomyces cerevisiae cultures, the conductimetric responses were highly replicable, and repeatable for inocula concentrations from 1 to 10(8) CFU mL(-1). The main use of such devices could be the detection of contaminations of foodstuff. Several of these foodstuffs incubated at 37 degrees C spontaneously release quite large amounts of CO2. Our impedancemeter, however, was able to detect an Escherichia coli presence in canned French beans down to 2.35 x 10(-2) colony forming units (CFU) mL(-1), and a Saccharomyces cerevisiae contamination of apple purée in glass jars down to 6.1 x 10(-3) CFU mL(-1). PMID:24757800 4. Do chimpanzees learn reputation by observation? Evidence from direct and indirect experience with generous and selfish strangers. PubMed Subiaul, Francys; Vonk, Jennifer; Okamoto-Barth, Sanae; Barth, Jochen 2008-10-01 Can chimpanzees learn the reputation of strangers indirectly by observation? Or are such stable behavioral attributions made exclusively by first-person interactions? To address this question, we let seven chimpanzees observe unfamiliar humans either consistently give (generous donor) or refuse to give (selfish donor) food to a familiar human recipient (Experiments 1 and 2) and a conspecific (Experiment 3). While chimpanzees did not initially prefer to beg for food from the generous donor (Experiment 1), after continued opportunities to observe the same behavioral exchanges, four chimpanzees developed a preference for gesturing to the generous donor (Experiment 2), and transferred this preference to novel unfamiliar donor pairs, significantly preferring to beg from the novel generous donors on the first opportunity to do so. In Experiment 3, four chimpanzees observed novel selfish and generous acts directed toward other chimpanzees by human experimenters. During the first half of testing, three chimpanzees exhibited a preference for the novel generous donor on the first trial. These results demonstrate that chimpanzees can infer the reputation of strangers by eavesdropping on third-party interactions. PMID:18357476 5. Modified Hodge Test versus Indirect Carbapenemase Test: Prospective Evaluation of a Phenotypic Assay for Detection of Klebsiella pneumoniae Carbapenemase (KPC) in Enterobacteriaceae PubMed Central Carroll, Joanne; Sifri, Costi D.; Hazen, Kevin C.
2013-01-01 The currently recommended phenotypic test for the detection of carbapenemase-producing members of the family Enterobacteriaceae is the modified Hodge test (MHT). However, the MHT lacks specificity. Here we demonstrate an alternative phenotypic test, the indirect carbapenemase test, for the detection of blaKPC-producing isolates that has specificity superior to that of the MHT for non-Klebsiella Enterobacteriaceae. PMID:23390272 6. Dark matter protohalos in a nine parameter MSSM and implications for direct and indirect detection NASA Astrophysics Data System (ADS) Diamanti, Roberta; Catalan, Maria Eugenia Cabrera; Ando, Shin'ichiro 2015-09-01 We study how the kinetic decoupling of dark matter within a minimal supersymmetric extension of the standard model with nine independent parameters (MSSM-9) could improve our knowledge of the properties of the dark matter protohalos. We show that the most probable neutralino mass regions, which satisfy the relic density and the Higgs mass constraints, are those with the lightest supersymmetric neutralino mass around 1 TeV and 3 TeV, corresponding to Higgsino-like and wino-like neutralinos, respectively. The kinetic decoupling temperature in the MSSM-9 scenario leads to a most probable protohalo mass in the range Mph ~ 10^-12 - 10^-7 M⊙. The part of the region closer to ~2 TeV also receives important contributions from neutralino-stau coannihilation, reducing the effective annihilation rate in the early Universe. We also study how the size of the smallest dark matter substructures correlates with experimental signatures, such as the spin-dependent and spin-independent scattering cross sections, relevant for direct detection of dark matter. Improvements in the spin-independent sensitivity might reduce the most probable range of the protohalo mass to between ~10^-9 M⊙ and ~10^-7 M⊙, while the expected spin-dependent sensitivity provides weaker constraints. We show how the boost of the luminosity due to dark matter annihilation increases, depending on the protohalo mass. In the Higgsino case, the protohalo mass is lower than the canonical value often used in the literature (~10^-6 M⊙), while ⟨σv⟩ does not deviate from ⟨σv⟩ ~ 10^-26 cm^3 s^-1; there is no significant enhancement of the luminosity. On the contrary, in the wino case, the protohalo mass is even lighter, and ⟨σv⟩ is two orders of magnitude larger; as a consequence, we see a substantial enhancement of the luminosity. 7. An efficient NMR method for the characterisation of 14N sites through indirect 13C detection PubMed Central Jarvis, James A.; Haies, Ibraheem M. 2013-01-01 Nitrogen is one of the most abundant elements and plays a key role in the chemistry of biological systems. Despite its widespread distribution, the study of the naturally occurring isotope of nitrogen, 14N (99.6%), has been relatively limited as it is a spin-1 nucleus that typically exhibits a large quadrupolar interaction. Accordingly, most studies of nitrogen sites in biomolecules have been performed on samples enriched with 15N, limiting the application of NMR to samples which can be isotopically enriched. This precludes the analysis of naturally occurring samples and results in the loss of the wealth of structural and dynamic information that the quadrupolar interaction can provide. Recently, several experimental approaches have been developed to characterize 14N sites through their interaction with neighboring 'spy' nuclei.
Here we describe a novel version of these experiments whereby coherence between the 14N site and the spy nucleus is mediated by the application of a moderate rf field to the 14N. The resulting 13C/14N spectra show good sensitivity on natural abundance and labeled materials; whilst the 14N lineshapes permit the quantitative analysis of the quadrupolar interaction. PMID:23589073 8. Recursive Indirect-Paths Modularity (RIP-M) for Detecting Community Structure in RNA-Seq Co-expression Networks. PubMed Rahmani, Bahareh; Zimmermann, Michael T; Grill, Diane E; Kennedy, Richard B; Oberg, Ann L; White, Bill C; Poland, Gregory A; McKinney, Brett A 2016-01-01 Clusters of genes in co-expression networks are commonly used as functional units for gene set enrichment detection and increasingly as features (attribute construction) for statistical inference and sample classification. One of the practical challenges of clustering for these purposes is to identify an optimal partition of the network where the individual clusters are neither too large, prohibiting interpretation, nor too small, precluding general inference. Newman Modularity is a spectral clustering algorithm that automatically finds the number of clusters, but for many biological networks the cluster sizes are suboptimal. In this work, we generalize Newman Modularity to incorporate information from indirect paths in RNA-Seq co-expression networks. We implement a merge-and-split algorithm that allows the user to constrain the range of cluster sizes: large enough to capture genes in relevant pathways, yet small enough to resolve distinct functions. We investigate the properties of our recursive indirect-pathways modularity (RIP-M) and compare it with other clustering methods using simulated co-expression networks and RNA-seq data from an influenza vaccine response study. RIP-M had higher cluster assignment accuracy than Newman Modularity for finding clusters in simulated co-expression networks for all scenarios, and RIP-M had comparable accuracy to Weighted Gene Correlation Network Analysis (WGCNA). RIP-M was more accurate than WGCNA for modest hard thresholds and comparable for high, while WGCNA was slightly more accurate for soft thresholds. In the vaccine study data, RIP-M and WGCNA enriched for a comparable number of immunologically relevant pathways. PMID:27242890 9. Recursive Indirect-Paths Modularity (RIP-M) for Detecting Community Structure in RNA-Seq Co-expression Networks PubMed Central Rahmani, Bahareh; Zimmermann, Michael T.; Grill, Diane E.; Kennedy, Richard B.; Oberg, Ann L.; White, Bill C.; Poland, Gregory A.; McKinney, Brett A. 2016-01-01 Clusters of genes in co-expression networks are commonly used as functional units for gene set enrichment detection and increasingly as features (attribute construction) for statistical inference and sample classification. One of the practical challenges of clustering for these purposes is to identify an optimal partition of the network where the individual clusters are neither too large, prohibiting interpretation, nor too small, precluding general inference. Newman Modularity is a spectral clustering algorithm that automatically finds the number of clusters, but for many biological networks the cluster sizes are suboptimal. In this work, we generalize Newman Modularity to incorporate information from indirect paths in RNA-Seq co-expression networks. 
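A toy illustration of the size-constrained clustering idea (hypothetical: this is not the RIP-M implementation, and it uses plain greedy modularity on the direct graph rather than indirect-path information):

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def constrained_communities(G, max_size=50):
    """Recursively split modularity clusters that exceed max_size."""
    out = []
    for comm in greedy_modularity_communities(G):
        if len(comm) <= max_size:
            out.append(set(comm))
            continue
        parts = list(greedy_modularity_communities(G.subgraph(comm)))
        if len(parts) <= 1:             # indivisible; keep the oversized cluster
            out.append(set(comm))
        else:
            for p in parts:
                out.extend(constrained_communities(G.subgraph(p), max_size))
    return out

G = nx.connected_caveman_graph(6, 25)   # 6 cliques of 25 nodes each
for i, c in enumerate(constrained_communities(G, max_size=30)):
    print("cluster %d: %d nodes" % (i, len(c)))

A minimum-size constraint (merging tiny clusters into a well-connected neighbor) would complete the merge-and-split picture sketched in the record.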
We implement a merge-and-split algorithm that allows the user to constrain the range of cluster sizes: large enough to capture genes in relevant pathways, yet small enough to resolve distinct functions. We investigate the properties of our recursive indirect-pathways modularity (RIP-M) and compare it with other clustering methods using simulated co-expression networks and RNA-seq data from an influenza vaccine response study. RIP-M had higher cluster assignment accuracy than Newman Modularity for finding clusters in simulated co-expression networks for all scenarios, and RIP-M had comparable accuracy to Weighted Gene Correlation Network Analysis (WGCNA). RIP-M was more accurate than WGCNA for modest hard thresholds and comparable for high, while WGCNA was slightly more accurate for soft thresholds. In the vaccine study data, RIP-M and WGCNA enriched for a comparable number of immunologically relevant pathways. PMID:27242890 10. Determination of L-carnitine in food supplement formulations using ion-pair chromatography with indirect conductimetric detection. PubMed Kakou, Aikaterini; Megoulas, Nikolaos C; Koupparis, Michael A 2005-04-01 A novel method for the determination of L-carnitine in food supplement formulations was developed and validated, using ion-pair chromatography with indirect conductimetric detection. The chromatographic method was based on a non-polar (C18) column and an aqueous octanesulfonate (0.64 mM) eluent, acidified with trifluoroacetic acid (5.2 mM). The retention time was 5.4 min and the asymmetry factor 0.65. A linear calibration curve from 10 to 1000 microg/ml (r = 0.99998) was achieved, with a detection limit of 2.7 microg/ml (25 microl injection volume), a repeatability %RSD of 0.8 (40 microg/ml, n = 5) and a reproducibility %RSD of 2.6. The proposed method was applied for the determination of carnitine in oral solutions and capsules. No interference from excipients was found and the only pretreatment step required was the appropriate dilution with the mobile phase. Recovery from spiked samples ranged from 97.7 to 99.7% with a precision (%RSD, n = 3) of 0.01-2.1%. PMID:15830947 11. Development of indirect competitive ELISA using egg yolk-derived immunoglobulin (IgY) for the detection of Gentamicin residues. PubMed He, Jinxin; Hu, Jianjun; Thirumalai, Diraviyam; Schade, Ruediger; Du, Enqi; Zhang, Xiaoying 2016-01-01 Gentamicin (Gent) is an aminoglycoside antibiotic used in the livestock sector. Gent residues could cause genetic disorders through nonsense mutations. This study aimed to develop an IgY-based ELISA for the detection of Gent in animal products. Gent was conjugated with bovine serum albumin (BSA) by the carbodiimide method for immunization of laying chickens. The PEG-6000 extraction method was employed to extract IgY from the egg yolk. The titer of anti-Gent IgY peaked at 1:256,000 after the 5th booster immunization. Checkerboard titration confirmed that anti-Gent IgY at a 1:2,000 dilution gave an optical density (OD) of 1.0 at a Gent-OVA coating concentration of 2 µg mL(-1). The IgY-based indirect competitive ELISA (Ic-ELISA) showed that the IC50 value of anti-Gent IgY was 2.69 ng mL(-1), with a regression curve equation of y = -16.27x + 56.97 (R(2) = 0.95, n = 3), confirming a detection limit (LOD, IC10 value) of 0.01 ng mL(-1). Recoveries from fresh milk, pork and chicken samples ranged from 69.82% to 94.32%, with relative standard deviations lower than 10.88%.
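The quoted IC50 and LOD follow directly from the reported regression line, assuming the usual convention that y is the normalized response in percent and x is log10 of the concentration in ng/mL:

def conc_at_response(y, slope=-16.27, intercept=56.97):
    """Concentration (ng/mL) at which the fitted response equals y percent."""
    return 10 ** ((y - intercept) / slope)

print("IC50 = %.2f ng/mL" % conc_at_response(50.0))        # ~2.68
print("IC10 (LOD) = %.3f ng/mL" % conc_at_response(90.0))  # ~0.009, i.e. ~0.01

Both values reproduce the reported 2.69 ng/mL and 0.01 ng/mL to within rounding, which supports the log10 reading of x.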
Our results suggested that the generated anti-Gent IgY antibodies can be used in routine screening analysis of Gent residues in food samples. PMID:26513166 12. Ion chromatography with the indirect ultraviolet detection of alkali metal ions and ammonium using imidazolium ionic liquid as ultraviolet absorption reagent and eluent. PubMed Liu, Yong-Qiang; Yu, Hong 2016-08-01 Indirect ultraviolet detection, performed with an ultraviolet absorption reagent added to the mobile phase, enables the detection of analytes that lack ultraviolet-absorbing functional groups. Compared with precolumn or postcolumn derivatization, this method is widely applicable and has the advantages of simple operation and good linearity. Chromatographic separation of Li(+), Na(+), K(+), and NH4(+) was performed on a carboxylic acid-based cation exchange column using imidazolium ionic liquid/acid/organic solvent as the mobile phase, in which imidazolium ionic liquids acted as ultraviolet absorption reagent and eluting agent. The retention behaviors of the four cations are discussed, and the mechanisms of separation and detection are described. The main factors influencing the separation and detection were the background ultraviolet absorption reagent and the concentration of hydrogen ion in ion chromatography with indirect ultraviolet detection. The successful separation and detection of Li(+), Na(+), K(+), and NH4(+) within 13 min was achieved using the selected chromatographic conditions, and the detection limits (S/N = 3) were 0.02, 0.11, 0.30, and 0.06 mg/L, respectively. A new method for the separation and analysis of alkali metal ions and ammonium by ion chromatography with indirect ultraviolet detection was thus developed, and the application range of ionic liquids was expanded. PMID:27377245 13. Near Real Time Ship Detection Experiments NASA Astrophysics Data System (ADS) Brusch, S.; Lehner, S.; Schwarz, E.; Fritz, T. 2010-04-01 A new Near Real Time (NRT) ship detection processor SAINT (SAR AIS Integrated Toolbox) was developed in the framework of the ESA project MARISS. Data are received at DLR's ground segment DLR-BN (Neustrelitz, Germany). Results of the ship detection are available on an ftp server within 30 min of the start of the acquisition. The detectability of ships on Synthetic Aperture Radar (SAR) ERS-2, ENVISAT ASAR and TerraSAR-X (TS-X) images is validated by coastal (live) AIS and space AIS. The monitoring areas chosen for surveillance are the North Sea, the Baltic Sea, and Cape Town. The detectability with respect to environmental parameters like wind field, sea state, currents and changing coastlines due to tidal effects is investigated. In the South Atlantic a tracking experiment of the German research vessel Polarstern has been performed. Issues of piracy, in particular with respect to ships hijacked off the Somali coast, are discussed. Some examples using high resolution images from TerraSAR-X are given.
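The AIS validation step in the ship-detection record above can be sketched as follows (hypothetical and heavily simplified: flat-earth dead reckoning of each AIS report to the SAR acquisition time, then a nearest-neighbour match within a radius):

import math

def dead_reckon(lat, lon, sog_kn, cog_deg, dt_s):
    """Project an AIS position forward by dt_s seconds (flat-earth approx.)."""
    d_m = sog_kn * 0.514444 * dt_s                 # knots -> metres travelled
    dlat = d_m * math.cos(math.radians(cog_deg)) / 111320.0
    dlon = d_m * math.sin(math.radians(cog_deg)) / (111320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

def match_detections(detections, ais, t_image, radius_m=500.0):
    """Pair each SAR detection with the nearest dead-reckoned AIS ship."""
    pairs = []
    for d_lat, d_lon in detections:
        best = None
        for ship, (lat, lon, sog, cog, t0) in ais.items():
            p_lat, p_lon = dead_reckon(lat, lon, sog, cog, t_image - t0)
            dist = math.hypot((p_lat - d_lat) * 111320.0,
                              (p_lon - d_lon) * 111320.0 * math.cos(math.radians(d_lat)))
            if dist <= radius_m and (best is None or dist < best[1]):
                best = (ship, dist)
        pairs.append(((d_lat, d_lon), best))       # best is None -> unmatched
    return pairs

ais = {"POLARSTERN": (54.10, 7.50, 10.0, 45.0, 0.0)}   # lat, lon, kn, deg, t (s)
print(match_detections([(54.1026, 7.5043)], ais, t_image=60.0))

Detections left unmatched by any AIS track are the interesting ones for surveillance, since cooperative ships broadcast AIS while non-cooperative targets do not.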
14. Broadband excitation and indirect detection of nitrogen-14 in rotating solids using Delays Alternating with Nutation (DANTE) NASA Astrophysics Data System (ADS) Vitzthum, Veronika; Caporini, Marc A.; Ulzega, Simone; Bodenhausen, Geoffrey 2011-09-01 A train of short rotor-synchronized pulses in the manner of Delays Alternating with Nutations for Tailored Excitation (DANTE) applied to nitrogen-14 nuclei (I = 1) in samples spinning at the magic angle at high frequencies (typically νrot = 62.5 kHz so that τrot = 16 μs) allows one to achieve uniform excitation of a great number of spinning sidebands that arise from large first-order quadrupole interactions, as occur for aromatic nitrogen-14 nuclei in histidine. With routine rf amplitudes ω1(14N)/(2π) = 60 kHz and very short pulses of a typical duration 0.5 < τp < 2 μs, efficient excitation can be achieved with 13 rotor-synchronized pulses in 13 τrot = 208 μs. Alternatively, with 'overtone' DANTE sequences using 2, 4, or 8 pulses per rotor period one can achieve efficient broadband excitation in fewer rotor periods, typically 2-4 τrot. These principles can be combined with the indirect detection of 14N nuclei via spy nuclei with S = ½ such as 1H or 13C in the manner of Dipolar Heteronuclear Multiple-Quantum Correlation (D-HMQC). 15. Quantification of anions and cations in environmental water samples. Measurements with capillary electrophoresis and indirect-UV detection. PubMed Hiissa, T; Sirén, H; Kotiaho, T; Snellman, M; Hautojärvi, A 1999-08-20 The aim of this study was to validate two separation methods for determination of inorganic anions and cations from natural waters with capillary electrophoresis (CE) by using indirect-UV detection. The research is related to method development for screening of groundwater samples obtained in site investigations for spent fuel of the Finnish nuclear industry. In CE analysis, anions were separated in pyromellitic acid (pH 7.7) in the order bromide, chloride, sulphate, nitrite, nitrate, fluoride and dihydrogenphosphate. Cations were separated at pH 3.6 after the anions using an 18-crown-6-ether solution. In these analyses, ammonium migrated first, followed by potassium, calcium, sodium and magnesium. The concentrations of the ions in the natural water samples were calculated by using two or three calibration curves made using reference solutions at concentration levels of 0.5-250 mg/l. The repeatabilities of the anion and cation methods were tested using laboratory-made reference sample mixtures with high and low salt concentrations. The limits of quantification in the analyses were between 0.02 and 0.1 mg/l, depending on the ion. Concentrations of ions tested in natural waters varied from a few milligrams to tens of grams per litre. PMID:10486747 16. A recombinant nucleocapsid protein-based indirect enzyme-linked immunosorbent assay to detect antibodies against porcine deltacoronavirus PubMed Central SU, Mingjun; LI, Chunqiu; GUO, Donghua; WEI, Shan; WANG, Xinyu; GENG, Yufei; YAO, Shuang; GAO, Jing; WANG, Enyu; ZHAO, Xiwen; WANG, Zhihui; WANG, Jianfa; WU, Rui; FENG, Li; SUN, Dongbo 2015-01-01 Recently, porcine deltacoronavirus (PDCoV) has been proven to be associated with enteric disease in piglets. Diagnostic tools for serological surveys of PDCoV remain in the developmental stage when compared with those for other porcine coronaviruses.
In our study, an indirect enzyme-linked immunosorbent assay (ELISA) (rPDCoV-N-ELISA) was developed to detect antibodies against PDCoV using a histidine-tagged recombinant nucleocapsid (N) protein as an antigen. The rPDCoV-N-ELISA did not cross-react with antisera against porcine epidemic diarrhea virus, swine transmissible gastroenteritis virus, porcine group A rotavirus, classical swine fever virus, porcine circovirus-2, porcine pseudorabies virus, and porcine reproductive and respiratory syndrome virus; the receiver operating characteristic (ROC) curve analysis revealed 100% sensitivity and 90.4% specificity of the rPDCoV-N-ELISA based on samples of known status (n=62). Analyses of field samples (n=319) using the rPDCoV-N-ELISA indicated that 11.59% of samples were positive for antibodies against PDCoV. These data demonstrated that the rPDCoV-N-ELISA can be used for epidemiological investigations of PDCoV and that PDCoV had a low serum prevalence in the pig population in Heilongjiang province, northeast China. PMID:26668175 18. Characterization of an Indirect-Detection Amorphous Silicon Detector for Dosimetric Measurement of Intensity Modulated Photon Fields NASA Astrophysics Data System (ADS) Bailey, Daniel Wayne Indirect-detection amorphous silicon electronic imagers show much promise for measurement of radiation dose, particularly for pre-treatment verification of patient-specific intensity modulated radiotherapy plans. These instruments, commonly known as Electronic Portal Imaging Devices (EPIDs), have high data density, large detecting area, convenient electronic read-out, excellent positional reproducibility, and are quickly becoming standard equipment on today's medical megavoltage linear accelerators.
However, because these devices were originally intended to be digital radiograph imagers and not dosimeters, the modeling, calibration, and prediction of their response to dose carry a number of challenges. For instance, EPID dose images exhibit off-axis dose errors of up to 18% with increasing distance from the central axis of the imager (as compared to dose predictions calculated by a commercially available treatment planning system). Furthermore, these off-axis errors are asymmetric, with higher errors in the in-plane direction than in the cross-plane direction. In this work, methods are proposed to account for EPID off-axis effects by precisely calculating off-axis output factors from experimental measurements, increasing the accuracy of EPID absolute dose measurement. Using these methods, dose readings acquired over the entire surface of the detector agree to within 2% with the respective EPID dose predictions. Similarly, the percentage of measured dose points that agree with respective calculated dose points (using 3%, 3 mm criteria) improves by as much as 60% for off-axis intensity modulated photon fields. Furthermore, a number of clinical applications of EPID dosimetry are investigated, including pixel response constancy, the effect of data density on a common metric for quantitatively comparing measured vs. calculated dose, and the implementation of an electronic portal dosimetry program for radiotherapy quality assurance. 19. Study on Indirect Measuring Technology of EAF Steelmaking Decarburization Rate by Off-gas Analysis Technique in Hot State Experiment NASA Astrophysics Data System (ADS) Dong, Kai; Liu, Wenjuan; Zhu, Rong 2015-10-01 In this paper, a method for measuring the EAF steelmaking decarburization rate is studied. Because fuel gas is blown in and air is entrained, the composition of the high-temperature off-gas cannot be measured reliably and its flow rate is unknown, so direct measurement of the EAF decarburization rate by furnace gas analysis is impractical. Firstly, the off-gas generation process is discussed. After that, the dynamic concentrations of CO2, CO, and O2 in the off-gas and the EAF oxygen supply rate are monitored in real time. Finally, the concentration and volume flow rate of the off-gas are obtained, allowing the EAF decarburization rate to be measured indirectly. The results of the hot state experiments show that the decarburization rate in the oxidization step can reach about 0.53 mol/s, and the predicted carbon concentration of 1.14% corresponds to the average carbon concentration (1.43%) in the final metal samples. Measurement of the decarburization rate by the off-gas analysis technique is therefore reasonable for the EAF production process. 20. The Earth's velocity for direct detection experiments NASA Astrophysics Data System (ADS) McCabe, Christopher 2014-02-01 The Earth's velocity relative to the Sun in galactic coordinates is required in the rate calculation for direct detection experiments. We provide a rigorous derivation of this quantity to first order in the eccentricity of the Earth's orbit. We also discuss the effect of the precession of the equinoxes, which has hitherto received little explicit discussion. Comparing with other expressions in the literature, we confirm that the expression of Lee, Lisanti and Safdi is correct, while the expression of Lewin and Smith, the de facto standard expression, contains an error.
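A minimal numerical sketch of the quantity this entry concerns, kept at leading order: the Earth's orbit is treated as circular, with no eccentricity correction and no equinox precession (precisely the refinements the paper addresses), and the annual term is the usual projection of the orbital velocity onto the direction of solar motion. All numerical inputs are standard halo-model textbook values assumed for illustration, not taken from the paper:

```python
import numpy as np

# Standard halo-model inputs (textbook values, assumed here, not from the paper):
V_LSR = np.array([0.0, 220.0, 0.0])    # local standard of rest, km/s (galactic U, V, W)
V_PEC = np.array([11.1, 12.24, 7.25])  # solar peculiar motion, km/s
V_ORB = 29.79                          # mean Earth orbital speed, km/s
B_GEOM = 0.49                          # projection of the orbit onto the solar motion
T_PEAK = 152.5                         # day of year when the lab speed peaks (~June 2)

def lab_speed(day_of_year: float) -> float:
    """Leading-order (circular-orbit) lab speed relative to the halo, km/s."""
    v_sun = np.linalg.norm(V_LSR + V_PEC)  # ~232.6 km/s
    annual = B_GEOM * V_ORB * np.cos(2.0 * np.pi * (day_of_year - T_PEAK) / 365.25)
    return v_sun + annual

for day in (152.5, 335.0):  # near the June maximum and the December minimum
    print(f"day {day:5.1f}: v_lab ~ {lab_speed(day):.1f} km/s")
```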
For calculations of the absolute event rate, the leading-order expression is sufficient, while for modulation searches an expression including the eccentricity is required for accurate predictions of the modulation phase. 1. Characterization and quantitation of mixtures of alkyl ether sulfates and carboxylic acids by capillary electrophoresis with indirect photometric detection. PubMed Bernabé-Zafón, Virginia; Ortega-Gadea, Silvia; Simó-Alfonso, Ernesto F; Ramis-Ramos, Guillermo 2003-08-01 The separation, characterization, and determination of mixtures of alkyl ether sulfates (AES) and fatty acids (C10-C16) in background electrolytes (BGEs) containing acetonitrile (ACN)-water mixtures is addressed. Due to inhibition of the ionization of the carboxylate groups, the migration time and the resolution between the fatty acids decreased when the water content of the BGE was reduced, but the efficiency and the resolution between the AES oligomers improved. The migration times increased and the resolution improved when 5% ACN was substituted by an equivalent amount of dioxane. A complete separation of the two surfactant classes, up to the AES oligomers with 8 ethylene oxide units (EOs) with respect to C10, with excellent resolution between the AES oligomers, while preserving a satisfactory resolution between the fatty acids, was achieved with a BGE containing 5 mM trimethoxybenzoic acid, 7 mM dipentylamine, 85% ACN, 5% dioxane, and 10% water. The two surfactant classes were increasingly resolved by further reducing the water content of the BGE. Thus, C2 (acetate) was resolved from the AES oligomers up to 7 EOs using 90% ACN and 5% dioxane, but the resolution between the heavier fatty acids was poor with this BGE. Identification of the AES oligomers was eased by the excellent regularity of the successive migration times; thus, within each AES subclass, or series of oligomers with the same number of carbon atoms in the alkyl chain, the migration times decreased along a mild curve as the number of EOs increased. How the data obtained by indirect photometry (corrected peak areas, which are proportional to the molar concentrations) should be handled to avoid systematic error when the calibration curve is constructed using an AES standard with an oligomer distribution different from that of the samples is discussed, and equations are given. Decyl sulfate was successfully used as internal standard. The detection limits (S/N = 3) were ca. 2 microM for individual AES 2. The RAP experiment: Acoustic Detection of Particles NASA Astrophysics Data System (ADS) Bassan, M.; Buonomo, B.; Cavallari, G.; Coccia, E.; D'Antonio, S.; Delle Monache, G.; Di Gioacchino, D.; Fafone, V.; Ligi, C.; Marini, A.; Mazzitelli, G.; Modestino, G.; Pizzella, G.; Quintieri, L.; Roccella, S.; Rocchi, A.; Ronga, F.; Tripodi, P.; Valente, P. 2007-10-01 The RAP experiment is based on the acoustic detection of high-energy particles by cylindrical bars. Interacting particles heat the material around their track, causing a local thermal expansion that, being constrained by the surrounding material, produces a local pressure impulse. Consequently the bar starts to vibrate, and the amplitude of the oscillation is proportional to the energy released. The RAP experiment aims to investigate the mechanical excitation of cylindrical bars caused by impinging particles as a function of the conducting state of the material of which the detector is made.
In particular, physical phenomena related to the superconducting state could enhance the conversion efficiency of the particle energy into mechanical vibrations. Two materials have been tested: an aluminum alloy (Al5056) and niobium. Here we report the measurements obtained for a niobium bar from room temperature down to 4 K, below the transition temperature, and those obtained for an Al5056 bar above the transition (from 4 to 293 K). 3. Indirect detection of superoxide in RAW 264.7 macrophage cells using microchip electrophoresis coupled to laser-induced fluorescence. PubMed de Campos, Richard P S; Siegel, Joseph M; Fresta, Claudia G; Caruso, Giuseppe; da Silva, José A F; Lunte, Susan M 2015-09-01 Superoxide, a naturally produced reactive oxygen species (ROS) in the human body, is involved in many pathological and physiological signaling processes. However, if superoxide formation is left unregulated, overproduction can lead to oxidative damage to important biomolecules, such as DNA, lipids, and proteins. Superoxide can also lead to the formation of peroxynitrite, an extremely hazardous substance, through its reaction with endogenously produced nitric oxide. Despite its importance, quantitative information regarding superoxide production is difficult to obtain due to its high reactivity and low concentrations in vivo. MitoHE, a fluorescent probe that specifically reacts with superoxide, was used in conjunction with microchip electrophoresis (ME) and laser-induced fluorescence (LIF) detection to investigate changes in superoxide production by RAW 264.7 macrophage cells following stimulation with phorbol 12-myristate 13-acetate (PMA). Stimulation was performed in the presence and absence of the superoxide dismutase (SOD) inhibitors diethyldithiocarbamate (DDC) and 2-methoxyestradiol (2-ME). The addition of these inhibitors resulted in an increase in the amount of the superoxide-specific product (2-OH-MitoE(+)) from 0.08 ± 0.01 fmol (0.17 ± 0.03 mM) in native cells to 1.26 ± 0.06 fmol (2.5 ± 0.1 mM) after PMA treatment. This corresponds to an approximately 15-fold increase in intracellular concentration per cell. Furthermore, the addition of 3-morpholino-sydnonimine (SIN-1) to the cells during incubation resulted in the production of 0.061 ± 0.006 fmol (0.12 ± 0.01 mM) of 2-OH-MitoE(+) per cell on average. These results demonstrate that indirect superoxide detection coupled with the use of SOD inhibitors and a separation method is a viable way to discriminate the 2-OH-MitoE(+) signal from possible interferences. PMID:26159570 4. NATO TG-53: acoustic detection of weapon firing joint field experiment NASA Astrophysics Data System (ADS) Robertson, Dale N.; Pham, Tien; Scanlon, Michael V.; Srour, Nassy; Reiff, Christian G.; Sim, Leng K.; Solomon, Latasha; Thompson, Dorothea F. 2006-05-01 In this paper, we discuss the NATO Task Group 53 (TG-53) joint field experiment on the acoustic detection of weapon firing, held at Yuma Proving Ground from 31 October to 4 November 2005. The participating NATO countries included France, the Netherlands, the UK, and the US. The objectives of the joint experiments were: (i) to collect acoustic signatures of direct and indirect firings from weapons such as sniper rifles, mortars, artillery, and C4 explosives, and (ii) to share signatures among NATO partners from a variety of acoustic sensing platforms on the ground and in the air distributed over a wide area. 5. Strangeness detection in ALICE experiment at LHC SciTech Connect Safarik, K.
1995-07-15 The authors present some parameters of the ALICE detector that concern the detection of strange particles. The results of a simulation for neutral strange particles and cascades, together with estimated rates, are presented. They also briefly discuss the detection of charged K-mesons. Finally, they mention the possibility of open charm particle detection. 6. Dark matter indirect detection signals and the nature of neutrinos in the supersymmetric U(1)B-L extension of the standard model NASA Astrophysics Data System (ADS) Allahverdi, Rouzbeh; Campbell, Sheldon S.; Dutta, Bhaskar; Gao, Yu 2014-10-01 In this paper, we study the prospects for determining the nature of neutrinos in the context of a supersymmetric B-L extension of the standard model by using dark matter indirect detection signals and bounds on Neff from the cosmic microwave background data. The model contains two new dark matter candidates whose dominant annihilation channels produce more neutrinos than neutralino dark matter in the minimal supersymmetric standard model. The photon and neutrino counts may then be used to discriminate between the two models. If the dark matter comes from the B-L sector, its indirect signals and impact on the cosmic microwave background can shed light on the nature of the neutrinos. When the light neutrinos are of Majorana type, the indirect neutrino signal from the Sun and the Galactic center may show a prompt neutrino box feature, as well as an earlier cutoff in both neutrino and gamma-ray energy spectra. When the light neutrinos are of Dirac type, their contribution to the effective number of neutrinos Neff is at a detectable level. 7. Prospects for detecting dark matter with neutrino telescopes in light of recent results from direct detection experiments SciTech Connect Halzen, Francis; Hooper, Dan; /Fermilab 2005-10-01 Direct detection dark matter experiments, led by the CDMS collaboration, have placed increasingly strong constraints on the cross sections for elastic scattering of WIMPs on nucleons. These results impact the prospects for the indirect detection of dark matter using neutrino telescopes. With this in mind, we revisit the prospects for detecting neutrinos produced by the annihilation of WIMPs in the Sun. We find that the latest bounds do not seriously limit the models most accessible to next-generation kilometer-scale neutrino telescopes such as IceCube. This is largely due to the fact that models with significant spin-dependent couplings to protons are the least constrained and, at the same time, the most promising because of the efficient capture of WIMPs in the Sun. We identify models where dark matter particles are beyond the reach of any planned direct detection experiments while within reach of neutrino telescopes. In summary, we find that, even in light of recent direct detection results, neutrino telescopes still have the opportunity to play an important and complementary role in the search for particle dark matter. 8. Indirect Imaging NASA Astrophysics Data System (ADS) Kundu, Mukul R. This book is the Proceedings of an International Symposium held in Sydney, Australia, August 30-September 2, 1983. The meeting was sponsored by the International Union of Radio Science and the International Astronomical Union. Indirect imaging is based upon the principle of determining the actual form of the brightness distribution in a complex case by Fourier synthesis, using information derived from a large number of Fourier components.
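The Fourier-synthesis principle stated in the Indirect Imaging entry can be illustrated in a few lines: a brightness distribution is rebuilt from a limited set of its Fourier components, as an interferometer does from measured visibilities. The one-dimensional scene and the sampled frequency range below are arbitrary illustrative choices, not anything from the proceedings:

```python
import numpy as np

# Schematic 1-D Fourier synthesis: a "brightness distribution" is rebuilt from
# a limited set of its Fourier components.
n = 256
x = np.arange(n)
truth = np.exp(-0.5 * ((x - 90) / 6.0) ** 2) + 0.5 * np.exp(-0.5 * ((x - 170) / 10.0) ** 2)

spectrum = np.fft.fft(truth)          # all Fourier components of the true scene
measured = np.zeros_like(spectrum)
sampled = np.arange(-40, 41)          # pretend only |k| <= 40 is observed
measured[sampled] = spectrum[sampled]

dirty = np.fft.ifft(measured).real    # direct synthesis from the sampled components
print(f"truth peak {truth.max():.3f} -> synthesized peak {dirty.max():.3f}")
```

Sampling only the low spatial frequencies returns a smoothed 'dirty' reconstruction; deconvolution schemes of the kind discussed at such symposia (CLEAN, maximum entropy, and relatives) attempt to restore what the unsampled components would have carried.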
The main topic of the symposium was how to get the best images from data obtained from telescopes and other similar imaging instruments. Although the meeting was dominated by radio astronomers, so that discussion centered on indirect imaging in the radio domain, there were quite a few participants from other disciplines, and there were some excellent discussions of optical imaging and medical imaging. 9. Simple Indirect Enzyme-Linked Immunosorbent Assay to Detect Antibodies Against Bovine Viral Diarrhea Virus, Based on Prokaryotically Expressed Recombinant MBP-NS3 Protein PubMed Central Mahmoodi, Pezhman; Seyfi Abad Shapouri, Masoud Reza; Ghorbanpour, Masoud; Haji Hajikolaei, Mohammad Rahim; Lotfi, Mohsen; Pourmahdi Boroujeni, Mahdi; Daghari, Maryam 2015-01-01 Background: Bovine viral diarrhea (BVD) is an economically important disease of cattle distributed worldwide. Diagnosis of BVD relies on laboratory-based detection of its causative virus or of virus-specific antibodies, and the most common laboratory method for this purpose is the enzyme-linked immunosorbent assay (ELISA). Objectives: The current study aimed to develop a simple indirect ELISA to detect antibodies against bovine viral diarrhea virus (BVDV) in the sera of infected cattle. Materials and Methods: A new simple indirect ELISA method was developed to detect BVDV infection using prokaryotically (Escherichia coli, strain BL21) expressed recombinant whole nonstructural protein 3 (NS3) of BVDV (NADL strain). Four hundred bovine serum samples were evaluated by the newly developed NS3-ELISA and by the virus neutralization test (VNT) as the gold standard method for diagnosing BVD. Among these samples, 289 sera had been previously tested by a commercial ELISA kit. Results: Statistical analyses showed a very high correlation between the results of the developed NS3-ELISA and VNT (kappa coefficient = 0.935, P < 0.001), with a relative sensitivity and specificity of 94% and 98.8%, respectively. There was also a high correlation between the results of the NS3-ELISA and the commercial ELISA kit (kappa coefficient = 0.802, P < 0.001), with a relative sensitivity and specificity of 90.72% and 91.15%, respectively. Conclusions: The newly developed simple indirect ELISA showed high sensitivity and specificity with respect to VNT. Developing such a simple, sensitive, and specific ELISA, which is much less expensive than the available commercial ELISA kits, can improve the detection of BVDV infections, help to eliminate the disease from herds, and decrease the economic losses caused by this disease. PMID:25964844 10. Comparison between intraoral indirect and conventional film-based imaging for the detection of dental root fractures: an ex vivo study. PubMed Shintaku, Werner H; Venturin, Jaqueline S; Noujeim, Marcel; Dove, Stephen B 2013-12-01 Digital intraoral radiographic systems have been rapidly replacing conventional dental X-ray films for the diagnosis of dental diseases. Current scientific literature supports the use of these digital systems for the detection of dental caries, periodontal bone loss, and periapical pathologies. However, relatively few studies have been published addressing the detection of dental root fractures. The purpose of this study was to compare the intraoral F-speed film (Insight) with two photostimulable phosphor (PSP) indirect digital systems (ScanX and Digora Optime) for the detection of simulated dental root fractures.
Ten raters evaluated images acquired from 10 dry human cadaver mandibles under optimal viewing conditions. These data were analyzed by a 5-point receiver operating characteristic curve analysis for statistical differences. The sensitivity and specificity of these systems were also assessed. Since no statistically significant difference between the systems was observed, the results of this study support indirect digital PSP plates as an alternative to the evaluated conventional film for the detection of dental root fractures. PMID:23566073 11. Analysis of γ-hydroxy butyrate by combining capillary electrophoresis-indirect detection and wall dynamic coating: application to dried matrices. PubMed Saracino, Maria A; Catapano, Maria C; Iezzi, Rosa; Somaini, Lorenzo; Gerra, Gilberto; Mercolini, Laura 2015-11-01 γ-Hydroxybutyric acid (GHB) is a powerful central nervous system depressant, currently used in medicine for the treatment of narcolepsy and alcohol dependence. In recent years, it has gained popularity as an illegal club drug, mainly because of its euphoric effects, and it has also been used as a doping agent and date-rape drug. The purpose of the present work was the development of a rapid analytical method for the analysis of GHB in innovative biological matrices, namely dried blood spots (DBSs) and dried urine spots (DUSs). The analytical method is based on capillary zone electrophoresis with indirect UV absorption detection at 210 nm and capillary wall dynamic coating. The background electrolyte is composed of a phosphate buffer containing nicotinic acid (probe for indirect detection) and cetyltrimethylammonium bromide (CTAB, for reversal of electroosmosis in wall dynamic coating). The influence of probe and CTAB concentrations, together with buffer pH, on migration time and signal response was investigated. Under the optimized conditions, analytical linearity and precision were satisfactory; absolute recovery values were also high (>90%); and the dried matrices (DBSs and DUSs) proved an advantageous alternative to classical ones. No interferences were found from either the most common exogenous compounds or endogenous ones. This analytical approach can offer a rapid, precise and accurate method for GHB determination in innovative biological samples, which could be important for screening purposes in clinical and forensic toxicology. Graphical Abstract: CE method, combining indirect UV detection and dynamic coating, for GHB determination in DBSs and DUSs. PMID:26427507 12. First, Get Your Feet Wet: The Effects of Learning from Direct and Indirect Experience on Team Creativity ERIC Educational Resources Information Center Gino, Francesca; Argote, Linda; Miron-Spektor, Ella; Todorova, Gergana 2010-01-01 How does prior experience influence team creativity? We address this question by examining the effects of task experience acquired directly and task experience acquired vicariously from others on team creativity in a product-development task. Across three laboratory studies, we find that direct task experience leads to higher levels of team… 13. The PVLAS experiment: detecting vacuum magnetic birefringence NASA Astrophysics Data System (ADS) Zavattini, G.; Della Valle, F.; Gastaldi, U.; Messineo, G.; Milotti, E.; Pengo, R.; Piemontese, L.; Ruoso, G. 2013-06-01 The PVLAS collaboration is presently assembling a new apparatus to detect vacuum magnetic birefringence. This property is related to the structure of the QED vacuum and is predicted by the Euler-Heisenberg-Weisskopf effective Lagrangian.
It can be detected by measuring the ellipticity acquired by a linearly polarised light beam propagating through a strong magnetic field. Here we report results of a scaled-down test setup and briefly describe the new PVLAS apparatus. The latter is under construction and is based on a high-sensitivity ellipsometer with a high-finesse Fabry-Perot cavity (> 4×10^5) and two 0.8 m long, 2.5 T rotating permanent dipole magnets. Measurements with the test setup have improved by a factor of 2 the previous upper bound on the parameter Ae, which determines the strength of the nonlinear terms in the QED Lagrangian: Ae(PVLAS) < 3.3 × 10^-21 T^-2 at 95% c.l. 14. Evidence for a Bubble-Competition Regime in Indirectly Driven Ablative Rayleigh-Taylor Instability Experiments on the NIF NASA Astrophysics Data System (ADS) Martinez, D. A.; Smalyuk, V. A.; Kane, J. O.; Casner, A.; Liberatore, S.; Masse, L. P. 2015-05-01 We investigate on the National Ignition Facility the ablative Rayleigh-Taylor instability in the transition from weakly nonlinear to highly nonlinear regimes. A planar plastic package with preimposed two-dimensional broadband modulations is accelerated for up to 12 ns by the x-ray drive of a gas-filled Au radiation cavity with a radiative temperature plateau at 175 eV. This extended tailored drive allows a distance traveled in excess of 1 mm for a 130 μm thick foil. Measurements of the modulation optical density performed by x-ray radiography show that a bubble-merger regime for the Rayleigh-Taylor instability at an ablation front is achieved for the first time in indirect drive. The multimode modulation amplitudes are in the nonlinear regime, grow beyond the Haan multimode saturation level, evolve toward longer wavelengths, and show insensitivity to the initial conditions. 15. Indirect decentralized learning control NASA Technical Reports Server (NTRS) Longman, Richard W.; Lee, Soo C.; Phan, M. 1992-01-01 The new field of learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented a theory of indirect learning control based on the use of indirect adaptive control concepts employing simultaneous identification and control. This paper develops improved indirect learning control algorithms and studies the use of such controllers in decentralized systems. The original motivation of the learning control field was learning in robots doing repetitive tasks such as on an assembly line. This paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory, and using the usual robot controllers, which are decentralized, treating each link as if it is independent of any coupling with the other links. The basic result of the paper is to show that stability of the indirect learning controllers for all subsystems, when the coupling between subsystems is turned off, assures convergence to zero tracking error of the decentralized indirect learning control of the coupled system, provided that the sample time in the digital learning controller is sufficiently short. 16. Census cities experiment in urban change detection NASA Technical Reports Server (NTRS) Wray, J. R. (Principal Investigator) 1973-01-01 The author has identified the following significant results.
Work continues on the mapping of 1970 urban land use from aircraft photography contemporaneous with the 1970 census. In addition, change detection analysis from 1972 aircraft photography is underway for several urban test sites. Land use maps, mosaics, and census overlays for the two largest urban test sites are nearing publication readiness. Preliminary examinations of ERTS-1 imagery of San Francisco Bay have been conducted, which show that tracts of land of more than 10 acres in size that are undergoing development in an urban setting can be identified. In addition, each spectral band is being evaluated as to its utility for urban analyses. It has been found that MSS infrared band 7 helps to differentiate intra-urban land use details not found in other MSS bands or in the RBV coverage of the same scene. Good-quality false CIR composites have been generated from 9 x 9 inch positive MSS bands using the Diazo process. 17. Probing the deep nonlinear stage of the ablative Rayleigh-Taylor instability in indirect drive experiments on the National Ignition Facility SciTech Connect Casner, A.; Masse, L.; Liberatore, S.; Loiseau, P.; Masson-Laborde, P. E.; Jacquet, L.; Martinez, D.; Moore, A. S.; Seugling, R.; Felker, S.; Haan, S. W.; Remington, B. A.; Smalyuk, V. A.; Farrell, M.; Giraldez, E.; Nikroo, A. 2015-05-15 Academic tests in physical regimes not encountered in Inertial Confinement Fusion will help to build a better understanding of hydrodynamic instabilities and constitute the scientifically grounded validation complementary to fully integrated experiments. Under the National Ignition Facility (NIF) Discovery Science program, recent indirect drive experiments have been carried out to study the ablative Rayleigh-Taylor Instability (RTI) in the transition from the weakly nonlinear to the highly nonlinear regime [A. Casner et al., Phys. Plasmas 19, 082708 (2012)]. In these experiments, a modulated package is accelerated by a 175 eV radiative temperature plateau created by a room-temperature gas-filled platform irradiated by 60 NIF laser beams. The unique capabilities of the NIF are harnessed to accelerate this planar sample over much larger distances (≃1.4 mm) and longer time periods (≃12 ns) than previously achieved. This extended acceleration could eventually allow entering into a turbulent-like regime not precluded by the theory for the RTI at the ablation front. Simultaneous measurements of the foil trajectory and the subsequent RTI growth are performed and compared with radiative hydrodynamics simulations. We present RTI growth measurements for two-dimensional single-mode and broadband multimode modulations. The dependence of RTI growth on initial conditions and ablative stabilization is emphasized, and we demonstrate for the first time in indirect drive a bubble-competition, bubble-merger regime for the RTI at the ablation front.
19. Complementary test of the dark matter self-interaction in dark U(1) model by direct and indirect dark matter detection NASA Astrophysics Data System (ADS) Chen, Chian-Shu; Lin, Guey-Lin; Lin, Yen-Hsun 2016-01-01 The halo dark matter (DM) can be captured by the Sun if its final velocity after the collision with a nucleus in the Sun is less than the escape velocity. We consider a self-interacting dark matter (SIDM) model in which a U(1) gauge symmetry is introduced to account for the DM self-interaction. Such a model naturally leads to an isospin-violating DM-nucleon interaction, although an isospin-symmetric interaction is still allowed as a special case. We present the IceCube-PINGU 2σ sensitivity to the parameter range of the above model with 5 years of searching for the neutrino signature from DM annihilation in the Sun. This indirect detection complements direct detection by probing those SIDM parameter ranges that are either in the region of very small mχ or in the region opened up by isospin violation. 20. Development and application of an indirect ELISA test for the detection of antibodies to Mycoplasma crocodyli infection in crocodiles (Crocodylus niloticus). PubMed Dawo, Fufa; Mohan, Krishna 2007-01-31 The non-availability of a standardized rapid serodiagnostic test for the quick and accurate diagnosis of Mycoplasma crocodyli (M. crocodyli) infection in crocodiles was the underlying reason for conducting the present study. An indirect enzyme-linked immunosorbent assay (iELISA) for the detection of antibodies (Ab) to M. crocodyli infection in crocodile sera was developed using sonicated antigen (Ag) and an anti-crocodile conjugate. The iELISA test was optimised with different reagents and at different steps. A cut-off value of percent positive greater than or equal to 53.47% resulted in an estimated sensitivity and specificity of 85.67 and 100%, respectively. The developed iELISA could be used for the detection of Abs to M. crocodyli infection in crocodiles and may help in understanding the transmission of the disease. PMID:17014973 1. EXTRAGALACTIC DARK MATTER AND DIRECT DETECTION EXPERIMENTS SciTech Connect Baushev, A. N.
2013-07-10 Recent astronomical data strongly suggest that a significant part of the dark matter content of the Local Group and Virgo Supercluster is not incorporated into the galaxy halos and forms diffuse components of these galaxy clusters. A portion of the particles from these components may penetrate the Milky Way and make an extragalactic contribution to the total dark matter content of our Galaxy. We find that the particles of the diffuse component of the Local Group are apt to contribute ~12% to the total dark matter density near Earth. The particles of the extragalactic dark matter stand out because of their high speed (~600 km/s), i.e., they are much faster than the galactic dark matter. In addition, their speed distribution is very narrow (~20 km/s). The particles have an isotropic velocity distribution (perhaps in contrast to the galactic dark matter). The extragalactic dark matter should provide a significant contribution to the direct detection signal. If the detector is sensitive only to the fast particles (v > 450 km/s), then the signal may even dominate. The density of other possible types of extragalactic dark matter (for instance, of the diffuse component of the Virgo Supercluster) should be relatively small and comparable with the average dark matter density of the universe. However, these particles can generate anomalously high-energy collisions in direct dark matter detectors. 2. Indirect inversions NASA Astrophysics Data System (ADS) Sergienko, Olga 2013-04-01 Since Doug MacAyeal's pioneering studies of ice-stream basal traction optimizations by control methods, inversions for unknown parameters (e.g., basal traction, accumulation patterns, etc.) have become a hallmark of present-day ice-sheet modeling. The common feature of such inversion exercises is a direct relationship between the optimized parameters and the observations used in the optimization procedure. For instance, in the standard optimization for basal traction by the control method, ice-stream surface velocities constitute the control data. The optimized basal traction parameters appear explicitly in the momentum equations for the ice-stream velocities (compared to the control data). The inversion for basal traction is carried out by minimization of a cost (or objective, misfit) function that includes the momentum equations enforced by Lagrange multipliers. Here, we build upon this idea and demonstrate how to optimize for parameters indirectly related to observed data using a suite of nested constraints (like Russian dolls) with additional sets of Lagrange multipliers in the cost function. This method opens the opportunity to use data from a variety of sources and types (e.g., velocities, radar layers, surface elevation changes, etc.) in the same optimization process (a toy numerical sketch of this constrained-inversion idea appears below). 3. Early time implosion symmetry from two-axis shock-timing measurements on indirect drive NIF experiments SciTech Connect Moody, J. D.; Robey, H. F.; Celliers, P. M.; Munro, D. H.; Barker, D. A.; Baker, K. L.; Döppner, T.; Hash, N. L.; Berzak Hopkins, L.; LaFortune, K.; Landen, O. L.; LePape, S.; MacGowan, B. J.; Ralph, J. E.; Ross, J. S.; Widmayer, C.; Nikroo, A.; Giraldez, E.; Boehly, T. 2014-09-15 An innovative technique has been developed and used to measure the shock propagation speed along two orthogonal axes in an inertial confinement fusion indirect drive implosion target.
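As flagged in the 'Indirect inversions' entry above, here is a toy version of the constrained-inversion idea: a forcing p that is only indirectly tied to the data d through a model state u is recovered by minimizing a misfit subject to the model equations, with the Lagrange multiplier doubling as the adjoint variable that supplies the gradient. The 2x2 'physics' matrix, observation operator, and data are illustrative stand-ins, not anything from the presentation:

```python
import numpy as np

# Toy constrained inversion: recover a forcing p that is only indirectly tied
# to the data d through a model state u satisfying A @ u = p.
A = np.array([[2.0, 0.3], [0.3, 1.5]])  # "physics" operator
H = np.array([[1.0, 0.0]])              # observation operator: only u[0] is observed
d = np.array([1.2])                     # observed data

def cost(p: np.ndarray) -> float:
    u = np.linalg.solve(A, p)           # model constraint enforced exactly
    r = H @ u - d
    return 0.5 * float(r @ r)

def grad(p: np.ndarray) -> np.ndarray:
    u = np.linalg.solve(A, p)
    lam = np.linalg.solve(A.T, H.T @ (H @ u - d))  # Lagrange multiplier (adjoint)
    return lam

p = np.zeros(2)
for _ in range(300):                    # plain gradient descent on the misfit
    p = p - 2.0 * grad(p)
print("recovered forcing:", p, " residual cost:", cost(p))
```

The construction nests naturally: if p were itself generated by deeper parameters, another constraint and another set of multipliers would be appended to the cost function, which is the 'Russian dolls' arrangement the entry describes.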
This development builds on an existing target and diagnostic platform for measuring the shock propagation along a single axis. A 0.4 mm square aluminum mirror is installed in the ablator capsule, which adds a second, orthogonal view of the x-ray-driven shock speeds. The new technique adds the capability for symmetry control along two directions of the shocks launched in the ablator by the laser-generated hohlraum x-ray flux. Laser power adjustments in four different azimuthal cones based on the results of this measurement can reduce time-dependent symmetry swings during the implosion. Analysis of a large data set provides experimental sensitivities of the shock parameters to the overall laser delivery and in some cases shows the effects of laser asymmetries on the pole and equator shock measurements. 4. A Simple Ultrasonic Experiment Using a Phase Shift Detection Technique. ERIC Educational Resources Information Center Yunus, W. Mahmood Mat; Ahmad, Maulana 1996-01-01 Describes a simple ultrasonic experiment that can be used to measure the purity of liquid samples by detecting variations in the velocity of sound. Uses a phase shift detection technique that incorporates the use of logic gates and a piezoelectric transducer. (JRH) 5. Rapid Detection and Identification of Streptococcus Iniae Using a Monoclonal Antibody-Based Indirect Fluorescent Antibody Technique Technology Transfer Automated Retrieval System (TEKTRAN) Streptococcus iniae is among the major pathogens of a large number of fish species cultured in fresh and marine recirculating and net pen production systems. The traditional plate culture technique to detect and identify S. iniae is time-consuming and may be problematic due to phenotypic variations... 6. A review of traditional and contemporary assays for direct and indirect detection of Equid herpesvirus 1 in clinical samples. PubMed Balasuriya, Udeni B R; Crossley, Beate M; Timoney, Peter J 2015-11-01 Equid herpesvirus 1 (EHV-1) is one of the most economically important equine viral pathogens. Its clinical manifestations in horses vary from acute upper respiratory tract disease, abortion, or neonatal death, to a neurological disease termed equine herpesviral myeloencephalopathy, which may lead to paralysis and a fatal outcome.
Veterinarians involved in equine practice must be aware of the advantages and disadvantages of various real-time PCR assays, interpretation of viral genetic marker(s), and latency in order to provide the best standard of care for their equine patients. PMID:26472746 7. Indirect decentralized repetitive control NASA Technical Reports Server (NTRS) Lee, Soo Cheol; Longman, Richard W. 1993-01-01 Learning control refers to controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented a theory of indirect decentralized learning control based on use of indirect adaptive control concepts employing simultaneous identification and control. This paper extends these results to apply to the indirect repetitive control problem in which a periodic (i.e., repetitive) command is given to a control system. Decentralized indirect repetitive control algorithms are presented that have guaranteed convergence to zero tracking error under very general conditions. The original motivation of the repetitive control and learning control fields was learning in robots doing repetitive tasks such as on an assembly line. This paper starts with decentralized discrete time systems, and progresses to the robot application, modeling the robot as a time varying linear system in the neighborhood of the desired trajectory. Decentralized repetitive control is natural for this application because the feedback control for link rotations is normally implemented in a decentralized manner, treating each link as if it is independent of the other links. 8. Modified Indirect Porcine Circovirus (PCV) Type 2-Based and Recombinant Capsid Protein (ORF2)-Based Enzyme-Linked Immunosorbent Assays for Detection of Antibodies to PCV PubMed Central Nawagitgul, Porntippa; Harms, Perry A.; Morozov, Igor; Thacker, Brad J.; Sorden, Steven D.; Lekcharoensuk, Chalermpol; Paul, Prem S. 2002-01-01 Postweaning multisystemic wasting syndrome of swine associated with porcine circovirus (PCV) is a recently reported and economically important disease. Simple and reliable diagnostic methods are needed for detecting antibodies to PCV type 2 (PCV2) for monitoring of PCV infection. Here, we report the development of two modified indirect enzyme-linked immunosorbent assays (ELISAs): a PCV2 ELISA based on cell-culture-propagated PCV2 and an ORF2 ELISA based on recombinant major capsid protein. PCV2 and ORF2 ELISA detected antibodies to PCV2 and the capsid protein, respectively, in sera from pigs experimentally infected with PCV2 as early as 14 and 21 days postinoculation (dpi). The kinetics of the antibody response to PCV2 and the major capsid protein were similar. Repeatability tests revealed that the coefficients of variation of positive sera within and between runs for both assays were less than 30%. To validate the assays, PCV2 and ORF2 ELISAs were performed with 783 serum samples of young and adult pigs collected from different herds in the Midwestern United States and compared with an indirect immunofluorescent assay (IIF). Six out of 60 samples collected from nursery and growing pigs in 1987 were positive by both ELISA and IIF. Compared with IIF, the diagnostic sensitivity, specificity, and accuracy of PCV2 and ORF2 ELISAs were similar (>90%). The tests showed no cross-reactivity with antibodies to porcine parvovirus and porcine reproductive and respiratory syndrome virus. 
There was good agreement between the two ELISAs and between the ELISAs and IIF. The availability of the two ELISAs should accelerate our understanding of the host immune response to PCV2 and facilitate the development of prevention and control strategies by elucidating the ecology of PCV2 within swine populations. PMID:11777826 9. Determination of gamma-hydroxybutyric acid in human urine by capillary electrophoresis with indirect UV detection and confirmation with electrospray ionization ion-trap mass spectrometry. PubMed Baldacci, Andrea; Theurillat, Regula; Caslavska, Jitka; Pardubská, Helena; Brenneisen, Rudolf; Thormann, Wolfgang 2003-03-21 Gamma-hydroxybutyric acid (GHB), a minor metabolite or precursor of gamma-aminobutyric acid (GABA), acts as a neurotransmitter/neuromodulator via binding to GABA receptors and to specific presynaptic GHB receptors. Because of its stimulatory effects, GHB is widely abused. Thus, there is great interest in monitoring GHB in body fluids and tissues. We have developed an assay for urinary GHB that is based upon liquid-liquid extraction and capillary zone electrophoresis (CZE) with indirect UV absorption detection. The background electrolyte is composed of 4 mM nicotinic acid (compound for indirect detection), 3 mM spermine (reversal of electroosmosis) and histidine (added to reach a pH of 6.2). With a 50 microm I.D. capillary of 40 cm effective length, 1-octanesulfonic acid as the internal standard, solute detection at 214 nm, and urine diluted to a conductivity of 2.4 mS/cm, GHB concentrations > or = 2 microg/ml can be detected. The limit of detection (LOD) and limit of quantitation (LOQ) were determined to be dependent on urine concentration and varied between 2-24 and 5-60 microg/ml, respectively. The data obtained suggest that LOD and LOQ (both in microg/ml) can be estimated with the relationships 0.83 kappa and 2.1 kappa, respectively, where kappa is the conductivity of the urine in mS/cm. The assay was successfully applied to urines collected after administration of 25 mg sodium GHB/kg body mass. Negative electrospray ionization ion-trap tandem mass spectrometry was used to confirm the presence of GHB in the urinary extract via selected reaction monitoring of the m/z 103.1-->m/z 85.1 precursor-product ion transition. Independent of urine concentration, this approach meets the urinary cut-off level of 10 microg/ml that is required for recognition of the presence of exogenous GHB. Furthermore, data obtained with injection of plain or diluted urine indicate that CZE could be used to rapidly recognize GHB amounts (in microg/ml) that are > or = 4 kappa. PMID:12685588 10. An indirect ELISA for detection of Theileria spp. antibodies using a recombinant protein (rTlSP) from Theileria luwenshuni. PubMed He, Haining; Li, Youquan; Liu, Junlong; Liu, Zhijie; Yang, Jifei; Liu, Aihong; Chen, Ze; Ren, Qiaoyun; Guan, Guiquan; Liu, Guangyuan; Luo, Jianxun; Yin, Hong 2016-07-01 Theileria is a tick-borne, intracellular protozoan parasite of worldwide economic and veterinary importance in small ruminants. Here, an enzyme-linked immunosorbent assay (ELISA) based on a recombinant surface protein (rTlSP) of Theileria luwenshuni was developed, standardized, and validated for the detection of circulating antibodies against ovine and caprine theileriosis. A total of 233 serum samples were used to calculate the cut-off value that served as the threshold between positive and negative sera.
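The conductivity scaling reported in the urinary GHB entry above (item 9) can be wrapped directly in code: LOD ≈ 0.83 κ, LOQ ≈ 2.1 κ, and the exogenous-GHB recognition level ≈ 4 κ, all in microg/ml with κ in mS/cm. A minimal sketch using only relationships stated in that abstract:

```python
# Relationships stated in the abstract: LOD ~ 0.83*kappa, LOQ ~ 2.1*kappa
# (microg/ml), and exogenous GHB is recognizable above ~4*kappa, where kappa
# is the urine conductivity in mS/cm.
def ghb_limits(kappa_ms_per_cm: float) -> dict:
    return {
        "LOD_ug_per_ml": 0.83 * kappa_ms_per_cm,
        "LOQ_ug_per_ml": 2.1 * kappa_ms_per_cm,
        "exogenous_cutoff_ug_per_ml": 4.0 * kappa_ms_per_cm,
    }

# The 2.4 mS/cm diluted urine used in the study gives LOD ~ 2 microg/ml,
# matching the reported value.
print(ghb_limits(2.4))
```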
When the positive threshold was chosen as 19% of the specific mean antibody rate, the specificity was 97.9% and the sensitivity was 97.1%. There was a cross-reaction with sera against Theileria uilenbergi and Theileria ovis, and no cross-reaction with sera against Babesia spp., in the ELISA and Western blotting. Two hundred forty samples collected from sheep in Gansu province were tested with both blood smears and the ELISA. The results showed that the positive rate of Theileria infection in Gansu province was 63.75% with the rTlSP-ELISA and 46.67% with blood smears. Our test proved that the rTlSP-ELISA is suitable for diagnosing Theileria infection and could be used in serological surveys to map out the prevalence of ovine and caprine theileriosis. PMID:27048941 11. Development of an indirect enzyme-linked immunosorbent assay for the detection of leptospiral antibodies in dogs. PubMed Central Ribotta, M J; Higgins, R; Gottschalk, M; Lallier, R 2000-01-01 Serology plays an important role in the diagnosis of leptospirosis. Few laboratories have the resources, expertise, or facilities to perform the microscopic agglutination test (MAT). Thus, there is a need for a rapid and simple serological test that could be used in any diagnostic laboratory. In this study, a genus-specific, heat-stable antigenic preparation from Leptospira interrogans serovar pomona was used in an enzyme-linked immunosorbent assay (ELISA) for the detection of leptospiral antibodies in dog sera. This antigenic preparation reacted with rabbit antisera against L. interrogans serovars bratislava, autumnalis, icterohaemorrhagiae and pomona and with rabbit antiserum against L. kirschneri serovar grippotyphosa. The ELISA showed a relative specificity of 95.6% with 158 dog sera that were negative at a dilution of 1:100 in the MAT for serovars pomona, bratislava, icterohaemorrhagiae, autumnalis, hardjo, and grippotyphosa. The relative sensitivity of this assay with 21 dog sera that revealed MAT titres of > or = 100 to different serovars was 100%. This assay is easily standardized, is technically more advantageous than the MAT, and uses an antigenic preparation that can be routinely prepared in large amounts. It was concluded that this ELISA is a sufficiently sensitive test to be used as an initial screening test for the detection of leptospiral antibodies in canine sera, with subsequent confirmation of positive results by the MAT. PMID:10680654 12. [Multicenter evaluation of the indirect nitrate reductase assay for the rapid detection of multidrug-resistant tuberculosis]. PubMed Çoban, Ahmet Yılmaz; Taştekin, Berika; Uzun, Meltem; Kalaycı, Fatma; Ceyhan, İsmail; Biçmen, Can; Albay, Ali; Sığ, Ali Korhan; Özkütük, Nuri; Sürücüoğlu, Süheyla; Özkütük, Aydan; Esen, Nuran; Albayrak, Nurhan; Aslantürk, Ahmet; Sarıbaş, Zeynep; Alp, Alparslan 2016-01-01 Multidrug-resistant tuberculosis (MDR-TB) is defined as resistance to at least isoniazid (INH) and rifampicin (RIF), and it complicates the implementation of tuberculosis control programmes. The rapid detection of MDR-TB is crucial to reduce the transmission of the disease. The nitrate reductase assay (NRA) is one of the colorimetric susceptibility test methods for the rapid detection of MDR-TB and is based on the ability of Mycobacterium tuberculosis to reduce nitrate to nitrite. The aim of this study was to evaluate the performance of the NRA for the rapid detection of MDR-TB.
A total of 237 M. tuberculosis complex (MTC) isolates that were identified by the same method (BD MGIT(TM) TBc Identification Test, USA) from nine different medical centers in Turkey were included in the study. The susceptibility results of the isolates against INH and RIF obtained by the reference test (Bactec MGIT(TM) 960, BD, USA) were then compared with the NRA. In order to ensure consistency between centers, Löwenstein-Jensen (LJ) medium with antibiotics and without antibiotics (growth control) and Griess reagent solution were prepared in a single center (Ondokuz Mayıs University School of Medicine, Medical Microbiology Department) and sent to all participant centers together with the standardized test procedure. After the inoculation of bacteria into the test tubes, the tubes were incubated at 37°C; after seven days of incubation, 500 μl of Griess reagent was added to the LJ medium without antibiotics. If a color change was observed, an equal volume of Griess reagent was added to the test LJ media with antibiotics. When a color change was observed in LJ media with antibiotics, the isolate was considered resistant to the tested antibiotic. Among the 237 MTC isolates, 16 were resistant only to INH and nine were resistant only to RIF; 93 isolates (39.2%) were resistant to both drugs (MDR) and 119 isolates (50.2%) were susceptible to both drugs, as determined with the reference susceptibility test. In the study… 13. Comparison of an indirect fluorescent antibody test with Western blot for the detection of serum antibodies against Encephalitozoon cuniculi in cats. PubMed Künzel, Frank; Peschke, Roman; Tichy, Alexander; Joachim, Anja 2014-12-01 Current clinical research indicates that Encephalitozoon (E.) cuniculi infections in cats may be underdiagnosed, especially in animals with typical ocular signs (cataract/anterior uveitis). Although molecular detection of the pathogen in tissue appears promising, serology remains the major diagnostic tool in the living animal. While serological tests are established for the main host of E. cuniculi, the rabbit, the routine serological diagnosis for cats still needs validation. The aim of the study was to evaluate the consistency of an indirect fluorescence antibody test (IFAT) and Western blot (WB) for the detection of IgG antibodies against E. cuniculi in the serum of 84 cats. In addition, PCR of liquefied lens material or intraocular fluid was performed in those cats with a suspected ocular E. cuniculi infection. Twenty-one cats with positive PCR results were considered a positive reference group. Results obtained by IFAT and WB corresponded in 83/84 serum samples, indicating a very good correlation between both serological methods. Using WB as the standard reference, the sensitivity and specificity for the detection of antibodies against E. cuniculi by the IFAT were 97.6 and 100%, respectively. The positive and negative predictive values for the IFAT were 100 and 97.7%, respectively. The accuracy (correctly classified proportion) for the detection of IgG antibodies against E. cuniculi in cats was 98.8%. The comparison of both serological methods with the PCR results also revealed good agreement, as 20 out of 21 PCR-positive samples were seropositive both in IFAT and WB. Both tests can be considered equally reliable assays to detect IgG antibodies against E. cuniculi in cats. As the IFAT is quicker and easier to perform, it is recommended for routine use in the diagnosis of feline encephalitozoonosis. PMID:25199557
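The agreement statistics in the feline serology entry just above follow from a 2x2 table against the Western blot reference. The abstract does not print the table, but a reconstruction with 42 WB-positive and 42 WB-negative sera and a single discordant (IFAT-negative, WB-positive) sample reproduces every reported percentage; a quick check:

```python
# One 2x2 table consistent with the reported comparison (84 cat sera, Western
# blot as reference). The abstract does not print the table; this
# reconstruction just reproduces the published percentages.
TP, FN, FP, TN = 41, 1, 0, 42

sensitivity = TP / (TP + FN)                 # 41/42 -> 97.6%
specificity = TN / (TN + FP)                 # 42/42 -> 100%
ppv = TP / (TP + FP)                         # 41/41 -> 100%
npv = TN / (TN + FN)                         # 42/43 -> 97.7%
accuracy = (TP + TN) / (TP + TN + FP + FN)   # 83/84 -> 98.8%
for name, v in [("sensitivity", sensitivity), ("specificity", specificity),
                ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
    print(f"{name}: {100 * v:.1f}%")
```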
Experiments on Adaptive Techniques for Host-Based Intrusion Detection SciTech Connect DRAELOS, TIMOTHY J.; COLLINS, MICHAEL J.; DUGGAN, DAVID P.; THOMAS, EDWARD V.; WUNSCH, DONALD 2001-09-01 This research explores four experiments of adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their utilization of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which has an inability to detect novel exploits, and anomaly detection, which detects too many events including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment. 15. Influence of Aerosols on the Shortwave Cloud Radiative Forcing from North Pacific Oceanic Clouds: Results from the Cloud Indirect Forcing Experiment (CIFEX) NASA Technical Reports Server (NTRS) Wilcox, Eric M.; Roberts, Greg; Ramanathan, V. 2007-01-01 Aerosols over the Northeastern Pacific Ocean enhance the cloud drop number concentration and reduce the drop size for marine stratocumulus and cumulus clouds. These microphysical effects result in brighter clouds, as evidenced by a combination of aircraft and satellite observations. In-situ measurements from the Cloud Indirect Forcing Experiment (CIFEX) indicate that the mean cloud drop number concentration in low clouds over the polluted marine boundary layer is greater by 53 cm⁻³ compared to clean clouds, and the mean cloud drop effective radius is smaller by 4 micrometers. We link these in-situ measurements of cloud modification by aerosols, for the first time, with collocated satellite broadband radiative flux observations from the Clouds and the Earth's Radiant Energy System to show that these microphysical effects of aerosols enhance the top-of-atmosphere cooling by −9.9 ± 4.3 W m⁻² for overcast conditions. 16. The Influence of Aerosols on the Shortwave Cloud Radiative Forcing from North Pacific Oceanic Clouds: Results from the Cloud Indirect Forcing Experiment (CIFEX) NASA Technical Reports Server (NTRS) Wilcox, Eric M.; Roberts, Greg; Ramanathan, V. 2006-01-01 Aerosols over the Northeastern Pacific Ocean enhance the cloud drop number concentration and reduce the drop size for marine stratocumulus and cumulus clouds. These microphysical effects result in brighter clouds, as evidenced by a combination of aircraft and satellite observations. In-situ measurements from the Cloud Indirect Forcing Experiment (CIFEX) indicate that the mean cloud drop number concentration in low clouds over the polluted marine boundary layer is greater by 53 cm⁻³ compared to clean clouds, and the mean cloud drop effective radius is smaller by 4 microns.
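As a rough, back-of-envelope complement to the CIFEX numbers (and not the authors' method), the standard Twomey susceptibility relation, dA = A(1 − A)/3 · dlnN at fixed liquid water path, links the reported droplet-number increase to a cloud albedo change; the baseline droplet number and albedo below are assumed values:

    import math

    N_clean = 75.0  # assumed clean-cloud droplet number (cm^-3)
    dN = 53.0       # polluted-minus-clean increase reported above (cm^-3)
    A = 0.4         # assumed baseline cloud albedo

    # At fixed liquid water path, optical depth scales as N^(1/3),
    # giving the standard Twomey susceptibility dA = A(1-A)/3 * dln(N)
    dlnN = math.log((N_clean + dN) / N_clean)
    dA = A * (1.0 - A) / 3.0 * dlnN
    print(f"albedo increase ~ {dA:.3f} (from {A:.2f} to {A + dA:.3f})")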
We link these in-situ measurements of cloud modification by aerosols, for the first time, with collocated satellite broadband radiative flux observations from the Clouds and the Earth's Radiant Energy System (CERES) to show that these microphysical effects of aerosols enhance the top-of-atmosphere cooling by −9.9 ± 4.3 W m⁻² for overcast conditions. 17. Evaluation of the molecular recognition of monoclonal and polyclonal antibodies for sensitive detection of 2,4,6-trinitrotoluene (TNT) by indirect competitive surface plasmon resonance immunoassay. PubMed Shankaran, Dhesingh Ravi; Kawaguchi, Toshikazu; Kim, Sook Jin; Matsumoto, Kiyoshi; Toko, Kiyoshi; Miura, Norio 2006-11-01 Detection of TNT is an important environmental and security concern all over the world. We herein report the performance and comparison of four immunoassays for rapid and label-free detection of 2,4,6-trinitrotoluene (TNT) based on surface plasmon resonance (SPR). The immunosensor surface was constructed by immobilization of a home-made 2,4,6-trinitrophenyl-keyhole limpet hemocyanin (TNPh-KLH) conjugate onto an SPR gold surface by simple physical adsorption within 10 min. The immunoreaction of the TNPh-KLH conjugate with four different antibodies, namely, monoclonal anti-TNT antibody (M-TNT Ab), monoclonal anti-trinitrophenol antibody (M-TNP Ab), polyclonal anti-trinitrophenyl antibody (P-TNPh Ab), and polyclonal anti-TNP antibody (P-TNP Ab), was studied by SPR. The principle of indirect competitive immunoreaction was employed for quantification of TNT. Among the four antibodies, the P-TNPh Ab prepared by our group showed the highest sensitivity, with a detection limit of 0.002 ng/mL (2 ppt) TNT. The lowest detection limits observed with other commercial antibodies were 0.008 ng/mL (8 ppt), 0.25 ng/mL (250 ppt), and 40 ng/mL (40 ppb) for M-TNT Ab, P-TNP Ab, and M-TNP Ab, respectively, in a similar assay format. The concentrations of the conjugate and the antibodies were optimized for use in the immunoassay. The response time for an immunoreaction was 36 s and a single immunocycle could be done within 2 min, including the sensor surface regeneration using pepsin solution. In addition to the quantification of TNT, all immunoassays were evaluated for robustness and cross-reactivity towards several TNT analogs. PMID:16900380 18. Development of an indirect competitive enzyme-linked immunosorbent assay based on the multiepitope peptide for the synchronous detection of staphylococcal enterotoxin A and G proteins in milk. PubMed Liang, Mingyan; Zhang, Tingting; Liu, Xuelan; Fan, Yanan; Xia, Shenglin; Xiang, Yiqing; Liu, Ziqi; Jinnian, Li 2015-02-01 Staphylococcal food poisoning (SFP), one of the most common foodborne diseases, results from ingestion of staphylococcal enterotoxins (SEs) in foods. In our previous studies, we found that SEA and SEG were two predominant SE proteins produced by milk-acquired S. aureus isolates. Here, a tandemly arranged multiepitope peptide (named SEAGepis) was designed with six linear B-cell epitopes derived from SEA or SEG and was heterologously expressed. The SEAGepis-specific antibody was prepared by immunizing a rabbit with rSEAGepis. Then, an indirect competitive enzyme-linked immunosorbent assay (ic-ELISA) based on rSEAGepis and the corresponding antibody was developed to simultaneously detect SEA and SEG.
Under the optimized conditions, the ic-ELISA standard curve for rSEAGepis was constructed in the concentration range of 0.5 to 512 ng/ml, and the average coefficients of variation of intra- and interassay were 4.28 and 5.61% across the six standard concentrations. The average half-maximal inhibitory concentration was 5.07 ng/ml, and the limit of detection at a signal-to-noise ratio of 3 was 0.52 ng/ml. The anti-rSEAGepis antibody displayed over 90% cross-reactivity with SEA and SEG but less than 0.5% cross-reactivity with other enterotoxins. Milk artificially contaminated with different concentrations of rSEAGepis, SEA, and SEG was tested with the established ic-ELISA; the recoveries of rSEAGepis, SEA, and SEG were 91.1 to 157.5%, 90.3 to 134.5%, and 89.1 to 117.5%, respectively, with a coefficient of variation below 12%. These results demonstrated that the newly established ic-ELISA possessed high sensitivity, specificity, stability, and accuracy and could potentially be a useful analytical method for synchronous detection of SEA and SEG in milk. PMID:25710152 19. Detection of African Swine Fever Virus Antibodies in Serum and Oral Fluid Specimens Using a Recombinant Protein 30 (p30) Dual Matrix Indirect ELISA. PubMed Giménez-Lirola, Luis G; Mur, Lina; Rivera, Belen; Mogler, Mark; Sun, Yaxuan; Lizano, Sergio; Goodell, Christa; Harris, D L Hank; Rowland, Raymond R R; Gallardo, Carmina; Sánchez-Vizcaíno, José Manuel; Zimmerman, Jeff 2016-01-01 In the absence of effective vaccine(s), control of African swine fever caused by African swine fever virus (ASFV) must be based on early, efficient, cost-effective detection and strict control and elimination strategies. For this purpose, we developed an indirect ELISA capable of detecting ASFV antibodies in either serum or oral fluid specimens. The recombinant protein used in the ELISA was selected by comparing the early serum antibody response of ASFV-infected pigs (NHV-p68 isolate) to three major recombinant polypeptides (p30, p54, p72) using a multiplex fluorescent microbead-based immunoassay (FMIA). Non-hazardous (non-infectious) antibody-positive serum for use as plate positive controls and for the calculation of sample-to-positive (S:P) ratios was produced by inoculating pigs with a replicon particle (RP) vaccine expressing the ASFV p30 gene. The optimized ELISA detected anti-p30 antibodies in serum and/or oral fluid samples from pigs inoculated with ASFV under experimental conditions beginning 8 to 12 days post inoculation. Tests on serum (n = 200) and oral fluid (n = 200) field samples from an ASFV-free population demonstrated that the assay was highly diagnostically specific. The convenience and diagnostic utility of oral fluid sampling, combined with the flexibility to test either serum or oral fluid on the same platform, suggest that this assay will be highly useful under the conditions for which OIE recommends ASFV antibody surveillance, i.e., in ASFV-endemic areas and for the detection of infections with ASFV isolates of low virulence. PMID:27611939 20. Colloidal gold-based indirect competitive immunochromatographic assay for rapid detection of bioactive isoflavone glycosides daidzin and genistin in soy products. PubMed Sakamoto, Seiichi; Yusakul, Gorawit; Pongkitwitoon, Benyakan; Tanaka, Hiroyuki; Morimoto, Satoshi 2016-03-01 Daidzin (DZ) and genistin (GEN) are two major soy isoflavone glycosides isolated from soybeans. Soy products containing isoflavones have recently been widely accepted for commercial use.
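Competitive ELISA standard curves like the one just described are conventionally summarized by a four-parameter logistic (4PL) fit, from which the half-maximal inhibitory concentration falls out directly. A sketch using scipy with synthetic data; the curve parameters are invented, not those of the SEA/SEG assay:

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, top, bottom, ic50, slope):
        """Four-parameter logistic: in a competitive format the signal
        falls with analyte concentration; ic50 is the midpoint."""
        return bottom + (top - bottom) / (1.0 + (x / ic50) ** slope)

    # Synthetic standard curve spanning 0.5-512 ng/ml, as in the assay above
    conc = np.array([0.5, 2.0, 8.0, 32.0, 128.0, 512.0])
    rng = np.random.default_rng(0)
    signal = four_pl(conc, 1.8, 0.1, 5.0, 1.0) + rng.normal(0, 0.02, conc.size)

    popt, _ = curve_fit(four_pl, conc, signal, p0=[2.0, 0.1, 10.0, 1.0])
    print(f"fitted IC50 = {popt[2]:.2f} ng/ml")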
However, the Japanese Government has suggested that soy isoflavone intake should be limited because of their estrogenic effects, which are due to interactions with estrogen receptors. In this study, we established a one-step indirect competitive immunochromatographic assay (ICA) for rapid and sensitive detection of total isoflavone glycosides (DZ and GEN) using gold nanoparticles conjugated with a monoclonal antibody against DZ. This assay could be completed in 15 min following the immersion of a test strip in an analyte solution. Furthermore, the limit of detection for the total amount of isoflavone glycosides was ∼125 ng mL⁻¹. Considering that the major soy isoflavone glycosides found in soy products are DZ and GEN, this study demonstrates the potential use of ICA for assessing overconsumption of isoflavones in soy supplements and foods, which would help ensure safe dietary intake of soy products. PMID:26471543 1. Serological detection of bovine ephemeral fever virus using an indirect ELISA based on antigenic site G1 expressed in Pichia pastoris. PubMed Zheng, Fu-Ying; Lin, Guo-Zhen; Qiu, Chang-Qing; Zhou, Ji-Zhang; Cao, Xiao-An; Gong, Xiao-Wei 2010-08-01 An indirect ELISA for the serological detection of bovine ephemeral fever virus (BEFV) infection in cattle is described in which a glycosylated protein of approximately 25 kDa (including the G1 antigenic site of the virus glycoprotein) expressed in Pichia pastoris GS115 was used as the coating antigen. The optimal concentration of coated antigen was 0.3 μg/well at a serum dilution of 1:40. The optimal positive threshold value of the assay was 1.88, as derived from receiver operating characteristic curve analysis. The test had 100% sensitivity and 96.7% specificity when compared with a micro-neutralisation test using 336 positive and 180 negative sera to BEFV, respectively. The inter-assay and intra-assay coefficients of variation for 15 sera were both <5.8% and there was no evidence of cross-reactivity between the tested coating antigen and antibodies to the related rabies virus. The ELISA is an inexpensive and rapid serological detection method that would be suitable for screening for BEFV infection on a large scale. PMID:19586786 2. Performance of a simple UV LED light source in the capillary electrophoresis of inorganic anions with indirect detection using a chromate background electrolyte. PubMed King, Marion; Paull, Brett; Haddad, Paul R; Macka, Miroslav 2002-12-01 Light-emitting diodes (LEDs) are known to be excellent light sources for detectors in liquid chromatography and capillary electromigration separation techniques, but to date only LEDs emitting in the visible range have been used. In this work, a UV LED was investigated as a simple alternative light source to standard mercury or deuterium lamps for use in indirect photometric detection of inorganic anions using capillary electrophoresis with a chromate background electrolyte (BGE). The UV LED used had an emission maximum at 379.5 nm, a wavelength at which chromate absorbs strongly and exhibits a 47% higher molar absorptivity than at 254 nm when using a standard mercury light source. The noise, sensitivity and linearity of the LED detector were evaluated and all exhibited superior performance to the mercury light source (up to 70% decrease in noise, up to 26.2% increase in sensitivity, and over 100% increase in linear range).
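Detection limits quoted at a signal-to-noise ratio of 3, as in several entries here, follow directly from the baseline noise and the calibration slope; a minimal sketch with invented numbers:

    import numpy as np

    # Hypothetical calibration: peak area versus anion concentration (ug/L)
    conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
    area = np.array([0.42, 1.01, 2.05, 4.10, 8.15])

    slope, intercept = np.polyfit(conc, area, 1)
    baseline_noise_sd = 0.005  # assumed standard deviation of the blank signal

    # LOD at S/N = 3: the concentration whose signal equals 3x the noise
    lod = 3.0 * baseline_noise_sd / slope
    print(f"LOD ~ {lod:.2f} ug/L")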
Using the LED detector with a simple chromate-diethanolamine background electrolyte, limits of detection for the common inorganic anions Cl⁻, NO₃⁻, SO₄²⁻, F⁻ and PO₄³⁻ ranged from 3 to 14 μg L⁻¹, using electrostatic injection at −5 kV for 5 s. PMID:12537359 3. Determination of diethanolamine or N-methyldiethanolamine in high ammonium concentration matrices by capillary electrophoresis with indirect UV detection: application to the analysis of refinery process waters. PubMed Bord, N; Crétier, G; Rocca, J-L; Bailly, C; Souchez, J-P 2004-09-01 Alkanolamines such as diethanolamine (DEA) and N-methyldiethanolamine (MDEA) are used in desulfurization processes in crude oil refineries. These compounds may be found in process waters following an accidental contamination. The analysis of alkanolamines in refinery process waters is very difficult due to the high ammonium concentration of the samples. This paper describes a method for the determination of DEA in high ammonium concentration refinery process waters by using capillary electrophoresis (CE) with indirect UV detection. The same method can be used for the determination of MDEA. Best results were achieved with a background electrolyte (BGE) comprising 10 mM histidine adjusted to pH 5.0 with acetic acid. The development of this electrolyte and the analytical performances are discussed. The quantification was performed by using internal standardization, in which triethanolamine (TEA) was used as the internal standard. A matrix effect due to the high ammonium content has been highlighted and standard addition was therefore used. The developed method was characterized in terms of repeatability of migration times and corrected peak areas, linearity, and accuracy. Limits of detection (LODs) and quantification (LOQs) obtained were 0.2 and 0.7 ppm, respectively. The CE method was applied to the determination of DEA or MDEA in refinery process waters spiked with known amounts of analytes and it gave excellent results, since uncertainties obtained were 8 and 5%, respectively. PMID:15338092 4. Direct and indirect signal detection of 122 keV photons with a novel detector combining a pnCCD and a CsI(Tl) scintillator NASA Astrophysics Data System (ADS) Schlosser, D. M.; Huth, M.; Hartmann, R.; Abboud, A.; Send, S.; Conka-Nurdan, T.; Shokr, M.; Pietsch, U.; Strüder, L. 2016-01-01 By combining a low noise fully depleted pnCCD detector with a CsI(Tl) scintillator, an energy-dispersive area detector can be realized with a high quantum efficiency (QE) in the range from below 1 keV to above 100 keV. In direct detection mode the pnCCD exhibits a relative energy resolution of 1% at 122 keV and spatial resolution of less than 75 μm, the pixel size of the pnCCD. In the indirect detection mode, i.e. conversion of the incoming X-rays in the scintillator, the measured energy resolution was about 9-13% at 122 keV, depending on the depth of interaction in the scintillator, while the position resolution, extracted with the help of simulations, was only 30 μm. We show simulated data for incident photons of 122 keV and compare the various interaction processes and relevant physical parameters to experimental results obtained with a radioactive 57Co source. 5. Greenhouse Effect Detection Experiment (GEDEX).
Selected data sets NASA Technical Reports Server (NTRS) Olsen, Lola M.; Warnock, Archibald, III 1992-01-01 This CD-ROM contains selected data sets compiled by the participants of the Greenhouse Effect Detection Experiment (GEDEX) workshop on atmospheric temperature. The data sets include surface, upper air, and/or satellite-derived measurements of temperature, solar irradiance, clouds, greenhouse gases, fluxes, albedo, aerosols, ozone, and water vapor, along with Southern Oscillation Indices and Quasi-Biennial Oscillation statistics. 6. The design of an experiment to detect low energy antiprotons NASA Technical Reports Server (NTRS) Lloyd-Evans, J.; Acharya, B. S.; Balasubrahmanyan, V. K.; Ormes, J. F.; Streitmatter, R. E.; Stephens, S. A. 1985-01-01 The techniques to be used in a balloon-borne experiment APEX to detect 220 MeV antiprotons are described, paying particular attention to potential sources of background. Event time history is shown to be very effective in eliminating this background. Results of laboratory tests on the timing resolution which may be achieved are presented. 7. A protein A/G indirect enzyme-linked immunosorbent assay for the detection of anti-Brucella antibodies in Arctic wildlife. PubMed Nymo, Ingebjørg H; Godfroid, Jacques; Åsbakk, Kjetil; Larsen, Anett K; das Neves, Carlos G; Rødven, Rolf; Tryland, Morten 2013-05-01 A species-independent indirect enzyme-linked immunosorbent assay (iELISA) based on chimeric protein A/G was established for the detection of anti-Brucella antibodies in Arctic wildlife species and compared to previously established brucellosis serological tests for hooded seals (Cystophora cristata), minke whales (Balaenoptera acutorostrata), sei whales (Balaenoptera borealis), fin whales (Balaenoptera physalus), and polar bears (Ursus maritimus), as well as bacteriology results for reindeer and caribou (Rangifer tarandus sp.). The protein A/G iELISA results were consistent with the other serological tests with Cohen kappa values between 0.47 and 0.92, and the protein A/G iELISA can thus offer a technically simple method for these species yielding results consistent with established brucellosis serological tests. Receiver operator characteristics analysis proved that the reindeer and caribou protein A/G iELISA results were consistent with the bacteriological gold standard with an area under the curve of 0.99, and the protein A/G iELISA was thus validated as a sensitive and specific serological method for the detection of anti-Brucella antibodies in reindeer and caribou. The binding of the antibodies from the respective species to protein A and G were also evaluated in the iELISA. The antibodies from hooded seals and polar bears reacted stronger to protein A than to G. The sei whale, fin whale, reindeer, and caribou antibodies reacted stronger to protein G than to A. The minke whale antibodies reacted to both protein A and G. There was a strong correlation (rs = 0.88–0.98) between the optical density results obtained with the iELISA with protein A/G and protein A or G, showing that protein A/G is as well suited as protein A or G for the detection of anti-Brucella antibodies in these species with the iELISA. PMID:23572454 8. Physics from solar neutrinos in dark matter direct detection experiments NASA Astrophysics Data System (ADS) Cerdeño, David G.; Fairbairn, Malcolm; Jubb, Thomas; Machado, Pedro A.
N.; Vincent, Aaron C.; Bœhm, Céline 2016-05-01 The next generation of dark matter direct detection experiments will be sensitive to both coherent neutrino-nucleus and neutrino-electron scattering. This will enable them to explore aspects of solar physics, perform the lowest energy measurement of the weak angle sin²θW to date, and probe contributions from new theories with light mediators. In this article, we compute the projected nuclear and electron recoil rates expected in several dark matter direct detection experiments due to solar neutrinos, and use these estimates to quantify errors on future measurements of the neutrino fluxes, weak mixing angle and solar observables, as well as to constrain new physics in the neutrino sector. Our analysis shows that the combined rates of solar neutrino events in second generation experiments (SuperCDMS and LZ) can yield a measurement of the pp flux to 2.5% accuracy via electron recoil, and slightly improve the 8B flux determination. Assuming a low-mass argon phase, projected tonne-scale experiments like DARWIN can reduce the uncertainty on both the pp and boron-8 neutrino fluxes to below 1%. Finally, we use current results from LUX, SuperCDMS and CDMSlite to set bounds on new interactions between neutrinos and electrons or nuclei, and show that future direct detection experiments can be used to set complementary constraints on the parameter space associated with light mediators. 9. Dark matter effective field theory scattering in direct detection experiments DOE PAGES Beta Schneck, K.; Cabrera, B.; Cerdeño, D. G.; Mandic, V.; Rogers, H. E.; Agnese, R.; Anderson, A. J.; Asai, M.; Balakishiyeva, D.; Barker, D.; et al 2015-05-18 We examine the consequences of the effective field theory (EFT) of dark matter-nucleon scattering for current and proposed direct detection experiments. Exclusion limits on EFT coupling constants computed using the optimum interval method are presented for SuperCDMS Soudan, CDMS II, and LUX, and the necessity of combining results from multiple experiments in order to determine dark matter parameters is discussed. Here we demonstrate that spectral differences between the standard dark matter model and a general EFT interaction can produce a bias when calculating exclusion limits and when developing signal models for likelihood and machine learning techniques. In conclusion, we discuss the implications of the EFT for the next-generation (G2) direct detection experiments and point out regions of complementarity in the EFT parameter space. 14. Indirect and direct search for dark matter NASA Astrophysics Data System (ADS) Klasen, M.; Pohl, M.; Sigl, G. 2015-11-01 The majority of the matter in the universe is still unidentified and under investigation by both direct and indirect means. Many experiments searching for the recoil of dark-matter particles off target nuclei in underground laboratories have established increasingly strong constraints on the mass and scattering cross sections of weakly interacting particles, and some have even seen hints at a possible signal. Other experiments search for a possible mixing of photons with light scalar or pseudo-scalar particles that could also constitute dark matter. Furthermore, annihilation or decay of dark matter can contribute to charged cosmic rays, photons at all energies, and neutrinos. Many existing and future ground-based and satellite experiments are sensitive to such signals. Finally, data from the Large Hadron Collider at CERN are scrutinized for missing energy as a signature of new weakly interacting particles that may be related to dark matter. In this review article we summarize the status of the field with an emphasis on the complementarity between direct detection in dedicated laboratory experiments, indirect detection in the cosmic radiation, and searches at particle accelerators. 15. WIMP physics with ensembles of direct-detection experiments NASA Astrophysics Data System (ADS) Peter, Annika H. G.; Gluscevic, Vera; Green, Anne M.; Kavanagh, Bradley J.; Lee, Samuel K. 2014-12-01 The search for weakly-interacting massive particle (WIMP) dark matter is multi-pronged. Ultimately, the WIMP-dark-matter picture will only be confirmed if different classes of experiments see consistent signals and infer the same WIMP properties. In this work, we review the ideas, methods, and status of direct-detection searches.
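The event spectra underlying such direct-detection analyses are, to zeroth order, falling exponentials in recoil energy; in the standard Lewin-Smith parametrization dR/dE_R ∝ (1/(E0·r))·exp(−E_R/(E0·r)), with E0 = mχv0²/2 and r = 4mχmN/(mχ + mN)². A sketch of that shape (arbitrary normalization, no nuclear form factor or escape-velocity cutoff; the masses and halo speed are assumed):

    import numpy as np

    C_KM_S = 2.998e5  # speed of light in km/s

    def recoil_spectrum(e_r_kev, m_chi_gev, m_n_gev, v0_km_s=220.0):
        """Zeroth-order WIMP recoil spectrum, arbitrary normalization:
        dR/dE ~ exp(-E_R/(E0*r))/(E0*r). No form factor, no escape cutoff."""
        e0_kev = 0.5 * m_chi_gev * (v0_km_s / C_KM_S) ** 2 * 1e6  # GeV -> keV
        r = 4.0 * m_chi_gev * m_n_gev / (m_chi_gev + m_n_gev) ** 2
        return np.exp(-e_r_kev / (e0_kev * r)) / (e0_kev * r)

    # Assumed 100 GeV WIMP on a xenon target (A ~ 131, m_N ~ 122 GeV)
    energies = np.linspace(1.0, 100.0, 5)
    print(recoil_spectrum(energies, m_chi_gev=100.0, m_n_gev=122.0))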
We focus in particular on extracting WIMP physics (WIMP interactions and phase-space distribution) from direct-detection data in the early discovery days when multiple experiments see of order dozens to hundreds of events. To demonstrate the essential complementarity of different direct-detection experiments in this context, we create mock data intended to represent the data from the near-future Generation 2 experiments. We consider both conventional supersymmetry-inspired benchmark points (with spin-independent and -dependent elastic cross sections just below current limits), as well as benchmark points for other classes of models (inelastic and effective-operator paradigms). We also investigate the effect on parameter estimation of loosening or dropping the assumptions about the local WIMP phase-space distribution. We arrive at two main conclusions. Firstly, teasing out WIMP physics with experiments depends critically on having a wide set of detector target materials, spanning a large range of target nuclear masses and spin-dependent sensitivity. It is also highly desirable to obtain data from low-threshold experiments. Secondly, a general reconstruction of the local WIMP velocity distribution, which will only be achieved if there are multiple experiments using different target materials, is critical to obtaining a robust and unbiased estimate of the WIMP mass. 16. Detection of anti-Leishmania infantum antibodies in sylvatic lagomorphs from an epidemic area of Madrid using the indirect immunofluorescence antibody test. PubMed Moreno, Inmaculada; Álvarez, Julio; García, Nerea; de la Fuente, Santiago; Martínez, Irene; Marino, Eloy; Toraño, Alfredo; Goyache, Joaquin; Vilas, Felipe; Domínguez, Lucas; Domínguez, Mercedes 2014-01-31 An outbreak of human leishmaniasis was confirmed in the southwest of the province of Madrid, Spain, between July 2009 and December 2012. Incidence of Leishmania infection in dogs was unchanged in this period, prompting a search for alternative sylvatic infection reservoirs. We evaluated exposure to Leishmania in serum samples from animals in the area with an indirect immunofluorescence test (IFAT). Using promastigotes from six culture passages and a 1/25 threshold titer, we found anti-Leishmania infantum seroreactivity in 9.3% of cats (4 of 43), 45.7% of rabbits (16/35) and 74.1% of hares (63/85). Use of promastigotes from >10 in vitro passages resulted in a notably lower IFAT titer, suggesting antigenic changes during extended culture. Postmortem inspection of seropositive animals showed no clinical signs of infection. The results clearly suggest that asymptomatic hares were the main reservoir in the outbreak, and corroborate IFAT as a sensitive serological surveillance method to detect such cryptic Leishmania infections. PMID:24211046 17. [Detection of anti-Brucella spp. antibodies in swine by agglutination techniques and indirect ELISA in the Buenos Aires and La Pampa provinces, Argentina]. PubMed Castro, H A; González, S R; Prat, M I; Baldi, P C 2006-01-01 Porcine brucellosis is one of the most important zoonoses in Argentina. Currently, there is no control program for porcine brucellosis in Argentina and the epidemiological situation is still unknown. The purpose of our study was to detect anti-Brucella spp. antibodies in swine in the southwest of the Buenos Aires province and the east of the La Pampa province. Blood samples were obtained when animals were slaughtered.
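Seroprevalence fractions like those reported for the Madrid outbreak above (4/43 cats, 16/35 rabbits, 63/85 hares) are point estimates; a Wilson score interval is a simple way to attach sampling uncertainty to them (this is an illustration, not part of the original study):

    import math

    def wilson_ci(k, n, z=1.96):
        """95% Wilson score interval for a binomial proportion k/n."""
        p = k / n
        denom = 1.0 + z**2 / n
        center = (p + z**2 / (2.0 * n)) / denom
        half = z * math.sqrt(p * (1.0 - p) / n + z**2 / (4.0 * n**2)) / denom
        return center - half, center + half

    for label, k, n in [("cats", 4, 43), ("rabbits", 16, 35), ("hares", 63, 85)]:
        lo, hi = wilson_ci(k, n)
        print(f"{label}: {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")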
The presence of anti-Brucella antibodies was studied by the buffered plate agglutination test (BPA), the tube agglutination test (SAT), the 2-mercaptoethanol (2-ME) agglutination test and indirect ELISA tests, using the cytosolic fraction from Brucella abortus S19 (CYT), and lipopolysaccharide (LPS)-free cytosolic proteins (CP). Out of a total of 325 samples analyzed, 17.8% reacted positively to BPA, 13.8% to SAT, 8.0% to 2-ME, 21.0% to ELISA-CYT and 10.0% to ELISA-CP. These results agree with the few data available in Argentina and suggest that brucellosis screening should be extended to other regions. PMID:17037254 18. Experiments in ultrasonic flaw detection using a MEMS transducer NASA Astrophysics Data System (ADS) Jain, Akash; Greve, David W.; Oppenheim, Irving J. 2003-08-01 In earlier work we developed a MEMS phased array transducer, fabricated in the MUMPs process, and we reported on initial experimental studies in which the device was affixed into contact with solids. We demonstrated the successful detection of signals from a conventional ultrasonic source, and the successful localization of the source in an off-axis geometry using phased array signal processing. We now describe the predicted transmission and coupling characteristics for such devices in contact with solids, demonstrating reasonable agreement with experimental behavior. We then describe the results of flaw detection experiments, as well as results for fluid-coupled detectors. 19. Empirical and theoretical investigation of the noise performance of indirect detection, active matrix flat-panel imagers (AMFPIs) for diagnostic radiology. PubMed Siewerdsen, J H; Antonuk, L E; el-Mohri, Y; Yorkston, J; Huang, W; Boudry, J M; Cunningham, I A 1997-01-01 Noise properties of active matrix, flat-panel imagers under conditions relevant to diagnostic radiology are investigated. These studies focus on imagers based upon arrays with pixels incorporating a discrete photodiode coupled to a thin-film transistor, both fabricated from hydrogenated amorphous silicon. These optically sensitive arrays are operated with an overlying x-ray converter to allow indirect detection of incident x rays. External electronics, including gate driver circuits and preamplification circuits, are also required to operate the arrays. A theoretical model describing the signal and noise transfer properties of the imagers under conditions relevant to diagnostic radiography, fluoroscopy, and mammography is developed. This frequency-dependent model is based upon a cascaded systems analysis wherein the imager is conceptually divided into a series of stages having intrinsic gain and spreading properties. Predictions from the model are compared with x-ray sensitivity and noise measurements obtained from individual pixels from an imager with a pixel format of 1536 × 1920 pixels at a pixel pitch of 127 microns. The model is shown to be in excellent agreement with measurements obtained with diagnostic x rays using various phosphor screens. The model is used to explore the potential performance of existing and hypothetical imagers for application in radiography, fluoroscopy, and mammography as a function of exposure, additive noise, and fill factor. These theoretical predictions suggest that imagers of this general design incorporating a CsI:Tl intensifying screen can be optimized to provide detective quantum efficiency (DQE) superior to existing screen-film and storage phosphor systems for general radiography and mammography.
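The cascaded-systems analysis mentioned in that entry propagates the mean signal and the noise power spectrum through serial gain stages; a zero-frequency sketch of the idea (the stage gains and variances below are illustrative, not the paper's values):

    def dqe_zero_frequency(q0, stages, additive_noise_var):
        """Propagate mean quanta q and zero-frequency noise power s through
        a serial cascade of (gain mean, gain variance) stages, then add
        electronic noise. DQE(0) = SNR_out^2 / SNR_in^2, with SNR_in^2 = q0."""
        q, s = q0, q0  # incident x rays are Poisson: NPS(0) = q0
        for gain, gain_var in stages:
            s = gain**2 * s + gain_var * q  # noise transfer through the stage
            q = gain * q                    # signal transfer
        s += additive_noise_var
        return (q**2 / s) / q0

    # Illustrative chain: QE 0.8 (binomial), conversion gain 1000 optical
    # quanta per x ray (Poisson-like), optical coupling 0.5, photodiode QE 0.8
    stages = [(0.8, 0.16), (1000.0, 1000.0), (0.5, 0.25), (0.8, 0.16)]
    print(f"DQE(0) = {dqe_zero_frequency(1e4, stages, 1e6):.2f}")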
For fluoroscopy, the model predicts that with further optimization of a-Si:H imagers, DQE performance approaching that of the best x-ray image intensifier systems may be possible. The results of this analysis suggest strategies for 20. Identification of inorganic improvised explosive devices by analysis of postblast residues using portable capillary electrophoresis instrumentation and indirect photometric detection with a light-emitting diode. PubMed Hutchinson, Joseph P; Evenhuis, Christopher J; Johns, Cameron; Kazarian, Artaches A; Breadmore, Michael C; Macka, Miroslav; Hilder, Emily F; Guijt, Rosanne M; Dicinoski, Greg W; Haddad, Paul R 2007-09-15 A commercial portable capillary electrophoresis (CE) instrument has been used to separate inorganic anions and cations found in postblast residues from improvised explosive devices (IEDs) of the type used frequently in terrorism attacks. The purpose of this analysis was to identify the type of explosive used. The CE instrument was modified for use with an in-house miniaturized light-emitting diode (LED) detector to enable sensitive indirect photometric detection to be employed for the detection of 15 anions (acetate, benzoate, carbonate, chlorate, chloride, chlorite, cyanate, fluoride, nitrate, nitrite, perchlorate, phosphate, sulfate, thiocyanate, thiosulfate) and 12 cations (ammonium, monomethylammonium, ethylammonium, potassium, sodium, barium, strontium, magnesium, manganese, calcium, zinc, lead) as the target analytes. These ions are known to be present in postblast residues from inorganic IEDs constructed from ammonium nitrate/fuel oil mixtures, black powder, and chlorate/perchlorate/sugar mixtures. For the analysis of cations, a blue LED (470 nm) was used in conjunction with the highly absorbing cationic dye, chrysoidine (absorption maximum at 453 nm). A nonaqueous background electrolyte comprising 10 mM chrysoidine in methanol was found to give greatly improved baseline stability in comparison to aqueous electrolytes due to the increased solubility of chrysoidine and its decreased adsorption onto the capillary wall. Glacial acetic acid (0.7% v/v) was added to ensure chrysoidine was protonated and to enhance separation selectivity by means of complexation with transition metal ions. The 12 target cations were separated in less than 9.5 min with detection limits of 0.11-2.30 mg/L (calculated at a signal-to-noise ratio of 3). The anion separation system utilized a UV LED (370 nm) in conjunction with an aqueous chromate electrolyte (absorption maximum at 371 nm) consisting of 10 mM chromium(VI) oxide and 10 mM sodium chromate, buffered with 40 mM tris 1. Recent results in dark matter direct detection experiments NASA Astrophysics Data System (ADS) Kelso, Christopher Michael Three dark matter direct detection experiments (DAMA/LIBRA, CoGeNT, and CRESST-II) have each reported signals which resemble that predicted for a dark matter particle with a mass of roughly 10 GeV. We review the theoretical background for direct detection experiments as well as these particular detectors and their reported signals over the last few years. We also compare the signals of these experiments and discuss whether they can be explained by a single species of dark matter particle, without conflicting with the constraints of other experiments. We show that the spectra of events reported by CoGeNT and CRESST-II are consistent with each other and with the constraints from CDMS-II, although some tension with xenon-based experiments remains.
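A recurring quantity in these comparisons is the annual modulation of the event rate, conventionally modeled as a cosine peaking near June 2nd; a minimal sketch (the rate values are invented):

    import numpy as np

    def modulated_rate(t_days, s0, sm, t0=152.5, period=365.25):
        """Event rate vs time: constant part s0 plus an annual modulation of
        amplitude sm peaking at day t0 (~June 2nd for a standard halo)."""
        return s0 + sm * np.cos(2.0 * np.pi * (t_days - t0) / period)

    t = np.arange(0.0, 730.0, 30.0)            # two years, monthly sampling
    rate = modulated_rate(t, s0=1.0, sm=0.02)  # invented units (e.g. cpd/kg/keV)
    print(rate.round(3))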
Similarly, the modulation signals reported by DAMA/LIBRA and CoGeNT appear to be compatible, although the corresponding amplitudes of the observed modulations are a factor of at least a few higher than would be naively expected, based on the event spectra reported by CoGeNT and CRESST-II. We also discuss some ways that this apparent discrepancy could potentially be resolved. 2. Optimization of detection sensitivity for a Fiber Optic Intrusion Detection System (FOIDS) using design of experiments. SciTech Connect Miller, Larry D.; Mack, Thomas Kimball; Mitchiner, Kim W.; Varoz, Carmella A. 2010-06-01 The Fiber Optic Intrusion Detection System (FOIDS) is a physical security sensor deployed on fence lines to detect climb or cut intrusions by adversaries. Calibration of detection sensitivity can be time consuming because, for example, the FiberSenSys FD-332 has 32 settings that can be adjusted independently to provide a balance between a high probability of detection and a low nuisance alarm rate. Therefore, an efficient method of calibrating the FOIDS in the field, other than by trial and error, was needed. This study was conducted to: (1) identify the most significant settings for controlling detection; (2) develop a way of predicting detection sensitivity for given settings; and (3) develop a set of optimal settings for validation. The Design of Experiments (DoE) methodology was used to generate small, planned test matrices, which could be statistically analyzed to yield more information from the test data. Design of Experiments is a statistical methodology for quickly optimizing performance of systems with measurable input and output variables. DoE was used to design custom screening experiments based on the 11 FOIDS settings believed to have the most effect on detection. Two types of fence perimeter intrusions were evaluated: simulated cut intrusions and actual climb intrusions. Two slightly different two-level randomized fractional factorial designed experiment matrices consisting of 16 unique experiments were performed in the field for each type of intrusion. Three repetitions were conducted for every cut test; two repetitions were conducted for every climb test. The total number of cut tests analyzed was 51; the total number of climb tests was 38. This paper discusses the results and benefits of using Design of Experiments (DoE) to calibrate and optimize the settings for a FOIDS sensor. 3. Full-scale Experiments for Roadbed Cavity Detection with GPR NASA Astrophysics Data System (ADS) Kim, C.; Kang, W.; Son, J. 2015-12-01 Over the past few decades, deterioration of underground facilities such as sewage systems has increased significantly with growing urban development in Korea. Old, damaged sewage pipes or conduits wash away the surrounding soils beneath the roadbed, causing underground cavities and eventually ground depressions or sinkholes in urban areas. Therefore, detection of roadbed cavities is increasingly required as a precautionary measure to prevent property damage and loss of human life. A 3-D GPR technique was applied in a full-scale experiment for roadbed cavity detection. The physical experiment employed silty sand soils. The experimental site is composed of physically simulated cavities (Styrofoam, ɛr = 1.03) with a dome-shaped structure and a concrete sewage conduit. The simulated cavities were installed at regular spatial intervals.
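Returning to the FOIDS screening study above: a 16-run, two-level fractional factorial in 11 factors (a 2^(11-7) design) can be built by taking a full factorial in four base factors and generating the remaining columns as products of base columns. The generators below are hypothetical, not Sandia's actual aliasing structure:

    from itertools import product

    # Full 2^4 factorial in four base factors -> 16 runs, coded -1/+1
    base_runs = [list(r) for r in product((-1, 1), repeat=4)]

    # Hypothetical generators for 7 additional factors: each new column is
    # the elementwise product of a subset of the base columns
    generators = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 1, 2)]

    design = []
    for run in base_runs:
        extra = []
        for gen in generators:
            col = 1
            for i in gen:
                col *= run[i]
            extra.append(col)
        design.append(run + extra)

    # 16 runs x 11 factor settings, e.g. candidate FOIDS sensitivity levels
    for row in design:
        print(row)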
The land surface of the site was not paved with asphalt concrete at the current stage of the experiments. The results of the GPR measurements over the experimental site show that the reflection patterns from the simulated cavities are hyperbolic returns typical of a point source in a 2-D perspective. A closer inspection of 3-D GPR volume data has yielded clearer interpretation than 2-D GPR data regarding where the cavities are situated in space. However, where sewage conduits are present adjacent to the cavities, they can mask the GPR signals from the cavities, leading to misinterpretations. Therefore, data processing procedures must be applied more carefully than for linear target detection. It is strongly believed that 3-D high-density GPR data could be usefully applied to roadbed cavity detection. This study is an ongoing KIGAM project, and more realistic underground conditions will be prepared for future studies. 4. Closing supersymmetric resonance regions with direct detection experiments SciTech Connect Kelso, Chris 2014-01-01 One of the few remaining ways that neutralinos could potentially evade constraints from direct detection experiments is if they annihilate through a resonance, as can occur if 2mχ⁰ falls within about 10% of either mA/H, mh, or mZ. Assuming a future rate of progress among direct detection experiments that is similar to that obtained over the past decade, we project that within 7 years the light Higgs and Z pole regions will be entirely closed, while the remaining parameter space near the A/H resonance will require that 2mχ⁰ be matched to the central value (near mA) to within less than 4%. At this rate of progress, it will be a little over a decade before multi-ton direct detection experiments will be able to close the remaining, highly tuned regions of the A/H resonance parameter space. 5. Small Arrays for Seismic Intruder Detections: A Simulation Based Experiment NASA Astrophysics Data System (ADS) Pitarka, A. 2014-12-01 Seismic sensors such as geophones and fiber optics have been increasingly recognized as promising technologies for intelligence surveillance, including intruder detection and perimeter defense systems. Geophone arrays have the capability to provide cost effective intruder detection in protecting assets with large perimeters. A seismic intruder detection system uses one or multiple arrays of geophones designed to record seismic signals from footsteps and ground vehicles. Using a series of real-time signal processing algorithms the system detects, classifies, and monitors the intruder's movement. We have carried out numerical experiments to demonstrate the capability of a seismic array to detect moving targets that generate seismic signals. The seismic source is modeled as a vertical force acting on the ground that generates continuous impulsive seismic signals with different predominant frequencies. Frequency-wavenumber analysis of the synthetic array data was used to demonstrate the array's capability of accurately determining the intruder's movement direction. The performance of the array was also analyzed in detecting two or more objects moving at the same time. One of the drawbacks of using a single array system is its inefficiency at detecting seismic signals deflected by large underground objects. We will show simulation results of the effect of an underground concrete block in shielding the seismic signal coming from an intruder.
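The frequency-wavenumber direction finding described in that entry can be illustrated with its time-domain cousin, delay-and-sum beamforming over a small geophone array; the geometry, wave speed, and footstep-like signal below are synthetic:

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 500.0                        # sample rate (Hz)
    t = np.arange(0.0, 1.0, 1.0 / fs)

    # Square 4-geophone array (meters) and a plane wave from 60 deg azimuth
    xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    c, az_true = 300.0, np.deg2rad(60.0)   # assumed surface-wave speed (m/s)
    s_true = np.array([np.sin(az_true), np.cos(az_true)]) / c  # slowness

    def pulse(tt):
        return np.exp(-((tt - 0.5) / 0.02) ** 2)  # footstep-like wavelet

    traces = np.array([pulse(t - xy[i] @ s_true) for i in range(4)])
    traces += 0.05 * rng.standard_normal(traces.shape)

    # Grid search over azimuth: shift each trace by its plane-wave delay
    # and keep the direction that maximizes the stacked power
    best_az, best_pow = 0.0, -np.inf
    for az in np.deg2rad(np.arange(0.0, 360.0, 2.0)):
        s = np.array([np.sin(az), np.cos(az)]) / c
        stack = sum(np.interp(t, t - xy[i] @ s, traces[i]) for i in range(4))
        power = np.sum(stack**2)
        if power > best_pow:
            best_az, best_pow = az, power
    print(f"estimated azimuth ~ {np.degrees(best_az):.0f} deg")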
Based on simulations we found that multiple small arrays can greatly improve the system's detection capability in the presence of underground structures. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. 6. H(C)Ag: a triple resonance NMR experiment for 109Ag detection in labile silver-carbene complexes. PubMed Weske, Sebastian; Li, Yingjia; Wiegmann, Sara; John, Michael 2015-04-01 In silver complexes, indirect detection of 109Ag resonances via 1H,109Ag-HMQC frequently suffers from small or absent JHAg couplings or rapid ligand dissociation. In these cases, it would be favourable to employ H(X)Ag triple resonance spectroscopy that uses the large one-bond JXAg coupling (where the donor atom of the ligand X is the relay nucleus). We have applied an HMQC-based version of the H(C)Ag experiment to a labile silver-NHC complex (NHC = N-heterocyclic carbene) at natural 13C isotopic abundance and variable temperature. In agreement with simulations, H(C)Ag detection became superior to 1H,109Ag-HMQC detection above −20 °C. PMID:25641122 7. Construction of an AMR magnetometer for car detection experiments NASA Astrophysics Data System (ADS) Fúra, V.; Petrucha, V.; Platil, A. 2016-03-01 A new construction of a magnetometer with commercially available AMR (anisotropic magnetoresistive) sensors, intended for vehicle detection experiments, is presented. Initial experiments with a simple AMR gradiometer indicated the viability of the approach in a real-world setup. For further experiments and acquisition of representative data, a new design of a precise multi-channel magnetometer was developed. The design supports two models of commercial AMR sensors: the proven and reliable, but obsolete Honeywell HMC1021-series sensors and the newly available Sensitec AFF755B sensors. In the comparison, the two types are similar in most achieved parameters, except for offset stability in the flipped operation regime. Unfortunately, the new AFF755B sensors seem to have inferior coupling of the flipping (set/reset) coil to the ferromagnetic core, which causes insufficient saturation of the AMR material. The issue is being addressed by Sensitec; current deliverables of the AFF755B have “product sample” status (September 2015). 8. Light dark matter detection prospects at neutrino experiments NASA Astrophysics Data System (ADS) Kumar, Jason; Learned, John G.; Smith, Stefanie 2009-12-01 We consider the prospects for the detection of relatively light dark matter through direct annihilation to neutrinos. We specifically focus on the detection possibilities of water Cherenkov and liquid scintillator neutrino detection devices. We find, in particular, that liquid scintillator detectors may potentially provide excellent detection prospects for dark matter in the 4-10 GeV mass range. These experiments can provide excellent corroborative checks of the DAMA/LIBRA annual modulation signal, but may yield results for low mass dark matter in any case. We identify important tests of the ratio of electron to muon neutrino events (and neutrino versus antineutrino events), which discriminate against background atmospheric neutrinos. In addition, the fraction of events which arise from muon neutrinos or antineutrinos (Rμ and Rμ¯) can potentially yield information about the branching fractions of hypothetical dark matter annihilations into different neutrino flavors.
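Looking back at the AMR car-detection entry above: a passing vehicle is commonly approximated as a magnetic dipole, so the anomaly seen by a stationary sensor falls off as the inverse cube of distance. A sketch of that scaling (the moment, speed, and standoff are assumed, and orientation factors of order unity are ignored):

    import numpy as np

    MU0_OVER_4PI = 1e-7  # T*m/A

    def dipole_anomaly(t_s, moment_am2=100.0, speed_ms=10.0, standoff_m=5.0):
        """Order-of-magnitude field of a magnetic dipole (the car) driving
        past a fixed sensor: |B| ~ (mu0/4pi) * m / r(t)^3."""
        x = speed_ms * t_s                       # along-road position (m)
        r = np.hypot(x, standoff_m)              # car-to-sensor distance (m)
        return MU0_OVER_4PI * moment_am2 / r**3  # field magnitude (T)

    t = np.linspace(-3.0, 3.0, 7)
    print((dipole_anomaly(t) * 1e9).round(1), "nT")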
These results apply to neutrinos from secondary and tertiary decays as well, but will suffer from decreased detectability. 9. Simulation of Seismic Tunnel Detection Experiments in Heterogeneous Geological Media NASA Astrophysics Data System (ADS) Sherman, C. S.; Glaser, S. D.; Rector, J. 2013-12-01 Detecting covert tunnels and other underground openings is an important yet challenging problem for geophysicists, especially where geological heterogeneity is pronounced. A number of geophysical methods have been employed to solve this problem, each with varying degrees of success. We focus on the near-surface seismic techniques of surface wave backscattering, surface wave attenuation tomography, body wave diffraction imaging, and resonant imaging. The Black Diamond mine, located near Pittsburg, California, is used for the field test of our analysis. We use the elastodynamic wave propagation code E3D to simulate tunnel detection experiments completed at this site for a range of synthetic fractal velocity models. Our results show that for the relatively low-frequency surface wave attenuation and backscattering methods, the maximum detectable tunnel depth in a homogeneous medium is approximately equal to the wavelength of the probing Rayleigh wave. The higher-frequency body wave diffraction and resonant imaging techniques are able to locate tunnels at greater depths, but require more sophisticated analysis and are prone to greater attenuation losses. As is expected, for large values of heterogeneity amplitude, ɛ, the percent standard deviation from the mean velocity model, the average observed surface wave attenuation signal decreases and the maximum detectable tunnel depth decreases. However, for moderate values of heterogeneity amplitude (ɛ < 3%), the average surface wave attenuation signal increases and the maximum detectable tunnel depth increases. For the body wave diffraction and resonant imaging experiments, as ɛ increases the complexity of the observed signal increases, resulting in more difficult processing and interpretation. The additional scattering attenuation tends to degrade the signals significantly due to their reliance on lower amplitude and higher frequency waves. 10. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays SciTech Connect Antonuk, Larry E.; Zhao Qihua; El-Mohri, Youcef; Du Hong; Wang Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William 2009-07-15 Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and/or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design.
For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all of the photodiode structure was elevated above the plane of the pixel addressing transistor. Generally, enhancement of the fill factor in either discrete or continuous 12.
12. Experiment to Detect Accelerating Modes in a Photonic Bandgap Fiber
SciTech Connect
England, R. J.; Colby, E. R.; McGuinness, C. M.; Noble, R.; Plettner, T.; Siemann, R. H.; Spencer, J. E.; Walz, D.; Ischebeck, R.; Sears, C. M. S.
2009-01-22
An experimental effort is currently underway at the E-163 test beamline at Stanford Linear Accelerator Center to use a hollow-core photonic bandgap (PBG) fiber as a high-gradient laser-based accelerating structure for electron bunches. For the initial stage of this experiment, a 50 pC, 60 MeV electron beam will be coupled into the fiber core and the excited modes will be detected using a spectrograph to resolve their frequency signatures in the wakefield radiation generated by the beam. We will describe the experimental plan and recent simulation studies of candidate fibers.

13. Fabrication experiments on supersmooth optics for extrasolar planet detection
NASA Technical Reports Server (NTRS)
Ftaclas, C.; Krim, M. H.; Terrile, R. J.
1989-01-01
The direct detection of extrasolar planets by imaging will require reductions in scattered and diffracted light by factors in excess of 1000 within one arcsecond of a bright source. While diffraction can be reduced by a number of approaches, small angle scatter can only be reduced by controlling midspatial frequency figure errors. The surface requirements are reviewed and their meaning when compared to the data base of existing mirrors is considered. Experiments are described that were successful in reducing midspatial frequency figure so that the scatter level was 500 times less than diffraction for a 25-cm spherical mirror.

14. Experiment to Detect Accelerating Modes in a Photonic Bandgap Fiber
SciTech Connect
England, R.J.; Colby, E.R.; Ischebeck, R.; McGuinness, C.M.; Noble, R.; Plettner, T.; Sears, C.M.S.; Siemann, R.H.; Spencer, J.E.; Walz, D.; /SLAC
2011-11-21
An experimental effort is currently underway at the E-163 test beamline at Stanford Linear Accelerator Center to use a hollow-core photonic bandgap (PBG) fiber as a high-gradient laser-based accelerating structure for electron bunches. For the initial stage of this experiment, a 50 pC, 60 MeV electron beam will be coupled into the fiber core and the excited modes will be detected using a spectrograph to resolve their frequency signatures in the wakefield radiation generated by the beam. They will describe the experimental plan and recent simulation studies of candidate fibers.

15. MSSM A-funnel and the galactic center excess: prospects for the LHC and direct detection experiments
NASA Astrophysics Data System (ADS)
Freese, Katherine; López, Alejandro; Shah, Nausheen R.; Shakya, Bibhushan
2016-04-01
The pseudoscalar resonance or "A-funnel" in the Minimal Supersymmetric Standard Model (MSSM) is a widely studied framework for explaining dark matter that can yield interesting indirect detection and collider signals. The well-known Galactic Center excess (GCE) at GeV energies in the gamma ray spectrum, consistent with annihilation of a ≲ 40 GeV dark matter particle, has more recently been shown to be compatible with significantly heavier masses following reanalysis of the background. In this paper, we explore the LHC and direct detection implications of interpreting the GCE in this extended mass window within the MSSM A-funnel framework. We find that compatibility with relic density, signal strength, collider constraints, and Higgs data can be simultaneously achieved with appropriate parameter choices.
The compatible regions give very sharp predictions of 200-600 GeV CP-odd/even Higgs bosons at low tan β at the LHC and spin-independent cross sections ≈ 10⁻¹¹ pb at direct detection experiments. Regardless of consistency with the GCE, this study serves as a useful template of the strong correlations between indirect, direct, and LHC signatures of the MSSM A-funnel region.

16. Photon detection system designs for the Deep Underground Neutrino Experiment
NASA Astrophysics Data System (ADS)
Whittington, D.
2016-05-01
The Deep Underground Neutrino Experiment (DUNE) will be a premier facility for exploring long-standing questions about the boundaries of the standard model. Acting in concert with the liquid argon time projection chambers underpinning the far detector design, the DUNE photon detection system will capture ultraviolet scintillation light in order to provide valuable timing information for event reconstruction. To maximize the active area while maintaining a small photocathode coverage, the experiment will utilize a design based on plastic light guides coated with a wavelength-shifting compound, along with silicon photomultipliers, to collect and record scintillation light from liquid argon. This report presents recent preliminary performance measurements of this baseline design and several alternative designs which promise significant improvements in sensitivity to low-energy interactions.

17. Chemical interpretation of Viking Lander 1 life detection experiment
NASA Technical Reports Server (NTRS)
Ballou, E. V.; Wood, P. C.; Wydeven, T.; Lehwalt, M. E.; Mack, R. E.
1978-01-01
An earth-based evaluation of the Viking Lander 1 life-detection experiments was conducted using a radiofrequency glow discharge in a simulated Martian atmosphere. The Gas Exchange Experiment conducted in the humid mode released substantial amounts of CO2, O2, N2, Ar, and CO into the atmosphere, indicating that these substances were adsorbed onto the Martian soil. An adsorption potential plot is given, graphing quantity of gas against time (d). For a model surface area of 17 square meters per gram of measured substance, oxygen adsorption was found to be relatively high, a result which tends to confirm the hypothesis that Martian oxygen exists largely in chemisorbed states or in active oxygen compounds, e.g., peroxide, superoxide, hydroperoxide.

18. Photon Detection System Designs for the Deep Underground Neutrino Experiment
SciTech Connect
Whittington, Denver
2015-11-19
The Deep Underground Neutrino Experiment (DUNE) will be a premier facility for exploring long-standing questions about the boundaries of the standard model. Acting in concert with the liquid argon time projection chambers underpinning the far detector design, the DUNE photon detection system will capture ultraviolet scintillation light in order to provide valuable timing information for event reconstruction. To maximize the active area while maintaining a small photocathode coverage, the experiment will utilize a design based on plastic light guides coated with a wavelength-shifting compound, along with silicon photomultipliers, to collect and record scintillation light from liquid argon. This report presents recent preliminary performance measurements of this baseline design and several alternative designs which promise significant improvements in sensitivity to low-energy interactions.
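Neither DUNE abstract quantifies the light budget, but the driving quantities multiply straightforwardly. A minimal sketch follows, in which every number is an assumed placeholder of mine rather than a DUNE design value:

```python
# Rough photoelectron budget for a liquid argon scintillation detector.
def expected_photoelectrons(e_dep_mev, yield_per_mev, geo_eff, wls_eff, pde):
    """Deposited energy -> detected photoelectrons, as a product of
    scintillation yield, geometric collection, wavelength-shifting
    transport, and SiPM photon detection efficiency."""
    return e_dep_mev * yield_per_mev * geo_eff * wls_eff * pde

pe = expected_photoelectrons(
    e_dep_mev=10.0,         # assumed energy deposit
    yield_per_mev=24000.0,  # assumed LAr scintillation yield at drift field
    geo_eff=0.005,          # assumed fraction of photons reaching a light guide
    wls_eff=0.5,            # assumed shifting/transport efficiency
    pde=0.35,               # assumed SiPM photon detection efficiency
)
print(f"~{pe:.0f} photoelectrons")
```

The geometric and transport terms dominate the product, which is why the baseline design trades photocathode coverage for coated light-guide area.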
19. Detectability of strange matter in heavy ion experiments
SciTech Connect
Schaffner, J.; Diener, A.; Stocker, H.; Greiner, C.
1997-06-01
We discuss the properties of two distinct forms of hypothetical strange matter, small lumps of strange quark matter (strangelets) and of hyperon matter [metastable exotic multihypernuclear objects (MEMOs)], with special emphasis on their relevance for present and future heavy ion experiments. The masses of small strangelets up to A_B = 40 are calculated using the MIT bag model with shell mode filling for various bag parameters. The strangelets are checked for possible strong and weak hadronic decays, also taking into account multiple hadron decays. It is found that strangelets which are stable against strong decay are most likely highly negatively charged, contrary to previous findings. Strangelets can be stable against weak hadronic decay but their masses and charges are still rather high. This has serious impact on the present high sensitivity searches in heavy ion experiments at the AGS and CERN facilities. On the other hand, highly charged MEMOs are predicted on the basis of an extended relativistic mean-field model. Those objects could be detected in future experiments searching for short-lived, rare composites. It is demonstrated that future experiments can be sensitive to a much wider variety of strangelets. © 1997 The American Physical Society

20. Detection of high frequency oscillations from space experiments and eclipses
NASA Astrophysics Data System (ADS)
Banerjee, Dipankar; Singh, Jagdev; Hasan, Siraj; Gupta, Girjesh R.; Nagaraju, K.
We performed high-resolution spectroscopy of the solar corona during the total solar eclipse of July 22, 2009 in two emission lines, namely the green line at 530.3 nm due to [Fe XIV] and the red line at 637.4 nm due to [Fe X], simultaneously from Anji, China. A two-mirror coelostat with a 100 cm focal length lens formed a 9.2 mm image of the Sun. The spectrograph, using a 140 cm focal length lens in Littrow mode and a grating with 600 lines per mm blazed at 2 μm, provided a dispersion of 30 mÅ and 42 mÅ per pixel in the 4th order around the green line and the 3rd order around the red emission line, respectively. Two Peltier-cooled 1K x 1K CCD cameras with a pixel size of 13 μm square and 14-bit readout at 10 MHz, operated in frame transfer mode, were used to obtain time sequence spectra in each emission line simultaneously. We detected the presence of high frequency oscillations in intensity, velocity and line widths. We also studied the variation of line widths with height. The results will be discussed in terms of different MHD waves. The possibility of detecting these oscillations from space-based experiments will be addressed. India is going to launch an emission-line coronagraph on a small satellite platform called Aditya. The scientific goals of Aditya in pursuit of wave detection will be presented.

1. Direct and Indirect Educational Relationships: Developing a Typology for the Contribution of Different Categories of School Staff in Relation to Students' Educational Experiences
ERIC Educational Resources Information Center
Frelin, Anneli; Grannäs, Jan
2015-01-01
This article presents results from a research project exploring the relational interplay between school staff and students, its functions and complexity in the secondary school context.
School relationships (between students and different kinds of staff) are more or less indirectly related to educational content: subject matter as well as norms…

2. Indirect techniques in nuclear astrophysics: a review
NASA Astrophysics Data System (ADS)
Tribble, R. E.; Bertulani, C. A.; La Cognata, M.; Mukhamedzhanov, A. M.; Spitaleri, C.
2014-10-01
In this review, we discuss the present status of three indirect techniques that are used to determine reaction rates for stellar burning processes: asymptotic normalization coefficients, the Trojan Horse method and Coulomb dissociation. A comprehensive review of the theory behind each of these techniques is presented. This is followed by an overview of the experiments that have been carried out using these indirect approaches.

3. Indirection and computer security.
SciTech Connect
Berg, Michael J.
2011-09-01
The discipline of computer science is built on indirection. David Wheeler famously said, "All problems in computer science can be solved by another layer of indirection. But that usually will create another problem." We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

4. Chemical Detection and Identification Techniques for Exobiology Flight Experiments
NASA Technical Reports Server (NTRS)
Kojiro, Daniel R.; Sheverev, Valery A.; Khromov, Nikolai A.
2002-01-01
Exobiology flight experiments require highly sensitive instrumentation for in situ analysis of the volatile chemical species that occur in the atmospheres and surfaces of various bodies within the solar system. The complex mixtures encountered place a heavy burden on the analytical instrumentation to detect and identify all species present. The minimal resources available onboard for such missions mandate that the instruments provide maximum analytical capabilities with minimal requirements of volume, weight and consumables. Advances in technology may be achieved by increasing the amount of information acquired by a given technique with greater analytical capabilities and miniaturization of proven terrestrial technology. We describe here methods to develop analytical instruments for the detection and identification of a wide range of chemical species using gas chromatography (GC). These efforts to expand the analytical capabilities of GC technology are focused on the development of detectors for the GC which provide sample identification independent of the GC retention time data. A novel new approach employs Penning Ionization Electron Spectroscopy (PIES).

5. Indirect comparisons of therapeutic interventions
PubMed Central
Schöttker, Ben; Lühmann, Dagmar; Boulkhemair, Dalila; Raspe, Heiner
2009-01-01
… any comparator, all randomised controlled trials (RCT) that provide a study arm with the intervention of interest. Adjusted indirect comparisons and metaregression analyses include only those studies that provide one study arm with the intervention of interest and another study arm with a common comparator. While the aforementioned methods use conventional meta-analytical techniques, mixed treatment comparisons (MTC) use Bayesian statistics. They are able to analyse a complex network of RCT with multiple comparators simultaneously. During the period from 1999 to 2008, adjusted indirect comparisons were the most commonly used method for indirect comparisons. Since 2006 an increase in the application of the more methodologically challenging MTC has been observed. For the validity check 248 data sets, which include results of a direct and an indirect comparison, are available. The share of statistically significant discrepant results is greatest in the unadjusted indirect comparisons (25.5% [95% CI: 13.1%; 38%]), followed by metaregression analyses (16.7% [95% CI: −13.2%; 46.5%]), adjusted indirect comparisons (12.1% [95% CI: 6.1%; 18%]) and MTC (1.8% [95% CI: −1.7%; 5.2%]). Discrepant results are mainly detected if the basic assumption for an indirect comparison – between-study homogeneity – does not hold. However, a systematic over- or underestimation of the results of direct comparisons by any of the indirectly comparing methods was not observed in this sample. Discussion: The selection of an appropriate method for an indirect comparison has to account for its validity, the number of interventions to be compared and the quality as well as the quantity of available studies. Unadjusted indirect comparisons provide, contrasted with the results of direct comparisons, a low validity. Adjusted indirect comparisons and MTC may, under certain circumstances, give results which are consistent with the results of direct comparisons. The limited number of available reviews …
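The "adjusted indirect comparison" named in entry 5 is usually attributed to Bucher et al.: two treatments A and B that were each trialled against a common comparator C are compared by differencing the two trial effects, with the variances adding. A minimal sketch, assuming effects on a log scale (function and variable names are mine):

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison of A vs. B via common comparator C.
    Inputs are effect estimates (e.g., log odds ratios) and standard errors."""
    d_ab = d_ac - d_bc                       # indirect point estimate
    se_ab = math.sqrt(se_ac**2 + se_bc**2)   # variances of independent trials add
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

print(bucher_indirect(d_ac=-0.40, se_ac=0.12, d_bc=-0.25, se_bc=0.15))
```

The added variance term is why indirect estimates are less precise than head-to-head trials, and why the between-study homogeneity assumption flagged in the abstract is critical.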
6. Labeled Release - An experiment in radiorespirometry. [for Viking Mars Program life detection experiments]
NASA Technical Reports Server (NTRS)
Levin, G. V.; Straat, P. A.
1976-01-01
The Labeled Release extraterrestrial life detection experiment onboard the Viking spacecraft is described as it will be implemented on the surface of Mars in 1976. This experiment is designed to detect heterotrophic life by supplying a dilute solution of radioactive organic substrates to a sample of Martian soil and monitoring for evolution of radioactive gas. A significantly attenuated response by a heat-sterilized control sample of the same soil would confirm a positive metabolic response. Experimental assumptions as well as criteria for the selection of organic substrates are presented. The Labeled Release nutrient has been widely tested, is versatile in eliciting terrestrial metabolic responses, and is stable to heat sterilization and to the long-term storage required before its use on Mars. A testing program has been conducted with flight-like instruments to acquire science data relevant to the interpretation of the Mars experiment. Factors involved in the delineation of a positive result are presented and the significance of the possible results discussed.

7. Validation and use of an indirect enzyme-linked immunosorbent assay for detection of antibodies to West Nile virus in American Alligators (Alligator mississippiensis) in Florida.
PubMed
Jacobson, Elliott R; Johnson, April J; Hernandez, Jorge A; Tucker, Sylvia J; Dupuis, Alan P; Stevens, Robert; Carbonneau, Dwayne; Stark, Lillian
2005-01-01
In October 2002, West Nile virus (WNV) was identified in farmed American alligators (Alligator mississippiensis) in Florida showing clinical signs and having microscopic lesions indicative of central nervous system disease.
To perform seroepidemiologic studies, an indirect enzyme-linked immunosorbent assay (ELISA) was developed to determine exposure of captive and wild alligators to WNV. To validate the test, a group of WNV-seropositive and -seronegative alligators was identified at the affected farm using hemagglutination inhibition (HAI) and the plaque reduction neutralization test (PRNT). The indirect ELISA utilized a rabbit anti-alligator immunoglobulin polyclonal antibody as the secondary antibody, and inactivated WNV-infected Vero cells were used as the coating antigen. For all samples (n=58), the results of the ELISA were consistent with the HAI and PRNT findings. Plasma was collected from 669 free-ranging alligators from 21 sites across Florida in April and October 2003. Four samples collected in April and six in October were positive for WNV antibodies using HAI, PRNT, and the indirect ELISA. This indicated that wild alligators in Florida have been exposed to WNV. These findings can be used as a baseline for future surveys. PMID:15827216

8. Zirconia coated stir bar sorptive extraction combined with large volume sample stacking capillary electrophoresis-indirect ultraviolet detection for the determination of chemical warfare agent degradation products in water samples.
PubMed
Li, Pingjing; Hu, Bin; Li, Xiaoyong
2012-07-20
In this study, a sensitive, selective and reliable analytical method combining zirconia (ZrO₂) coated stir bar sorptive extraction (SBSE) with large volume sample stacking capillary electrophoresis-indirect ultraviolet detection (LVSS-CE/indirect UV) was developed for the direct analysis of the chemical warfare agent degradation products alkyl alkylphosphonic acids (AAPAs) (including ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonate (PMPA)) and methylphosphonic acid (MPA) in environmental waters. The ZrO₂ coated stir bar was prepared by adhering nanometer-sized ZrO₂ particles onto the surface of a stir bar with commercial PDMS sol as the adhesion agent. Due to the high affinity of ZrO₂ for the electronegative phosphonate group, ZrO₂ coated stir bars could selectively extract the strongly polar AAPAs and MPA. After systematically optimizing the extraction conditions of ZrO₂-SBSE, the analytical performance of ZrO₂-SBSE-CE/indirect UV and ZrO₂-SBSE-LVSS-CE/indirect UV was assessed. The limits of detection (LODs, at a signal-to-noise ratio of 3) obtained by ZrO₂-SBSE-CE/indirect UV were 13.4-15.9 μg/L for PMPA, EMPA and MPA. The relative standard deviations (RSDs, n=7, c=200 μg/L) of the corrected peak area for the target analytes were in the range of 6.4-8.8%. Enhancement factors (EFs) in terms of LODs were found to be from 112- to 145-fold. By combining ZrO₂ coated SBSE with LVSS as a dual preconcentration strategy, the EFs were magnified up to 1583-fold, and the LODs of ZrO₂-SBSE-LVSS-CE/indirect UV were 1.4, 1.2 and 3.1 μg/L for PMPA, EMPA, and MPA, respectively. The RSDs (n=7, c=20 μg/L) were found to be in the range of 9.0-11.8%. The developed ZrO₂-SBSE-LVSS-CE/indirect UV method has been successfully applied to the analysis of PMPA, EMPA, and MPA in different environmental water samples, and the recoveries for the spiked water samples were found to be in the range of 93.8-105.3%. PMID:22673812

9. SPEED: the segmented pupil experiment for exoplanet detection
NASA Astrophysics Data System (ADS)
Martinez, P.; Preis, Olivier; Gouvret, C.; Dejonghe, J.; Daban, J.-B.; Spang, A.; Martinache, F.; Beaulieu, M.; Janin-Potiron, P.; Abe, L.; Fantei-Caujolle, Y.; Mattei, D.; Ottogalli, S.
2014-07-01
Searching for nearby exoplanets with direct imaging is one of the major scientific drivers for both space and ground-based programs. While the second generation of dedicated high-contrast instruments on 8-m class telescopes is about to greatly expand the sample of directly imaged planets, exploring the planetary parameter space to hitherto-unseen regions, ideally down to terrestrial planets, is a major technological challenge for the forthcoming decades. This requires increasing spatial resolution and significantly improving high-contrast imaging capabilities at close angular separations. Segmented telescopes offer a practical path toward dramatically enlarging telescope diameter from the ground (ELTs), or achieving optimal diameter in space. However, translating current technological advances in the domain of high-contrast imaging for monolithic apertures to the case of segmented apertures is far from trivial. SPEED - the segmented pupil experiment for exoplanet detection - is a new instrumental facility in development at the Lagrange laboratory for enabling strategies and technologies for high-contrast instrumentation with segmented telescopes. SPEED combines wavefront control, including precision segment phasing architectures, wavefront shaping using two sequential high-order deformable mirrors for both phase and amplitude control, and advanced coronagraphy designed for very close angular separations (PIAACMC). SPEED represents significant investments and technology developments towards the ELT era and future space missions, and will offer an ideal cocoon to pave the road of technological progress in both the phasing and high-contrast domains with complex/irregular apertures. In this paper, we describe the overall design and philosophy of the SPEED bench.

10. Development of a sensitive and specific indirect enzyme-linked immunosorbent assay based on a baculovirus recombinant antigen for detection of specific antibodies against Ehrlichia canis.
PubMed
López, Lissett; Venteo, Angel; Aguirre, Enara; García, Marga; Rodríguez, Majosé; Amusátegui, Inmaculada; Tesouro, Miguel A; Vela, Carmen; Sainz, Angel; Rueda, Paloma
2007-11-01
An indirect enzyme-linked immunosorbent assay (ELISA) based on the baculovirus recombinant P30 protein of Ehrlichia canis and the 1BH4 anti-canine IgG monoclonal antibody was developed and evaluated by examining a panel of 98 positive and 157 negative sera using the indirect fluorescent antibody (IFA) test as the reference technique. The P30-based ELISA appeared to be sensitive and specific (77.55% and 95.54%, respectively) when qualitative results (positive/negative) were compared with those of the IFA test; the coefficient of correlation (R) between the 2 tests was 0.833. Furthermore, it was possible to establish a mathematical formula for use in comparing the results of both techniques. These results indicate that the recombinant P30 antigen-based ELISA is a suitable alternative to the IFA test for simple, consistent, and rapid serodiagnosis of canine ehrlichiosis. Moreover, the use of this recombinant protein as antigen offers a great advantage for antigen preparation in comparison with other techniques in which the whole E. canis organism is used as antigen. PMID:17998551

11. A Probability Model of Accuracy in Deception Detection Experiments.
ERIC Educational Resources Information Center
Park, Hee Sun; Levine, Timothy R.
2001-01-01
Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…
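The veracity-effect claim in entry 11 can be written out explicitly. A hedged sketch in my own notation (the paper's exact formulation may differ): let $P(T)$ be the proportion of truthful messages and suppose a truth-biased receiver judges "truth" with probability $t$ regardless of actual veracity. Overall accuracy is then the mixture of the two conditional accuracies, $\mathrm{Acc} = P(T)\,P(\text{judge truth}\mid T) + (1-P(T))\,P(\text{judge lie}\mid L) = P(T)\,t + (1-P(T))(1-t)$, so whenever $t > 1/2$ accuracy rises linearly with the share of truths in the stimulus set: truths are detected better than lies, which is the veracity effect.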
12. Resource Construction and Evaluation for Indirect Opinion Mining of Drug Reviews
PubMed Central
Noferesti, Samira; Shamsfard, Mehrnoush
2015-01-01
Opinion mining is a well-known problem in natural language processing that has attracted increasing attention in recent years. Existing approaches are mainly limited to the identification of direct opinions and are mostly dedicated to explicit opinions. However, in some domains, such as medicine, opinions about an entity are not usually expressed by opinion words directly; rather, they are expressed indirectly by describing the effect of that entity on other entities. Therefore, ignoring indirect opinions can lead to the loss of valuable information and a noticeable decline in the overall accuracy of opinion mining systems. In this paper, we first introduce the task of indirect opinion mining. Then, we present a novel approach to construct a knowledge base of indirect opinions, called OpinionKB, which aims to be a resource for automatically classifying people's opinions about drugs. Using our approach, we have extracted 896 quadruples of indirect opinions at a precision of 88.08 percent. Furthermore, experiments on drug reviews demonstrate that our approach can achieve 85.25 percent precision in the polarity detection task, and outperforms state-of-the-art opinion mining methods. We also build a corpus of indirect opinions about drugs, which can be used as a basis for supervised indirect opinion mining. The proposed approach for corpus construction achieves a precision of 88.42 percent. PMID:25962135

13. Microwave detection of air showers with the MIDAS experiment
NASA Astrophysics Data System (ADS)
Privitera, Paolo; Alekotte, I.; Alvarez-Muñiz, J.; Berlin, A.; Bertou, X.; Bogdan, M.; Boháčová, M.; Bonifazi, C.; Carvalho, W. R.; de Mello Neto, J. R. T.; Facal San Luis, P.; Genat, J. F.; Hollon, N.; Mills, E.; Monasor, M.; Reyes, L. C.; Rouille d'Orfeuil, B.; Santos, E. M.; Wayne, S.; Williams, C.; Zas, E.
2011-03-01
Microwave emission from extensive air showers could provide a novel technique for ultra-high energy cosmic ray detection over large areas and with 100% duty cycle. We describe the design, performance and first results of the MIDAS (MIcrowave Detection of Air Showers) detector, a 4.5 m parabolic dish with 53 feeds in its focal plane, currently installed at the University of Chicago.
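The MIDAS abstracts do not quote a sensitivity figure, but any such radio measurement is governed by the standard radiometer equation: the smallest detectable antenna-temperature change is $\Delta T_{\min} \approx T_{\mathrm{sys}}/\sqrt{\Delta\nu\,\tau}$ for system temperature $T_{\mathrm{sys}}$, bandwidth $\Delta\nu$ and integration time $\tau$. With illustrative values of my own choosing, $T_{\mathrm{sys}} = 100$ K, $\Delta\nu = 1$ GHz and $\tau = 10$ ns (a shower-crossing timescale), $\Delta T_{\min} \approx 100/\sqrt{10} \approx 32$ K, which shows why wide bandwidths and low-noise feeds matter for such short transients.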
14. New results of the Borexino experiment: pp solar neutrino detection
NASA Astrophysics Data System (ADS)
Davini, S.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Cavalcante, P.; Chepurnov, A.; D'Angelo, D.; Derbin, A.; Etenko, A.; Fomenko, K.; Franco, D.; Galbiati, C.; Ghiano, C.; Goretti, A.; Gromov, M.; Ianni, Aldo; Ianni, Andrea; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Lewke, T.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Marcocci, S.; Meroni, E.; Misiaszek, M.; Mosteiro, P.; Muratova, V.; Oberauer, L.; Obolensky, M.; Ortica, F.; Otis, K.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Rossi, N.; Salvo, C.; Schönert, S.; Simgen, H.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Vignaud, D.; Vogelaar, R. B.; Winter, J.; Wojcik, M.; Wurm, M.; Zaimidoroga, O.; Zavatarelli, S.; Zuzel, G.
2016-07-01
The Borexino experiment is an ultra-pure liquid scintillator detector, running at Laboratori Nazionali del Gran Sasso (Italy). Borexino has completed the real-time spectroscopy of the solar neutrinos generated in the proton-proton chain in the core of the Sun. This article reviews the Borexino experiment and the first direct measurement of pp solar neutrinos.

15. Detection of ATP and NADH: A Bioluminescent Experience.
ERIC Educational Resources Information Center
Selig, Ted C.; And Others
1984-01-01
Described is a bioluminescent assay for adenosine triphosphate (ATP) and reduced nicotinamide-adenine dinucleotide (NADH) that meets the requirements of an undergraduate biochemistry laboratory course. The 3-hour experiment provides students with experience in bioluminescence and analytical biochemistry yet requires limited instrumentation,…

16. Indirect techniques in nuclear astrophysics: a review.
PubMed
Tribble, R E; Bertulani, C A; Cognata, M La; Mukhamedzhanov, A M; Spitaleri, C
2014-10-01
In this review, we discuss the present status of three indirect techniques that are used to determine reaction rates for stellar burning processes: asymptotic normalization coefficients, the Trojan Horse method and Coulomb dissociation. A comprehensive review of the theory behind each of these techniques is presented. This is followed by an overview of the experiments that have been carried out using these indirect approaches. PMID:25313189

17. Development and use of an indirect enzyme-linked immunosorbent assay for detection of iridovirus exposure in gopher tortoises (Gopherus polyphemus) and eastern box turtles (Terrapene carolina carolina).
PubMed
Johnson, April J; Wendland, Lori; Norton, Terry M; Belzer, Bill; Jacobson, Elliott R
2010-05-19
Iridoviruses, pathogens typically associated with fish and amphibians, have recently been shown to cause acute respiratory disease in chelonians including box turtles, red-eared sliders, gopher tortoises, and Burmese star tortoises. Case reports of natural infections in several chelonian species in the United States have been reported; however, the prevalence remains unknown in susceptible populations of free-ranging chelonians. To determine the prevalence of iridovirus exposure in free-ranging gopher tortoises (Gopherus polyphemus) in the southeast United States, an indirect enzyme-linked immunosorbent assay (ELISA) was developed and used to evaluate plasma samples from wild gopher tortoises (G.
polyphemus) from: Alabama (n=9); Florida (n=658); Georgia (n=225); Louisiana (n=12); Mississippi (n=28); and unknown locations (n=68), collected between 2001 and 2006. Eight (1.2%) seropositive tortoises were identified from Florida and seven (3.1%) from Georgia, for an overall prevalence of 1.5%. Additionally, a population of eastern box turtles was sampled from a private nature sanctuary in Pennsylvania that had experienced an outbreak of iridovirus the previous year, which killed 16 turtles. Only 1 turtle out of 55 survivors tested positive (1.8%). Results suggest a low exposure rate of chelonians to this pathogen; however, it is suspected that this is an underestimate of the true prevalence. Since experimental transmission studies and past outbreaks have shown a high rate of mortality in infected turtles, turtles may die before they develop an antibody response. Further, the duration of the antibody response is unknown and may also cause an underestimate of the true prevalence. PMID:19931321

18. Parallel detection experiment of fluorescence confocal microscopy using DMD.
PubMed
Wang, Qingqing; Zheng, Jihong; Wang, Kangni; Gui, Kun; Guo, Hanming; Zhuang, Songlin
2016-05-01
A parallel detection scheme for fluorescence confocal microscopy (PDFCM) based on a Digital Micromirror Device (DMD) is reported in this paper, in order to realize simultaneous multi-channel imaging and improve detection speed. The DMD is added into the PDFCM system, replacing the single traditional pinhole of the confocal system and dividing the laser source into multiple excitation beams. The DMD-based PDFCM imaging system was experimentally set up. A multi-channel image of the fluorescence signal of a potato cell sample was detected by parallel lateral scanning in order to verify the feasibility of introducing the DMD into a fluorescence confocal microscope. In addition, for the purpose of characterizing the microscope, the depth response curve was also acquired. The experimental results show that, in contrast to conventional microscopy, the DMD-based PDFCM system has higher axial resolution and faster detection speed, which may bring some potential benefits in biological and medical analysis. SCANNING 38:234-239, 2016. © 2015 Wiley Periodicals, Inc. PMID:26331288

19. Emergency First Responders' Experience with Colorimetric Detection Methods
SciTech Connect
Sandra L. Fox; Keith A. Daum; Carla J. Miller; Marnie M. Cortez
2007-10-01
Nationwide, first responders from state and federal support teams respond to hazardous materials incidents, industrial chemical spills, and potential weapons of mass destruction (WMD) attacks. Although first responders have sophisticated chemical, biological, radiological, and explosive detectors available for assessment of the incident scene, simple colorimetric detectors have a role in response actions. The large number of colorimetric chemical detection methods available on the market can make the selection of the proper methods difficult. Although each detector has unique aspects to provide qualitative or quantitative data about the unknown chemicals present, not all detectors provide consistent, accurate, and reliable results. Included here, in a consumer-report-style format, we provide "boots on the ground" information directly from first responders about how well colorimetric chemical detection methods meet their needs in the field and how they procure these methods.
20. The MIDAS experiment: MIcrowave Detection of Air Showers
NASA Astrophysics Data System (ADS)
Facal, Pedro; Bohacova, Martina; Monasor, Maria; Privitera, Paolo; Reyes, Luis C.; Williams, Cristopher
2010-02-01
Recent measurements suggest that extensive air showers initiated by high energy cosmic rays (above 1 EeV) emit signals in the microwave band of the EM spectrum, caused by the collisions of free electrons with neutral atmospheric molecules in the plasma produced by the passage of the shower. Such emission is isotropic and could allow the detection of air showers with 100% duty cycle and a calorimetric-like energy measurement - a significant improvement over current detection techniques. We have built a MIDAS prototype, which consists of a 4.5 m diameter antenna with a cluster of 55 feed-horns in the 4 GHz range, covering a 10° × 10° field of view, with self-triggering capability. The details of the prototype and first results will be presented.

1. The Cloud Detection and UV Monitoring Experiment (CLUE)
NASA Technical Reports Server (NTRS)
Barbier, L.; Loh, E.; Sokolsky, P.; Streitmatter, R.
2004-01-01
We propose a large-area, low-power instrument to perform CLoud detection and Ultraviolet monitoring, CLUE. CLUE will combine the UV detection capabilities of the NIGHTGLOW payload with an array of infrared sensors to perform cloud-slicing measurements. Missions such as EUSO and OWL, which seek to measure UHE cosmic rays at 10^20 eV, use the atmosphere as a fluorescence detector. CLUE will provide several important correlated measurements for these missions, including: monitoring the atmospheric UV emissions from 330-400 nm, determining the ambient cloud cover during those UV measurements (with active LIDAR), measuring the optical depth of the clouds (with an array of narrow band-pass IR sensors), and correlating LIDAR and IR cloud cover measurements. This talk will describe the instrument as we envision it.

2. Serotype- and serogroup-specific detection of African horsesickness virus using phage displayed chicken scFvs for indirect double antibody sandwich ELISAs.
PubMed
van Wyngaardt, Wouter; Mashau, Cordelia; Wright, Isabel; Fehrsen, Jeanni
2013-01-01
There is an ongoing need for standardized, easily renewable immunoreagents for detecting African horsesickness virus (AHSV). Two phage displayed single-chain variable fragment (scFv) antibodies, selected from a semi-synthetic chicken antibody library, were used to develop double antibody sandwich enzyme-linked immunosorbent assays (DAS-ELISAs) to detect AHSV. In the DAS-ELISAs, the scFv previously selected with directly immobilized AHSV-3 functioned as a serotype-specific reagent that recognized only AHSV-3. In contrast, the one selected with AHSV-8 captured by IgG against AHSV-3 recognized all nine AHSV serotypes but not the Bryanston strain of equine encephalosis virus, serving as evidence for its serogroup specificity. These two scFvs can help to rapidly confirm the presence of AHSV, while additional serotype-specific scFvs may simplify AHSV serotyping. PMID:23388433

3. Bipolar Transistors Can Detect Charge in Electrostatic Experiments
ERIC Educational Resources Information Center
Dvorak, L.
2012-01-01
A simple charge indicator with bipolar transistors is described that can be used in various electrostatic experiments. Its behaviour enables us to elucidate links between 'static electricity' and electric currents. In addition it allows us to relate the sign of static charges to the sign of the terminals of an ordinary battery.
(Contains 7 figures…)

4. Photothermal lens detection of gold nanoparticles: theory and experiments.
PubMed
Brusnichkin, Anton V; Nedosekin, Dmitry A; Proskurnin, Mikhail A; Zharov, Vladimir P
2007-11-01
An approach for mode-mismatched two-beam (pump-probe) photothermal lens detection of multipoint light-absorbing targets in solution (e.g., gold nanoparticles) is developed for continuous-wave, intensity-modulated laser-excitation mode. A description of the blooming of the thermo-optical element (thermal lens) upon absorption of the excitation laser radiation is based on the summation of individual thermal waves from multiple heat sources. This description makes it possible to estimate the irregularities of the temperature (and, thus, the refractive index) profile for a discrete number of nanoparticles in the irradiated area and a change in the concentration and particle-size parameters. Experimental results are in good agreement with theoretical dependences of the photothermal signal on nanoparticle size, concentration and excitation laser power. Calibration plots for particles from 2 to 250 nm show long linear ranges and limits of detection of gold nanoparticles at the level of hundreds of nanoparticles with the current setup, and the photothermal-lens sensitivity coefficient increases as a cubic function of particle size. Further improvements are discussed, including increasing the sensitivity thresholds up to one nanoparticle in the detected volume. PMID:18028698

5. Development of an Indirect ELISA Using Different Fragments of Recombinant Ncgra7 for Detection of Neospora caninum Infection in Cattle and Water Buffalo
PubMed Central
HAMIDINEJAT, Hossein; SEIFI ABAD SHAPOURI, Massoud Reza; NAMAVARI, Mohammad Mehdi; SHAYAN, Parviz; KEFAYAT, Marzieh
2015-01-01
Background: Dense granules are immunodominant proteins for the standardization of immunodiagnostic procedures to detect neosporosis. In the present study, different fragments of a dense-granule protein were evaluated for serodiagnosis of Neospora caninum in cattle and water buffalo. Methods: NcGRA7 from N. caninum tachyzoites was amplified. The PCR product and pMAL-c2X plasmid were digested with the EcoRI restriction enzyme and expressed in Escherichia coli to evaluate its competence for detection of anti-N. caninum antibodies with ELISA in comparison with the commercial IDEXX ELISA. Furthermore, 230 sera of presumably healthy cattle and water buffaloes (108 cattle and 122 water buffaloes) were analyzed by both tests to determine the agreement of these two procedures. Results: Sensitivities and specificities of the NcGRA7-based ELISA were 94.64% and 90.38%, respectively, using sera of cattle, and 98.57% and 86.54%, respectively, in the case of buffaloes. A good correlation appeared between the results of the IDEXX ELISA and the ELISA based on recombinant NcGRA7 for detecting N. caninum antibodies. Analysis by McNemar's test showed that the NcGRA7-based ELISA has acceptable capability to differentiate positive results in comparison with the IDEXX ELISA. Conclusion: The NcGRA7-based ELISA, considering the utilized new fragment of genomic DNA, is a good tool for serodiagnosis of anti-N. caninum antibodies for screening and epidemiological purposes in cattle herds and water buffaloes as well. PMID:25904948

6. Unexploded ordnance detection experiments using ultrawideband synthetic aperture radar
NASA Astrophysics Data System (ADS)
DeLuca, Clyde C.; Marinelli, Vincent R.; Ressler, Marc A.; Ton, Tuan T.
1998-09-01
The Army Research Laboratory (ARL) has several technology development programs that are evaluating the use of ultra-wideband synthetic aperture radar (UWB SAR) to detect and locate targets that are subsurface or concealed by foliage. Under these programs, a 1-GHz-bandwidth, low-frequency, fully polarimetric UWB SAR instrumentation system was developed to collect the data needed to support foliage- and ground-penetrating radar studies. The radar was integrated onto a 150-ft-high mobile boomlift platform in 1995 and was thus named the BoomSAR. In 1997, under the sponsorship of the Strategic Environmental Research and Development Program (SERDP), ARL began a project focused on enhancing the detection and discrimination of unexploded ordnance (UXO). The program's technical approach is to collect high-quality, precision data to support phenomenological investigations of electromagnetic wave propagation through varying dielectric media, which in turn supports the development of algorithms for automatic target detection. For this project, a UXO test site was set up at the Steel Crater Test Area -- an existing test site at Yuma Proving Ground (YPG), Arizona, that already contained subsurface mines, tactical vehicles, 55-gallon drums, storage containers, wires, pipes, and arms caches. More than 600 additional pieces of inert UXO were added to the Steel Crater Test Area, including bombs (250, 500, 750, 1000, and 2000 lb), mortars (60 and 81 mm), artillery shells (105 and 155 mm), 2.75-in. rockets, submunitions (M42, BLU-63, M68, BLU-97, and M118), and mines (Gator, VS1.6, M12, PMN, and POM-Z). In the selection of UXO to be included at YPG, an emphasis was placed on the types of munitions that may be present at CONUS test and training ranges.

7. Indirect reciprocity with trinary reputations.
PubMed
Tanabe, Shoma; Suzuki, Hideyuki; Masuda, Naoki
2013-01-21
Indirect reciprocity is a reputation-based mechanism for cooperation in social dilemma situations when individuals do not repeatedly meet. The conditions under which cooperation based on indirect reciprocity occurs have been examined in great detail. Most previous theoretical analyses assumed, for mathematical tractability, that an individual possesses a binary reputation value, i.e., good or bad, which depends on their past actions and other factors. However, in real situations, reputations of individuals may be multiple-valued. Another puzzling discrepancy between theory and experiments is the status of the so-called image scoring, in which cooperation and defection are judged to be good and bad, respectively, independent of other factors. Such an assessment rule is found in behavioral experiments, whereas it is known to be unstable in theory. In the present study, we fill both gaps by analyzing a trinary reputation model. By an exhaustive search, we identify all the cooperative and stable equilibria composed of a homogeneous population or a heterogeneous population containing two types of players. Some results derived for the trinary reputation model are direct extensions of those for the binary model. However, we find that the trinary model allows cooperation under image scoring under some mild conditions. PMID:23123557

8. Detection of Upward Air Showers with the EUSO Experiments
NASA Technical Reports Server (NTRS)
Takahashi, Y.; Hillman, L.; Zuccaro, Al; Adams, J.; Cline, D.
2003-01-01
Upward-going showers in the atmosphere can be detected by an orbiting satellite with appropriate instrumentation.
If the method only uses directional Cherenkov radiation, it is difficult to discriminate the real shower events from background noise of very short pulses. A spectroscopic polychromatic optical design can intentionally blur the focusing of photons at shorter wavelengths (300-330 nm), spreading the image size to 2 × 2 or 3 × 3 pixels. False triggers due to random chance coincidences of noise can be drastically reduced with a spectroscopic polychromatic, refractive telescope.

9. Detection of Upward Air Showers with the EUSO Experiments
NASA Astrophysics Data System (ADS)
Takahashi, Y.; Hillman, L.; Zuccaro, A.; Adams, J.; Cline, D.; EUSO Collaboration
2003-07-01
Upward-going showers in the atmosphere can be detected by an orbiting satellite having appropriate instrumentation. If the method only uses directional Cherenkov radiation, it is difficult to discriminate the real shower events from background noise of very short pulses. A spectroscopic polychromatic optical design can intentionally blur the focusing of photons at shorter wavelengths (300-330 nm), spreading the image size to 2 × 2 or 3 × 3 pixels. False triggers due to random chance coincidences of noise can be drastically reduced with a spectroscopic polychromatic, refractive telescope.

10. Searching for Dark Matter in Unification Models: A Hint from Indirect Sensitivities towards Future Signals in Direct Detection and B-decays
SciTech Connect
Olive, Keith A.
2006-11-28
A comparison is made between accelerator and direct detection constraints in constrained versions of the minimal supersymmetric standard model. Models considered are based on mSUGRA, where scalar and gaugino masses are unified at the GUT scale. In addition, the mSUGRA relation between the (unified) A and B parameters is assumed, as is the relation between m0 and the gravitino mass. Also considered are models where the latter two conditions are dropped (the CMSSM), and a less constrained version where the Higgs soft masses are not unified at the GUT scale (the NUHM).

11. Numerical analysis of experiments on the generation of shock waves in aluminium under indirect (X-ray) action on the Iskra-5 facility
SciTech Connect
Bondarenko, S V; Dolgoleva, G V; Novikova, E A
2013-07-31
The dynamics of laser and X-ray radiation fields in experiments with cylindrical converter boxes (illuminators), which had earlier been carried out on the Iskra-5 laser facility (the second harmonic of iodine laser radiation, λ = 0.66 μm), was investigated in a sector approximation using the SND-LIRA numerical technique. In these experiments, the X-ray radiation temperature in the box was determined by measuring the velocity of the shock wave generated in the sample under investigation, which was located at the end of the cylindrical illuminator. Through simulations were made using the SND-LIRA code, which took into account the absorption of laser driver radiation at the box walls, the production of quasithermal radiation, as well as the formation and propagation of the shock wave in the sample under investigation. An analysis of the experiments permits determining the electron thermal flux limiter f: for f = 0.03 it is possible to match the experimental scaling data for X-ray in-box radiation temperature to the data of our simulations. The shock velocities obtained from the simulations are also consistent with experimental data.
In particular, in the experiment with six laser beams (and a laser energy E_L = 1380 J introduced into the box) the velocity of the shock front (determined from the position of a laser mark) after passage through a 50-μm-thick base aluminium layer was equal to 35±1.6 km s⁻¹, and in simulations to 36 km s⁻¹. In the experiment with four laser beams (for E_L = 850 J) the shock velocity (measured from the difference of transit times through the base aluminium layer and an additional thin aluminium platelet) was equal to 30±3.6 km s⁻¹, and in simulations to 30 km s⁻¹. (interaction of laser radiation with matter)

12. Generation of anti-trenbolone monoclonal antibody and establishment of an indirect competitive enzyme-linked immunosorbent assay for detection of trenbolone in animal tissues, feed and urine.
PubMed
Zhang, Yuanyang; He, Fangyang; Wan, Yuping; Meng, Meng; Xu, Jing; Yi, Jian; Wang, Yabin; Feng, Caiwei; Wang, Shanliang; Xi, Rimo
2011-01-15
Trenbolone (TRE) is a steroid used by veterinarians on livestock to increase appetite and body weight. The use of TRE has been restricted because of its harmful side effects for consumers. To effectively control TRE residues in food and food products, a rapid and convenient immunoassay was developed by preparing an anti-TRE monoclonal antibody. The immunogen and coating antigen were prepared by coupling TRE hapten with carrier proteins via the 1-ethyl-3-(dimethylaminopropyl)carbodiimide hydrochloride (EDC) method. The optimized method gave an average IC₅₀ value of 0.323 ng mL⁻¹ towards TRE and an average detection limit (LOD) of 0.06 ng mL⁻¹, which is much lower than the maximum residue level (2.0 ng g⁻¹) accepted by the Joint FAO/WHO Expert Committee on Food Additives (JECFA). The specificity of the antibody was evaluated by measuring cross-reactivity with six structurally related compounds, including 19-nortestosterone (9.7%), testosterone (0.13%), methyltestosterone (<0.01%), methandrostenolone (<0.01%), (+)-dehydroisoandrosterone (<0.001%) and β-estradiol (<0.001%). The recovery rates of the test in detection of TRE-fortified animal tissue, urine and animal feed samples were in the range of 81.3-89.4%, while the intra- and inter-assay coefficients of variation were less than 12.0%. PMID:21147313

13. Indirect resin composites
PubMed Central
Nandini, Suresh
2010-01-01
Aesthetic dentistry continues to evolve through innovations in bonding agents, restorative materials, and conservative preparation techniques. The use of direct composite restoration in posterior teeth is limited to relatively small cavities due to polymerization stresses. Indirect composites offer an esthetic alternative to ceramics for posterior teeth. This review article focuses on the material aspect of the newer generation of composites. This review was based on a PubMed database search which we limited to peer-reviewed articles in English that were published between 1990 and 2010 in dental journals. The key words used were 'indirect resin composites,' 'composite inlays,' and 'fiber-reinforced composites.' PMID:21217945

14. Indirect visual cryptography scheme
NASA Astrophysics Data System (ADS)
Yang, Xiubo; Li, Tuo; Shi, Yishi
2015-10-01
Visual cryptography (VC) is a cryptographic scheme for images. In encryption, an image carrying a message is encoded into N sub-images, and any K of the sub-images can decode the message under special rules (N≥2, 2≤K≤N). When any K of the N sub-images are printed on transparencies and stacked exactly, the message of the original image is decrypted by the human visual system, but any K−1 of them reveal no information about it. This cryptographic scheme can decode concealed images without any cryptographic computations, and it has high security. But the scheme lacks concealment because of the obvious features of the sub-images. In this paper, we introduce an indirect visual cryptography scheme (IVCS), which encodes sub-images into pure phase images without visible strength, based on the encoding of visual cryptography. The pure phase images are the final ciphertexts. The indirect visual cryptography scheme not only inherits the merits of visual cryptography, but also gains indirection, concealment and security. Meanwhile, accurate alignment is no longer required, which gives the scheme strong anti-interference capacity and robustness. The decryption system can be highly integrated and conveniently operated, and its decryption process is dynamic and fast, all of which leads to good potential in practice.
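Entry 14 builds on the classic stacking rule of visual cryptography, which is easy to demonstrate. The sketch below implements the textbook (2,2) scheme with horizontal pixel expansion 2; it is my own illustration of the underlying encoding, not the phase-image method of the paper:

```python
import random

# (2,2) visual cryptography: a white pixel (0) gets the SAME random
# subpixel pair in both shares (stacking leaves one subpixel clear);
# a black pixel (1) gets COMPLEMENTARY pairs (stacking is all black).
PATTERNS = [(0, 1), (1, 0)]  # 0 = transparent, 1 = opaque

def make_shares(image):
    """image: 2-D list of 0/1 pixels -> two shares, each twice as wide."""
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for px in row:
            p = random.choice(PATTERNS)
            q = p if px == 0 else (1 - p[0], 1 - p[1])
            r1.extend(p)
            r2.extend(q)
        share1.append(r1)
        share2.append(r2)
    return share1, share2

s1, s2 = make_shares([[0, 1], [1, 0]])
stacked = [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
print(stacked)  # black source pixels become fully opaque subpixel pairs
```

Each share alone is a uniformly random pattern and so reveals nothing; the paper's contribution is to hide such shares further inside pure phase images.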
15. Indirect mechanisms of genotoxicity.
PubMed
Kirsch-Volders, Micheline; Vanhauwaert, Annelies; Eichenlaub-Ritter, Ursula; Decordier, Ilse
2003-04-11
Indirect mechanisms of genotoxicity correspond to interactions of mutagens with non-DNA targets, and are expected to show threshold concentration-effect response curves. If these thresholds can be proven experimentally they may provide a third alternative for risk assessment, besides the No Effect Level/Safety Factor approach and the low dose linear extrapolation method. We contributed significantly to the in vitro assessment of thresholds in human lymphocytes exposed to the spindle inhibitors nocodazole and carbendazim, showing dose dependency and the existence of lower thresholds for induction of non-disjunction as compared to chromosome loss. Micronuclei correlated with p53-independent or p53-dependent apoptosis and elimination of aneuploid cells. Extrapolation from in vitro threshold values to the in vivo situation remains unsolved. Comparing the in vitro threshold values for griseofulvin in human and rat lymphocytes with in vivo NOAEL/LOAEL in bone marrow/gut/erythrocytes suggests that the in vitro human system is the most sensitive. The threshold for induction of non-disjunction in in vitro maturing, nocodazole-exposed mouse oocytes was in the same low range. Regulators (UK Committee on Mutagenicity, http://www.doh.gov.uk/com/com.htm) considered the importance of thresholds for indirect mechanisms of genotoxicity. Acceptance of a non-linear extrapolation for mutagens requires mechanistic studies identifying the mutagen/target interactions. Moreover, appropriate risk evaluation will require additional studies on individual susceptibility to indirect mutagenic effects and on interactions of aneugens in complex mixtures. PMID:12676452

16. Direct and indirect inversions
NASA Astrophysics Data System (ADS)
Virieux, Jean; Brossier, Romain; Métivier, Ludovic; Operto, Stéphane; Ribodetti, Alessandra
2016-06-01
A bridge is highlighted between the direct inversion and the indirect inversion. They are based on fundamentally different approaches: one looks for a projection from the data space to the model space, while the other reduces a misfit between observed data and synthetic data obtained from a given model. However, it is possible to obtain similar structures for the model perturbation, and we shall focus on P-wave velocity reconstruction.
This bridge is built up through the Born approximation, linearizing the forward problem with respect to the model perturbation, and through asymptotic approximations of the Green functions of the wave propagation equation. We first describe the direct inversion and its ingredients, and then we focus on a specific misfit function design leading to an indirect inversion. Finally, we compare this indirect inversion with more standard least-squares inversion such as FWI, enabling a focus on small, weak velocity perturbations on one side and a speed-up of the velocity perturbation reconstruction on the other. This bridge was proposed by the group led by Raul Madariaga in the early nineties, emphasizing his leading role in efficient imaging workflows for seismic velocity reconstruction, a drastic requirement at that time.

17. Introduction to Dark Matter Experiments
NASA Astrophysics Data System (ADS)
Schnee, Richard W.
2011-03-01
I provide an introduction to experiments designed to detect WIMP dark matter directly, focusing on building intuitive understanding of the characteristics of potential WIMP signals and the experimental techniques. After deriving the characteristics of potential signals in direct-detection experiments for standard WIMP models, I summarize the general experimental methods shared by most direct-detection experiments and review the advantages, challenges, and status of such searches. Experiments are already probing SUSY models, with best limits on the spin-independent coupling below 10⁻⁷ pb. Combined information from direct and indirect detection, along with detection at colliders, promises to teach us much about fundamental particle physics, cosmology, and astrophysics.

18. Liquid Chromatography with Electrochemical Detection (LC-EC): An Experiment Using 4-Aminophenol
NASA Astrophysics Data System (ADS)
Situmorang, Manihar; Lee, Maria Theresa B.; Witzeman, Kathey; Heineman, William R.
1998-08-01
The combination of liquid chromatography with electrochemical detection (LC-EC) is a powerful analytical tool for determining electroactive compounds in complex matrices. It has found numerous applications, especially in the pharmaceutical and clinical areas. This experiment is intended to give students practical experience with the LC-EC technique. The first part is designed to explore the electrochemistry of p-aminophenol (PAP), the analyte, while the second part deals with separation and identification of PAP in the presence of ascorbic acid and catechol. The improvement in detection limit with electrochemical detection compared to ultraviolet detection is also illustrated.

19. A Theory and Experiments for Detecting Shock Locations
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Johnson, D. K.; Adamovsky, G.
1994-01-01
In this paper we present a simplified one-dimensional theory for predicting the locations of normal shocks in a converging-diverging nozzle. The theory assumes that the flow is quasi-one-dimensional and accelerated in the throat area. Optical aspects of the model consider propagation of electromagnetic fields transverse to the shock front. The theory consists of an inverse problem in which, from the measured intensity, it reconstructs an index of refraction profile for the shock. From this profile and the Dale-Gladstone relation, the density in the flow field is determined, thus determining the shock location. Experiments show agreement with the theory. In particular, the location is determined to within 10 percent accuracy.
Both theoretical and experimental results are presented to validate the procedures in this work. 20. Detection of Drugs in Nails: Three Year Experience. PubMed Shu, Irene; Jones, Joseph; Jones, Mary; Lewis, Douglas; Negrusz, Adam 2015-10-01 Nails (fingernails and toenails) are made of keratin. As the nail grows, substances incorporate into the keratin fibers where they can be detected 3-6 months after use. Samples are collected by clipping 2-3 mm of nail from all fingers (100 mg). We present drug testing results from 10,349 nail samples collected from high-risk cases during a 3-year period. Samples were analyzed by validated analytical methods. The initial testing was performed mostly using enzyme-linked immunosorbent assay, but also by liquid chromatography-tandem mass spectrometry (LC-MS-MS). Presumptive positive samples were subjected to confirmatory testing with sample preparation procedures including washing, pulverizing, digestion and extraction optimized for each drug class. A total of 7,799 samples was analyzed for amphetamines. The concentrations ranged from 40 to 572,865 pg/mg (median, 100-3,687) for all amphetamine analytes. Amphetamine and methamphetamine were present in 14% of the samples, 22 samples were positive for 3,4-methylenedioxymethamphetamine (0.3%), 7 for methylenedioxyamphetamine (0.09%) and 4 for 3,4-methylenedioxy-N-ethylamphetamine (0.05%). Cocaine and related analytes were found in 5% of samples (7,787 total), and the concentration range was 20-265,063 pg/mg (median 84-1,768). Opioids overall ranged from 40 to 118,229 pg/mg (median 123-830). The most prevalent opioids were oxycodone (15.1%) and hydrocodone (11.4%), compared with 1.0-3.6% for the others, including morphine, codeine, hydromorphone, methadone, 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine and oxymorphone. Carboxy-Δ-9-tetrahydrocannabinol positivity rate was 18.1% (0.04-262 pg/mg, median 6.41). Out of 3,039 samples, 756 were positive (24.9%) for ethyl glucuronide (20-3,754 pg/mg, median 88). Other drugs found in nails included barbiturates, benzodiazepines, ketamine, meperidine, tramadol, zolpidem, propoxyphene, naltrexone and buprenorphine. Nail analyses have become a reliable way of determining 1. Indirect field technology for detecting areas object of illegal spills harmful to human health: application of drones, photogrammetry and hydrological models. PubMed Capolupo, Alessandra; Pindozzi, Stefania; Okello, Collins; Boccia, Lorenzo 2014-01-01 The accumulation of heavy metals in agricultural soils is a serious environmental problem. The Campania region in southern Italy has higher levels of cancer risk, presumably due to the accumulation of geogenic and anthropogenic soil pollutants, some of which have been incorporated into organic matter. The aim of this study was to introduce and test an innovative, field-applicable methodology to detect heavy metal accumulation using drone-based photogrammetry and microrill network modelling, specifically to generate wetlands prediction indices normally applied at large catchment scales, such as a large geographic basin. The processing of aerial photos taken using a hexacopter equipped with fifth-generation software for photogrammetry allowed the generation of a digital elevation model (DEM) with a resolution as high as 30 mm.
Not only did this provide high potential for the study of micro-rill processes, but it was also useful for testing and comparing the capability of the topographic index (TI) and the clima-topographic index (CTI) to predict heavy metal sedimentation points at scales from 0.1 to 10 ha. Our results indicate that the TI and CTI indices can be used to predict points of heavy metal accumulation for small field catchments. PMID:25599640 2. Dietary effects on resting metabolic rate in C57BL/6 mice are differentially detected by indirect (O2/CO2 respirometry) and direct calorimetry PubMed Central Burnett, Colin M.L.; Grobe, Justin L. 2014-01-01 Resting metabolic rate (RMR) studies frequently involve genetically-manipulated mice and high fat diets (HFD). We hypothesize that the use of inadequate methods impedes the identification of novel regulators of RMR. This idea was tested by simultaneously measuring RMR by direct calorimetry and respirometry in C57BL/6J mice fed chow, 45% HFD, and then returned to chow. Comparing results during chow feeding uncovered an underestimation of RMR by respirometry (0.010 ± 0.001 kcal/h, P < 0.05), which is equivalent in magnitude to ∼2% of total daily caloric turnover. RMR during 45% HFD feeding was increased by respirometry (+0.013 ± 0.003 kcal/h, P < 0.05), but not direct calorimetry (+0.001 ± 0.002 kcal/h). Both methods indicated that return to chow reduced RMR compared to HFD, though direct calorimetry indicated a reduction below the initial chow-fed state (−0.019 ± 0.004 kcal/h versus baseline, P < 0.05) that was not detected by respirometry (−0.003 ± 0.002 kcal/h versus baseline). These results highlight method-specific interpretations of the effects of dietary interventions upon RMR in mice, and prompt the reevaluation of preclinical screening methods used to identify novel RMR modulators. PMID:24944905 3. Indirect detection of pulmonary nodule on low-pass filtered and original x-ray images during limited and unlimited display times NASA Astrophysics Data System (ADS) Pietrzyk, Mariusz W.; McEntee, Mark; Evanoff, Michael G.; Brennan, Patrick C. 2012-02-01 Aim: This study evaluates the assumption that global impression is created based on low spatial frequency components of posterior-anterior chest radiographs. Background: Expert radiologists precisely and rapidly allocate visual attention to pulmonary nodules on chest radiographs. Moreover, accurate decisions are most frequently produced in the shortest viewing times; thus, the first hundred milliseconds of image perception seem to be crucial for correct interpretation. The medical image perception model assumes that during holistic analysis experts extract information based on low spatial frequency (SF) components and create a mental map of suspicious locations for further inspection. The global impression results in flagged regions for detailed inspection with foveal vision. Method: Nine chest experts and nine non-chest radiologists viewed two sets of randomly ordered chest radiographs under 2 timing conditions: (1) 300 ms; (2) free search in unlimited time. The same radiographic cases of 25 normal and 25 abnormal digitalized chest films constituted two image sets: low-pass filtered and unfiltered. Subjects were asked to detect nodules and rate their confidence level. MRMC ROC DBM analyses were conducted. Results: Experts had improved ROC AUC when high SF components were displayed (p=0.03) or when low SF components were viewed under unlimited time (p=0.02), compared with low SF 300 ms viewings.
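As a side note on the figure of merit reported here: for confidence-rating data, the ROC AUC equals the probability that a randomly chosen abnormal case receives a higher rating than a randomly chosen normal one (the Mann-Whitney construction). The sketch below uses invented ratings; the study's MRMC ROC DBM analysis is considerably more involved:

```python
# ROC AUC from confidence ratings via the Mann-Whitney construction: count the
# fraction of (abnormal, normal) pairs ranked correctly, ties counting one half.
def roc_auc(ratings_abnormal, ratings_normal):
    pairs = [(a, n) for a in ratings_abnormal for n in ratings_normal]
    wins = sum(1.0 if a > n else 0.5 if a == n else 0.0 for a, n in pairs)
    return wins / len(pairs)

# Invented 5-point confidence ratings for 5 abnormal and 5 normal cases.
print(roc_auc([4, 5, 3, 5, 4], [2, 3, 1, 4, 2]))  # 0.90 for these toy ratings
```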
In contrast, non-chest radiologists showed no significant changes when high SF components were displayed under flash conditions compared with free search, or when low SF components were viewed under unlimited time compared with flash. Conclusion: The current medical image perception model accurately predicted performance for non-chest radiologists; chest experts, however, appear to benefit from high SF features during the global impression. 4. Detection of Ocean Reflected GPS Signals: Theory and Experiment NASA Technical Reports Server (NTRS) Garrison, James L.; Katzberg, Stephen J.; Howell, Charles T., III 1997-01-01 A number of advanced applications of the Global Positioning System (GPS) have been proposed which use the signal reflected from a smooth ocean surface. The viability of these concepts hinges upon the ability to acquire and code track the reflected signal for an extended period of time over a variety of sea states. The analytical theory of specularly and diffusely reflected radio frequency radiation from a rough surface is reviewed. Experiments to demonstrate tracking of a reflected signal were performed on three aircraft flights over the Chesapeake Bay and the Eastern Shore of Virginia. The experimental hardware consisted of two off-the-shelf receivers configured so that one received the GPS signal in the conventional manner using a right-hand circularly polarized (RHCP) antenna on top of the fuselage and the other could receive the reflected signal using a left-hand circularly polarized (LHCP) antenna on the bottom of the fuselage. Three tests were performed on the data to verify that the signals received in the bottom antenna were indeed sea surface reflections: pseudorange double differences were compared against predicted geometric range double differences; characteristics of a signal reflected from a random surface were observed in the carrier-to-noise ratio; and predicted specular points were plotted, demonstrating reflection only from wet areas. These tests indicated tracking of reflected signals for extended periods of time at altitudes of up to 5500 m and sporadic signal acquisition at higher altitudes. The duration of the continuous signal tracking was limited by the receiver's need to maintain carrier tracking. 5. Bayesian Validation of the Indirect Immunofluorescence Assay and Its Superiority to the Enzyme-Linked Immunosorbent Assay and the Complement Fixation Test for Detecting Antibodies against Coxiella burnetii in Goat Serum. PubMed Muleme, Michael; Stenos, John; Vincent, Gemma; Campbell, Angus; Graves, Stephen; Warner, Simone; Devlin, Joanne M; Nguyen, Chelsea; Stevenson, Mark A; Wilks, Colin R; Firestone, Simon M 2016-06-01 Although many studies have reported the indirect immunofluorescence assay (IFA) to be more sensitive in detection of antibodies to Coxiella burnetii than the complement fixation test (CFT), the diagnostic sensitivity (DSe) and diagnostic specificity (DSp) of the assay have not been previously established for use in ruminants. This study aimed to validate the IFA by describing the optimization, selection of cutoff titers, repeatability, and reliability as well as the DSe and DSp of the assay. Bayesian latent class analysis was used to estimate diagnostic specifications in comparison with the CFT and the enzyme-linked immunosorbent assay (ELISA). The optimal cutoff dilution for screening for IgG and IgM antibodies in goat serum using the IFA was estimated to be 1:160.
The IFA had good repeatability (>96.9% for IgG, >78.0% for IgM), and there was almost perfect agreement (Cohen's kappa > 0.80 for IgG) between the readings reported by two technicians for samples tested for IgG antibodies. The IFA had a higher DSe (94.8%; 95% confidence interval [CI], 80.3, 99.6) for the detection of IgG antibodies against C. burnetii than the ELISA (70.1%; 95% CI, 52.7, 91.0) and the CFT (29.8%; 95% CI, 17.0, 44.8). All three tests were highly specific for goat IgG antibodies. The IFA also had a higher DSe (88.8%; 95% CI, 58.2, 99.5) for detection of IgM antibodies than the ELISA (71.7%; 95% CI, 46.3, 92.8). These results underscore the greater suitability of the IFA, compared with the CFT and ELISA, for detection of IgG and IgM antibodies in goat serum and possibly in serum from other ruminants. PMID:27122484 6. Biotechnology of indirect liquefaction SciTech Connect Datta, R.; Jain, M.K.; Worden, R.M.; Grethlein, A.J.; Soni, B.; Zeikus, J.G.; Grethlein, H. 1990-05-07 The project on biotechnology of indirect liquefaction was focused on conversion of coal-derived synthesis gas to liquid fuels using a two-stage, acidogenic and solventogenic, anaerobic bioconversion process. The acidogenic fermentation used a novel and versatile organism, Butyribacterium methylotrophicum, which was fully capable of using CO as the sole carbon and energy source for organic acid production. In extended batch CO fermentations the organism was induced to produce butyrate at the expense of acetate at low pH values. Long-term, steady-state operation was achieved during continuous CO fermentations with this organism, and at low pH values (a pH of 6.0 or less) minor amounts of butanol and ethanol were produced. During continuous, steady-state fermentations of CO with cell recycle, concentrations of mixed acids and alcohols were achieved (approximately 12 g/l and 2 g/l, respectively) which are high enough for efficient conversion in stage two of the indirect liquefaction process. The metabolic pathway to produce 4-carbon alcohols from CO was a novel discovery and is believed to be unique to our CO strain of B. methylotrophicum. In the solventogenic phase, the parent strain ATCC 4259 of Clostridium acetobutylicum was mutagenized using nitrosoguanidine and ethyl methane sulfonate. The E-604 mutant strain of Clostridium acetobutylicum showed improved characteristics as compared to parent strain ATCC 4259 in batch fermentation of carbohydrates. 7. A Two-Week Guided Inquiry Protein Separation and Detection Experiment for Undergraduate Biochemistry ERIC Educational Resources Information Center Carolan, James P.; Nolta, Kathleen V. 2016-01-01 A laboratory experiment for teaching protein separation and detection in an undergraduate biochemistry laboratory course is described. This experiment, performed in two 4-h laboratory periods, incorporates guided inquiry principles to introduce students to the concepts behind and difficulties of protein purification. After using size-exclusion… 8.
HICO and RAIDS Experiment Payload - Remote Atmospheric and Ionospheric Detection System (RAIDS) NASA Technical Reports Server (NTRS) Budzien, Scott 2009-01-01 The HICO and RAIDS Experiment Payload - Remote Atmospheric and Ionospheric Detection System (HREP-RAIDS) experiment will provide atmospheric scientists with a complete description of the major constituents of the thermosphere (layer of the Earth's atmosphere) and ionosphere (uppermost layer of the Earth's atmosphere), including global electron density profiles at altitudes between 100 and 350 kilometers. 9. [Value of functional lymphoscintigraphy and indirect lymphangiography in lipedema syndrome]. PubMed Weissleder, H; Brauer, J W; Schuchhardt, C; Herpertz, U 1995-12-01 Early stages of lymphostasis in lipedema can be detected with lymphoscintigraphy. A normal examination almost certainly excludes a lymphatic component. Indirect lymphography is used only to rule out morphological abnormalities of lymph vessels. If a lymphoscintigraphic study is normal, indirect lymphography is not indicated. PMID:8659203 10. Developing a discrete choice experiment in Malawi: eliciting preferences for breast cancer early detection services PubMed Central Kohler, Racquel E; Lee, Clara N; Gopal, Satish; Reeve, Bryce B; Weiner, Bryan J; Wheeler, Stephanie B 2015-01-01 Background In Malawi, routine breast cancer screening is not available and little is known about women's preferences regarding early detection services. Discrete choice experiments are increasingly used to reveal preferences about new health services; however, selecting appropriate attributes that describe a new health service is imperative to ensure validity of the choice experiment. Objective To identify important factors that are relevant to Malawian women's preferences for breast cancer detection services and to select attributes and levels for a discrete choice experiment in a setting where both breast cancer early detection and choice experiments are rare. Methods We reviewed the literature to establish an initial list of potential attributes and levels for a discrete choice experiment and conducted qualitative interviews with health workers and community women to explore relevant local factors affecting decisions to use cancer detection services. We tested the design through cognitive interviews and refined the levels, descriptions, and designs. Results Themes that emerged from interviews provided critical information about breast cancer detection services, specifically, that breast cancer interventions should be integrated into other health services because asymptomatic screening may not be practical as an individual service. Based on participants' responses, the final attributes of the choice experiment included travel time, health encounter, health worker type and sex, and breast cancer early detection strategy. Cognitive testing confirmed the acceptability of the final attributes, comprehension of choice tasks, and women's abilities to make trade-offs. Conclusion Applying a discrete choice experiment for breast cancer early detection was feasible with appropriate tailoring for a low-income, low-literacy African setting. PMID:26508842 11. Determination of aminopolycarboxylic acids at ultra-trace levels by means of online coupling ion exchange chromatography and inductively coupled plasma-mass spectrometry with indirect detection via their Pd²⁺-complexes.
PubMed Nette, David; Seubert, Andreas 2015-07-16 A new indirect IC-ICP-MS method for the determination of aminopolycarboxylic acids in water samples is described. It is based on the addition of an excess of Pd(II) to water samples. The analytes are forced into very strong and negatively charged palladium complexes, separated by ion exchange chromatography and detected by their palladium content, utilizing an on-line coupled ICP-MS. This method is suitable to determine the concentration of 8 aminopolycarboxylic acids (nitrilotriacetic acid (NTA), (2-carboxyethyl)iminodiacetic acid (β-ADA), methylglycinediacetic acid (MGDA), (2-hydroxyethyl)ethylenediamine triacetic acid (HEDTA), diethylene triamine pentaacetic acid (DTPA), ethylenediamine tetraacetic acid (EDTA), 1,3-diaminopropane tetraacetic acid (1,3-PDTA) and 1,2-diaminopropane tetraacetic acid (1,2-PDTA)) at the ng kg(-1) level. The method is faster and easier than the established gas chromatography (GC) method ISO 16588:2002 and up to two orders of magnitude more sensitive than the ion pair chromatography based method of DIN 38413-8. Analytic performance is superior to ISO 16588:2002 and the comparability is good. PMID:26073818 12. Model Intercomparison of Indirect Aerosol Effects NASA Technical Reports Server (NTRS) Penner, J. E.; Quaas, J.; Storelvmo, T.; Takemura, T.; Boucher, O.; Guo, H.; Kirkevag, A.; Kristjansson, J. E.; Seland, O. 2006-01-01 Modeled differences in predicted effects are increasingly used to help quantify the uncertainty of these effects. Here, we examine modeled differences in the aerosol indirect effect in a series of experiments that help to quantify how and why model-predicted aerosol indirect forcing varies between models. The experiments start with an experiment in which aerosol concentrations, the parameterization of droplet concentrations and the autoconversion scheme are all specified, and end with an experiment that examines the predicted aerosol indirect forcing when only aerosol sources are specified. Although there are large differences in the predicted liquid water path among the models, the predicted aerosol first indirect effect for the first experiment is rather similar, about -0.6 W/sq m to -0.7 W/sq m. Changes to the autoconversion scheme can lead to large changes in the liquid water path of the models and to the response of the liquid water path to changes in aerosols. Adding an autoconversion scheme that depends on the droplet concentration caused a larger (negative) change in net outgoing shortwave radiation compared to the 1st indirect effect, and the increase varied from only 22% to more than a factor of three. The change in net shortwave forcing in the models due to varying the autoconversion scheme depends on the liquid water content of the clouds as well as their predicted droplet concentrations, and both increases and decreases in the net shortwave forcing can occur when autoconversion schemes are changed. The parameterization of cloud fraction within models is not sensitive to the aerosol concentration, and, therefore, the response of the modeled cloud fraction within the present models appears to be smaller than that which would be associated with model "noise". The prediction of aerosol concentrations, given a fixed set of sources, leads to some of the largest differences in the predicted aerosol indirect radiative forcing among the models, with values of 13. Dark Matter Indirect Search: The PAMELA Experiment NASA Astrophysics Data System (ADS) Ricci, M.
2010-02-01 The instrument PAMELA, in orbit since June 15th, 2006, on board the Russian satellite Resurs DK1, delivers 16 gigabytes of data to the ground daily. The apparatus is designed to study charged particles in the cosmic radiation, with a particular focus on antiparticles in searches for antimatter and for signals of dark matter annihilation. A combination of a magnetic spectrometer and different detectors allows antiparticles to be reliably identified from a large background of other charged particles. New results on the antiproton-to-proton and positron-to-all-electron ratios over a wide energy range (1-100 GeV) are presented. While the antiproton-to-proton ratio does not show significant differences from standard secondary production, a clear enhancement is seen in the positron-to-all-electron ratio at energies above 10 GeV. Possible interpretations of this effect are briefly discussed. 14. Propellant Feed System Leak Detection: Lessons Learned From the Linear Aerospike SR-71 Experiment (LASRE) NASA Technical Reports Server (NTRS) Hass, Neal; Mizukami, Masashi; Neal, Bradford A.; St. John, Clinton; Beil, Robert J.; Griffin, Timothy P. 1999-01-01 This paper presents pertinent results and an assessment of propellant feed system leak detection as applied to the Linear Aerospike SR-71 Experiment (LASRE) program flown at the NASA Dryden Flight Research Center, Edwards, California. The LASRE was a flight test of an aerospike rocket engine using liquid oxygen and high-pressure gaseous hydrogen as propellants. The flight safety of the crew and the experiment demanded proven technologies and techniques that could detect leaks and assess the integrity of hazardous propellant feed systems. Point source detection and systematic detection were used. Point source detection was adequate for catching gross leakage from components of the propellant feed systems, but insufficient for clearing LASRE to levels of acceptability. Systematic detection, which used high-resolution instrumentation to evaluate the health of the system within a closed volume, provided a better means for assessing leak hazards. Oxygen sensors detected a leak rate of approximately 0.04 cubic inches per second of liquid oxygen. Pressure sensor data revealed speculated cryogenic boiloff through the fittings of the oxygen system, but the location of the source(s) was indeterminable. Ultimately, LASRE was cancelled because leak detection techniques were unable to verify that oxygen levels could be maintained below flammability limits. 15. Indirect combustion noise of auxiliary power units NASA Astrophysics Data System (ADS) Tam, Christopher K. W.; Parrish, Sarah A.; Xu, Jun; Schuster, Bill 2013-08-01 Recent advances in noise suppression technology have significantly reduced jet and fan noise from commercial jet engines. This leads many investigators in the aeroacoustics community to suggest that core noise could well be the next aircraft noise barrier. Core noise consists of turbine noise and combustion noise. There is direct combustion noise generated by the combustion processes, and there is indirect combustion noise generated by the passage of combustion hot spots, or entropy waves, through constrictions in an engine. The present work focuses on indirect combustion noise. Indirect combustion noise has now been found in laboratory experiments. The primary objective of this work is to investigate whether indirect combustion noise is also generated in jet and other engines. In a jet engine, there are numerous noise sources.
This makes the identification of indirect combustion noise a formidable task. Here, our effort concentrates exclusively on auxiliary power units (APUs). This choice is motivated by the fact that APUs are relatively simple engines with only a few noise sources. It is, therefore, expected that the chance of success is higher. Accordingly, a theoretical model study of the generation of indirect combustion noise in an APU is carried out. The cross-sectional areas of an APU from the combustor to the turbine exit are scaled off to form an equivalent nozzle. A principal function of a turbine in an APU is to extract mechanical energy from the flow stream through the exertion of a resistive force. Therefore, the turbine is modeled by adding a negative body force to the momentum equation. This model is used to predict the ranges of frequencies over which there is a high probability for indirect combustion noise generation. Experimental spectra of internal pressure fluctuations and far-field noise of an RE220 APU are examined to identify anomalous peaks. These peaks are possible indirect combustion noise. In the case of the 16. Direct and indirect punishment among strangers in the field. PubMed Balafoutas, Loukas; Nikiforakis, Nikos; Rockenbach, Bettina 2014-11-11 Many interactions in modern human societies are among strangers. Explaining cooperation in such interactions is challenging. The two most prominent explanations critically depend on individuals' willingness to punish defectors: In models of direct punishment, individuals punish antisocial behavior at a personal cost, whereas in models of indirect reciprocity, they punish indirectly by withholding rewards. We investigate these competing explanations in a field experiment with real-life interactions among strangers. We find clear evidence of both direct and indirect punishment. Direct punishment is not rewarded by strangers and, in line with models of indirect reciprocity, is crowded out by indirect punishment opportunities. The existence of direct and indirect punishment in daily life indicates the importance of both means for understanding the evolution of cooperation. PMID:25349390 18.
Directed Design of Experiments (DOE) for Determining Probability of Detection (POD) Capability of NDE Systems (DOEPOD) NASA Technical Reports Server (NTRS) Generazio, Ed 2007-01-01 This viewgraph presentation reviews some of the issues that people who specialize in nondestructive evaluation (NDE) have with determining the statistics of the probability of detection. There is discussion of the use of the binomial distribution and of the probability of hit. The presentation then reviews the concepts of Directed Design of Experiments for Validating Probability of Detection of Inspection Systems (DOEPOD). Several cases are reviewed and discussed. The concept of false calls is also reviewed. 19. Off-line experiments on radionuclide detection based on the sequential Bayesian approach NASA Astrophysics Data System (ADS) Qingpei, Xiang; Dongfeng, Tian; Fanhua, Hao; Ge, Ding; Jun, Zeng; Fei, Luo 2013-11-01 The sequential Bayesian approach proposed by Candy et al. for radioactive materials detection has aroused increasing interest in radiation detection research and is potentially a useful tool for prevention of the transportation of radioactive materials by terrorists. In our previous work, the performance of the sequential Bayesian approach was studied numerically through a simulation experiment platform. In this paper, a sequential Bayesian processor incorporating a LaBr3(Ce) detector, and using the energy, decay rate and emission probability of the radionuclide, is fully developed. Off-line experiments for the performance of the sequential Bayesian approach in radionuclide detection are developed by placing 60Co, 137Cs, 133Ba and 152Eu at various distances from the front face of the LaBr3(Ce) detector. The off-line experiment results agree well with the results of previous numerical experiments. The maximum detection distance is introduced to evaluate the processor's ability to detect radionuclides with a specific level of activity (a toy sketch of this sequential update appears below). 20. Direct (13)C-detected NMR experiments for mapping and characterization of hydrogen bonds in RNA. PubMed Fürtig, Boris; Schnieders, Robbin; Richter, Christian; Zetzsche, Heidi; Keyhani, Sara; Helmling, Christina; Kovacs, Helena; Schwalbe, Harald 2016-03-01 In RNA secondary structure determination, it is essential to determine whether or not a nucleotide is base-paired. Base-pairing of nucleotides is mediated by hydrogen bonds. The NMR characterization of hydrogen bonds relies on experiments correlating the NMR resonances of exchangeable protons and can be best performed for structured parts of the RNA, where labile hydrogen atoms are protected from solvent exchange. Functionally important regions in RNA, however, frequently reveal increased dynamic disorder which often leads to NMR signals of exchangeable protons that are broadened beyond (1)H detection. Here, we develop (13)C direct detected experiments to observe all nucleotides in RNA irrespective of whether they are involved in hydrogen bonds or not. Exploiting the self-decoupling of scalar couplings due to the exchange process, the hydrogen bonding behavior of the hydrogen bond donor of each individual nucleotide can be determined. Furthermore, the adaption of HNN-COSY experiments for (13)C direct detection allows correlations of donor-acceptor pairs and the localization of hydrogen-bond acceptor nucleotides. The proposed (13)C direct detected experiments therefore provide information about molecular sites not amenable by conventional proton-detected methods.
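Returning to the sequential Bayesian processor of item 19 above, here is the toy sketch promised there. It assumes simple Poisson counting statistics in a single photopeak window; the rates, prior and interval count are invented for illustration and are not taken from the cited work:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Sequential Bayesian update of a "source present?" hypothesis from successive
# photopeak counts; a toy version only of the full processor described above.
bkg_rate, src_rate = 5.0, 8.0   # expected counts/interval: background vs background+source
p_source = 0.5                  # prior probability that a source is present

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)

for _ in range(20):
    k = int(rng.poisson(src_rate))          # simulate data with a source actually present
    num = poisson_pmf(k, src_rate) * p_source
    den = num + poisson_pmf(k, bkg_rate) * (1.0 - p_source)
    p_source = num / den                    # Bayes update after each counting interval

print(f"posterior P(source) after 20 intervals: {p_source:.3f}")
```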
Such information makes the RNA secondary structure determination by NMR more accurate and helps to validate secondary structure predictions based on bioinformatics. PMID:26852414 1. Indirect hemagglutination test for chlamydial antibodies. PubMed Lewis, V J; Thacker, W L; Engelman, H M 1972-07-01 An indirect hemagglutination (IHA) test is described for chlamydial antibodies in psittacosis diagnostic sera; for this test tanned sheep erythrocytes sensitized with a deoxycholate extract of Chlamydia psittaci grown in Vero cell monolayers were used. Adaptation of the IHA test to the Microtiter system decreased sensitivity; nevertheless, the Microtiter-IHA test was more sensitive than the complement fixation test. Lymphogranuloma venereum antibodies also were detected by using antigen extracted from C. psittaci. PMID:4626906 2. Poker face of inelastic dark matter: Prospects at upcoming direct detection experiments SciTech Connect Alves, Daniele S. M.; Lisanti, Mariangela; Wacker, Jay G. 2010-08-01 The XENON100 and CRESST experiments will directly test the inelastic dark matter explanation for DAMA's 8.9σ anomaly. This article discusses how predictions for direct detection experiments depend on uncertainties in quenching factor measurements, the dark matter interaction with the standard model, and the halo velocity distribution. When these uncertainties are accounted for, an order of magnitude variation is found in the number of expected events at CRESST and XENON100. 3. Analyses and experiments of background sunlight's effects on laser detection system NASA Astrophysics Data System (ADS) Guo, Hao; Yin, Rui-guang; Ma, Na; Liang, Wei-wei; Li, Bo 2015-10-01 Background sunlight significantly affects the technical performance of laser detection systems. Analyses and experiments were carried out to determine the degree and regularity of the effects of background sunlight on laser detection systems. First, we established a theoretical model of the laser detection probability curve. Using the model, we simulated and analysed the effects on the probability curve under different sunlight intensities and obtained the variation of the curve's parameters. Second, we proposed a prediction method for the probability curve, which deduces the detection parameters from measured data. The method can not only derive the probability curve under arbitrary background sunlight from a probability curve measured under typical background sunlight, but can also calculate the sensitivity of a laser detection system from the probability curve at a specified probability. Third, we measured the probability curves under three types of background sunlight; the illumination conditions in the experiments included fine, overcast and night. These three curves can be used as references to deduce other curves. Using the model, the method and the measured data mentioned above, we completed the analysis and appraisal of the effects of background sunlight on a typical laser detection system. The research findings can provide a theoretical reference and technical support for adaptability evaluation of typical laser detection systems in different background sunlight. 4. An EAS experiment at mountain altitude for the detection of gamma-ray sources NASA Technical Reports Server (NTRS) Allkofer, O. C.; Samorski, M.; Stamm, W. 1985-01-01 The plan of an extensive air shower experiment 2,200 m above sea level for the detection of 10^14 eV to 10^17 eV gamma rays from sources in the declination band 0 deg to +60 deg is described.
The site selection, detector array and electronic layout are detailed. 5. Increasing sensitivity of pulse EPR experiments using echo train detection schemes NASA Astrophysics Data System (ADS) Mentink-Vigier, F.; Collauto, A.; Feintuch, A.; Kaminker, I.; Tarle, V.; Goldfarb, D. 2013-11-01 Modern pulse EPR experiments are routinely used to study the structural features of paramagnetic centers. They are usually performed at low temperatures, where relaxation times are long and polarization is high, to achieve a sufficient signal-to-noise ratio (SNR). However, when working with samples whose amount and/or concentration are limited, sensitivity becomes an issue and therefore measurements may require a significant accumulation time, up to 12 h or more. As the detection scheme of practically all pulse EPR sequences is based on the integration of a spin echo - either primary, stimulated or refocused - a considerable increase in SNR can be obtained by replacing the single-echo detection scheme with a train of echoes. All these echoes, generated by Carr-Purcell type sequences, are integrated and summed together to improve the SNR. This scheme is commonly used in NMR, and here we demonstrate its applicability to a number of frequently used pulse EPR experiments: Echo-Detected EPR, Davies and Mims ENDOR (Electron-Nuclear Double Resonance), DEER (Electron-Electron Double Resonance) and EDNMR (Electron-Electron Double Resonance (ELDOR)-Detected NMR), which were combined with a Carr-Purcell-Meiboom-Gill (CPMG) type detection scheme at W-band. By collecting the transient signal and integrating a number of refocused echoes, this detection scheme yielded a 1.6- to 5-fold SNR improvement, depending on the paramagnetic center and the pulse sequence applied. This improvement is achieved while keeping the experimental time constant, and it does not introduce signal distortion (a toy numerical illustration of the averaging gain appears below). 6. Prospects for cosmic neutrino detection in tritium experiments in the case of hierarchical neutrino masses SciTech Connect Blennow, Mattias 2008-06-01 We discuss the effects of neutrino mixing and the neutrino mass hierarchy when considering the capture of the cosmic neutrino background (CNB) on radioactive nuclei. The implications of mixing and hierarchy at future generations of tritium decay experiments are considered. We find that the CNB should be detectable at these experiments provided that the resolution for the kinetic energy of the outgoing electron can be pushed to a few hundredths of an eV for the scenario with inverted neutrino mass hierarchy, about an order of magnitude better than that of the upcoming KATRIN experiment. Another order of magnitude improvement is needed in the case of normal neutrino mass hierarchy. We also note that mixing effects generally make the prospects for CNB detection worse due to an increased maximum energy of the normal beta decay background. 7. A new multi-host species indirect ELISA using protein A/G conjugate for detection of anti-Toxoplasma gondii IgG antibodies with comparison to ELISA-IgG, agglutination assay and Western blot. PubMed 2014-02-24 Toxoplasma gondii is a zoonotic protozoan parasite which can cause significant disease and losses in livestock and wild animals. It is increasingly recognized as an important foodborne pathogen in a broad range of food animals and products. Effective control strategies require rapid, reliable and cost-effective detection methods for large scale surveys and diagnostic applications in a broad range of warm-blooded animals.
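The toy illustration promised in item 5 above: averaging n echoes with independent noise improves the SNR by roughly sqrt(n), moderated by T2 decay of the later echoes. The echo shape, noise level, echo count and decay factor below are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(-1.0, 1.0, 201)
echo = np.exp(-(t / 0.2) ** 2)                 # idealized echo shape
sigma, n_echoes, t2_decay = 0.5, 16, 0.98      # invented noise level, echo count, per-echo decay

def snr(signal):
    # Peak amplitude over the noise level estimated from a baseline region.
    return signal.max() / signal[:40].std()

single = echo + rng.normal(0.0, sigma, t.size)
train = [echo * t2_decay**i + rng.normal(0.0, sigma, t.size) for i in range(n_echoes)]
summed = np.mean(train, axis=0)                # integrate-and-sum over the echo train

print(f"single-echo SNR ~ {snr(single):.1f}; {n_echoes}-echo average SNR ~ {snr(summed):.1f}")
```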
To overcome one or more of these shortcomings in the currently available detection methods for T. gondii infection, a non-species-specific protein A/G conjugate was used in the development of an indirect ELISA (ELISA-A/G) for the detection of IgG antibodies in serum samples obtained from experimentally infected pigs. The performance of the assay was evaluated using serum samples from pigs, cats, mice and seals with known positive or negative status for T. gondii infection. Results of the ELISA-A/G obtained with pig serum samples were compared with those generated by traditional ELISA using host-specific IgG conjugate (ELISA-IgG), modified agglutination test (MAT) and Western blot analysis (WB). Using protein A/G conjugate, comparative analysis of results from 77 samples obtained from T. gondii infected pigs showed excellent agreement between the ELISA-A/G and in-house ELISA-IgG (0.917 κ). Similar agreements were also observed when these samples were tested by a commercial ELISA kit (0.816 κ), MAT (0.816 κ) and WB (0.79 κ). A total of 86 serum samples obtained from cats, mice and seals experimentally infected with T. gondii and tested by the ELISA-A/G as well as MAT for the presence of anti-Toxoplasma IgG antibodies yielded kappa values of 1.0 for cats and mice and 0.79 for seals. These results show that the ELISA-A/G is a suitable method for serological detection of T. gondii infection in multiple host species and has the potential for testing samples from a broad range of domestic, wild, and aquatic mammalian host species. Simultaneous testing 8. The relationship study between image features and detection probability based on psychology experiments NASA Astrophysics Data System (ADS) Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei 2011-04-01 Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. Obtaining detection probabilities in practice, however, consumes a great deal of time and manpower, and because interpreters differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perception quantity in psychology experiments, a probability model was established as follows. First, four image features that directly affect detection were extracted and quantified, and four feature similarity degrees between target and background were defined. Second, the relationship between each single-feature similarity degree and the perception quantity was established on psychological principles, and psychological experiments on target interpretation were designed, involving about five hundred interpreters and two hundred images. To reduce the correlation between image features, a large number of artificially synthesized images were produced, including images differing only in brightness, only in chromaticity, only in texture, and only in shape. By analyzing and fitting a large body of experimental data, the model quantities were determined. Finally, by applying statistical decision theory to the experimental results, the relationship between the perception quantity and the target detection probability was found.
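A minimal sketch of the kind of model this abstract describes: feature similarity degrees combined into a perception quantity, with a logistic link to detection probability. The weights and link parameters are invented stand-ins; the paper's actual fitted forms and coefficients are not given in the abstract:

```python
import math

def detection_probability(s_bright, s_chroma, s_texture, s_shape,
                          w=(0.35, 0.20, 0.25, 0.20), a=8.0, b=0.5):
    # Weighted combination of the four target/background similarity degrees (each in [0, 1]).
    similarity = sum(wi * si for wi, si in zip(w, (s_bright, s_chroma, s_texture, s_shape)))
    perception = 1.0 - similarity     # more similar target and background -> weaker percept
    return 1.0 / (1.0 + math.exp(-a * (perception - b)))  # logistic decision model

# A target closely matching its background in all four features is rarely detected:
print(f"P(detect) = {detection_probability(0.9, 0.8, 0.85, 0.9):.2f}")
```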
With verification against a large body of target interpretation in practice, the model yields target detection probabilities quickly and objectively. 9. Optimizing detection of RDX vapors using designed experiments for remote sensing. PubMed Ewing, Robert G; Heredia-Langner, Alejandro; Warner, Marvin G 2014-05-21 This paper presents results of designed experiments performed to study the effect of four factors on the detection of RDX vapors from desorption into an atmospheric flow tube mass spectrometer (AFT-MS). The experiments initially included four independent factors: gas flow rate, desorption current, solvent evaporation time and RDX mass. The values of three detection responses, peak height, peak width, and peak area were recorded but only the peak height response was analyzed. Results from the first block of experiments indicated that solvent evaporation time was not statistically significant at the 95% confidence level. A second round of experiments was designed and executed using flow rate, current, and RDX mass as factors, and the results were used to create a model to predict conditions resulting in maximum peak height. Those conditions were confirmed experimentally and used to obtain data for a calibration model. The calibration model represented RDX amounts ranging from 1 to 25 pg desorbed into an air flow of 7 L min(-1). Air samples from a shipping container that held 2 closed explosive storage magazines were collected on metal filaments for varying amounts of time ranging from 5 to 90 minutes. RDX was detected from all of the filaments sampled by desorption into the AFT-MS. From the calibration model, RDX vapor concentrations within the shipping container were calculated to be in the range of 1 to 50 parts-per-quadrillion (ppqv) from data collected on 2 separate days. PMID:24695634 10. Overcoming velocity suppression in dark-matter direct-detection experiments NASA Astrophysics Data System (ADS) Dienes, Keith R.; Kumar, Jason; Thomas, Brooks; Yaylali, David 2014-07-01 Pseudoscalar couplings between Standard-Model quarks and dark matter are normally not considered relevant for dark-matter direct-detection experiments because they lead to velocity-suppressed scattering cross sections in the nonrelativistic limit. However, at the nucleon level, such couplings are effectively enhanced by factors of order O(mN/mq) ~ 10^3, where mN and mq are appropriate nucleon and quark masses, respectively. This enhancement can thus be sufficient to overcome the corresponding velocity suppression, implying, contrary to common lore, that direct-detection experiments can indeed be sensitive to pseudoscalar couplings. In this work, we explain how this enhancement arises, and present a model-independent analysis of pseudoscalar interactions at direct-detection experiments. We also identify those portions of the corresponding dark-matter parameter space which can be probed at current and future experiments of this type, and discuss the role of isospin violation in enhancing the corresponding experimental reach (the arithmetic behind the enhancement is sketched below). 11. Future detectability of gravitational-wave induced lensing from high-sensitivity CMB experiments NASA Astrophysics Data System (ADS) Namikawa, Toshiya; Yamauchi, Daisuke; Taruya, Atsushi 2015-02-01 We discuss the future detectability of gravitational-wave induced lensing from high-sensitivity cosmic microwave background (CMB) experiments.
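The back-of-envelope arithmetic promised in item 10 above; the light-quark mass chosen here is our own illustrative value, not a number from the paper:

```python
m_N = 939.0e6   # nucleon mass, eV
m_q = 2.2e6     # up-quark current mass, eV (illustrative choice)
v = 1.0e-3      # typical halo dark matter velocity in units of c

amplitude_enhancement = m_N / m_q        # O(m_N/m_q), a few hundred here
rate_enhancement = amplitude_enhancement ** 2
velocity_suppression = v ** 2            # pseudoscalar couplings suppress the rate by ~v^2

# If the product is of order one, the enhancement offsets the suppression.
print(f"enhancement^2 * v^2 ~ {rate_enhancement * velocity_suppression:.2f}")
```

With these illustrative numbers the product is about 0.2, i.e. the two effects are of comparable size, which is the sense in which the enhancement can offset the velocity suppression.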
Gravitational waves can induce a rotational component of the weak-lensing deflection angle, usually referred to as the curl mode, which would be imprinted on the CMB maps. Using the technique of reconstructing lensing signals involved in CMB maps, this curl mode can be measured in an unbiased manner, offering an independent confirmation of the gravitational waves complementary to B-mode polarization experiments. Based on the Fisher matrix analysis, we first show that with the noise levels necessary to confirm the consistency relation for the primordial gravitational waves, the future CMB experiments will be able to detect the gravitational-wave induced lensing signals. For a tensor-to-scalar ratio of r ≲ 0.1, even if the consistency relation is difficult to confirm with high significance, the gravitational-wave induced lensing will be detected at more than the 3σ significance level. Further, we point out that high-sensitivity experiments will also be powerful in constraining gravitational waves generated after the recombination epoch. Compared to the B-mode polarization, the curl mode is particularly sensitive to gravitational waves generated at low redshifts (z ≲ 10) with a low frequency (k ≲ 10^-3 Mpc^-1), and it could tighten the constraint on their energy density ΩGW by more than 3 orders of magnitude. 12. Optimizing detection of RDX vapors using designed experiments for remote sensing SciTech Connect Ewing, Robert G.; Heredia-Langner, Alejandro; Warner, Marvin G. 2014-03-24 This paper presents results of experiments performed to study the effect of four factors on the detection of RDX vapors from desorption into an atmospheric flow tube mass spectrometer (AFT-MS). The experiments initially included four independent factors: gas flow rate, desorption current, solvent evaporation time and RDX mass. The values of three detection responses, peak height, peak width, and peak area were recorded but only the peak height response was analyzed. Results from the first block of experiments indicated that solvent evaporation time was not statistically significant. A second round of experiments was performed using flow rate, current, and RDX mass as factors and the results were used to create a model to predict conditions resulting in maximum peak height. Those conditions were confirmed experimentally and used to obtain data for a calibration model. The calibration model represented RDX amounts ranging from 1 to 25 pg desorbed into an air flow of 7 L/min. Air samples from a shipping container that held 2 closed explosive storage magazines were collected on metal filaments for varying amounts of time ranging from 5 to 90 minutes. RDX was detected from all of the filaments sampled by desorption into the AFT-MS. From the calibration model, RDX vapor concentrations within the shipping container were calculated to be in the range of 1 to 50 parts-per-quadrillion from data collected on 2 separate days. 13. Characterising dark matter searches at colliders and direct detection experiments: Vector mediators SciTech Connect Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; McCabe, Christopher 2015-01-09 We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator.
The models are characterised by four parameters: mDM, Mmed, gDM and gq, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches. 15. Design and experiment of a neural signal detection using a FES driving system. PubMed Zonghao, Huang; Zhigong, Wang; Xiaoying, Lu; Wenyuan, Li; Xiaoyan, Shen; Xintai, Zhao; Shushan, Xie; Haixian, Pan; Cunliang, Zhu 2010-01-01 The channel bridging, signal regenerating, and functional rebuilding of injured nerves is one of the most important issues in life science research. In recent years, some progress has been made in repairing injured nerves with microelectronic neural bridges. Based on this previous work, this paper presents a neural signal detection and functional electrical stimulation (FES) driving system built with high-performance operational amplifiers.
The experimental results show that the designed system meets the requirements. In animal experiments, sciatic nerve signal detection, regeneration and functional rebuilding between two toads were accomplished successfully using the designed system. PMID:21096372 16. Direct vs. Indirect Moral Enhancement. PubMed Schaefer, G Owen 2015-09-01 Moral enhancement is an ostensibly laudable project. Who wouldn't want people to become more moral? Still, the project's approach is crucial. We can distinguish between two approaches for moral enhancement: direct and indirect. Direct moral enhancements aim at bringing about particular ideas, motives or behaviors. Indirect moral enhancements, by contrast, aim at making people more reliably produce the morally correct ideas, motives or behaviors without committing to the content of those ideas, motives and/or actions. I will argue, on Millian grounds, that the value of disagreement puts serious pressure on proposals for relatively widespread direct moral enhancement. A more acceptable path would be to focus instead on indirect moral enhancements while staying neutral, for the most part, on a wide range of substantive moral claims. I will outline what such indirect moral enhancement might look like, and why we should expect it to lead to general moral improvement. PMID:26412738 17. Moral assessment in indirect reciprocity PubMed Central Sigmund, Karl 2012-01-01 Indirect reciprocity is one of the mechanisms for cooperation, and seems to be of particular interest for the evolution of human societies. A large part is based on assessing reputations and acting accordingly. This paper gives a brief overview of different assessment rules for indirect reciprocity, and studies them by using evolutionary game dynamics. Even the simplest binary assessment rules lead to complex outcomes and require considerable cognitive abilities. PMID:21473870 18. Magnetoelastic Effect-Based Transmissive Stress Detection for Steel Strips: Theory and Experiment. PubMed Zhang, Qingdong; Su, Yuanxiao; Zhang, Liyuan; Bi, Jia; Luo, Jiang 2016-01-01 To address the deficiencies of traditional stress detection methods for steel strips in industrial production, this paper proposes a non-contact stress detection scheme based on the magnetoelastic effect. The theoretical model of transmission-type stress detection is established, in which the output voltage and the tested stress obey a linear relation. Then, a stress detection device is built for the experiment, and Q235 steel under uniaxial tension is tested as an example. The result shows that the output voltage rises linearly with increasing tensile stress, consistent with the theoretical prediction. To ensure the accuracy of the stress detection method in actual applications, temperature compensation, magnetic shielding and other key technologies are investigated to reduce interference from external factors such as environment temperature and the surrounding magnetic field. The present research develops the theoretical and experimental foundations for the magnetic stress detection system, which can be used for online non-contact monitoring of strip flatness-related stress (tension distribution or longitudinal residual stress) in the steel strip rolling process, the quality evaluation of strip flatness after rolling, the life and safety assessment of metal constructions and other industrial production links. PMID:27589742 19. The logic of indirect speech PubMed Central Pinker, Steven; Nowak, Martin A.; Lee, James J.
2008-01-01 When people speak, they often insinuate their intent indirectly rather than stating it as a bald proposition. Examples include sexual come-ons, veiled threats, polite requests, and concealed bribes. We propose a three-part theory of indirect speech, based on the idea that human communication involves a mixture of cooperation and conflict. First, indirect requests allow for plausible deniability, in which a cooperative listener can accept the request, but an uncooperative one cannot react adversarially to it. This intuition is supported by a game-theoretic model that predicts the costs and benefits to a speaker of direct and indirect requests. Second, language has two functions: to convey information and to negotiate the type of relationship holding between speaker and hearer (in particular, dominance, communality, or reciprocity). The emotional costs of a mismatch in the assumed relationship type can create a need for plausible deniability and, thereby, select for indirectness even when there are no tangible costs. Third, people perceive language as a digital medium, which allows a sentence to generate common knowledge, to propagate a message with high fidelity, and to serve as a reference point in coordination games. This feature makes an indirect request qualitatively different from a direct one even when the speaker and listener can infer each other's intentions with high confidence. PMID:18199841 20. Forecast constraints on inflation from combined CMB and gravitational wave direct detection experiments SciTech Connect Kuroyanagi, Sachiko; Gordon, Christopher; Silk, Joseph; Sugiyama, Naoshi 2010-04-15 We study how direct detection of the inflationary gravitational wave background constrains inflationary parameters and complements CMB polarization measurements. The error ellipsoids calculated using the Fisher information matrix approach with Planck and the direct detection experiment, Big Bang Observer (BBO), show different directions of parameter degeneracy, and the degeneracy is broken when they are combined. For a slow-roll parametrization, we show that BBO could significantly improve the constraints on the tensor-to-scalar ratio compared with Planck alone. We also look at a quadratic and a natural inflation model. In both cases, if the temperature of reheating is also treated as a free parameter, then the addition of BBO can significantly improve the error bars. In the case of natural inflation, we find that the addition of BBO could even partially improve the error bars of a cosmic variance-limited CMB experiment. 1. Optimized detection of transcription factor-binding sites in ChIP-seq experiments PubMed Central Elo, Laura L.; Kallio, Aleksi; Laajala, Teemu D.; Hawkins, R. David; Korpelainen, Eija; Aittokallio, Tero 2012-01-01 We developed a computational procedure for optimizing the binding site detections in a given ChIP-seq experiment by maximizing their reproducibility under bootstrap sampling. We demonstrate how the procedure can improve the detection accuracies beyond those obtained with the default settings of popular peak calling software, or inform the user whether the peak detection results are compromised, circumventing the need for arbitrary re-iterative peak calling under varying parameter settings. The generic, open-source implementation is easily extendable to accommodate additional features and to promote its widespread application in future ChIP-seq studies. 
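A simplified sketch of the reproducibility idea behind this procedure: score a candidate peak-calling setting by how consistently it recovers the same peaks across bootstrap resamples of the reads. The actual peakROTS statistic is more involved; `call_peaks` below is a stand-in for any peak caller:

```python
import random

def reproducibility(reads, call_peaks, setting, n_boot=50, seed=0):
    rng = random.Random(seed)
    peak_sets = []
    for _ in range(n_boot):
        resample = [rng.choice(reads) for _ in reads]        # bootstrap resample of the reads
        peak_sets.append(frozenset(call_peaks(resample, setting)))
    # Mean pairwise Jaccard overlap between the bootstrap peak sets.
    pairs = [(a, b) for i, a in enumerate(peak_sets) for b in peak_sets[i + 1:]]
    return sum(len(a & b) / max(len(a | b), 1) for a, b in pairs) / len(pairs)

# Toy stand-in caller: "peaks" are read values at or above the setting's threshold.
toy_reads = [1, 5, 7, 2, 9, 9, 3, 8, 2, 7]
toy_caller = lambda sample, thr: {r for r in sample if r >= thr}
print(f"reproducibility at threshold 6: {reproducibility(toy_reads, toy_caller, 6):.2f}")
# The setting maximizing this score would be kept for the final peak call.
```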
The peakROTS R-package and user guide are freely available at http://www.nic.funet.fi/pub/sci/molbio/peakROTS. PMID:22009681 2. Detection loophole in Bell experiments: How postselection modifies the requirements to observe nonlocality SciTech Connect Branciard, Cyril 2011-03-15 A common problem in Bell-type experiments is the well-known detection loophole: if the detection efficiencies are not perfect and if one simply postselects the conclusive events, one might observe a violation of a Bell inequality, even though a local model could have explained the experimental results. In this paper, we analyze the set of all postselected correlations that can be explained by a local model, and show that it forms a polytope, larger than the Bell local polytope. We characterize the facets of this postselected local polytope in the Clauser-Horne-Shimony-Holt scenario, where two parties have binary inputs and outcomes. Our approach gives interesting insights on the detection loophole problem. 3. Crack-Detection Experiments on Simulated Turbine Engine Disks in NASA Glenn Research Center's Rotordynamics Laboratory NASA Technical Reports Server (NTRS) Woike, Mark R.; Abdul-Aziz, Ali 2010-01-01 The development of new health-monitoring techniques requires the use of theoretical and experimental tools to allow new concepts to be demonstrated and validated prior to use on more complicated and expensive engine hardware. In order to meet this need, significant upgrades were made to NASA Glenn Research Center's Rotordynamics Laboratory and a series of tests were conducted on simulated turbine engine disks as a means of demonstrating potential crack-detection techniques. The Rotordynamics Laboratory consists of a high-precision spin rig that can rotate subscale engine disks at speeds up to 12,000 rpm. The crack-detection experiment involved introducing a notch on a subscale engine disk and measuring its vibration response using externally mounted blade-tip-clearance sensors as the disk was operated at speeds up to 12,000 rpm. Testing was accomplished on both a clean baseline disk and a disk with an artificial crack: a 50.8-mm- (2-in.-) long introduced notch. The disk's vibration responses were compared and evaluated against theoretical models to investigate how successful the technique was in detecting cracks. This paper presents the capabilities of the Rotordynamics Laboratory, the baseline theory and experimental setup for the crack-detection experiments, and the associated results from the latest test campaign. 4. The subjective experience of object recognition: comparing metacognition for object detection and object categorization. PubMed Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J 2014-05-01 5. Neural Processing Associated with Comprehension of an Indirect Reply during a Scenario Reading Task ERIC Educational Resources Information Center Shibata, Midori; Abe, Jun-ichi; Itoh, Hiroaki; Shimada, Koji; Umeda, Satoshi 2011-01-01 In daily communication, we often use indirect speech to convey our intention. However, little is known about the brain mechanisms that underlie the comprehension of indirect speech. In this study, we conducted a functional MRI experiment using a scenario reading task to compare the neural activity induced by an indirect reply (a type of indirect… 6. A search for a nonbiological explanation of the Viking Labeled Release life detection experiment NASA Technical Reports Server (NTRS) Levin, G. V.; Straat, P. A.
1981-01-01 The possibility of nonbiological reactions involving hydrogen peroxide being the source of the positive response detected by the Viking Labeled Release (LR) life detection experiment on the surface of Mars is assessed. Labeled release experiments were conducted in the LR Test Standards Module which replicates the Viking flight instrument configuration on analog Martian soils prepared to match the Viking inorganic analysis of Mars surface material to which an aqueous solution of hydrogen peroxide had been added. Getter experiments were also conducted to compare several reactions simultaneously in the presence and absence of UV radiation prior to the addition of nutrient. Hydrogen peroxide on certain analog soils is found to be capable of reproducing the kinetics and thermal information contained in the Mars data. The peroxide concentration necessary for this response, however, is shown to require a chemical stability or production rate much greater than seems likely in the Mars environment. As previous experiments have shown hydrogen peroxide to be the most likely nonbiological source of the positive LR response, it is concluded that the presence of a biological agent on Mars must not yet be ruled out. 7. Quantifying (dis)agreement between direct detection experiments in a halo-independent way SciTech Connect Feldstein, Brian; Kahlhoefer, Felix E-mail: [email protected] 2014-12-01 We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis. 8. On-Line Database of Vibration-Based Damage Detection Experiments NASA Technical Reports Server (NTRS) Pappa, Richard S.; Doebling, Scott W.; Kholwad, Tina D. 2000-01-01 This paper describes a new, on-line bibliographic database of vibration-based damage detection experiments. Publications in the database discuss experiments conducted on actual structures as well as those conducted with simulated data. The database can be searched and sorted in many ways, and it provides photographs of test structures when available. It currently contains 100 publications, which is estimated to be about 5-10% of the number of papers written to date on this subject. Additional entries are forthcoming. This database is available for public use on the Internet at the following address: http://sdbpappa-mac.larc.nasa.gov. Click on the link named "dd_experiments.fp3" and then type "guest" as the password. No user name is required. 9. 
Poker Face of Inelastic Dark Matter: Prospects at Upcoming Direct Detection Experiments SciTech Connect Alves, Daniele S.M.; Lisanti, Mariangela; Wacker, Jay G.; /SLAC 2011-08-12 The XENON100 and CRESST experiments will directly test the inelastic dark matter explanation for DAMA's 8.9σ anomaly. This article discusses how predictions for direct detection experiments depend on uncertainties in quenching factor measurements, the dark matter interaction with the Standard Model and the halo velocity distribution. When these uncertainties are accounted for, an order of magnitude variation is found in the number of expected events at CRESST and XENON100. The process of testing the DAMA anomaly highlights many of the challenges inherent to direct detection experiments. In addition to determining the properties of the unknown dark matter particle, direct detection experiments must also consider the unknown flux of the incident dark matter, as well as uncertainties in converting a signal from one target nucleus to another. The predictions for both the CRESST 2009 run and XENON100 2010 run show an order of magnitude uncertainty. The nuclear form factor for ¹⁸⁴W, when combined with additional theoretical and experimental uncertainties, will likely prevent CRESST from refuting the iDM hypothesis with an exposure of O(100 kg-d) in a model-independent manner. XENON100, on the other hand, will be able to make a definitive statement about a spin-independent, inelastically scattering dark matter candidate. Still, the CRESST 2009 data can potentially confirm iDM for a large range of parameter space. In case of a positive signal, the combined data from CRESST and XENON100 will start probing the properties of the Milky Way DM profile and the interaction of the SM with the dark matter. 10. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD) NASA Technical Reports Server (NTRS) Generazio, Edward R. 2015-01-01 Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD) Manual v.1.2. The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Design of experiments for validating probability of detection capability of nondestructive evaluation (NDE) systems (DOEPOD) is a methodology that is implemented via software to serve as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolving test issues. DOEPOD relies on the direct observance of detection occurrences. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both Hit-Miss or signal amplitude testing. DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so that multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. DOEPOD applications for supporting inspector qualifications are included. 11. PROJECTED CONSTRAINTS ON THE COSMIC (SUPER)STRING TENSION WITH FUTURE GRAVITATIONAL WAVE DETECTION EXPERIMENTS SciTech Connect Sanidas, Sotirios A.; Battye, Richard A.; Stappers, Benjamin W.
E-mail: [email protected] 2013-02-10 We present projected constraints on the cosmic string tension, Gμ/c², that could be achieved by future gravitational wave detection experiments and express our results as semi-analytic relations of the form Gμ(Ω_gw h²)/c², to allow for direct computation of the tension constraints for future experiments. These results can be applied to new constraints on Ω_gw h² as they are imposed. Experiments operating in different frequency bands probe different parts of the gravitational wave spectrum of a cosmic string network and are sensitive to different uncertainties in the underlying cosmic string model parameters. We compute the gravitational wave spectra of cosmic string networks based on the one-scale model, covering all the parameter space accessed by each experiment, which is strongly dependent on the birth scale of loops relative to the horizon, α. The upper limits on the string tension avoid any assumptions on the model parameters. We perform this investigation for Pulsar Timing Array experiments of different durations, as well as ground-based and space-borne interferometric detectors. 12. What is the probability that direct detection experiments have observed dark matter? SciTech Connect Bozorgnia, Nassim; Schwetz, Thomas E-mail: [email protected] 2014-12-01 In Dark Matter direct detection we face the situation that some experiments report positive signals which are in conflict with limits from other experiments. Such conclusions are subject to large uncertainties introduced by the poorly known local Dark Matter distribution. We present a method to calculate an upper bound on the joint probability of obtaining the outcome of two potentially conflicting experiments under the assumption that the Dark Matter hypothesis is correct, but completely independent of assumptions about the Dark Matter distribution. In this way we can quantify the compatibility of two experiments in an astrophysics independent way. We illustrate our method by testing the compatibility of the hints reported by DAMA and CDMS-Si with the limits from the LUX and SuperCDMS experiments. The method does not require Monte Carlo simulations but is mostly based on using Poisson statistics. In order to deal with signals of few events we introduce the so-called 'signal length' to take into account energy information. The signal length method provides a simple way to calculate the probability to obtain a given experimental outcome under a specified Dark Matter and background hypothesis. 13. Primordial Gravitational Wave Detectability with Deep Small-sky Cosmic Microwave Background Experiments NASA Astrophysics Data System (ADS) Farhang, M.; Bond, J. R.; Doré, O.; Netterfield, C. B. 2013-07-01 We use Bayesian estimation on direct T-Q-U cosmic microwave background (CMB) polarization maps to forecast errors on the tensor-to-scalar power ratio r, and hence on primordial gravitational waves, as a function of sky coverage f_sky. This map-based likelihood filters the information in the pixel-pixel space into the optimal combinations needed for r detection for cut skies, providing enhanced information over a first-step linear separation into a combination of E, B, and mixed modes, and ignoring the latter. With current computational power and for typical resolutions appropriate for r detection, the large matrix inversions required are accurate and fast.
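As an aside, the kind of map-based likelihood evaluation described here can be pictured with a toy computation: a Gaussian pixel-space likelihood L(r) ∝ |C(r)|^(-1/2) exp(-dᵀC(r)⁻¹d/2) with covariance C(r) = S_scalar + r·S_tensor + N. The sketch below uses random stand-in matrices, not real CMB signal covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 200  # toy number of map pixels (real analyses use far more)

# Stand-in covariance pieces: scalar signal, tensor template, noise.
A = rng.standard_normal((npix, npix))
S_scalar = A @ A.T / npix
B = rng.standard_normal((npix, npix))
S_tensor = B @ B.T / npix
N = 0.1 * np.eye(npix)

def cov(r):
    return S_scalar + r * S_tensor + N

# Simulated data map with a "true" r of 0.05.
d = rng.multivariate_normal(np.zeros(npix), cov(0.05))

def log_like(r):
    C = cov(r)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (d @ np.linalg.solve(C, d) + logdet)

r_grid = np.linspace(0.0, 0.2, 81)
ll = np.array([log_like(r) for r in r_grid])
print("max-likelihood r on grid:", r_grid[ll.argmax()])
```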
Our simulations explore two classes of experiments, with differing bolometric detector numbers, sensitivities, and observational strategies. One is motivated by a long duration balloon experiment like Spider, with pixel noise ∝ √f_sky for a specified observing period. This analysis also applies to ground-based array experiments. We find that, in the absence of systematic effects and foregrounds, an experiment with Spider-like noise concentrating on f_sky ~ 0.02-0.2 could place a 2σ_r ≈ 0.014 boundary (~95% confidence level), which rises to 0.02 with an l-dependent foreground residual left over from an assumed efficient component separation. We contrast this with a Planck-like fixed instrumental noise as f_sky varies, which gives a Galaxy-masked (f_sky = 0.75) 2σ_r ≈ 0.015, rising to ≈0.05 with the foreground residuals. Using as the figure of merit the (marginalized) one-dimensional Shannon entropy of r, taken relative to the first 2003 WMAP CMB-only constraint, gives -2.7 bits from the 2012 WMAP9+ACT+SPT+LSS data, and forecasts of -6 bits from Spider (+ Planck); this compares with up to -11 bits for CMBPol, COrE, and PIXIE post-Planck satellites and -13 bits for a perfectly noiseless cosmic variance limited experiment. We thus confirm the wisdom of the current strategy for r… 14. PRIMORDIAL GRAVITATIONAL WAVE DETECTABILITY WITH DEEP SMALL-SKY COSMIC MICROWAVE BACKGROUND EXPERIMENTS SciTech Connect Farhang, M.; Bond, J. R.; Netterfield, C. B.; Dore, O. 2013-07-01 We use Bayesian estimation on direct T-Q-U cosmic microwave background (CMB) polarization maps to forecast errors on the tensor-to-scalar power ratio r, and hence on primordial gravitational waves, as a function of sky coverage f_sky. This map-based likelihood filters the information in the pixel-pixel space into the optimal combinations needed for r detection for cut skies, providing enhanced information over a first-step linear separation into a combination of E, B, and mixed modes, and ignoring the latter. With current computational power and for typical resolutions appropriate for r detection, the large matrix inversions required are accurate and fast. Our simulations explore two classes of experiments, with differing bolometric detector numbers, sensitivities, and observational strategies. One is motivated by a long duration balloon experiment like Spider, with pixel noise ∝ √f_sky for a specified observing period. This analysis also applies to ground-based array experiments. We find that, in the absence of systematic effects and foregrounds, an experiment with Spider-like noise concentrating on f_sky ~ 0.02-0.2 could place a 2σ_r ≈ 0.014 boundary (~95% confidence level), which rises to 0.02 with an l-dependent foreground residual left over from an assumed efficient component separation. We contrast this with a Planck-like fixed instrumental noise as f_sky varies, which gives a Galaxy-masked (f_sky = 0.75) 2σ_r ≈ 0.015, rising to ≈0.05 with the foreground residuals. Using as the figure of merit the (marginalized) one-dimensional Shannon entropy of r, taken relative to the first 2003 WMAP CMB-only constraint, gives -2.7 bits from the 2012 WMAP9+ACT+SPT+LSS data, and forecasts of -6 bits from Spider (+ Planck); this compares with up to -11 bits for CMBPol, COrE, and PIXIE post-Planck satellites and -13 bits for a perfectly noiseless cosmic variance limited experiment… 15.
On-Board Cryosphere Change Detection With The Autonomous Sciencecraft Experiment NASA Astrophysics Data System (ADS) Doggett, T.; Greeley, R.; Castano, R.; Chien, S.; Davies, A.; Tran, D.; Mazzoni, D.; Baker, V.; Dohm, J.; Ip, F. 2006-12-01 The Autonomous Sciencecraft Experiment (ASE) is operating on-board Earth Observing-1 (EO-1) with the Hyperion hyper-spectral visible to short-wave infrared spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. A cryosphere classification algorithm, developed with Support Vector Machine (SVM) machine learning techniques [1], replacing a manually derived classifier used in earlier operations [2], has been used in conjunction with an on-board autonomous software application to execute over three hundred on-board scenarios in 2005 and early 2006, to detect and autonomously respond to sea ice break-up and formation, lake freeze and thaw, as well as the onset and melting of snow cover on land. This demonstrates an approach which could be applied to the monitoring of cryospheres on Earth and Mars as well as the search for dynamic activity on the icy moons of the outer Solar System. [1] Castano et al. (2006) Onboard classifiers for science event detection on a remote-sensing spacecraft, KDD '06, Aug 20-23 2006, Philadelphia, PA. [2] Doggett et al. (2006), Autonomous detection of cryospheric change with Hyperion on-board Earth Observing-1, Rem. Sens. Env., 101, 447-462. 16. Research of detecting details and features of infrared polarization imaging experiment NASA Astrophysics Data System (ADS) Yang, Fan; Liu, Xiao-cheng; Wang, Ji-zhong 2013-09-01 As modern infrared camouflage techniques have developed, it has become hard to distinguish targets from backgrounds with traditional infrared intensity imaging, because the infrared signatures of target and background tend to converge. To address this issue, this paper proposes detecting targets with the infrared polarization imaging technique, based on an analysis of the principles of infrared polarization imaging. Experiments were carried out on the imaging of low-contrast infrared targets. Compared with the infrared intensity images, the average gradient of the infrared polarization images improved by 155% and the contrast between target and background improved by 120%. The experimental data establish the imaging relation between infrared polarization and infrared intensity images: infrared polarization imaging resolves the details of infrared targets more clearly than intensity imaging and markedly increases the contrast between target and background. It is therefore better suited to detecting target details and features. 17. Clinical experience with the use of 5-ALA for the detection of superficial bladder cancer NASA Astrophysics Data System (ADS) Stepp, Herbert G.; Baumgartner, Reinhold; Knuechel, Ruth; Kriegmair, M.; Stepp, H. G.; Zaak, D.; Hofstetter, Alfons G. 2000-06-01 We report on the experience obtained in the fluorescence cystoscopic evaluation of 647 patients investigated since 1993. Of all histologically confirmed tumors, 32 percent would have been missed with conventional cystoscopy. Only 16 of 38 CIS were also detected under white light.
In patients whose mucosa appeared entirely normal or showed only unspecific inflammation, 44 otherwise invisible malignant lesions could be localized by fluorescence, 16 of them being present in patients with negative bladder washing cytology. The specificity of fluorescence cystoscopy is comparable to white light cystoscopy. A prospective multi-center study was conducted to show whether a fluorescence-controlled transurethral resection reduces the rate of residual tumor: a second resection after two weeks revealed residual tumor in 53 percent of patients in the white light arm compared to 33 percent in the fluorescence arm. This difference was statistically significant. Of the residual tumors in the fluorescence arm, most were found within the resection margins of the first resection, indicating an insufficiently deep resection rather than a failure in detecting the lesion. 18. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD) NASA Technical Reports Server (NTRS) Generazio, Edward R. 2008-01-01 The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both Hit-Miss or signal amplitude testing. Specifically, DOEPOD relies on the direct observance of detection occurrences. Directed DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so that multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. 19. The difference of detecting water mist and smoke by electromagnetic wave in simulation experiments NASA Astrophysics Data System (ADS) Zhang, Jingdi; Cui, Bing; Xiao, Si 2015-10-01 Although mist is similar to smoke in morphology, their compositions are very different, so mist and smoke respond very differently when probed by electromagnetic waves. Based on Ansoft HFSS simulation software, this paper puts forward a feasible approach to identifying forest fires by distinguishing mist from smoke above the forest. The experiments simulate the responses of mist and smoke models to electromagnetic waves at different wavelengths. We find that neither the mist model nor the smoke model absorbs or reflects electromagnetic waves efficiently in the megahertz band. Above roughly 650 GHz the mist model begins to absorb and reflect electromagnetic waves, while the smoke model shows no change; the biggest difference appears in the terahertz band. 20. Development of experiment and theory to detect and predict ligand phase separation on silver nanoparticles. PubMed Farrell, Zachary; Merz, Steve; Seager, Jon; Dunn, Caroline; Egorov, Sergei; Green, David L 2015-05-26 MALDI mass-spectrometry measurements are combined with self-consistent mean-field (SCF) calculations to detect and predict ligand phase separation on Ag nanoparticles. The experimental and theoretical techniques complement each other by enabling quantification of the nearest-neighbor distribution of a ligand mixture in a monolayer shell. By tracking a characteristic metallic fragment family, analysis of a MALDI spectrum produces a frequency distribution corresponding to specific ligand patterning.
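To make the notion of a nearest-neighbor frequency distribution concrete, the sketch below compares a set of observed mixed-ligand fragment counts against the binomial expectation for a well-mixed monolayer; the fragment size and counts are invented for illustration and are not the authors' data:

```python
from math import comb

def binomial_expectation(n, frac_a):
    """Expected frequency of fragments carrying k ligands of type A out of
    n sites, if the two ligands mixed at random (well-mixed monolayer)."""
    return [comb(n, k) * frac_a**k * (1 - frac_a)**(n - k) for k in range(n + 1)]

# Hypothetical observed counts of fragments with k = 0..4 type-A ligands.
observed = [30, 10, 6, 12, 42]
total = sum(observed)
obs_freq = [c / total for c in observed]

# Overall fraction of type-A ligands implied by the counts.
frac_a = sum(k * c for k, c in enumerate(observed)) / (4 * total)
expected = binomial_expectation(4, frac_a)

# A large deviation from the binomial curve indicates ligand de-mixing.
deviation = sum(abs(o - e) for o, e in zip(obs_freq, expected))
print(f"fraction A = {frac_a:.2f}, total |obs - expected| = {deviation:.2f}")
```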
Inherent to the SCF calculation is the enumeration of local interactions that dictate ligand assembly. Interweaving MALDI and SCF facilitates a comparison between the experimentally and theoretically derived frequency distributions as well as their deviation from a well-mixed state. Thus, we combine these techniques to detect and predict phase separation in monolayers that mix uniformly or experience varying degrees of de-mixing, including microphase separation and stripe formation. PMID:25882701 1. Failure detection and isolation experiments with the Langley Mini-Mast NASA Technical Reports Server (NTRS) Vander Velde, Wallace E.; Van Schalkwyk, Christiaan M. 1990-01-01 A report is presented on experiments to demonstrate failure detection and isolation (FDI) using the flexible truss facility Mini-Mast at the NASA Langley Research Center. Two techniques are selected for study because they are applicable both to sensor and actuator failures and because they do not depend on hypotheses about the forms of possible failures. These two are the method of generalized parity relations and the failure detection filter. These methods utilize the concept of analytical redundancy and therefore their performance depends on the fidelity of the model of the dynamics of the system being monitored. Results are given for sensor FDI using generalized parity relations and input-output data collected during operation of the Mini-Mast; component failures are simulated in the data. The dependence of the performance of the methods on choices of the parameters in their implementation is explored. 2. Single photon detection and timing in the Lunar Laser Ranging Experiment. NASA Technical Reports Server (NTRS) Poultney, S. K. 1972-01-01 The goals of the Lunar Laser Ranging Experiment lead to the need for the measurement of a 2.5 sec time interval to an accuracy of a nanosecond or better. The systems analysis, which included practical retroreflector arrays, available laser systems, and large telescopes, led to the necessity of single photon detection. Operation under all background illumination conditions required auxiliary range gates and extremely narrow spectral and spatial filters in addition to the effective gate provided by the time resolution. Nanosecond timing precision at relatively high detection efficiency was obtained using the RCA C31000F photomultiplier and Ortec 270 constant fraction of pulse-height timing discriminator. The timing accuracy over the 2.5 sec interval was obtained using a digital interval with analog vernier ends. Both precision and accuracy are currently checked internally using a triggerable, nanosecond light pulser. Future measurements using sub-nanosecond laser pulses will be limited by the time resolution of single photon detectors. 3. Circumventing resistance: using values to indirectly change attitudes. PubMed Blankenship, Kevin L; Wegener, Duane T; Murray, Renee A 2012-10-01 Most research on persuasion examines messages that directly address the attitude of interest. However, especially when message recipients are inclined to resist change, indirect methods might be more effective. Because values are rarely attacked and defended, value change could serve as a useful indirect route for attitude change. Attitudes toward affirmative action changed more when the value of equality was attacked (indirect change) than when affirmative action was directly attacked using the same message (Experiments 1-2).
Changes in confidence in the value were responsible for the indirect change when the value was attacked (controlling for changes in favorability toward the value), whereas direct counterarguments to the message were responsible for the relative lack of change when the attitude was attacked directly (Experiment 2). Attacking the value of equality influenced attitudes toward policies related to the value but left policy attitudes unrelated to the value unchanged (Experiment 3). Finally, a manipulation of value confidence that left attitudes toward the value intact demonstrated similar confidence-based influences on policies related to the value of freedom (Experiment 4). Undermined value confidence also resulted in less confidence in the resulting policy attitudes controlling for the changes in the policy attitudes themselves (Experiments 3 and 4). Therefore, indirect change through value attacks presented a double threat--to both the policy attitudes and the confidence with which those policy attitudes were held (potentially leaving them open to additional influence). PMID:22746672 4. Experiences with a new soil gas technique for detecting petroleum pollution SciTech Connect Mazac, O.; Landa, I.; Rohde, J.R.; Kelly, W.E.; Blaha, J.H. 1996-12-31 This paper presents field experiences obtained with a new technology for detecting petroleum pollution in soil and ground water based on in situ determination of hydrocarbon concentrations in soil air. Ecoprobe is a new soil gas device from RS-Dynamics in the Czech Republic. The rugged waterproof device is equipped with a built-in computer-controlled semiconductor sensor. Three case histories are presented that demonstrate the use of the equipment under typical conditions. Two case histories present the use of the device under typical field conditions; the third case history compares results from the Ecoprobe and a commercial photoionization detector (PID) device. 5. LHC optics measurement with proton tracks detected by the Roman pots of the TOTEM experiment NASA Astrophysics Data System (ADS) The TOTEM Collaboration; Antchev, G.; Aspell, P.; Atanassov, I.; Avati, V.; Baechler, J.; Berardi, V.; Berretti, M.; Bossini, E.; Bottigli, U.; Bozzo, M.; Brücken, E.; Buzzo, A.; Cafagna, F. S.; Catanesi, M. G.; Covault, C.; Csanád, M.; Csörgö, T.; Deile, M.; Doubek, M.; Eggert, K.; Eremin, V.; Ferro, F.; Fiergolski, A.; Garcia, F.; Georgiev, V.; Giani, S.; Grzanka, L.; Hammerbauer, J.; Heino, J.; Hilden, T.; Karev, A.; Kašpar, J.; Kopal, J.; Kundrát, V.; Lami, S.; Latino, G.; Lauhakangas, R.; Leszko, T.; Lippmaa, E.; Lippmaa, J.; Lokajíček, M. V.; Losurdo, L.; Lo Vetere, M.; Rodríguez, F. Lucas; Macrí, M.; Mäki, T.; Mercadante, A.; Minafra, N.; Minutoli, S.; Nemes, F.; Niewiadomski, H.; Oliveri, E.; Oljemark, F.; Orava, R.; Oriunno, M.; Österberg, K.; Palazzi, P.; Peroutka, Z.; Procházka, J.; Quinto, M.; Radermacher, E.; Radicioni, E.; Ravotti, F.; Robutti, E.; Ropelewski, L.; Ruggiero, G.; Saarikko, H.; Scribano, A.; Smajek, J.; Snoeys, W.; Sziklai, J.; Taylor, C.; Turini, N.; Vacek, V.; Welti, J.; Whitmore, J.; Wyszkowski, P.; Zielinski, K. 2014-10-01 Precise knowledge of the beam optics at the LHC is crucial to fulfill the physics goals of the TOTEM experiment, where the kinematics of the scattered protons is reconstructed with near-beam telescopes—so-called Roman pots (RP). Before being detected, the protons’ trajectories are influenced by the magnetic fields of the accelerator lattice. 
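For orientation, the linear beam-optics relation that underlies such proton-transport reconstruction can be written with the optical functions of the lattice (a textbook parameterization, not quoted from the paper):

```latex
% Linear transport from the interaction point (IP) to a Roman pot (RP).
% y*, \theta_y^* : vertical vertex position and scattering angle at the IP;
% v_y, L_y       : magnification and effective length (optical functions);
% \xi, D_x       : fractional momentum loss and horizontal dispersion.
\begin{equation*}
  y_{\mathrm{RP}} = v_y\, y^{*} + L_y\, \theta_y^{*},
  \qquad
  x_{\mathrm{RP}} = v_x\, x^{*} + L_x\, \theta_x^{*} + \xi\, D_x .
\end{equation*}
```

Mis-knowledge of the effective lengths and magnifications therefore biases the reconstructed scattering angles directly, which is why the optics must be constrained from the scattered-proton data themselves.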
Thus precise understanding of the proton transport is of key importance for the experiment. A novel method of optics evaluation is proposed which exploits kinematical distributions of elastically scattered protons observed in the RPs. Theoretical predictions, as well as Monte Carlo studies, show that the residual uncertainty of the optics estimation method is smaller than 2.5‰. 6. Development of a corrosion detection experiment to evaluate conventional and advanced NDI techniques SciTech Connect Roach, D. 1995-12-31 The Aging Aircraft NDI Validation Center (AANC) was established by the Federal Aviation Administration Technical Center (FAATC) at Sandia National Laboratories in August of 1991. The goal of the AANC is to provide independent validation of technologies intended to enhance the structural inspection of aging commuter and transport aircraft. The deliverables from the AANC's validation activities are assessments of the reliability of existing and emerging inspection technologies as well as analyses of the cost benefits to be derived from their implementation. This paper describes the methodology developed by the AANC to assess the performance of NDI techniques. In particular, an experiment being developed to evaluate corrosion detection devices will be presented. The experiment uses engineered test specimens, as well as complete aircraft test beds to provide metrics for NDI validation. 7. 19 CFR 10.816 - Indirect materials. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Indirect materials. 10.816 Section 10.816 Customs... Rules of Origin § 10.816 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 8. 19 CFR 10.879 - Indirect materials. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Indirect materials. 10.879 Section 10.879 Customs... of Origin § 10.879 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 9. 19 CFR 10.776 - Indirect materials. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Indirect materials. 10.776 Section 10.776 Customs... Rules of Origin § 10.776 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 10. 19 CFR 10.879 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.879 Section 10.879 Customs... of Origin § 10.879 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 11. 19 CFR 10.816 - Indirect materials. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Indirect materials. 10.816 Section 10.816 Customs... Rules of Origin § 10.816 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 12. 19 CFR 10.776 - Indirect materials. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Indirect materials.
10.776 Section 10.776 Customs... Rules of Origin § 10.776 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 13. 19 CFR 10.776 - Indirect materials. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Indirect materials. 10.776 Section 10.776 Customs... Rules of Origin § 10.776 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 14. 19 CFR 10.816 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.816 Section 10.816 Customs... Rules of Origin § 10.816 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 15. 19 CFR 10.776 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.776 Section 10.776 Customs... Rules of Origin § 10.776 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 16. 7 CFR 3430.54 - Indirect costs. Code of Federal Regulations, 2010 CFR 2010-01-01 ...-GENERAL AWARD ADMINISTRATIVE PROVISIONS Post-Award and Closeout § 3430.54 Indirect costs. Indirect cost... assistance regulations and cost principles, unless superseded by another authority. Use of indirect costs as... 7 Agriculture 15 2010-01-01 2010-01-01 false Indirect costs. 3430.54 Section 3430.54... 17. 24 CFR 576.109 - Indirect costs. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Indirect costs. 576.109 Section 576... § 576.109 Indirect costs. (a) In general. ESG grant funds may be used to pay indirect costs in.... Indirect costs may be allocated to each eligible activity under § 576.101 through § 576.108, so long... 18. 24 CFR 576.109 - Indirect costs. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Indirect costs. 576.109 Section 576... § 576.109 Indirect costs. (a) In general. ESG grant funds may be used to pay indirect costs in.... Indirect costs may be allocated to each eligible activity under § 576.101 through § 576.108, so long... 19. 24 CFR 578.63 - Indirect costs. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Indirect costs. 578.63 Section 578... Indirect costs. (a) In general. Continuum of Care funds may be used to pay indirect costs in accordance with OMB Circulars A-87 or A-122, as applicable. (b) Allocation. Indirect costs may be allocated... 20. 7 CFR 3430.54 - Indirect costs. Code of Federal Regulations, 2011 CFR 2011-01-01 ... Post-Award and Closeout § 3430.54 Indirect costs. Indirect cost rates for grants and cooperative... 7 Agriculture 15 2011-01-01 2011-01-01 false Indirect costs. 3430.54 Section 3430.54 Agriculture..., unless superseded by another authority. Use of indirect costs as in-kind matching contributions... 1. 7 CFR 3430.54 - Indirect costs. Code of Federal Regulations, 2014 CFR 2014-01-01 ... Post-Award and Closeout § 3430.54 Indirect costs. 
Indirect cost rates for grants and cooperative... 7 Agriculture 15 2014-01-01 2014-01-01 false Indirect costs. 3430.54 Section 3430.54 Agriculture..., unless superseded by another authority. Use of indirect costs as in-kind matching contributions... 2. 24 CFR 578.63 - Indirect costs. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Indirect costs. 578.63 Section 578... Indirect costs. (a) In general. Continuum of Care funds may be used to pay indirect costs in accordance with OMB Circulars A-87 or A-122, as applicable. (b) Allocation. Indirect costs may be allocated... 3. 7 CFR 3430.54 - Indirect costs. Code of Federal Regulations, 2013 CFR 2013-01-01 ... Post-Award and Closeout § 3430.54 Indirect costs. Indirect cost rates for grants and cooperative... 7 Agriculture 15 2013-01-01 2013-01-01 false Indirect costs. 3430.54 Section 3430.54 Agriculture..., unless superseded by another authority. Use of indirect costs as in-kind matching contributions... 4. 19 CFR 10.816 - Indirect materials. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Indirect materials. 10.816 Section 10.816 Customs... Rules of Origin § 10.816 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 5. 19 CFR 10.879 - Indirect materials. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Indirect materials. 10.879 Section 10.879 Customs... of Origin § 10.879 Indirect materials. Indirect materials are to be disregarded in determining..., except that the cost of such indirect materials may be included in meeting the value-content... 6. Water Rockets and Indirect Measurement. ERIC Educational Resources Information Center Inman, Duane 1997-01-01 Describes an activity that teaches a number of scientific concepts including indirect measurement, Newton's third law of motion, manipulating and controlling variables, and the scientific method of inquiry. Uses process skills such as observation, inference, prediction, mensuration, and communication as well as problem solving and higher-order… 7. Indirect methods in nuclear astrophysics NASA Astrophysics Data System (ADS) Bertulani, C. A.; Shubhchintak; Mukhamedzhanov, A.; Kadyrov, A. S.; Kruppa, A.; Pang, D. Y. 2016-04-01 We discuss recent developments in indirect methods used in nuclear astrophysics to determine the capture cross sections and subsequent rates of various stellar burning processes, when it is difficult to perform the corresponding direct measurements. We discuss in brief the basic concepts of Asymptotic Normalization Coefficients, the Trojan Horse Method, the Coulomb Dissociation Method, (d,p), and charge-exchange reactions. 8. Modeling Indirect Tunneling in Silicon NASA Astrophysics Data System (ADS) Chen, Edward Indirect tunneling in silicon p-n junctions has attracted renewed attention in recent years. First, the phenomenon induces a serious leakage problem, the so-called gate-induced drain leakage (GIDL) effect, in modern metal-oxide-semiconductor field-effect transistors (MOSFETs). Second, it is utilized to develop a novel tunneling transistor with sharp turn-on characteristics for continuing the ITRS roadmap.
Although indirect tunneling is important for state-of-the-art transistor technology, the accuracy of the present tunneling models in technology computer-aided design (TCAD) tools remains unclear. In this work, the theory of indirect tunneling in silicon has been thoroughly studied. A phonon-assisted tunneling model has been developed and compared with the existing ones in the Sentaurus-Synopsys, Medici-Synopsys, and Atlas-Silvaco TCAD tools. Beyond these existing models, ours successfully predicts the indirect tunneling current under different field directions in silicon. In addition, bandgap narrowing in heavily-doped p-n junctions under reverse bias is also studied during the model development. Finally, the application to low standby power (LSTP) transistors is demonstrated to show the capability of our tunneling model at the device level. 9. Ecology: Dynamics of Indirect Extinction. PubMed Montoya, Jose M 2015-12-01 The experimental identification of the mechanism by which extinctions of predators trigger further predator extinctions emphasizes the role of indirect effects between species in disturbed ecosystems. It also has deep consequences for the hidden magnitude of the current biodiversity crisis. PMID:26654371 10. Feedback control indirect response models. PubMed Zhang, Yaping; D'Argenio, David Z 2016-08-01 A general framework is introduced for modeling pharmacodynamic processes that are subject to autoregulation, which combines the indirect response (IDR) model approach with methods from classical feedback control of engineered systems. The canonical IDR models are modified to incorporate linear combinations of feedback control terms related to the time course of the difference (the error signal) between the pharmacodynamic response and its basal value. Following the well-established approach of traditional engineering control theory, the proposed feedback control indirect response models incorporate terms proportional to the error signal itself, the integral of the error signal, the derivative of the error signal or combinations thereof. Simulations are presented to illustrate the types of responses produced by the proposed feedback control indirect response model framework, and to illustrate comparisons with other PK/PD modeling approaches incorporating feedback. In addition, four examples from literature are used to illustrate the implementation and applicability of the proposed feedback control framework. The examples reflect each of the four mechanisms of drug action as modeled by each of the four canonical IDR models and include: selective serotonin reuptake inhibitors and extracellular serotonin; histamine H2-receptor antagonists and gastric acid; growth hormone secretagogues and circulating growth hormone; β2-selective adrenergic agonists and potassium. The proposed feedback control indirect response approach may serve as an exploratory modeling tool and may provide a bridge for development of more mechanistic systems pharmacology models. PMID:27394724
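A minimal sketch may help picture the framework: below, a toy inhibition-of-production IDR model is augmented with proportional and integral terms acting on the error signal, in the spirit of the abstract. All parameter values are invented for illustration, and this is not the authors' implementation:

```python
from scipy.integrate import solve_ivp

kin, kout = 10.0, 1.0   # toy turnover rates; baseline response R0 = kin / kout
Kp, Ki = 0.5, 0.2       # assumed proportional and integral feedback gains

def inhibition(t):
    """Toy drug effect: fractional inhibition of production while drug is present."""
    return 0.6 if 0.0 < t < 24.0 else 0.0

def rhs(t, y):
    R, E = y                      # response and integrated error signal
    err = kin / kout - R          # error signal: deviation from baseline
    # Production modulated by the drug and by the PI feedback terms.
    production = kin * (1.0 - inhibition(t)) * (1.0 + Kp * err + Ki * E)
    return [production - kout * R, err]

sol = solve_ivp(rhs, (0.0, 72.0), [kin / kout, 0.0], max_step=0.1)
print(f"response at t = 72 h: {sol.y[0, -1]:.3f} (baseline {kin / kout:.1f})")
```

11.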
Field experiments of multi-channel oceanographic fluorescence lidar for oil spill and chlorophyll-a detection NASA Astrophysics Data System (ADS) Li, Xiaolong; Zhao, Chaofang; Ma, Youjun; Liu, Zhishen 2014-08-01 A Multi-channel Oceanographic Fluorescence Lidar (MOFL), with a UV excitation at 355 nm and multiple receiving channels at typical wavelengths of fluorescence from oil spills and chlorophyll-a (Chl-a), has been developed using the Laser-induced Fluorescence (LIF) technique. A sketch of the MOFL system equipped with a compact multi-channel photomultiplier tube (MPMT) is presented in the paper. The methods of differentiating the oil fluorescence from the background water fluorescence and evaluating the Chl-a concentration are described. Two field experiments were carried out to investigate the field performance of the system, i.e., an experiment in coastal areas for oil pollution detection and an experiment over the Yellow Sea for Chl-a monitoring. In the coastal experiment, several oil samples and other fluorescent substances were used to analyze the fluorescence spectral characteristics for oil identification, and to estimate the thickness of oil films at the water surface. The experiment shows that both the spectral shape of fluorescence induced from surface water and the intensity ratio of two channels (I495/I405) are essential to determine oil-spill occurrence. In the airborne experiment, MOFL was applied to measure relative Chl-a concentrations in the upper layer of the ocean. A comparison of relative Chl-a concentration measurements by MOFL and the Moderate Resolution Imaging Spectroradiometer (MODIS) indicates that the two datasets are in good agreement. The results show that the MOFL system is capable of monitoring oil spills and Chl-a in the upper layer of ocean water. 12. In-Situ Detections of a Satellite Breakup by the SPADUS Experiment NASA Technical Reports Server (NTRS) Tuzzolino, A. J.; McKibben, R. B.; Simpson, J. A.; BenZvi, S.; Voss, H. D.; Gursky, H.; Johnson, Nicholas L. 2001-01-01 13. Evaluation of geophysical methods for the detection of subsurface tetrachloroethylene in controlled spill experiments SciTech Connect Mazzella, Aldo; Majer, Ernest L. 2006-04-10 A controlled Tetrachloroethylene (PCE) spill experiment was conducted in a multi-layer formation consisting of sand and clayey-sand layers. The purpose of the work was to determine the detection limits and capability of various geophysical methods. Measurements were made with ten different geophysical techniques before, during, and after the PCE injection. This experiment provided a clear identification of any geophysical anomalies associated with the presence of the PCE. During the injection period all the techniques indicated anomalies associated with the PCE. In order to quantify the results and provide an indication of the PCE detection limits of the various geophysical methods, the tank was subsequently excavated and samples of the various layers were analyzed for residual PCE concentration with gas chromatography (GC). This paper presents some of the results of five of the techniques: cross borehole complex resistivity (CR) also referred to as spectral induced polarization (SIP), cross borehole high resolution seismic (HRS), borehole self potential (SP), surface ground penetration radar (GPR), and borehole video (BV). 14. Can we predict indirect interactions from quantitative food webs?--an experimental approach. PubMed Tack, Ayco J M; Gripenberg, Sofia; Roslin, Tomas 2011-01-01 1.
Shared enemies may link the dynamics of their prey. Recently, quantitative food webs have been used to infer that herbivorous insect species attacked by the same major parasitoid species will affect each other negatively through apparent competition. Nonetheless, theoretical work predicts several alternative outcomes, including positive effects. 2. In this paper, we use an experimental approach to link food web patterns to realized population dynamics. First, we construct a quantitative food web for three dominant leaf miner species on the oak Quercus robur. We then measure short- and long-term indirect effects by increasing leaf miner densities on individual trees. Finally, we test whether experimental results are consistent with natural leaf miner dynamics on unmanipulated trees. 3. The quantitative food web shows that all leaf miner species share a minimum of four parasitoid species. While only a small fraction of the parasitoid pool is shared among Tischeria ekebladella and each of two Phyllonorycter species, the parasitoid communities of the congeneric Phyllonorycter species overlap substantially. 4. Based on the structure of the food web, we predict strong short- and long-term indirect interactions between the Phyllonorycter species, and limited interactions between them and T. ekebladella. As T. ekebladella is the main source of its own parasitoids, we expect to find intraspecific density-dependent parasitism in this species. 5. Consistent with these predictions, parasitism in T. ekebladella was high on trees with high densities of conspecifics in the previous generation. Among leaf miner species sharing more parasitoids, we found positive rather than negative interactions among years. No short-term indirect interactions (i.e. indirect interactions within a single generation) were detected. 6. Overall, this study is the first to experimentally demonstrate that herbivores with overlapping parasitoid communities may exhibit independent population dynamics 15. Improved detection of differentially expressed genes in microarray experiments through multiple scanning and image integration PubMed Central Romualdi, Chiara; Trevisan, Silvia; Celegato, Barbara; Costa, Germano; Lanfranchi, Gerolamo 2003-01-01 The variability of results in microarray technology is in part due to the fact that independent scans of a single hybridised microarray give spot images that are not quite the same. To solve this problem and turn it to our advantage, we introduced the approach of multiple scanning and of image integration of microarrays. To this end, we have developed specific software that creates a virtual image that statistically summarises a series of consecutive scans of a microarray. We provide evidence that the use of multiple imaging (i) enhances the detection of differentially expressed genes; (ii) increases the image homogeneity; and (iii) reveals false-positive results such as differentially expressed genes that are detected by a single scan but not confirmed by successive scanning replicates. The increase in the final number of differentially expressed genes detected in a microarray experiment with this approach is remarkable; 50% more for microarrays hybridised with targets labelled by reverse transcriptase, and 200% more for microarrays developed with the tyramide signal amplification (TSA) technique. The results have been confirmed by semi-quantitative RT–PCR tests. PMID:14627839 16. CT detection and location of intraorbital foreign bodies. Experiments with wood and glass. 
PubMed Myllylä, V; Pyhtinen, J; Päivänsalo, M; Tervonen, O; Koskela, P 1987-06-01 The series comprises 27 patients examined by CT to detect, locate or exclude a foreign body. 22 of them actually had an orbital foreign body. In three cases CT was the primary method and showed the foreign body correctly, while in 18 it was first detected in plain films and CT was performed to locate it. 20 metallic foreign bodies were hyperdense in appearance. Two cases had wooden foreign bodies, one with a density of +10 HU and the other hypodense with a value of about -434 to -446 HU. The latter piece of wood was first interpreted falsely as a bubble of gas. The results proved that the detection of metal is easy, but differentiation between wood and gas is problematical. Experiments conducted to determine the CT densities of different pieces of wood gave results varying from -618 HU to +23 HU. The highest densities obtained for glass varied from +522 HU to +2000 HU. The density of a plastic lens was -105 HU. PMID:3037632 17. Detection of dust impacts by the Voyager planetary radio astronomy experiment NASA Technical Reports Server (NTRS) Evans, David R. 1993-01-01 The Planetary Radio Astronomy (PRA) instrument detected large numbers of dust particles during the Voyager 2 encounter with Neptune. The signatures of these impacts are analyzed in some detail. The major conclusions are described. PRA detects impacts from all over the spacecraft body, not just the PRA antennas. The signatures of individual impacts last substantially longer than was expected from complementary Plasma Wave Subsystem (PWS) data acquired by another Voyager experiment. The signatures of individual impacts demonstrate very rapid fluctuations in signal strength, so fast that the data are limited by the speed of response of the instrument. The PRA detects events at a rate consistently lower than does the Plasma Wave subsystem. Even so, the impact rate is so great near the inbound crossing of the ring plane that no reliable estimate of impact rate can be made for this period. The data are consistent with the presence of electrons accelerated by ions within an expanding plasma cloud from the point of impact. An ancillary conclusion is that the anomalous appearance of data acquired at 900 kHz appears to be due to an error in processing the PRA data prior to their delivery rather than due to overload of the PRA instrument. 18. Hazard detection in noise-related incidents - the role of driving experience with battery electric vehicles. PubMed Cocron, Peter; Bachl, Veronika; Früh, Laura; Koch, Iris; Krems, Josef F 2014-12-01 The low noise emission of battery electric vehicles (BEVs) has led to discussions about how to address potential safety issues for other road users. Legislative actions have already been undertaken to implement artificial sounds. In previous research, BEV drivers reported that due to low noise emission they paid particular attention to pedestrians and bicyclists. For the current research, we developed a hazard detection task to test whether drivers with BEV experience respond faster to incidents, which arise due to the low noise emission, than inexperienced drivers. The first study (N=65) revealed that BEV experience only played a minor role in drivers' response to hazards resulting from low BEV noise. The tendency to respond, reaction times and hazard evaluations were similar among experienced and inexperienced BEV drivers; only small trends in the assumed direction were observed. 
Still, both groups clearly differentiated between critical and non-critical scenarios and responded accordingly. In the second study (N=58), we investigated additionally if sensitization to low noise emission of BEVs had an effect on hazard perception in incidents where the noise difference is crucial. Again, participants in all groups differentiated between critical and non-critical scenarios. Even though trends in response rates and latencies occurred, experience and sensitization to low noise seemed to only play a minor role in detecting hazards due to low BEV noise. An additional global evaluation of BEV noise further suggests that even after a short test drive, the lack of noise is perceived more as a comfort feature than a safety threat. PMID:25302423 19. HI at z ~ 20: The Large Aperture Experiment to Detect the Dark Ages NASA Astrophysics Data System (ADS) Greenhill, Lincoln J.; Werthimer, D.; Taylor, G.; Ellingson, S.; LEDA Collaboration 2012-05-01 When did the first stars form? Did supermassive black holes form at the same time, earlier, or later? One of the great challenges of cosmology today is the study of these first generation objects. The Large Aperture Experiment to Detect the Dark Ages (LEDA) project seeks to detect, in total-power, emission from neutral Hydrogen (21 cm rest wavelength) in the intergalactic medium about 100 million years after the Big Bang (redshifts ~ 20). Detection would deliver the first observational constraints on models of structure formation and the first pockets of star and black hole formation in the Universe. LEDA will develop and integrate by 2013 signal processing instrumentation into the new first station of the Long Wavelength Array (LWA). This comprises a large-N correlator serving all 512 dipole antennas of the LWA1, leveraging a packetized CASPER architecture and combining FPGAs and GPUs for the F and X stages. Iterative calibration and imaging will rely on warped snapshot imaging and be drawn from a GPU-enabled library (cuWARP) that is designed specifically to support wide-field full polarization imaging with fixed dipole arrays. Calibration techniques will include peeling, correction for ionospheric refraction, direction dependent dipole gains, deconvolution via forward modeling, and exploration of pulsar data analysis to improve performance. Accurate calibration and imaging will be crucial requirements for LEDA, necessary to subtract the bright foreground sky and detect the faint neutral Hydrogen signal. From the computational standpoint, LEDA is an O(100) TeraFlop per second challenge that enables a scalable architecture looking toward development of radio arrays requiring power-efficient 10 PetaFlop per second performance. Stage two of the Hydrogen Epoch of Reionization Array (HERA2) is one example. 20. Comparisons of target detection in clutter using data from the 1993 FOPEN experiments NASA Astrophysics Data System (ADS) Winter, Edwin M.; Schlangen, Michael J.; Hendrickson, Clark R. 1994-06-01 During 1993, a series of experiments were performed under the Advanced Research Projects Agency (ARPA) sponsorship using the SRI ultra-wide band UHF synthetic aperture radar (SAR). These experiments were performed over a variety of clutter backgrounds to assess the foliage penetration capability of the technology and to investigate target detection in clutter. Experiments were conducted observing tropical rain forest backgrounds in Panama, several different desert backgrounds in the Yuma vicinity, and the mid-latitude temperate forest of Maine.
SAR images were formed from the raw data using Differential GPS to aid in the focusing. The three locations represent different levels of foliage cover, ranging from the sparsely vegetated desert sites to the triple canopied rain forest. The characteristics of each site are discussed first through a presentation of photography and SAR imagery. The clutter characteristics are studied through a comparison of the cumulative distributions, which are plotted using a variety of conventions. For each case, at least one reference target is included in the test scene. The signal of that target as processed by a common algorithm will be compared to the processed clutter distribution. 1. Comparisons of target detection in clutter using data from the 1993 FOPEN experiments NASA Astrophysics Data System (ADS) Winter, E. M.; Schlangen, M. J.; Hendrickson, C. R. 1994-10-01 During 1993, a series of experiments were performed under the Advanced Research Projects Agency (ARPA) sponsorship using the SRI Ultra-Wide Band UHF Synthetic Aperture Radar (SAR). These experiments were performed over a variety of clutter backgrounds to assess the foliage penetration capability of the technology and to investigate target detection in clutter. Experiments were conducted observing tropical rain forest backgrounds in Panama, several different desert backgrounds in the Yuma vicinity, and the mid-latitude temperate forest of Maine. SAR images were formed from the raw data using Differential GPS to aid in the focusing. The three locations represent different levels of foliage cover, ranging from the sparsely vegetated desert sites to the triple canopied rain forest. The characteristics of each site are discussed first through a presentation of photography and SAR imagery. The clutter characteristics are studied through a comparison of the cumulative distributions, which are plotted using a variety of conventions (e.g., log-normal, normal, Weibull). For each case, at least one reference target is included in the test scene. The signal of that target as processed by a common algorithm will be compared to the processed clutter distribution.
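The FOPEN abstracts above compare empirical clutter distributions against standard families (log-normal, normal, Weibull). As a minimal illustration of that kind of comparison, and not the authors' actual processing chain, the sketch below fits two candidate families to synthetic clutter amplitudes with SciPy; all data and parameters here are invented:

```python
import numpy as np
from scipy import stats

# Invented stand-in for SAR clutter amplitude samples (the 1993 data are not reproduced here).
rng = np.random.default_rng(0)
clutter = rng.weibull(1.6, size=10_000) * 0.3

for name, dist in [("log-normal", stats.lognorm), ("Weibull", stats.weibull_min)]:
    params = dist.fit(clutter, floc=0)               # amplitudes: pin the location at zero
    ks = stats.kstest(clutter, dist.cdf, args=params)
    print(f"{name:>10}: params={np.round(params, 3)}, KS p-value={ks.pvalue:.3f}")
```

A reference target's processed amplitude could then be compared against the fitted cumulative distribution to gauge its separability from clutter.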
2. Flood Detection and Monitoring by Autonomous Satellite Operations: the ASE Experience NASA Astrophysics Data System (ADS) Ip, F.; Dohm, J. M.; Baker, V. R.; Doggett, T.; Davies, A. G.; Castano, R.; Chien, S.; Cichy, B.; Greeley, R.; Sherwood, R.; Tran, D. Q.; Rabideau, G. 2006-05-01 We developed a satellite-based floodwater classification algorithm, ASE_FLOOD, to autonomously detect, monitor and respond to flooding events as they occur. It monitors selected river locations around the world for flood conditions in near real time through the Autonomous Sciencecraft Experiment (ASE). Normally, an ongoing flood might be missed because of the time required for the spacecraft to send its data to ground controllers for image processing and data analysis. The ASE approach cuts lengthy time lags inherent to taking an observation, transmitting it to the ground for study, and subsequently deciding to direct further satellite observations of an event. By introducing spaceborne data analysis and autonomous decision-making ability, ASE provides an innovative way for early detection, tracking and "rapid response" to dynamic transient flood events without any human intervention or prior knowledge. Tested and proven on NASA's EO-1 spacecraft, ASE's onboard data analysis detects flood/non-flood/cloudy conditions on the ground, and responds to the detected conditions accordingly using its ASE-facilitated autonomous decision making ability. Cloudy scenes and scenes with no significant flooding are dropped and not transmitted, thereby saving downlink resources. When significant flooding is detected, ASE autonomously triggers the satellite to acquire additional images of the same target or adjacent flood-affected regions on the next orbital passes to track flood progress and map flood extent. This conditional change-based triggering allows the satellite to change its acquisition priorities and retarget its sensors to the emerging flood regions. The ASE approach greatly reduces the response time to floods from 2 weeks down to a possible 3 hours. It optimizes satellite downlink resources by eliminating useless scenes (e.g., cloudy) and preferentially transmitting onboard-derived data of high science value (e.g., time series of floodwater inundation maps). This 3. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators NASA Astrophysics Data System (ADS) Yim, Keun Soo This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of 4.
Tests of WIMP Dark Matter Candidates with Direct Dark Matter Detection Experiments NASA Astrophysics Data System (ADS) Georgescu, Andreea Irina We reexamine the current direct dark matter (DM) detection data for several types of DM candidates, both assuming the Standard Halo Model (SHM) and in a halo-independent manner. We consider the potential signals for light WIMPs that have appeared in three direct detection searches: DAMA, CDMS-II-Si, and CoGeNT, and we analyze their compatibility with the null results of other direct detection experiments. We first consider light WIMPs with exothermic scattering with nuclei (exoDM). Exothermic interactions favor light targets, thus reducing the importance of upper limits derived from Xe targets, the most restrictive of which is at present the LUX limit. In our SHM analysis the CDMS-II-Si and CoGeNT regions become allowed by these bounds, however the SuperCDMS limit rejects both regions for exoDM with isospin-conserving couplings. An isospin-violating coupling of the exoDM, in particular one with a neutron to proton coupling ratio of -0.8 (which we call "Ge-phobic"), maximally reduces the DM coupling to Ge and allows the CDMS-II-Si region to become compatible with all upper bounds. This is also clearly shown in our halo-independent analysis. Next, we extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct DM detection data. Instead of the recoil energy as an independent variable, we use the minimum speed a DM particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio fn/fp = -0.7 ("Xe-phobic", which reduces maximally the coupling to Xe), the WIMP interpretation of the signal observed 5. Temperament, hopelessness, and attempted suicide: direct and indirect effects. PubMed Rosellini, Anthony J; Bagge, Courtney L 2014-08-01 This study evaluated whether hopelessness mediated the relations between temperament and recent suicide attempter status in a psychiatric sample. Negative temperament and positive temperament (particularly the positive emotionality subscale) uniquely predicted levels of hopelessness. Although these temperament constructs also demonstrated significant indirect effects on recent suicide attempter status, the effects were partially (for the broad temperament scales) or fully (for the positive emotionality subscale) mediated by the levels of hopelessness. These findings indicate that a tendency to experience excessive negative emotions as well as a paucity of positive emotions may lead individuals to experience hopelessness. Although temperament may also indirectly influence suicide attempter status, hopelessness mediates these relations. PMID:24494785 6. Trapping and transport of indirect excitons in coupled quantum wells NASA Astrophysics Data System (ADS) Wuenschell, Jeffrey K. Spatially indirect excitons are optically generated composite bosons with a radiative lifetime sufficient to reach thermal equilibrium.
This work explores the physics of indirect excitons in coupled quantum wells in the GaAs/AlGaAs system, specifically in the low-temperature, high-density regime. Particular attention is paid to a technique whereby a spatially inhomogeneous strain field is used as a trapping potential. In the process of modeling the trapping profile in wide quantum wells, dramatic effects due to intersubband coupling were observed at high strain. Experimentally, this regime coincides with the abrupt appearance of a dark population of indirect excitons at trap center, an effect originally suspected to be related to Bose-Einstein condensation. Here, the role of band mixing due to the strain-induced distortion of the crystal symmetry will be explored in detail in the context of this effect. Experimental studies presented here and in the literature suggest that Bose-Einstein condensation in indirect exciton systems may be difficult to detect with optical means (e.g., coherence measurements, momentum-space narrowing), possibly due to the strong dipole interaction between indirect excitons. Due to similarities between this system and liquid helium, it may be more fruitful to look for transport-related signatures of condensation, such as superfluidity. Here, a method for performing transport measurements on optically generated indirect excitons is also outlined and preliminary results are presented. 7. Analogue Experiments Identify Possible Precursor Compounds for Chlorohydrocarbons Detected in SAM NASA Astrophysics Data System (ADS) Miller, K.; Summons, R. E.; Eigenbrode, J. L.; Freissinet, C.; Glavin, D. P.; Martin, M. G.; Team, M. 2013-12-01 Since landing at Gale Crater on August 6, 2012, the Sample Analysis at Mars (SAM) instrument suite, aboard the Curiosity Rover, has conducted multiple analyses of scooped and drilled samples and has identified a suite of chlorohydrocarbons including chloromethane, dichloromethane, trichloromethane, chloromethylpropene, and chlorobenzene (Glavin et al., 2013; Leshin et al., 2013). These compounds were identified after samples were pyrolysed at temperatures up to ~835°C through a combination of Evolved Gas Analysis (EGA) and Gas Chromatography Mass Spectrometry (GCMS). Since these chlorinated species were well above the background levels determined by empty cup blanks analyzed prior to solid sample analyses, thermal degradation of oxychlorine phases, such as perchlorate, present in the Martian soil, is the most likely source of chlorine needed to generate these chlorohydrocarbons. Laboratory analogue experiments show that terrestrial organics internal to SAM, such as N-methyl-N(tert-butyldimethylsilyl)trifluoroacetamide (MTBSTFA), a derivatization agent, can react with perchlorates to produce all of the chlorohydrocarbons detected by SAM. However, in pyrolysis-trap-GCMS laboratory experiments with MTBSTFA, C4 compounds are the predominant chlorohydrocarbon observed, whereas on SAM the C1 chlorohydrocarbons dominate (Glavin et al., 2013). This, in addition to the previous identification of chloromethane and dichloromethane by the 1976 Viking missions (Biemann et al., 1977), suggests that there could be another, possibly Martian, source of organic carbon contributing to the formation of the C1 chlorohydrocarbons, or other components of the solid samples analyzed by SAM are having a catalytic effect on chlorohydrocarbon generation.
Laboratory analogue experiments investigated a suite of organic compounds that have the potential to accumulate on Mars (Benner et al., 2000) and thus serve as sources of carbon for the formation of chlorohydrocarbons detected by the SAM and 8. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System NASA Technical Reports Server (NTRS) Generazio, Edward R. (Inventor) 2012-01-01 A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
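The NTRS patent abstract above describes grouping hit/miss detection data into flaw-size classes before validation. The following is only a schematic illustration of such per-class POD estimation; the class-width selection and case-number logic of the actual patented method are not reproduced here, and the helper pod_by_class and all numbers are invented:

```python
import numpy as np

def pod_by_class(flaw_sizes, hits, class_width):
    """Estimate probability of detection within flaw-size classes (schematic)."""
    edges = np.arange(flaw_sizes.min(), flaw_sizes.max() + class_width, class_width)
    idx = np.digitize(flaw_sizes, edges) - 1
    centers, pod = [], []
    for k in range(len(edges) - 1):
        in_class = idx == k
        if in_class.any():
            centers.append(0.5 * (edges[k] + edges[k + 1]))
            pod.append(hits[in_class].mean())  # fraction of hits in this size class
    return np.array(centers), np.array(pod)

# Invented hit/miss data in which detection probability rises with flaw size.
rng = np.random.default_rng(1)
sizes = rng.uniform(0.1, 2.0, 500)
hits = (rng.random(500) < 1.0 / (1.0 + np.exp(-4.0 * (sizes - 0.8)))).astype(float)
centers, pod = pod_by_class(sizes, hits, class_width=0.25)
print(np.round(centers, 2), np.round(pod, 2))
```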
9. Indirect interactions in the High Arctic. PubMed Roslin, Tomas; Wirta, Helena; Hopkins, Tapani; Hardwick, Bess; Várkonyi, Gergely 2013-01-01 Indirect interactions as mediated by higher and lower trophic levels have been advanced as key forces structuring herbivorous arthropod communities around the globe. Here, we present a first quantification of the interaction structure of a herbivore-centered food web from the High Arctic. Targeting the Lepidoptera of Northeast Greenland, we introduce generalized overlap indices as a novel tool for comparing different types of indirect interactions. First, we quantify the scope for top-down-up interactions as the probability that a herbivore attacking plant species i itself fed as a larva on species j. Second, we gauge this herbivore overlap against the potential for bottom-up-down interactions, quantified as the probability that a parasitoid attacking herbivore species i itself developed as a larva on species j. Third, we assess the impact of interactions with other food web modules, by extending the core web around the key herbivore Sympistis nigrita to other predator guilds (birds and spiders). We find the host specificity of both herbivores and parasitoids to be variable, with broad generalists occurring in both trophic layers. Indirect links through shared resources and through shared natural enemies both emerge as forces with a potential for shaping the herbivore community. The structure of the host-parasitoid submodule of the food web suggests scope for classic apparent competition. Yet, based on predation experiments, we estimate that birds kill as many (8%) larvae of S. nigrita as do parasitoids (8%), and that spiders kill many more (38%). Interactions between these predator guilds may result in further complexities. Our results caution against broad generalizations from studies of limited food web modules, and show the potential for interactions within and between guilds of extended webs. They also add a data point from the northernmost insect communities on Earth, and describe the baseline structure of a food web facing imminent climate change. PMID 10. Evaluating The Indirect Effect of Cirrus Clouds NASA Astrophysics Data System (ADS) Dobbie, S.; Jonas, P. R. What effect would an increase in nucleating aerosols have on the radiative and cloud properties? What error would be incurred by evaluating the indirect effect by taking an evolved cloud and fixing the integrated water content and varying the number of ice crystals? These questions will be addressed in this work. We will use the UK LES cloud resolving model to perform a sensitivity study for cirrus clouds to the indirect effect, and will evaluate approximate methods in the process. In this work, we will initialize the base (no increase of aerosol) cirrus clouds so that the double moment scheme is constrained to agree with observations through the effective radius. Effective radius is calculated using the local concentration and the ice water content. We then perform a sensitivity experiment to investigate the dependence of the average IWC, effective size, and radiative properties (including heating rates) to variations in the nucleation rate. Conclusions will be drawn as to the possible effect of changes in aerosol amounts on cirrus. We will determine how sensitive the cloud and radiative properties are to various aerosol increases. We will also discuss the applicability of the Meyer et al. (1992) nucleation formulae for our simulations. It is important to stress that in this work we only change the nucleation rate for the newly forming cloud. By doing this, we are not fixing the total water content and redistributing the water amongst increased ice crystals. We increase the number of aerosols available to be nucleated and allow the model to evolve the size distributions. In this way, there is competition for the water vapour, the ice particles are evolved dynamically with different fall speeds, the conversion rates to other hydrometeors (such as aggregates) are affected, and the heating rates are different due to the different size distributions that evolve. We will look at how the water content, the distribution of water, and the radiative properties are affected 11. Indirect Interactions in the High Arctic PubMed Central Roslin, Tomas; Wirta, Helena; Hopkins, Tapani; Hardwick, Bess; Várkonyi, Gergely 2013-01-01 Indirect interactions as mediated by higher and lower trophic levels have been advanced as key forces structuring herbivorous arthropod communities around the globe. Here, we present a first quantification of the interaction structure of a herbivore-centered food web from the High Arctic. Targeting the Lepidoptera of Northeast Greenland, we introduce generalized overlap indices as a novel tool for comparing different types of indirect interactions. First, we quantify the scope for top-down-up interactions as the probability that a herbivore attacking plant species i itself fed as a larva on species j. Second, we gauge this herbivore overlap against the potential for bottom-up-down interactions, quantified as the probability that a parasitoid attacking herbivore species i itself developed as a larva on species j. Third, we assess the impact of interactions with other food web modules, by extending the core web around the key herbivore Sympistis nigrita to other predator guilds (birds and spiders). We find the host specificity of both herbivores and parasitoids to be variable, with broad generalists occurring in both trophic layers. Indirect links through shared resources and through shared natural enemies both emerge as forces with a potential for shaping the herbivore community.
The structure of the host-parasitoid submodule of the food web suggests scope for classic apparent competition. Yet, based on predation experiments, we estimate that birds kill as many (8%) larvae of S. nigrita as do parasitoids (8%), and that spiders kill many more (38%). Interactions between these predator guilds may result in further complexities. Our results caution against broad generalizations from studies of limited food web modules, and show the potential for interactions within and between guilds of extended webs. They also add a data point from the northernmost insect communities on Earth, and describe the baseline structure of a food web facing imminent climate change. PMID 12. An economical method of indirect labeling of microsatellite primers Technology Transfer Automated Retrieval System (TEKTRAN) In order to reduce the cost of labeling and detection of microsatellite markers for QTL research, an economical method of indirect labeling reported for catfish was modified to work with honey bees. The forward primer of each pair is modified by adding a 19-base sequence at the 5’ end. This same 1... 13. Referencing cross-reactivity of detection antibodies for protein array experiments PubMed Central Lemass, Darragh; O'Kennedy, Richard; Kijanka, Gregor S. 2016-01-01 Protein arrays are frequently used to profile antibody repertoires in humans and animals. High-throughput protein array characterisation of complex antibody repertoires requires a platform-dependent, lot-to-lot validation of secondary detection antibodies. This article details the validation of an affinity-isolated anti-chicken IgY antibody produced in rabbit and a goat anti-rabbit IgG antibody conjugated with alkaline phosphatase using protein arrays consisting of 7,390 distinct human proteins. Probing protein arrays with secondary antibodies in absence of chicken serum revealed non-specific binding to 61 distinct human proteins. The cross-reactivity of the tested secondary detection antibodies points towards the necessity of platform-specific antibody characterisation studies for all secondary immunoreagents. Secondary antibody characterisation using protein arrays enables generation of reference lists of cross-reactive proteins, which can be then excluded from analysis in follow-up experiments. Furthermore, making such cross-reactivity lists accessible to the wider research community may help to interpret data generated by the same antibodies in applications not related to protein arrays such as immunoprecipitation, Western blots or other immunoassays. PMID:27335636 14. Motion-based threat detection using microrods: experiments and numerical simulations. PubMed Ezhilan, Barath; Gao, Wei; Pei, Allen; Rozen, Isaac; Dong, Renfeng; Jurado-Sanchez, Beatriz; Wang, Joseph; Saintillan, David 2015-05-01 Motion-based chemical sensing using microscale particles has attracted considerable recent attention. In this paper, we report on new experiments and Brownian dynamics simulations that cast light on the dynamics of both passive and active microrods (gold wires and gold-platinum micromotors) in a silver ion gradient. We demonstrate that such microrods can be used for threat detection in the form of a silver ion source, allowing for the determination of both the location of the source and concentration of silver. This threat detection strategy relies on the diffusiophoretic motion of both passive and active microrods in the ionic gradient and on the speed acceleration of the Au-Pt micromotors in the presence of silver ions. 
A Langevin model describing the microrod dynamics and accounting for all of these effects is presented, and key model parameters are extracted from the experimental data, thereby providing a reliable estimate for the full spatiotemporal distribution of the silver ions in the vicinity of the source. PMID:25853933 15. Development of a Photon Detection System in Liquid Argon for the Long-Baseline Neutrino Experiment NASA Astrophysics Data System (ADS) Whittington, Denver; Adams, Brice; Baptista, Brian; Baugh, Brian; Gebhard, Mark; Lang, Michael; Mufson, Stuart; Musser, James; Smith, Paul; Urheim, Jon 2014-03-01 The Long-Baseline Neutrino Experiment (LBNE) will be a premier facility for exploring long-standing questions about the boundaries of the standard model. Acting in concert with the liquid argon time projection chambers underpinning the far detector design, the LBNE photon detection system will capture ultraviolet scintillation light in order to provide valuable timing information for event reconstruction. The team at Indiana University is exploring a design based on acrylic waveguides coated with a wavelength-shifting compound, combined with silicon photomultipliers, to collect and record scintillation light from liquid argon. Large-scale tests of this design are being conducted at the "TallBo" liquid argon dewar facility at Fermilab, where performance studies with cosmic ray events are helping steer decisions for the final detector design. We present an overview of the design and function of this photon detection system and the latest results from the analysis of data collected during these tests. Photon Detector R&D Team at Indiana University. 16. Contextualizing neuro-collaborations: reflections on a transdisciplinary fMRI lie detection experiment PubMed Central Littlefield, Melissa M.; Fitzgerald, Des; Knudsen, Kasper; Tonks, James; Dietz, Martin J. 2014-01-01 Recent neuroscience initiatives (including the E.U.’s Human Brain Project and the U.S.’s BRAIN Initiative) have reinvigorated discussions about the possibilities for transdisciplinary collaboration between the neurosciences, the social sciences, and the humanities. As STS scholars have argued for decades, however, such inter- and transdisciplinary collaborations are potentially fraught with tensions between researchers. This essay builds on such claims by arguing that the tensions of transdisciplinary research also exist within researchers’ own experiences of working between disciplines - a phenomenon that we call “disciplinary double consciousness” (DDC). Building on previous work that has characterized similar spaces (and especially on the Critical Neuroscience literature), we argue that “neuro-collaborations” inevitably engage researchers in DDC - a phenomenon that allows us to explore the useful dissonance that researchers can experience when working between a “home” discipline and a secondary discipline. Our case study is a five-year research project in functional magnetic resonance imaging (fMRI) lie detection involving a transdisciplinary research team made up of social scientists, a neuroscientist, and a humanist. In addition to theorizing neuro-collaborations from the inside-out, this essay presents practical suggestions for developing transdisciplinary infrastructures that could support future neuro-collaborations. PMID:24744713 17.
An Experiment to Detect Lunar Horizon Glow with the Lunar Orbiter Laser Altimeter Laser Ranging Telescope NASA Astrophysics Data System (ADS) Smith, David E.; Zuber, Maria T.; Barker, Michael; Mazarico, Erwan; Neumann, Gregory A.; McClanahan, Timothy P.; Sun, Xiaoli 2016-04-01 Lunar horizon glow (LHG) was an observation by the Apollo astronauts of a brightening of the horizon around the time of sunrise. The effect has yet to be fully explained or confirmed by instruments on lunar orbiting spacecraft despite several attempts. The Lunar Reconnaissance Orbiter (LRO) spacecraft carries the laser altimeter (LOLA) instrument which has a 2.5 cm aperture telescope for Earth-based laser ranging (LR) mounted and bore-sighted with the high gain antenna (HGA). The LR telescope is connected to LOLA by a fiber-glass cable to one of its 5 detectors. For the LHG experiments the LR telescope is pointed toward the horizon shortly before lunar sunrise with the intent of observing any forward scattering of sunlight due to the presence of dust or particles in the field of view. Initially, the LR telescope is pointed at the dark lunar surface, which provides a measure of the dark count, and moves toward the lunar limb so as to measure the brightness of the sky just above the lunar limb immediately prior to lunar sunrise. At no time does the sun shine directly into the LR telescope, although the LR telescope is pointed as close to the sun as the 1.75-degree field of view permits. Experiments show that the LHG signal seen by the astronauts can be detected with a four-second integration of the noise counts. 18. Contextualizing neuro-collaborations: reflections on a transdisciplinary fMRI lie detection experiment. PubMed Littlefield, Melissa M; Fitzgerald, Des; Knudsen, Kasper; Tonks, James; Dietz, Martin J 2014-01-01 Recent neuroscience initiatives (including the E.U.'s Human Brain Project and the U.S.'s BRAIN Initiative) have reinvigorated discussions about the possibilities for transdisciplinary collaboration between the neurosciences, the social sciences, and the humanities. As STS scholars have argued for decades, however, such inter- and transdisciplinary collaborations are potentially fraught with tensions between researchers. This essay builds on such claims by arguing that the tensions of transdisciplinary research also exist within researchers' own experiences of working between disciplines - a phenomenon that we call "disciplinary double consciousness" (DDC). Building on previous work that has characterized similar spaces (and especially on the Critical Neuroscience literature), we argue that "neuro-collaborations" inevitably engage researchers in DDC - a phenomenon that allows us to explore the useful dissonance that researchers can experience when working between a "home" discipline and a secondary discipline. Our case study is a five-year research project in functional magnetic resonance imaging (fMRI) lie detection involving a transdisciplinary research team made up of social scientists, a neuroscientist, and a humanist. In addition to theorizing neuro-collaborations from the inside-out, this essay presents practical suggestions for developing transdisciplinary infrastructures that could support future neuro-collaborations. PMID:24744713 19. 7 CFR 2903.4 - Indirect costs. Code of Federal Regulations, 2010 CFR 2010-01-01 ... AGRICULTURE BIODIESEL FUEL EDUCATION PROGRAM General Information § 2903.4 Indirect costs.
(a) For the Biodiesel Fuel Education Program, applicants should use the current indirect cost rate negotiated with... 20. 7 CFR 2903.4 - Indirect costs. Code of Federal Regulations, 2012 CFR 2012-01-01 ... AGRICULTURE BIODIESEL FUEL EDUCATION PROGRAM General Information § 2903.4 Indirect costs. (a) For the Biodiesel Fuel Education Program, applicants should use the current indirect cost rate negotiated with... 1. 7 CFR 2903.4 - Indirect costs. Code of Federal Regulations, 2013 CFR 2013-01-01 ... AGRICULTURE BIODIESEL FUEL EDUCATION PROGRAM General Information § 2903.4 Indirect costs. (a) For the Biodiesel Fuel Education Program, applicants should use the current indirect cost rate negotiated with... 2. 7 CFR 2903.4 - Indirect costs. Code of Federal Regulations, 2011 CFR 2011-01-01 ... AGRICULTURE BIODIESEL FUEL EDUCATION PROGRAM General Information § 2903.4 Indirect costs. (a) For the Biodiesel Fuel Education Program, applicants should use the current indirect cost rate negotiated with... 3. 7 CFR 2903.4 - Indirect costs. Code of Federal Regulations, 2014 CFR 2014-01-01 ... AGRICULTURE BIODIESEL FUEL EDUCATION PROGRAM General Information § 2903.4 Indirect costs. (a) For the Biodiesel Fuel Education Program, applicants should use the current indirect cost rate negotiated with... 4. An Investigation of Backgrounds in the DEAP-3600 Dark Matter Direct Detection Experiment NASA Astrophysics Data System (ADS) Veloce, Laurelle Maria Astronomical and cosmological observations reveal that the majority of the matter in our universe is made of an unknown, non-luminous substance called dark matter. Many experimental attempts are underway to directly detect particle dark matter, which is very difficult to measure due to the expected low interaction rate with normal matter. DEAP-3600 is a direct dark matter search experiment located two kilometres underground at SNOLAB, in Sudbury, Ontario. DEAP-3600 will make use of liquid argon as the detector material, which scintillates as charged particles pass through. The work presented here is an investigation of expected background sources in the DEAP detector. Because DEAP-3600 is a noble liquid-based experiment, a thin film of 1,1,4,4-tetraphenyl-1,3-butadiene (TPB) is coated on the detector walls to shift the scintillation peak from the UV to visible regime for detection. However, alphas passing through TPB produce scintillation signals which can mimic recoil events. Because scintillation properties can change with temperature, we have conducted an investigation of alpha-induced TPB scintillation at temperatures ranging from 300 K to 3.4 K. We were able to characterize the light yield and decay times, and demonstrated that these background events should be distinguishable from true recoil events in liquid argon, thus enabling DEAP-3600 to achieve higher dark matter sensitivity. Additionally, we investigate the performance of the liquid argon purification systems, specifically the activated charcoal used for radon filtration. Previous measurements with the DEAP prototype experiment have demonstrated the necessity of removing radon from the argon prior to filling the detector, due to the release of contaminants from the argon storage systems. Charcoal radon filters are extremely efficient; however, if the emanation rate of the charcoal is too high, there is the possibility of re-contamination. We performed a measurement of the radon emanation rate of a 5.
Detecting Complex Organic Compounds Using the SAM Wet Chemistry Experiment on Mars NASA Astrophysics Data System (ADS) Freissinet, C.; Buch, A.; Glavin, D. P.; Brault, A.; Eigenbrode, J. L.; Kashyap, S.; Martin, M. G.; Miller, K.; Mahaffy, P. R.; Team, M. 2013-12-01 The search for organic molecules on Mars can provide important first clues of abiotic chemistry and/or extinct or extant biota on the planet. Gas Chromatography Mass Spectrometry (GC-MS) is currently the most relevant space-compatible analytical tool for the detection of organic compounds. Nevertheless, GC separation is intrinsically restricted to volatile molecules, and many molecules of astrobiological interest are chromatographically refractory or polar. To analyze these organics such as amino acids, nucleobases and carboxylic acids in the Martian regolith, an additional derivatization step is required to transform them into volatile derivatives that are amenable to GC analysis. As part of the Sample Analysis at Mars (SAM) experiment onboard Mars Science Laboratory (MSL) Curiosity rover, a single-step protocol of extraction and chemical derivatization with the silylating reagent N-methyl-N-(tert-butyldimethylsilyl)-trifluoroacetamide (MTBSTFA) has been developed to reach a wide range of astrobiology-relevant refractory organic molecules (Mahaffy et al. 2012; Stalport et al. 2012). Seven cups in the SAM instrument are devoted to MTBSTFA derivatization. However, this chemical reaction adds a protective silyl group in place of each labile hydrogen, which makes the molecule non-identifiable in common mass spectra libraries. Therefore, we have created an extended library of mass spectra of MTBSTFA derivatized compounds of interest, considering their potential occurrence in Mars soils. We then looked specifically for MTBSTFA derivatized compounds using the existing and the newly created library, in various Mars analog soils. To enable a more accurate interpretation of the in situ derivatization GC-MS results that will be obtained by SAM, the lab experiments were performed as close as possible to the SAM flight instrument experimental conditions. Our first derivatization experiments display promising results, the laboratory system permitting an extraction and detection 6. Pupillometry reveals increased pupil size during indirect request comprehension. PubMed Tromp, Johanne; Hagoort, Peter; Meyer, Antje S 2016-06-01 Fluctuations in pupil size have been shown to reflect variations in processing demands during lexical and syntactic processing in language comprehension. An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In two pupillometry experiments, we investigated whether pupil diameter was sensitive to increased processing demands as a result of comprehending an indirect request versus a direct statement. Adult participants were presented with 120 picture-sentence combinations that could be interpreted either as an indirect request (a picture of a window with the sentence "it's very hot here") or as a statement (a picture of a window with the sentence "it's very nice here"). Based on the hypothesis that understanding indirect utterances requires additional inferences to be made on the part of the listener, we predicted a larger pupil diameter for indirect requests than statements. The results of both experiments are consistent with this expectation. 
We suggest that the increase in pupil size reflects additional processing demands for the comprehension of indirect requests as compared to statements. This research demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics. PMID:26110545 7. 19 CFR 10.541 - Indirect materials. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Indirect materials. 10.541 Section 10.541 Customs... Rules of Origin § 10.541 Indirect materials. An indirect material, as defined in § 10.502(j) of this subpart, will be considered to be an originating material without regard to where it is produced, and... 8. 19 CFR 10.603 - Indirect materials. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Indirect materials. 10.603 Section 10.603 Customs... States Free Trade Agreement Rules of Origin § 10.603 Indirect materials. An indirect material, as defined in § 10.582(m) of this subpart, will be considered to be an originating material without regard... 9. 19 CFR 10.460 - Indirect materials. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Indirect materials. 10.460 Section 10.460 Customs... of Origin § 10.460 Indirect materials. An indirect material, as defined in § 10.402(o), will be considered to be an originating material without regard to where it is produced. Example. Chilean Producer... 10. 46 CFR 154.1720 - Indirect refrigeration. Code of Federal Regulations, 2012 CFR 2012-10-01 ... 46 Shipping 5 2012-10-01 2012-10-01 false Indirect refrigeration. 154.1720 Section 154.1720... § 154.1720 Indirect refrigeration. A refrigeration system that is used to cool acetaldehyde, ethylene oxide, or methyl bromide, must be an indirect refrigeration system that does not use vapor compression.... 11. 46 CFR 154.1720 - Indirect refrigeration. Code of Federal Regulations, 2013 CFR 2013-10-01 ... 46 Shipping 5 2013-10-01 2013-10-01 false Indirect refrigeration. 154.1720 Section 154.1720... § 154.1720 Indirect refrigeration. A refrigeration system that is used to cool acetaldehyde, ethylene oxide, or methyl bromide, must be an indirect refrigeration system that does not use vapor compression.... 12. 46 CFR 154.1720 - Indirect refrigeration. Code of Federal Regulations, 2014 CFR 2014-10-01 ... 46 Shipping 5 2014-10-01 2014-10-01 false Indirect refrigeration. 154.1720 Section 154.1720... § 154.1720 Indirect refrigeration. A refrigeration system that is used to cool acetaldehyde, ethylene oxide, or methyl bromide, must be an indirect refrigeration system that does not use vapor compression.... 13. 19 CFR 10.541 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.541 Section 10.541 Customs... Rules of Origin § 10.541 Indirect materials. An indirect material, as defined in § 10.502(j) of this subpart, will be considered to be an originating material without regard to where it is produced, and... 14. 19 CFR 10.541 - Indirect materials. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Indirect materials. 10.541 Section 10.541 Customs... Rules of Origin § 10.541 Indirect materials. An indirect material, as defined in § 10.502(j) of this subpart, will be considered to be an originating material without regard to where it is produced, and... 15. 19 CFR 10.603 - Indirect materials. 
Code of Federal Regulations, 2013 CFR 2013-04-01 ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Indirect materials. 10.603 Section 10.603 Customs... States Free Trade Agreement Rules of Origin § 10.603 Indirect materials. An indirect material, as defined in § 10.582(m) of this subpart, will be considered to be an originating material without regard... 16. 19 CFR 10.460 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.460 Section 10.460 Customs... of Origin § 10.460 Indirect materials. An indirect material, as defined in § 10.402(o), will be considered to be an originating material without regard to where it is produced. Example. Chilean Producer... 17. 19 CFR 10.1024 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.1024 Section 10.1024... Agreement Rules of Origin § 10.1024 Indirect materials. An indirect material, as defined in § 10.1002(n) of.... Korean Producer A produces good C using non-originating material B. Producer A imports... 18. 19 CFR 10.603 - Indirect materials. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Indirect materials. 10.603 Section 10.603 Customs... States Free Trade Agreement Rules of Origin § 10.603 Indirect materials. An indirect material, as defined in § 10.582(m) of this subpart, will be considered to be an originating material without regard... 19. 19 CFR 10.460 - Indirect materials. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Indirect materials. 10.460 Section 10.460 Customs... of Origin § 10.460 Indirect materials. An indirect material, as defined in § 10.402(o), will be considered to be an originating material without regard to where it is produced. Example. Chilean Producer... 20. 19 CFR 10.2024 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.2024 Section 10.2024... Agreement Rules of Origin § 10.2024 Indirect materials. An indirect material, as defined in § 10.2013(i), will be considered to be an originating material without regard to where it is produced.... 1. 19 CFR 10.541 - Indirect materials. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Indirect materials. 10.541 Section 10.541 Customs... Rules of Origin § 10.541 Indirect materials. An indirect material, as defined in § 10.502(j) of this subpart, will be considered to be an originating material without regard to where it is produced, and... 2. 19 CFR 10.3024 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.3024 Section 10.3024... Promotion Agreement Rules of Origin § 10.3024 Indirect materials. An indirect material, as defined in § 10.3013(h), will be considered to be an originating material without regard to where it is... 3. 19 CFR 10.603 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.603 Section 10.603 Customs... States Free Trade Agreement Rules of Origin § 10.603 Indirect materials. An indirect material, as defined in § 10.582(m) of this subpart, will be considered to be an originating material without regard... 4. 
19 CFR 10.924 - Indirect materials. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Indirect materials. 10.924 Section 10.924 Customs... Rules of Origin § 10.924 Indirect materials. An indirect material, as defined in § 10.902(m) of this subpart, will be considered to be an originating material without regard to where it is produced.... 5. 19 CFR 10.460 - Indirect materials. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Indirect materials. 10.460 Section 10.460 Customs... of Origin § 10.460 Indirect materials. An indirect material, as defined in § 10.402(o), will be considered to be an originating material without regard to where it is produced. Example. Chilean Producer... 6. 7 CFR 2500.044 - Indirect costs. Code of Federal Regulations, 2013 CFR 2013-01-01 ... Closeout § 2500.044 Indirect costs. Indirect cost rates for grants and cooperative agreements shall be determined in accordance with the applicable assistance regulations and cost principles, unless superseded by... 7 Agriculture 15 2013-01-01 2013-01-01 false Indirect costs. 2500.044 Section 2500.044... 7. 7 CFR 2500.044 - Indirect costs. Code of Federal Regulations, 2014 CFR 2014-01-01 ... Closeout § 2500.044 Indirect costs. Indirect cost rates for grants and cooperative agreements shall be determined in accordance with the applicable assistance regulations and cost principles, unless superseded by... 7 Agriculture 15 2014-01-01 2014-01-01 false Indirect costs. 2500.044 Section 2500.044... 8. 48 CFR 31.203 - Indirect costs. Code of Federal Regulations, 2014 CFR 2014-10-01 ... cost of that or any other final cost objective. (c) The contractor shall accumulate indirect costs by... for allocating indirect costs is the cost accounting period during which such costs are incurred and... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Indirect costs.... 9. 38 CFR 17.261 - Indirect costs. Code of Federal Regulations, 2013 CFR 2013-07-01 ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Indirect costs. 17.261... Exchange of Information § 17.261 Indirect costs. The grantee shall allocate expenditures as between direct and indirect costs according to generally accepted accounting procedures. The amount allocated... 10. 38 CFR 17.261 - Indirect costs. Code of Federal Regulations, 2014 CFR 2014-07-01 ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Indirect costs. 17.261... Exchange of Information § 17.261 Indirect costs. The grantee shall allocate expenditures as between direct and indirect costs according to generally accepted accounting procedures. The amount allocated... 11. 48 CFR 31.203 - Indirect costs. Code of Federal Regulations, 2013 CFR 2013-10-01 ... cost of that or any other final cost objective. (c) The contractor shall accumulate indirect costs by... for allocating indirect costs is the cost accounting period during which such costs are incurred and... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Indirect costs.... 12. 38 CFR 17.261 - Indirect costs. Code of Federal Regulations, 2012 CFR 2012-07-01 ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Indirect costs. 17.261... Exchange of Information § 17.261 Indirect costs. The grantee shall allocate expenditures as between direct and indirect costs according to generally accepted accounting procedures. The amount allocated... 13. 
38 CFR 17.261 - Indirect costs. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Indirect costs. 17.261... Exchange of Information § 17.261 Indirect costs. The grantee shall allocate expenditures as between direct and indirect costs according to generally accepted accounting procedures. The amount allocated... 14. 38 CFR 17.261 - Indirect costs. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Indirect costs. 17.261... Exchange of Information § 17.261 Indirect costs. The grantee shall allocate expenditures as between direct and indirect costs according to generally accepted accounting procedures. The amount allocated... 15. Indirect Cost Reimbursement: An Industrial View. ERIC Educational Resources Information Center Bolton, Robert 1987-01-01 The meaning of indirect costs in an industrial environment is discussed. Other factors considered are corporate policies; nature of work being supported; the uniqueness of the work; who is doing the negotiating for industry; and indirect rates. Suggestions are offered for approaches to indirect cost reimbursement. (Author/MLW) 16. 19 CFR 10.924 - Indirect materials. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Indirect materials. 10.924 Section 10.924 Customs... Rules of Origin § 10.924 Indirect materials. An indirect material, as defined in § 10.902(m) of this subpart, will be considered to be an originating material without regard to where it is produced.... 17. 19 CFR 10.460 - Indirect materials. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Indirect materials. 10.460 Section 10.460 Customs... of Origin § 10.460 Indirect materials. An indirect material, as defined in § 10.402(o), will be considered to be an originating material without regard to where it is produced. Example. Chilean Producer... 18. 19 CFR 10.1024 - Indirect materials. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Indirect materials. 10.1024 Section 10.1024... Agreement Rules of Origin § 10.1024 Indirect materials. An indirect material, as defined in § 10.1002(n) of.... Korean Producer A produces good C using non-originating material B. Producer A imports... 19. 19 CFR 10.541 - Indirect materials. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Indirect materials. 10.541 Section 10.541 Customs... Rules of Origin § 10.541 Indirect materials. An indirect material, as defined in § 10.502(j) of this subpart, will be considered to be an originating material without regard to where it is produced, and... 20. Quantitative Analysis of Sulfate in Water by Indirect EDTA Titration ERIC Educational Resources Information Center Belle-Oudry, Deirdre 2008-01-01 The determination of sulfate concentration in water by indirect EDTA titration is an instructive experiment that is easily implemented in an analytical chemistry laboratory course. A water sample is treated with excess barium chloride to precipitate sulfate ions as BaSO4(s). The unprecipitated barium ions are then titrated with EDTA.
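The back-titration arithmetic behind the sulfate determination described above is straightforward: the sulfate equals the barium added minus the barium left over at the EDTA endpoint. A minimal sketch with assumed, purely illustrative concentrations and volumes:

```python
# Illustrative back-titration arithmetic; all volumes and concentrations are invented.
c_ba,   v_ba   = 0.0200, 25.00e-3   # mol/L BaCl2 added, volume in L
c_edta, v_edta = 0.0100, 18.40e-3   # mol/L EDTA used at the endpoint, volume in L
v_sample       = 50.00e-3           # L of water sample

n_ba_total  = c_ba * v_ba               # total Ba2+ delivered
n_ba_excess = c_edta * v_edta           # unprecipitated Ba2+ (EDTA binds Ba2+ 1:1)
n_sulfate   = n_ba_total - n_ba_excess  # Ba2+ consumed as BaSO4(s)

mg_per_l = n_sulfate / v_sample * 96.06 * 1000  # 96.06 g/mol for the sulfate ion
print(f"sulfate ~ {mg_per_l:.0f} mg/L")
```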
1. Indirect costs of rheumatoid arthritis PubMed Central Raciborski, Filip; Kwiatkowska, Brygida 2015-01-01 It is estimated that in Poland about 400,000 persons in general suffer from inflammatory joint diseases, including rheumatoid arthritis (RA). Epidemiological surveys documenting the frequency and disturbance of musculoskeletal disorders in the Polish population are few in number. Most of the estimations are based on epidemiological data from other countries (prevalence of 0.5–1%). According to the data of the National Health Fund in Poland 135,000–157,000 persons in total are treated because of rheumatoid arthritis per year [ICD10 (International Statistical Classification of Diseases and Related Health Problems): M05, M06]. In the case of this group of diseases indirect costs significantly outweigh the direct costs. Indirect costs increase together with activity level of the disease. The cost analysis of productivity loss of RA patients indicates that sickness absenteeism and informal care are the most burdensome. At the national level it amounts in total from 1.2 billion to 2.8 billion PLN per year, depending on the method of analysis. These costs could be significantly reduced through early diagnosis and introduction of effective treatment. PMID:27407258 2. Confounding effects of indirect connections on causality estimation. PubMed Vakorin, Vasily A; Krakovska, Olga A; McIntosh, Anthony R 2009-10-30 Addressing the issue of effective connectivity, this study focuses on the effects of indirect connections on inferring stable causal relations, using partial transfer entropy. We introduce a Granger causality measure based on a multivariate version of transfer entropy. The statistic takes into account the influence of the rest of the network (environment) on observed coupling between two given nodes. This formalism allows us to quantify, for a specific pathway, the total amount of indirect coupling mediated by the environment. We show that partial transfer entropy is a more sensitive technique to identify robust causal relations than its bivariate equivalent. In addition, we demonstrate the confounding effects of the variation in indirect coupling on the detectability of robust causal links. Finally, we consider the problem of model misspecification and its effect on the robustness of the observed connectivity patterns, showing that misspecifying the model may be an issue even for a model-free information-theoretic approach. PMID:19628006
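In the notation commonly used for such measures (assumed here, not taken from the paper itself), transfer entropy and its partialized variant can be written as conditional mutual informations, where the partial version additionally conditions on the past of the environment Z:

```latex
% Bivariate transfer entropy from X to Y, with k past samples:
\mathrm{TE}_{X \to Y} = I\big(Y_t \,;\, X_{t-k:t-1} \,\big|\, Y_{t-k:t-1}\big)
% Partial transfer entropy conditions also on the environment Z:
\mathrm{TE}_{X \to Y \mid Z} = I\big(Y_t \,;\, X_{t-k:t-1} \,\big|\, Y_{t-k:t-1},\, Z_{t-k:t-1}\big)
```

Conditioning on Z is what removes spurious coupling that is merely routed through indirect connections, which is the confound the abstract describes.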
3. Detection of Aeromagnetic Field Changes Using an Unmanned Autonomous Helicopter: Repeated Experiments at Tarumae Volcano (Japan) NASA Astrophysics Data System (ADS) Hashimoto, T.; Koyama, T.; Yanagisawa, T.; Yoshimoto, M.; Ohminato, T.; Kaneko, T. 2015-12-01 Volcanic eruptions often prohibit humans from approaching active craters. Meanwhile, it is important, especially at the initial stage of an eruption, to perform visual surveillance, geophysical/chemical measurements and material sampling in the vicinity of the craters. Besides scientific purposes, information from such surveys is helpful for the local government in deciding the response to volcanic unrest. We started airborne surveys using an unmanned helicopter on a trial basis in cooperation with the Hokkaido Regional Development Bureau. As a part of the project, we repeated aeromagnetic surveys over Mt. Tarumae (1,041 m), one of the active volcanoes in northern Japan, in 2011, 2012 and 2013. Owing to the high accuracy of positioning control in the autonomous flight with the aid of GPS navigation and the fairly small magnetic field gradient in the air, temporal changes up to 30 nT were successfully detected through a direct comparison between separate surveys. The field changes in the air were mostly consistent with those on the ground surface, which suggested remagnetization due to cooling beneath the summit lava dome. Through our three-year experiments, the unmanned helicopter proved to be useful for aeromagnetic monitoring. Although the system still has some limitations in terms of maximum flight altitude and operational range from the base station, we emphasize the following three advantages of this technique. (1) Operation without exposing humans to volcanic hazards. (2) A straightforward data processing procedure to obtain temporal magnetic field changes, which is especially important in an emergency response such as an ongoing unrest. (3) A great reduction of the cost to maintain ground-based monitoring stations for many years. Acknowledgments: We express sincere thanks to the Muroran and Sapporo Development and Construction Departments of the HRDB for the cooperation in the field experiments using their unmanned helicopter. 4. The microbe capture experiment in space: Fluorescence microscopic detection of microbes captured by aerogel NASA Astrophysics Data System (ADS) Sugino, Tomohiro; Yokobori, Shin-Ichi; Yang, Yinjie; Kawaguchi, Yuko; Okudaira, Kyoko; Tabata, Makoto; Kawai, Hideyuki; Hasegawa, Sunao; Yamagishi, Akihiko Microbes have been collected at altitudes up to about 70 km in sampling experiments done by several groups[1]. We have also collected high-altitude microbes by using an airplane and balloons[2][3][4][5]. We collected new deinococcal strains (Deinococcus aetherius and Deinococcus aerius) and several strains of spore-forming bacilli from the stratosphere[2][4][5]. However, microbe sampling in space has never been reported. On the other hand, the "Panspermia" hypothesis, where terrestrial life originated from outside of Earth, has been proposed[6][7][8][9]. A recent report suggesting the existence of possible microbe fossils in a meteorite of Mars origin opened the serious debate on the possibility of migration of life embedded in meteorites (and cosmic dusts)[10][11]. If we were able to find terrestrial microbes in space, it would suggest that terrestrial life can travel between astronomical bodies. We proposed a mission "Tanpopo: Astrobiology Exposure and Micrometeoroid Capture Experiments" to examine possible interplanetary migration of microbes, organic compounds and meteoroids on the Japan Experimental Module of the International Space Station (ISS)[12]. Two of six sub-themes in this mission are directly related to interplanetary migration of microbes. One is the direct capturing experiment of microbes (probably within particles such as clay) in space by the exposed ultra-low-density aerogel. Another is the exposure experiment to examine survivability of the microbes in the harsh space environment. They will tell us the possibility of interplanetary migration of microbes (life) from Earth to outside of Earth (or vice versa). In this report, we will report whether aerogel that has been used for the collection of space debris and cosmic dusts can be used for microbe sampling in space. We will discuss how particles captured by aerogel can be detected with a DNA-specific fluorescent dye, and how to distinguish microbes from other materials (i.e.
aerogel and …). 5. Very Long Baseline Interferometry Experiment on Giant Radio Pulses of Crab Pulsar toward Fast Radio Burst Detection NASA Astrophysics Data System (ADS) Takefuji, K.; Terasawa, T.; Kondo, T.; Mikami, R.; Takeuchi, H.; Misawa, H.; Tsuchiya, F.; Kita, H.; Sekido, M. 2016-08-01 We report on a very long baseline interferometry (VLBI) experiment on giant radio pulses (GPs) from the Crab pulsar in the radio 1.4–1.7 GHz range to demonstrate a VLBI technique for searching for fast radio bursts (FRBs). We carried out the experiment on 2014 July 26 using the Kashima 34 m and Usuda 64 m radio telescopes of the Japanese VLBI Network (JVN) with a baseline of about 200 km. During the approximately 1 hr observation, we could detect 35 GPs by high-time-resolution VLBI. Moreover, we determined the dispersion measure (DM) to be 56.7585 ± 0.0025 on the basis of the mean DM of the 35 GPs detected by VLBI. We confirmed that the sensitivity of GP detection using our technique is superior to that of a single-dish-mode detection using the same telescope. 6. Clinical aspects of indirect immunofluorescence for autoimmune diseases. PubMed 2015-05-01 Because the most common term used in conversations considering autoimmunity is autoantibodies, it is to be expected that the indirect immunofluorescence assay, which detects antibodies directed against various antigens, is one of our most impressive techniques for investigating autoimmune diseases (AIDs). Roughly speaking, the current literature corroborates that autoantibody detection by this immunopathologic investigation makes a considerable contribution to both the diagnostic and prognostic aspects of AIDs in the clinical setting. However, this contribution varies between different AIDs, autoantibodies, ethnicities and detection methodologies. Directly focusing on the indirect immunofluorescence assay, we present evidence to support this multidimensional variation by briefly reviewing the best-investigated autoantibodies in the well-documented AIDs, including vasculitis, inflammatory bowel disease, scleroderma, autoimmune hepatitis, primary biliary cirrhosis, systemic lupus erythematosus and Sjögren's syndrome. PMID:25786676 7. Vehicle occupancy detection camera position optimization using design of experiments and standard image references NASA Astrophysics Data System (ADS) Paul, Peter; Hoover, Martin; Rabbani, Mojgan 2013-03-01 Camera positioning and orientation is important to applications in domains such as transportation since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used when a vehicle reaches the image capture zone to trigger the image capture system. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera when capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE), can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted.
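A quick plausibility check on item 5: the DM it quotes sets the pulse arrival-time spread across the observing band via the standard cold-plasma dispersion relation. A sketch using the textbook dispersion constant (the helper name is ours, not part of the authors' pipeline):

```python
def dispersion_delay_s(dm, f_lo_mhz, f_hi_mhz):
    """Cold-plasma dispersive delay (seconds) between two observing frequencies.

    dm is in pc cm^-3; frequencies are in MHz; K is the standard dispersion constant.
    """
    K = 4.148808e3  # s MHz^2 pc^-1 cm^3
    return K * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)

# For DM = 56.7585 across the 1.4-1.7 GHz band of the experiment:
print(dispersion_delay_s(56.7585, 1400.0, 1700.0))  # ~0.039 s of pulse sweep
```

A ~39 ms sweep is large compared with the microsecond-scale structure of giant pulses, which is why de-dispersion at an accurately determined DM matters for this kind of search.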
Past work has described front seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions. Thus for defining an output to be optimized for the designed experiment, a human-defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of … 8. Rural Family Perspectives and Experiences with Early Infant Hearing Detection and Intervention: A Qualitative Study. PubMed Elpers, Julia; Lester, Cathy; Shinn, Jennifer B; Bush, Matthew L 2016-04-01 Infant hearing loss has the potential to cause significant communication impairment. Timely diagnosis and intervention are essential to preventing permanent deficits. Many infants from rural regions are delayed in diagnosis and treatment of hearing loss. The purpose of this study is to characterize the barriers in timely infant hearing healthcare for rural families following newborn hearing screening (NHS) testing. Using stratified purposeful sampling, the study design involved semi-structured phone interviews with parents/guardians of children who failed NHS testing in the Appalachian region of Kentucky between 2012 and 2014 to describe their experiences with the early hearing detection and intervention program. Thematic qualitative analysis was performed on interview transcripts to identify common recurring themes in content. 40 parents/guardians participated in the study and consisted primarily of mothers. Demographic data revealed limited educational levels of the participants and 70 % had state-funded insurance coverage. Participants reported barriers in timely infant hearing healthcare that included poor communication of hearing screening results, difficulty in obtaining outpatient testing, inconsistencies in healthcare information from primary care providers, lack of local resources, insurance-related healthcare delays, and conflict with family and work responsibilities. Most participants expressed a great desire to obtain timely hearing healthcare for their children and expressed a willingness to use resources such as telemedicine to obtain that care. There are multiple barriers to timely rural infant hearing healthcare. Minimizing misinformation and improving access to care are priorities to prevent delayed diagnosis and treatment of hearing loss. PMID:26316007 9. High-resolution detection of Brownian motion for quantitative optical tweezers experiments. PubMed Grimm, Matthias; Franosch, Thomas; Jeney, Sylvia 2012-08-01 We have developed an in situ method to calibrate optical tweezers experiments and simultaneously measure the size of the trapped particle or the viscosity of the surrounding fluid. The positional fluctuations of the trapped particle are recorded with a high-bandwidth photodetector.
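The designed-experiment approach of item 7 boils down to enumerating combinations of camera-placement factor levels and scoring each run against the SIR. A minimal full-factorial sketch (the factor names and levels below are made-up placeholders, not values from the study):

```python
from itertools import product

# Candidate camera-placement factors; levels are illustrative assumptions.
factors = {
    "height_m": [2.5, 3.0, 3.5],
    "pan_deg":  [20, 35, 50],
    "tilt_deg": [10, 25],
}

# Full-factorial design: every combination of factor levels becomes one run,
# each of which would then be scored against the standard image reference.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs), "runs; first run:", runs[0])
```

Fractional-factorial or response-surface designs would cut the run count when the number of factors grows, at the cost of confounding some interactions.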
We compute the mean-square displacement, as well as the velocity autocorrelation function of the sphere, and compare it to the theory of Brownian motion including hydrodynamic memory effects. A careful measurement and analysis of the time scales characterizing the dynamics of the harmonically bound sphere fluctuating in a viscous medium directly yields all relevant parameters. Finally, we test the method for different optical trap strengths, with different bead sizes and in different fluids, and we find excellent agreement with the values provided by the manufacturers. The proposed approach overcomes the most commonly encountered limitations in precision when analyzing the power spectrum of position fluctuations in the region around the corner frequency. These low frequencies are usually prone to errors due to drift, limitations in the detection, and trap linearity as well as short acquisition times resulting in poor statistics. Furthermore, the strategy can be generalized to Brownian motion in more complex environments, provided the adequate theories are available. PMID:23005790 10. Using BigBite to Detect DIS Electrons for the MARATHON Experiment NASA Astrophysics Data System (ADS) Hague, Tyler 2015-04-01 The MARATHON experiment will use the BigBite Spectrometer to extract F2n/F2p from the inelastic cross section ratio of 12 GeV electrons on the mirror nuclei 3He and 3H. The BigBite Spectrometer consists of a series of detectors to detect electrons and an array of electronics (the "Front End") to create triggers in the Data Acquisition System (DAQ). BigBite uses two multi-wire drift chambers to determine the track of particles passing through it, a scintillator array for timing, and two lead-glass detectors for particle identification and a measurement of energy deposition. The Front End uses a series of logic units to create triggers for the DAQ when certain combinations of detectors fire. In this talk an overview of the detectors of the BigBite spectrometer and its Front End electronics setup will be presented. This work is supported by Kent State University, NSF Grant PHY-1405814, and DOE Contract DE-AC05-06OR23177. 11. Detection of an underwater target through modulated lidar experiments at grazing incidence in a deep wave basin. PubMed Pellen, Fabrice; Jezequel, Vincent; Zion, Guy; Jeune, Bernard Le 2012-11-01 The effectiveness of a pulsed radiofrequency modulated lidar and associated processing for underwater target detection at grazing incidence was experimentally assessed in a wave basin 50 m long and 20 m deep, under different conditions of swell produced within this facility to benefit from a controlled interface. This paper reports our experiments and offline data processing results, and describes significant improvements in the probability of detection that demonstrate the interest of using such a technique in this context. PMID:23128721 12. Indirect facilitation becomes stronger with seedling age in a degraded seasonally dry forest NASA Astrophysics Data System (ADS) Torres, Romina C.; Renison, Daniel 2016-01-01 In seasonally dry forests direct facilitation by woody species due to amelioration of harsh abiotic conditions could be important during germination and early establishment of tree seedlings, and under some species but not others.
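Item 9's calibration rests on two standard trajectory statistics; here is one way they might be estimated from a uniformly sampled position series (the finite-difference velocity and function names are our assumptions, not the authors' code):

```python
import numpy as np

def msd(x, dt, max_lag):
    """Mean-square displacement <(x(t+tau) - x(t))^2> for lags 1..max_lag samples."""
    lags = np.arange(1, max_lag + 1)
    return lags * dt, np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

def vacf(x, dt, max_lag):
    """Velocity autocorrelation function from finite-difference velocities."""
    v = np.diff(x) / dt
    lags = np.arange(0, max_lag)
    return lags * dt, np.array([np.mean(v[l:] * v[:len(v) - l]) for l in lags])
```

Fitting these curves to the hydrodynamic theory of a harmonically bound sphere then yields the trap stiffness, bead size, or fluid viscosity, as the abstract describes.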
Recent research suggests that at later stages facilitation by woody species may be indirect due to protection of saplings from herbivores, implying that in the absence of herbivores reforestation programs may plant saplings in unprotected open sites. We used the native tree Lithraea molleoides from central Argentina as a model species to test this hypothesis. We performed a seeding and planting experiment simulating early and late establishment respectively, which included 234 study plots situated in herbaceous, shrub and tree patches of differing species composition and under two herbivore treatments (grazed and ungrazed) and replicated at three sites. Seedling counts averaged 0.82% of the sown seeds after 6 months, were highest under shrubs and lowest in open patches, and were influenced by woody species composition only in tree patches (all P values < 0.05). At seedling stages we detected no influence of herbivory (P = 0.4) nor of indirect facilitation due to herbivory (herbivory × patch type P = 0.7). Survival of planted saplings was 53% after 3 years and over-winter dieback affected 76% of the saplings. At sapling stages we found an increasing importance of indirect facilitation through protection from herbivores, as we recorded the highest sapling survival and growth at tree and shrub patches and the lowest in open patches (all P values < 0.001), and a negative effect of livestock (P < 0.001) mainly on the open patches (herbivory × patch type P = 0.07 and P = 0.001 for survival and growth, respectively). We found no significant influence of woody species composition on sapling survival and growth (all P values > 0.05). We conclude that direct facilitation is involved at all studied stages while indirect facilitation becomes … 13. First detection and energy measurement of recoil ions following beta decay in a Penning trap with the WITCH experiment NASA Astrophysics Data System (ADS) Beck, M.; Coeck, S.; Kozlov, V. Yu.; Breitenfeldt, M.; Delahaye, P.; Friedag, P.; Glück, F.; Herbane, M.; Herlert, A.; Kraev, I. S.; Mader, J.; Tandecki, M.; Van Gorp, S.; Wauters, F.; Weinheimer, Ch.; Wenander, F.; Severijns, N. 2011-03-01 The WITCH experiment (Weak Interaction Trap for CHarged particles) will search for exotic interactions by investigating the β–ν angular correlation via the measurement of the recoil energy spectrum after β-decay. As a first step the recoil ions from the β−-decay of 124In stored in a Penning trap have been detected. The evidence for the detection of recoil ions is shown and the properties of the ion cloud that forms the radioactive source for the experiment in the Penning trap are presented.
There are many theoretical and practical difficulties in that measurement, such as how to weigh extending life against improving health and quality of life as well as how different quality of life improvements should be valued, but they are not my concern here. This paper addresses two related issues in assessing benefits and costs for health resource prioritization. First, should benefits be restricted only to health benefits, or include as well other non-health benefits such as economic benefits to employers from reducing the lost work time due to illness of their employees? I shall call this the Separate Spheres problem. Second, should only the direct benefits, such as extending life or reducing disability, and direct costs, such as costs of medical personnel and supplies, of health interventions be counted, or should other indirect benefits and costs be counted as well? I shall call this the Indirect Benefits problem. These two issues can have great importance for a ranking of different health interventions by either a cost/benefit or cost effectiveness analysis (CEA) standard. PMID:12773217 15. Psychotic Experiences and Working Memory: A Population-Based Study Using Signal-Detection Analysis PubMed Central Rossi, Rodolfo; Zammit, Stanley; Button, Katherine S.; Munafò, Marcus R.; Lewis, Glyn; David, Anthony S. 2016-01-01 Psychotic Experiences (PEs) during adolescence index increased risk for psychotic disorders and schizophrenia in adult life. Working memory (WM) deficits are a core feature of these disorders. Our objective was to examine the relationship between PEs and WM in a general population sample of young people in a case control study. 4744 individuals of age 17–18 from Bristol and surrounding areas (UK) were analyzed in a cross-sectional study nested within the Avon Longitudinal Study of Parents and Children (ALSPAC) birth cohort study. The dependent variable was PEs, assessed using the semi-structured Psychosis-Like Symptom Interview (PLIKSi). The independent variable was performance on a computerized numerical n-back working memory task. Signal-Detection Theory indices, including standardized hit rate, false-alarm rate, discriminability index (d’) and response bias (c) from 2-Back and 3-Back tasks were calculated. 3576 and 3527 individuals had complete data for 2-Back and 3-Back respectively. Suspected/definite PEs prevalence was 7.9% (N = 374). Strongest evidence of association was seen between PEs and false alarms on the 2-Back (odds ratio (OR) = 1.17 [95% confidence interval (CI) 1.01, 1.35]) and 3-back (OR = 1.35 [1.18, 1.54]) and with c (OR = 1.59 [1.09, 2.34]), and lower d’ (OR = 0.76 [0.65, 0.89]), on the 3-Back. Adjustment for several potential confounders, including general IQ, drug exposure and different psycho-social factors, and subsequent multiple imputation of missing data did not materially alter the results. WM is impaired in young people with PEs in the general population. False alarms, rather than poor accuracy, are more closely related to PEs. Such impairment is consistent with different neuropsychological models of psychosis focusing on signal-to-noise discrimination, probabilistic reasoning and impaired reality monitoring as a basis of psychotic symptoms. PMID:27120349 16. Nonlinear acoustic experiments for landmine detection: the significance of the top-plate normal modes NASA Astrophysics Data System (ADS) Korman, Murray S.; Alberts, W. C. K., II; Sabatier, James M.
2004-09-01 In nonlinear acoustic detection experiments involving a buried inert VS 2.2 anti-tank landmine, airborne sound at two closely spaced primary frequencies f1 and f2 couples into the ground and interacts nonlinearly with the soil-top pressure plate interface. Scattering generates soil vibration at the surface at the combination frequencies |m·f1 ± n·f2|, where m and n are integers. The normal component of the particle velocity at the soil surface has been measured with a laser Doppler velocimeter (LDV) and with a geophone by Sabatier et al. [SPIE Proceedings Vol. 4742, (695-700), 2002; Vol. 5089, (476-486), 2003] at the gravel lane test site. Spatial profiles of the particle velocity measured for both primary components and for various combination frequencies indicate that the modal structure of the mine is playing an important role. Here, an experimental modal analysis is performed on a VS 1.6 inert anti-tank mine that is resting on sand but is not buried. Five top-plate mode shapes are described. The mine is then buried in dry finely sifted natural loess soil and excited at f1 = 120 Hz and f2 = 130 Hz. Spatial profiles at the primary components and the nonlinearly generated f1 - (f2 - f1) component are characterized by a single peak. For the 2f1 + f2 and 2f2 + f1 components, the doubly peaked profiles can be attributed to the familiar mode shape of a timpani drum (that is shifted lower in frequency due to soil mass loading). Other nonlinear profiles appear to be due to a mixture of modes. This material is based upon work supported by the U. S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate under Contract DAAB15-02-C-0024. 17. Hook Wire Localization Procedure and Early Detection of Breast Cancer - Our Experience PubMed Central Dimitrovska, Maja Jakimovska; Mitreska, Nadica; Lazareska, Menka; Jovanovska, Elizabeta Stojovska; Dodevski, Ace; Stojkoski, Aleksandar 2015-01-01 AIM: The purpose of this study is to describe our experience with needle localization technique in diagnosing small breast cancers. MATERIAL AND METHODS: This retrospective study included a hundred and twenty patients with impalpable breast lesions who underwent wire localization. All patients had mammography, ultrasound exam and pathohistological results. We used a Siemens Mammomat Inspiration digital unit for diagnostic mammography, a Lorad Affinity machine with a fenestrated compression pad for wire localization, and an Acuson X300 ultrasound machine with a 10 MHz linear array probe. We used two types of wire: Bard hook wire and Kopans breast lesion localization needle, Cook. Comparative radiologic and pathologic data were collected and analyzed. RESULTS: In 120 asymptomatic women, 68 malignancies and 52 benign findings were detected with mammography and ultrasound. The mean age for patients with malignancy was 58.6 years. According to the BI-RADS classification for mammography, the distribution in our group was: BI-RADS 3 was present in 6 (8.82%) patients, BI-RADS 4 was present in 56 (82.35%) patients and BI-RADS 5 was present in 6 (8.82%) of the patients. Most wire localizations were performed under mammographic guidance in 58 of 68 patients with malignant lesions (85.29%) and with ultrasound in 10 (14.7%). According to the mammographic findings, patients with a mass on mammograms were 29 (42.65%), mass with calcifications 9 (13.23%), calcifications 20 (29.41%) and architectural distortions or asymmetry 10 (14.71%).
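The intermodulation products listed in item 16 follow directly from the |m·f1 ± n·f2| rule; a small sketch enumerating them for the paper's excitation frequencies (function name and order cutoff are our choices):

```python
def combination_frequencies(f1, f2, max_order=2):
    """Enumerate intermodulation products |m*f1 +/- n*f2| up to a given order.

    m = 0 or n = 0 terms are the plain harmonics; mixed terms are the
    nonlinearly generated combination frequencies.
    """
    freqs = set()
    for m in range(0, max_order + 1):
        for n in range(0, max_order + 1):
            if m + n == 0:
                continue
            freqs.add(abs(m * f1 + n * f2))
            freqs.add(abs(m * f1 - n * f2))
    return sorted(freqs)

print(combination_frequencies(120.0, 130.0))
# For f1 = 120 Hz, f2 = 130 Hz this includes 110 Hz = f1 - (f2 - f1),
# 370 Hz = 2*f1 + f2 and 380 Hz = 2*f2 + f1, the components profiled above.
```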
CONCLUSION: Wire localization is a well-established technique for the management of impalpable breast lesions. PMID:27275234 18. New Limits on the Direct Detection of Dark Matter with the PICASSO Experiment [Nouvelles Limites sur la Detection Directe de la Matiere Sombre avec l'Experience PICASSO] NASA Astrophysics Data System (ADS) Piro, Marie-Cecile Astronomical and cosmological observations strongly suggest the presence of an exotic form of non-relativistic, non-baryonic matter that would represent 26% of the current energy-matter content of the Universe. This so-called cold dark matter would be composed of Weakly Interacting Massive Particles (WIMPs). PICASSO (Project In CAnada to Search for Supersymmetric Objects) aims to detect directly one of the dark matter candidates proposed in the framework of supersymmetric extensions of the standard model: the neutralino. The experiment is installed in the SNOLAB underground laboratory at Sudbury (Ontario) and uses superheated C4F10 droplet detectors, a variant of the bubble chamber technique. Phase transitions in the superheated liquids are triggered by 19F recoils caused by elastic collisions with neutralinos and create an acoustic signal which is recorded by piezoelectric sensors. This thesis presents recent progress in PICASSO leading to a substantially increased sensitivity in the search for neutralinos. New fabrication and purification procedures allowed a reduction by about a factor of 10 of the major detector contamination caused by alpha emitters. Detailed studies made it possible to localize these emitters in the detectors. In addition, data analysis efforts were able to substantially improve the discrimination between alpha-particle-induced events and those created by nuclear recoils. New analysis tools were also developed in order to discriminate between particle-induced and non-particle-induced events, such as electronic backgrounds and acoustic noise signals. An important new background suppression mechanism at higher temperatures led to the present improved sensitivity of PICASSO at low WIMP masses. 19. Indirect Lightning Safety Assessment Methodology SciTech Connect Ong, M M; Perkins, M P; Brown, C G; Crull, E W; Streit, R D 2009-04-24 Lightning is a safety hazard for high explosives (HE) and their detonators. However, the current flowing from the strike point through the rebar of the building … The methodology for estimating the risk from indirect lightning effects will be presented. It has two parts: a method to determine the likelihood of a detonation given a lightning strike, and an approach for estimating the likelihood of a strike. The results of these two parts produce an overall probability of a detonation. The probability calculations are complex for five reasons: (1) lightning strikes are stochastic and relatively rare, (2) the quality of the Faraday cage varies from one facility to the next, (3) RF coupling is inherently a complex subject, (4) performance data for abnormally stressed detonators is scarce, and (5) the arc plasma physics is not well understood. Therefore, a rigorous mathematical analysis would be too complex. Instead, our methodology takes a more practical approach combining rigorous mathematical calculations where possible with empirical data when necessary. Where there is uncertainty, we compensate with conservative approximations. The goal is to determine a conservative estimate of the odds of a detonation. In Section 2, the methodology will be explained. This report will discuss topics at a high level. The reasons for selecting an approach will be justified.
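The two-part risk estimate just described multiplies a strike likelihood by a conditional detonation probability. One common way to combine them, assuming strikes follow a Poisson process (a sketch; every rate and probability below is a made-up placeholder, not a value from the report):

```python
import math

def detonation_probability(strike_rate_per_year, years, p_det_given_strike):
    """P(at least one detonation) under a thinned Poisson model.

    If strikes arrive as Poisson(rate*years) and each strike independently
    causes a detonation with probability p, detonations are Poisson with
    mean rate*years*p, so P(>=1) = 1 - exp(-rate*years*p).
    """
    lam = strike_rate_per_year * years * p_det_given_strike
    return 1.0 - math.exp(-lam)

# Hypothetical example: 0.02 strikes/year to the facility over a 30-year life,
# with a conservatively estimated 1e-3 detonation probability per strike.
print(detonation_probability(0.02, 30, 1e-3))
```

The conservative-approximation philosophy in the abstract corresponds to choosing upper bounds for both factors rather than best estimates.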
For those interested in technical details, references will be provided. In Section 3, a simple hypothetical example will be given to reinforce the concepts. While the methodology will touch on all the items shown in Figure 1, the focus of this report is the indirect effect, i.e., determining the odds of a detonation from given EM fields. Professor Martin Uman from the University of Florida has been characterizing and defining extreme lightning strikes. Using Professor Uman's research, Dr. Kimball Merewether at Sandia National Laboratory in Albuquerque calculated the EM fields inside a Faraday-cage type … 20. Preparation of recombinant African horse sickness virus VP7 antigen via a simple method and validation of a VP7-based indirect ELISA for the detection of group-specific IgG antibodies in horse sera. PubMed Maree, Sonja; Paweska, Janusz T 2005-04-01 This paper describes the production and purification of a group-specific recombinant protein VP7 of African horse sickness virus serotype 3 (AHSV-3) and validation of an I-ELISA for the detection of IgG-antibodies to VP7 in horse sera. Baculovirus-expressed VP7 crystals were purified from infected insect cells. Analytical accuracy of the I-ELISA was examined using sera (n = 38) from an experimentally infected horse, from foals born to vaccinated mares, from guinea-pigs immunized with nine serotypes of AHSV, and from sera of animals infected with other orbiviruses. Compared to traditional serological assays, the I-ELISA was more sensitive in detection of the earliest immunological response in an infected horse and declining levels of maternal immunity in foals. Antibodies to all nine serotypes of AHSV could be detected. Cross-reactivity to related orbiviruses was not observed. Diagnostic accuracy of the I-ELISA was assessed by testing sera from vaccinated horses (n = 358) residing in AHS-enzootic areas and from unvaccinated horses (n = 481) residing in an AHS-free area. Sera were categorised as positive or negative for antibodies to AHSV using virus neutralisation tests. The TG-ROC analysis was used for the selection of the cut-off value. At a cut-off of 11.9 of the high positive control serum (percentage positivity), the I-ELISA specificity was 100%, sensitivity 99.4%, and the Youden index was 0.99. PMID:15737417 1. Indirect conductimetric assay of antibacterial activities. PubMed Sawai, J; Doi, R; Maekawa, Y; Yoshikawa, T; Kojima, H 2002-11-01 The applicability of indirect conductimetric assays for evaluation of antibacterial activity was examined. The minimal inhibitory concentration (MIC) obtained by the indirect method was consistent with that by the direct conductimetric assay and the turbidity method. The indirect assay allows use of growth media, which cannot be used in the direct conductimetric assay, making it possible to evaluate the antibacterial activity of insoluble or slightly soluble materials with high turbidity, such as antibacterial ceramic powders. PMID:12407467 2. A Target Indirect Thrust Measurement Method of Pulse Detonation Engine NASA Astrophysics Data System (ADS) Huang, Xiqiao; Xiong, Yuefei; Li, Chao; Zheng, Longxi; Li, Qing 2015-05-01 An indirect thrust measurement method based on the impulse of a target plate was developed, and a new thrust measurement system (TMS) was successfully designed and constructed. A series of multi-cycle experiments on thrust measurement were conducted to investigate the feasibility of this method with the newly-built indirect TMS.
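The cut-off selection in item 20 maximizes Youden's J = sensitivity + specificity − 1 across candidate thresholds; a generic sketch of that scan (assuming labeled reference sera and that both classes are present; names are ours):

```python
import numpy as np

def youden_cutoff(values, labels):
    """Scan candidate cut-offs and return (cutoff, sensitivity, specificity, J).

    values: assay readings (e.g., percentage positivity); labels: 1 = positive
    by the reference test (virus neutralisation), 0 = negative.
    """
    best = None
    for c in np.unique(values):
        pred = values >= c
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        se = tp / (tp + fn)
        sp = tn / (tn + fp)
        j = se + sp - 1.0
        if best is None or j > best[3]:
            best = (float(c), float(se), float(sp), float(j))
    return best
```

A J of 0.99, as reported above, means the chosen cut-off leaves almost no overlap between the positive and negative reference populations.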
Thrust measurements of the PDE were made at different target plate axial positions and operating frequencies. All the experiments were conducted using gasoline as fuel and air as oxidant. The experimental results implied that the thrust of the PDE obtained using the indirect impulse method was a function of the target plate axial position, and there existed an optimum measurement position for a PDE with a diameter of 60 mm. The optimum target plate position was located at 3.33. According to the experimental results, the thrusts obtained using the indirect TMS were less than the actual values, so the observed thrust values were corrected to make them more reliable. A relatively accurate calibration formula depending on the operating frequency was found. 3. Two stage indirect evaporative cooling system DOEpatents Bourne, Richard C.; Lee, Brian E.; Callaway, Duncan 2005-08-23 A two stage indirect evaporative cooler that moves air from a blower mounted above the unit, vertically downward into dry air passages in an indirect stage and turns the air flow horizontally before leaving the indirect stage. After leaving the dry passages, a major air portion travels into the direct stage and the remainder of the air is induced by a pressure drop in the direct stage to turn 180° and return horizontally through wet passages in the indirect stage and out of the unit as exhaust air. 4. Indirect Self-Destructiveness and Emotional Intelligence. PubMed Tsirigotis, Konstantinos 2016-06-01 While emotional intelligence may have a favourable influence on the life and psychological and social functioning of the individual, indirect self-destructiveness exerts a rather negative influence. The aim of this study has been to explore possible relations between indirect self-destructiveness and emotional intelligence. A population of 260 individuals (130 females and 130 males) aged 20-30 (mean age of 24.5) was studied by using the Polish version of the chronic self-destructiveness scale and INTE, i.e., the Polish version of the assessing emotions scale. Indirect self-destructiveness has significant correlations with all variables of INTE (overall score, factor I, factor II), and these correlations are negative. The intensity of indirect self-destructiveness significantly differentiates the level of emotional intelligence and vice versa: the level of emotional intelligence significantly differentiates the intensity of indirect self-destructiveness. Indirect self-destructiveness has negative correlations with emotional intelligence as well as its components: the ability to recognize emotions and the ability to utilize emotions. The level of emotional intelligence differentiates the intensity of indirect self-destructiveness, and vice versa: the intensity of indirect self-destructiveness differentiates the level of emotional intelligence. It seems advisable to use emotional intelligence in the prophylactic and therapeutic work with persons with various types of disorders, especially with the syndrome of indirect self-destructiveness. PMID:26164838 5. Proton-induced direct and indirect damage of plasmid DNA.
PubMed Vyšín, Luděk; Pachnerová Brabcová, Kateřina; Štěpán, Václav; Moretto-Capelle, Patrick; Bugler, Beatrix; Legube, Gaelle; Cafarelli, Pierre; Casta, Romain; Champeaux, Jean Philippe; Sence, Martine; Vlk, Martin; Wagner, Richard; Štursa, Jan; Zach, Václav; Incerti, Sebastien; Juha, Libor; Davídková, Marie 2015-08-01 Clustered DNA damage induced by 10, 20 and 30 MeV protons in pBR322 plasmid DNA was investigated. Besides determination of strand breaks, additional lesions were detected using base excision repair enzymes. The plasmid was irradiated in dry form, where indirect radiation effects were almost fully suppressed, and in water solution containing only minimal residual radical scavenger. Simultaneous irradiation of the plasmid DNA in the dry form and in the solution demonstrated that the contribution of the indirect effect was prevalent. The damage composition slightly differed when comparing the results for liquid and dry samples. The obtained data were also subjected to analysis concerning different methodological approaches, particularly the influence of irradiation geometry, models used for calculation of strand break yields and interpretation of the strand breaks detected with the enzymes. It was shown that these parameters strongly affect the results. PMID:26007308 6. Study on signal intensity of low field nuclear magnetic resonance via an indirect coupling measurement NASA Astrophysics Data System (ADS) Jiang, Feng-Ying; Wang, Ning; Jin, Yi-Rong; Deng, Hui; Tian, Ye; Lang, Pei-Lin; Li, Jie; Chen, Ying-Fei; Zheng, Dong-Ning 2013-04-01 We carry out an ultra-low-field nuclear magnetic resonance (NMR) experiment based on high-Tc superconducting quantum interference devices (SQUIDs). The measurement field is in a micro-tesla range (~10 μT-100 μT) and the experiment is conducted in a home-made magnetically shielded room (MSR). The measurements are performed by the indirect coupling method in which the signal of nuclear precession is indirectly coupled to the SQUID through a tuned copper coil transformer. In such an arrangement, interference of the applied measurement and polarization fields with the SQUID sensor is avoided and the performance of the SQUID is not degraded. In order to compare the detection sensitivity obtained by using the SQUID with that achieved using a conventional low-noise amplifier, we perform the measurements using a commercial room temperature amplifier. The results show that in a wide frequency range (~1 kHz-10 kHz) the measurements with the SQUID sensor exhibit a higher signal-to-noise ratio. Further, we discuss the dependence of NMR peak magnitude on measurement frequency. We attribute the reduction of the peak magnitude at high frequency to the increased field inhomogeneity as the measurement field increases. This is verified by compensating the field gradient using three sets of gradient coils. 7. Evaluation of different indirect measures of rate of drug absorption in comparative pharmacokinetic studies. PubMed Lacey, L F; Keene, O N; Duquesnoy, C; Bye, A 1994-02-01 As indirect measures of rate of drug absorption (metrics), maximum plasma concentration (Cmax) is confounded by extent of drug absorption and the time to reach Cmax (tmax) is a discrete variable, dependent on blood sampling frequency. Building on the work of Endrenyi et al., we have compared different metrics, including Cmax/area under the curve of concentration versus time from time zero to infinity (AUC∞), partial AUC from zero to tmax (AUCp), and Cmax·tmax with simulated experiments.
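Item 6's measurement frequencies follow directly from the Larmor relation f = (γ/2π)·B, which for protons in micro-tesla fields lands in the audio band the abstract quotes. A quick check (constant from the standard proton gyromagnetic ratio; the sampled field values are our illustrations):

```python
GAMMA_P_HZ_PER_UT = 42.577478  # proton gyromagnetic ratio / 2*pi, in Hz per microtesla

def larmor_hz(b_ut):
    """Proton Larmor precession frequency (Hz) in a field of b_ut microtesla."""
    return GAMMA_P_HZ_PER_UT * b_ut

for b in (10, 50, 100, 235):
    print(f"{b:>4} uT -> {larmor_hz(b):8.1f} Hz")
# 10-100 uT gives roughly 0.4-4.3 kHz; ~235 uT would be needed to reach 10 kHz.
```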
Importantly, the performance of these metrics was assessed with the results of actual pharmacokinetic studies involving Glaxo drugs. The results of the simulated and real experiments were consistent and produced the following unambiguous findings: (1) Cmax/AUC∞ is a more powerful metric than Cmax in establishing bioequivalence when the formulations are truly bioequivalent; (2) Cmax/AUC∞ is more sensitive than Cmax at detecting differences in rate of absorption when they exist; and (3) the treatment ratios for AUCp, AUCp/AUC∞, and Cmax·tmax are very imprecisely estimated and are of no practical value as measures of rate of absorption. Of the metrics examined, Cmax/AUC∞ is the most sensitive and powerful indirect measure of rate of drug absorption in comparative pharmacokinetic studies involving immediate-release dosage forms and should be used instead of Cmax in bioequivalence testing. PMID:8169791 8. Experience in the detection and suppression of torsional vibration from mud logging data SciTech Connect Fear, M.J.; Abbassian, F. 1994-12-31 Vibration detection from mud logging systems has revealed that torsional vibration is common in harsh drilling environments, and is a major cause of bit and drillstring failures. Suppressing this type of vibration with an automated vibration detection system, torque feedback, and rigsite vibration suppression guidelines has produced a significant improvement in drilling performance. 9. Detection experiments with humans implicate visual predation as a driver of colour polymorphism dynamics in pygmy grasshoppers PubMed Central 2013-01-01 Background Animal colour patterns offer good model systems for studies of biodiversity and evolution of local adaptations. An increasingly popular approach to study the role of selection for camouflage for evolutionary trajectories of animal colour patterns is to present images of prey on paper or computer screens to human ‘predators’. Yet, few attempts have been made to confirm that rates of detection by humans can predict patterns of selection and evolutionary modifications of prey colour patterns in nature. In this study, we first analyzed encounters between human ‘predators’ and images of natural black, grey and striped colour morphs of the polymorphic Tetrix subulata pygmy grasshoppers presented on background images of unburnt, intermediate or completely burnt natural habitats. Next, we compared detection rates with estimates of capture probabilities and survival of free-ranging grasshoppers, and with estimates of relative morph frequencies in natural populations. Results The proportion of grasshoppers that were detected and time to detection depended on both the colour pattern of the prey and on the type of visual background. Grasshoppers were detected more often and faster on unburnt backgrounds than on 50% and 100% burnt backgrounds. Striped prey were detected less often than grey or black prey on unburnt backgrounds; grey prey were detected more often than black or striped prey on 50% burnt backgrounds; and black prey were detected less often than grey prey on 100% burnt backgrounds. Rates of detection mirrored previously reported rates of capture by humans of free-ranging grasshoppers, as well as morph specific survival in the wild. Rates of detection were also correlated with frequencies of striped, black and grey morphs in samples of T. subulata from natural populations that occupied the three habitat types used for the detection experiment.
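The rate metrics compared in item 7 are all simple functions of a concentration-time profile; a sketch of how they might be computed for one subject (the helper name and sample profile are hypothetical, and trapezoidal AUC(0–tlast) is used in place of the full AUC∞, which would additionally require a terminal-slope extrapolation):

```python
import numpy as np

def absorption_metrics(t, c):
    """Cmax, tmax, trapezoidal AUC(0-tlast), and the rate metric Cmax/AUC."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    i = int(np.argmax(c))
    auc = np.trapz(c, t)  # AUC(0-tlast); AUC(0-inf) would add Clast/kel
    return {"Cmax": c[i], "tmax": t[i], "AUC": auc, "Cmax_over_AUC": c[i] / auc}

# Hypothetical concentration-time profile (hours, mg/L):
t = [0, 0.5, 1, 2, 4, 8, 12]
c = [0, 3.1, 5.2, 4.4, 2.6, 0.9, 0.3]
print(absorption_metrics(t, c))
```

Normalizing Cmax by AUC removes the confounding by extent of absorption that the abstract identifies, which is why the ratio outperforms Cmax alone.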
Conclusions Our findings demonstrate that crypsis is background-dependent, and implicate visual predation … 10. Transfer Rate Edited experiment for the selective detection of Chemical Exchange via Saturation Transfer (TRE-CEST) NASA Astrophysics Data System (ADS) Friedman, Joshua I.; Xia, Ding; Regatte, Ravinder R.; Jerschow, Alexej 2015-07-01 Chemical Exchange Saturation Transfer (CEST) magnetic resonance experiments have become valuable tools in magnetic resonance for the detection of low concentration solutes with far greater sensitivity than direct detection methods. Accurate measures of rates of chemical exchange provided by CEST are of particular interest to biomedical imaging communities where variations in chemical exchange can be related to subtle variations in biomarker concentration, temperature and pH within tissues using MRI. Despite their name, however, traditional CEST methods are not truly selective for chemical exchange and instead detect all forms of magnetization transfer including through-space NOE. This ambiguity crowds CEST spectra and greatly complicates subsequent data analysis. We have developed a Transfer Rate Edited CEST experiment (TRE-CEST) that uses two different types of solute labeling in order to selectively amplify signals of rapidly exchanging proton species while simultaneously suppressing 'slower' NOE-dominated magnetization transfer processes. This approach is demonstrated in the context of both NMR and MRI, where it is used to detect the labile amide protons of proteins undergoing chemical exchange (at rates ≥ 30 s⁻¹) while simultaneously eliminating signals originating from slower (~5 s⁻¹) NOE-mediated magnetization transfer processes. TRE-CEST greatly expands the utility of CEST experiments in complex systems, and in-vivo, in particular, where it is expected to improve the quantification of chemical exchange and magnetization transfer rates while enabling new forms of imaging contrast. 11. Detecting small gravity change in field measurement: simulations and experiments of the superconducting gravimeter—iGrav NASA Astrophysics Data System (ADS) Kao, Ricky; Kabirzadeh, Hojjat; Kim, Jeong Woo; Neumeyer, Juergen; Sideris, Michael G. 2014-08-01 In order to detect small gravity changes in field measurements, such as with CO2 storage, we designed simulations and experiments to validate the capabilities of the iGrav superconducting gravimeter. Qualified data processing was important to obtain the residual gravity from the iGrav's raw gravity signals, without the tidal components, atmosphere, polar motion and hydrological effects. Two simulations and four designed experiments are presented in this study. The first simulation detected the gravity change during CO2 injection. The second simulation targeted the residual gravity signal of CO2 leakage from the main storage reservoir into a secondary space underground. The designed experiments monitored the situation of gravity anomalies in the iGrav's records. These tests focused on short-term gravity anomalies, such as gravity changes, step functions, repeat observations and gradient measurements from the iGrav, rather than on long-term tidal effects. The four laboratory experiments detected a decrease in gravity of -0.56 ± 0.15 µGal (10⁻⁸ m s⁻²) with a 92.8 kg weight on the top of the iGrav. A step function occurred in the gravity signals when the tilt control was out of balance.
We also used a professional camera dolly with a track to observe repeated horizontal movements and an electric lift table for controlled vertical movements to measure the average gradient of -2.67 ± 0.01 µGal cm⁻¹. 12. 48 CFR 31.203 - Indirect costs. Code of Federal Regulations, 2011 CFR 2011-10-01 ... coverage, the contractor shall follow the criteria and guidance in 48 CFR 9904.406 for selecting the cost... REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 31.203 Indirect... final cost objectives. No final cost objective shall have allocated to it as an indirect cost any... 13. 29 CFR 452.119 - Indirect elections. Code of Federal Regulations, 2013 CFR 2013-07-01 ... 29 Labor 2 2013-07-01 2013-07-01 false Indirect elections. 452.119 Section 452.119 Labor... STANDARDS GENERAL STATEMENT CONCERNING THE ELECTION PROVISIONS OF THE LABOR-MANAGEMENT REPORTING AND DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.119 Indirect elections. National... 14. 29 CFR 452.119 - Indirect elections. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 29 Labor 2 2010-07-01 2010-07-01 false Indirect elections. 452.119 Section 452.119 Labor... STANDARDS GENERAL STATEMENT CONCERNING THE ELECTION PROVISIONS OF THE LABOR-MANAGEMENT REPORTING AND DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.119 Indirect elections. National... 15. 29 CFR 452.119 - Indirect elections. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 29 Labor 2 2011-07-01 2011-07-01 false Indirect elections. 452.119 Section 452.119 Labor... STANDARDS GENERAL STATEMENT CONCERNING THE ELECTION PROVISIONS OF THE LABOR-MANAGEMENT REPORTING AND DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.119 Indirect elections. National... 16. 29 CFR 452.119 - Indirect elections. Code of Federal Regulations, 2012 CFR 2012-07-01 ... 29 Labor 2 2012-07-01 2012-07-01 false Indirect elections. 452.119 Section 452.119 Labor... STANDARDS GENERAL STATEMENT CONCERNING THE ELECTION PROVISIONS OF THE LABOR-MANAGEMENT REPORTING AND DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.119 Indirect elections. National... 17. 27 CFR 6.26 - Indirect interest. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Indirect interest. 6.26 Section 6.26 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Unlawful Inducements Interest in Retail License § 6.26 Indirect interest. Industry member interest in... 18. 27 CFR 6.32 - Indirect interest. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Indirect interest. 6.32 Section 6.32 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL "TIED-HOUSE" Unlawful Inducements Interest in Retail Property § 6.32 Indirect interest. Industry member interest in... 19. 27 CFR 6.26 - Indirect interest. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Indirect interest. 6.26 Section 6.26 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL "TIED-HOUSE" Unlawful Inducements Interest in Retail License § 6.26 Indirect interest. Industry member interest in... 20. 27 CFR 6.26 - Indirect interest.
Code of Federal Regulations, 2012 CFR 2012-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Indirect interest. 6.26 Section 6.26 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Unlawful Inducements Interest in Retail License § 6.26 Indirect interest. Industry member interest in... 1. 27 CFR 6.26 - Indirect interest. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Indirect interest. 6.26 Section 6.26 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL "TIED-HOUSE" Unlawful Inducements Interest in Retail License § 6.26 Indirect interest. Industry member interest in... 2. 27 CFR 6.32 - Indirect interest. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Indirect interest. 6.32 Section 6.32 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Unlawful Inducements Interest in Retail Property § 6.32 Indirect interest. Industry member interest in... 3. 27 CFR 6.32 - Indirect interest. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Indirect interest. 6.32 Section 6.32 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Unlawful Inducements Interest in Retail Property § 6.32 Indirect interest. Industry member interest in... 4. 27 CFR 6.32 - Indirect interest. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Indirect interest. 6.32 Section 6.32 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL "TIED-HOUSE" Unlawful Inducements Interest in Retail Property § 6.32 Indirect interest. Industry member interest in... 5. 19 CFR 10.1024 - Indirect materials. Code of Federal Regulations, 2013 CFR 2013-04-01 ... considered originating. Although non-originating material B must undergo the applicable tariff shift in order... Agreement Rules of Origin § 10.1024 Indirect materials. An indirect material, as defined in § 10.1002(n) of.... Korean Producer A produces good C using non-originating material B. Producer A imports... 6. 19 CFR 10.924 - Indirect materials. Code of Federal Regulations, 2013 CFR 2013-04-01 ... considered originating. Although non-originating material B must undergo the applicable tariff shift in order... Rules of Origin § 10.924 Indirect materials. An indirect material, as defined in § 10.902(m) of this subpart, will be considered to be an originating material without regard to where it is produced.... 7. Indirect Costs of Federally Supported Research. ERIC Educational Resources Information Center Brown, Kenneth T. 1981-01-01 Addressed is the problem of increasing indirect costs in federally supported research at universities and colleges. Effects of this increase are examined, using data on National Institutes of Health grants to educational institutions for examples. Discussed is the establishment of uniform indirect cost rates to modify the present policy. (CS) 8. 48 CFR 31.203 - Indirect costs. Code of Federal Regulations, 2012 CFR 2012-10-01 ...
48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Indirect costs. 31.203 Section 31.203 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL CONTRACTING REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 31.203 Indirect costs. (a) For contracts subject... 9. Indirect Costs in Universities. ACE Special Report. ERIC Educational Resources Information Center Woodrow, Raymond J. Indirect costs of sponsored research projects and educational programs are as necessary as are the direct costs. This report demonstrates that they are real costs and that sponsors such as the Federal Government receive more than equitable treatment in the computation and application of indirect costs. The areas discussed include: the computation… 10. Indirect Costs of University Research: Background Information. ERIC Educational Resources Information Center Voet, Tony Vander This paper is intended to provide a solid base of information about the treatment of indirect university research costs in various jurisdictions and to highlight some of the factors that have contributed to increased interest in the issues surrounding the funding of indirect costs of research. University research in Ontario has continued to evolve… 11. 7 CFR 2500.044 - Indirect costs. Code of Federal Regulations, 2012 CFR 2012-01-01 ... 7 Agriculture 15 2012-01-01 2012-01-01 false Indirect costs. 2500.044 Section 2500.044 Agriculture Regulations of the Department of Agriculture (Continued) OFFICE OF ADVOCACY AND OUTREACH, DEPARTMENT OF AGRICULTURE OAO FEDERAL FINANCIAL ASSISTANCE PROGRAMS-GENERAL AWARD ADMINISTRATIVE PROCEDURES Post-Award and Closeout § 2500.044 Indirect... 12. Indirect Cost Rate Composition and Myths. ERIC Educational Resources Information Center Selby, Stephen E. 1984-01-01 In response to criticism of the rise in indirect cost rates and their effect on federally funded research, the methods for calculating and applying indirect costs rates according to the new cost principles applicable to sponsored agreements are examined, and specific criticisms are addressed. (MSE) 13. 19 CFR 10.603 - Indirect materials. Code of Federal Regulations, 2012 CFR 2012-04-01 ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... States Free Trade Agreement Rules of Origin § 10.603 Indirect materials. An indirect material, as defined in § 10.582(m) of this subpart, will be considered to be an originating material without regard... 14. 46 CFR 154.1720 - Indirect refrigeration. Code of Federal Regulations, 2010 CFR 2010-10-01 ... 46 Shipping 5 2010-10-01 2010-10-01 false Indirect refrigeration. 154.1720 Section 154.1720 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR SELF-PROPELLED VESSELS CARRYING BULK LIQUEFIED GASES Special Design and Operating Requirements § 154.1720 Indirect refrigeration.... 15. 46 CFR 154.1720 - Indirect refrigeration. Code of Federal Regulations, 2011 CFR 2011-10-01 ... 46 Shipping 5 2011-10-01 2011-10-01 false Indirect refrigeration. 154.1720 Section 154.1720 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR SELF-PROPELLED VESSELS CARRYING BULK LIQUEFIED GASES Special Design and Operating Requirements § 154.1720 Indirect refrigeration.... 16. 27 CFR 6.26 - Indirect interest. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 
27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Indirect interest. 6.26... OF THE TREASURY LIQUORS "TIED-HOUSE" Unlawful Inducements Interest in Retail License § 6.26 Indirect interest. Industry member interest in retail licenses includes any interest acquired by corporate... 17. 27 CFR 6.32 - Indirect interest. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Indirect interest. 6.32... OF THE TREASURY LIQUORS "TIED-HOUSE" Unlawful Inducements Interest in Retail Property § 6.32 Indirect interest. Industry member interest in retail property includes any interest acquired by corporate... 18. Initial experiences in the photoacoustic detection of melanoma metastases in resected lymph nodes NASA Astrophysics Data System (ADS) Grootendorst, D.; Jose, J.; Van der Jagt, P.; Van der Weg, W.; Nagel, K.; Wouters, M.; Van Boven, H.; Van Leeuwen, T. G.; Steenbergen, W.; Ruers, T.; Manohar, S. 2011-03-01 Accurate lymph node analysis is essential to determine the prognosis and treatment of patients suffering from melanoma. The initial results of a tomographic photoacoustic modality to detect melanoma metastases in resected lymph nodes are presented based on phantom models and a human lymph node. The results show melanoma metastases detection is feasible and the setup is capable of distinguishing absorbing structures down to 1 mm. In addition, the use of longer laser wavelengths could result in an image containing a higher contrast ratio. Future research shall be focused on using the melanin characteristics to improve contrast and detection possibilities. 19. Experiment Observation on Acoustic Forward Scattering for Underwater Moving Object Detection NASA Astrophysics Data System (ADS) Lei, Bo; Ma, Yuan-Liang; Yang, Kun-De 2011-03-01 The problem of detecting an object in shallow water by observing changes in the acoustic field as the object passes between an acoustic source and receiver is addressed. A signal processing scheme based on forward scattering is proposed to detect the perturbed field in the presence of the moving object. The periodic LFM wideband signal is transmitted and a sudden change of field is acquired using a normalized median filter. The experimental results on the lake show that the proposed scheme is successful for the detection of a slowly moving object in the bistatic blind zone. 20. Indirect Signatures of Gravitino Dark Matter SciTech Connect Ibarra, Alejandro 2008-11-23 Supersymmetric models provide very interesting scenarios to account for the dark matter of the Universe. In this talk we discuss scenarios with gravitino dark matter in R-parity breaking vacua, which not only reproduce very naturally the observed dark matter relic density, but also lead to a thermal history of the Universe consistent with the observed abundances of primordial elements and the observed matter-antimatter asymmetry. In this class of scenarios the dark matter gravitinos are no longer stable, but decay with very long lifetimes into Standard Model particles, thus opening the possibility of their indirect detection. We have computed the expected contribution from gravitino decay to the primary cosmic rays and we have found that a gravitino with a mass of m3/2 ≈ 150 GeV and a lifetime of τ3/2 ≈ 10²⁶ s could simultaneously explain the EGRET anomaly in the extragalactic gamma-ray background and the HEAT excess in the positron fraction. 1.
Experience in the use of hyperspectral data for the detection of vegetation containing narcotic substances NASA Astrophysics Data System (ADS) Sedelnikov, V. P.; Lukashevich, E. L.; Karpukhina, O. A. 2014-12-01 This paper provides the characteristics of an experimental sample of a hyperspectral videospectrometer Sokol-SCP and presents examples of the hyperspectral data received as a result of flight tests. The results of the detection of vegetation containing narcotic substances by spectral attributes using the obtained hyperspectral information are considered. The opportunity for using the hyperspectral data for detection of cannabis and papaver sites, including those in mixed crops with masking vegetation, is confirmed. 2. High power laser diodes for the NASA direct detection laser transceiver experiment NASA Technical Reports Server (NTRS) Seery, Bernard D.; Holcomb, Terry L. 1988-01-01 High-power semiconductor laser diodes selected for use in the NASA space laser communications experiments are discussed. The diode selection rationale is reviewed, and the laser structure is shown. The theory and design of the third mirror lasers used in the experiments are addressed. 3. AMIDAS-II: Upgrade of the AMIDAS package and website for direct Dark Matter detection experiments and phenomenology NASA Astrophysics Data System (ADS) Shan, Chung-Lin 2014-12-01 In this paper, we give a detailed user's guide to the AMIDAS (A Model-Independent Data Analysis System) package and website, which is developed for online simulations and data analyses for direct Dark Matter detection experiments and phenomenology. Recently, the whole AMIDAS package and website system has been upgraded to the second phase: AMIDAS-II, to include the newly developed Bayesian analysis technique. AMIDAS has the ability to do full Monte Carlo simulations as well as to analyze real/pseudo data sets either generated by other event generating programs or recorded in direct DM detection experiments. Moreover, the AMIDAS-II package can include several “user-defined” functions into the main code: the (fitting) one-dimensional WIMP velocity distribution function, the nuclear form factors for spin-independent and spin-dependent cross sections, artificial/experimental background spectrum for both simulation and data analysis procedures, as well as different distribution functions needed in Bayesian analyses. 4. Electric breakdown and ionization detection in normal liquid and superfluid 4He for the SNS nEDM experiment NASA Astrophysics Data System (ADS) Karcz, Maciej A new experiment to search for the neutron electric dipole moment (nEDM) is under construction at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. The SNS nEDM experiment is a national collaboration spanning over 20 universities and laboratories with more than 100 physicists and engineers contributing to the research and development. The search for an nEDM is a precision test of time reversal symmetry in particle physics; in the absence of a discovery, the SNS nEDM experiment seeks to improve the present limit on the nEDM value by two orders of magnitude. A non-zero value of the nEDM would help to explain the asymmetry between matter and anti-matter in the universe by providing an additional source of charge conjugation and parity symmetry violation, a necessary ingredient in the theory of baryogenesis in the early universe.
The nEDM experiment will measure the Larmor precession frequency of neutrons by detecting scintillation from neutron capture by a dilute concentration of 3He inside a bath of superfluid 4He. Neutron capture by 3He is spin-dependent and the magnetic moments of the neutron and the 3He nucleus are comparable. A direct measurement of the precession frequency of polarized 3He and scintillation from neutron capture allows for the relative precession frequencies of 3He and the neutron to be determined. The experiment will then look for changes in the relative precession of 3He and neutrons under the influence of strong electric fields. 3He has negligible EDM and therefore any deviation due to an applied electric field would be from a nEDM. The nEDM experiment will need to apply strong electric fields inside superfluid (SF) 4He and it was necessary to investigate the ability of SF 4He to sustain electric fields. An experiment to study electric breakdown in superfluid 4He was constructed at the Indiana University Center for Exploration of Energy and Matter (CEEM). The experiment studied the electric breakdown behavior of liquid 5. Dolphin biosonar target detection in noise: wrap up of a past experiment. PubMed Au, Whitlow W L 2014-07-01 The target detection capability of bottlenose dolphins in the presence of artificial masking noise was first studied by Au and Penner [J. Acoust. Soc. Am. 70, 687-693 (1981)] in which the dolphins' target detection threshold was determined as a function of the ratio of the echo energy flux density and the estimated received noise spectral density. Such a metric was commonly used in human psychoacoustics despite the fact that the echo energy flux density is not compatible with noise spectral density which is averaged intensity per Hz. Since the earlier detection in noise studies, two important parameters, the dolphin integration time applicable to broadband clicks and the dolphin's auditory filter shape, were determined. The inclusion of these two parameters allows for the estimation of the received energy flux density of the masking noise so that the dolphin target detection can now be determined as a function of the ratio of the received energy of the echo over the received noise energy. Using an integration time of 264 μs and an auditory bandwidth of 16.7 kHz, the ratio of the echo energy to noise energy at the target detection threshold is approximately 1 dB. PMID:24993190 6. Correlation of geophysical factors with results of gravity wave detection experiments SciTech Connect Sazeeva, N.N. 1986-04-01 The possible influence of variations in the daily-average sunspot number (W) and geomagnetic-wave amplitude (Ap) on the detections of gravitational-radiation events (GREs) reported by Brown et al. (1982) for a 440-d period in 1979-1981 is investigated statistically. An era-superposition technique is applied to compare 18 GRE periods and 20 non-GRE periods of 7 d each. Both Ap and W are found to be correlated with the GRE signals (the Ap peaking on the day of the GRE), and a bias toward daylight hours for GRE detection (62 percent of GREs in daylight and 43 percent of those between 10 AM and 2 PM local time) is noted. It is inferred that shielded gravitational-wave antennas may be subject to atmospheric EM noise too weak to be detected with available magnetometers. 10 references. 7. 
Specific IgM, IgG and IgG1 directed against Toxoplasma gondii detected by flow cytometry and their potential as serologic tools to support clinical indirect fundoscopic presumed diagnosis of ocular disease. PubMed Martins, Livia Mattos; Rangel, Alba Lucinia Peixoto; Peixe, Ricardo Guerra; Silva-dos-Santos, Priscila Pinto; Lemos, Elenice Moreira; Martins-Filho, Olindo Assis; Bahia-Oliveira, Lilian Maria Garcia 2015-02-01 In the present study we evaluated the anti-Toxoplasma gondii immunoglobulin profiles of a group of 118 individuals living in an endemic area. The aim of the study was to select biomarkers to support the ophthalmological diagnosis of retinal/retinochoroidal scars presumably caused by T. gondii infection. Overall anti-T. gondii reactivity of the IgM, IgG, IgA, IgE and IgG subclasses was investigated by flow cytometry-based anti-fixed tachyzoite antibodies (FC-AFTA) in four groups of subjects, referred to as: i) TOXO(L)--seropositive patients with retinal/retinochoroidal scars presumably caused by T. gondii infection; these patients were further subdivided according to morphological aspects of their ocular scar lesions as A, B or C; ii) TOXO(NL)--seropositive patients without ocular scar lesions; iii) NEG(L)--T. gondii seronegative patients presenting retinal lesions; and iv) NEG(NL)--T. gondii seronegative patients without retinal lesions (negative controls). Our data demonstrated that anti-T. gondii IgG profiles were able to discriminate the mean reactivity of TOXO(L) from all other clinical groups. Analysis of anti-T. gondii immunoglobulin profiles revealed that IgM and IgG were good biomarkers capable of discriminating individual reactivity in patients with retinal/retinochoroidal scars presumably caused by T. gondii infection [TOXO(L)] from that seen in other clinical conditions. Furthermore, anti-T. gondii IgG1 reactivity was able to discriminate TOXO(L) from all other clinical groups. In conclusion, the pre-selected IgM, IgG and IgG1 anti-T. gondii antibody subclasses were able to segregate both TOXO(L) and the other subgroups, including the scar lesion group types (A, B, C), from other clinical conditions. These results suggest the applicability of this technique in the clinical laboratory to detect putative biomarkers for the diagnosis of ocular lesions in T. gondii-infected patients. Studies in other areas implementing the methods described in the present study 8. Operational Experience with the Scattering Matrix Arc Detection System on the JET ITER-Like Antenna NASA Astrophysics Data System (ADS) Vrancken, M.; Lerche, E.; Blackman, T.; Dumortier, P.; Durodié, F.; Evrard, M.; Goulding, R. H.; Graham, M.; Huygen, S.; Jacquet, P.; Kaye, A.; Mayoral, M.-L.; Nightingale, M. P. S.; Ongena, J.; Van Eester, D.; Van Schoor, M.; Vervier, M.; Weynants, R. 2009-11-01 The Scattering Matrix Arc Detection System (SMAD) has been fully deployed on all 4 sets of Resonant Double Loop (RDL), Vacuum Transmission Line (VTL) and Antenna Pressurised Transmission Lines (APTL) of the JET ICRF ITER-Like Antenna (ILA) and this has been indispensable for operating at low (real) T-point impedance values to investigate ELM tolerance. This paper describes the necessity of the SMAD vs VSWR (Voltage Standing Wave Ratio) protection system, SMAD commissioning, problems and a number of typical events detected by the SMAD system during operation on plasma.
10. Optical detection of middle ear infection using spectroscopic techniques: phantom experiments NASA Astrophysics Data System (ADS) Zhang, Hao; Huang, Jing; Li, Tianqi; Svanberg, Sune; Svanberg, Katarina 2015-05-01 A noninvasive optical technique, which is based on a combination of reflectance spectroscopy and gas in scattering media absorption spectroscopy, is demonstrated. It has the potential to improve diagnostics of middle ear infections. An ear phantom prepared with a tissue cavity, which was covered with scattering material, was used for spectroscopic measurements. Diffuse reflectance spectra of the phantom eardrum were measured with a reflectance probe. The presence of oxygen and water vapor as well as gas exchange in the phantom cavity were studied with a specially designed fiber-optic probe for backscattering detection geometry. The results suggest that this method can be developed for improved clinical detection of middle ear infection. 11. Evolution of spite through indirect reciprocity. PubMed Central Johnstone, Rufus A.; Bshary, Redouan 2004-01-01 How can cooperation persist in the face of a temptation to 'cheat'? Several recent papers have suggested that the answer may lie in indirect reciprocity. Altruistic individuals may benefit by eliciting altruism from observers, rather than (as in direct reciprocity) from the recipient of the aid they provide. Here, we point out that indirect reciprocity need not always favour cooperation; by contrast, it may support spiteful behaviour, which is costly for both the actor and the recipient. Existing theory suggests spite is unlikely to persist, but we demonstrate that it may do so when spiteful individuals are less likely to incur aggression from observers (a negative form of indirect reciprocity). PMID:15347514 12. 13C direct detected COCO-TOCSY: A tool for sequence specific assignment and structure determination in protonless NMR experiments NASA Astrophysics Data System (ADS) Balayssac, Stéphane; Jiménez, Beatriz; Piccioli, Mario 2006-10-01 A novel experiment is proposed to provide inter-residue sequential correlations among carbonyl spins in 13C detected, protonless NMR experiments. The COCO-TOCSY experiment connects, in proteins, two carbonyls separated from each other by three, four or even five bonds. The quantitative analysis provides structural information on backbone dihedral angles ϕ as well as on the side chain dihedral angles of Asx and Glx residues. This is the first dihedral angle constraint that can be obtained via a protonless approach. About 75% of backbone carbonyls in Calbindin D 9K, a 75 amino acid dicalcium protein, could be sequentially connected via a COCO-TOCSY spectrum.
49 3J values were measured and related to backbone ϕ angles. Structural information can be extended to the side chain orientation of amino acids containing carbonyl groups. Additionally, long range homonuclear coupling constants, 4JCC and 5JCC, could be measured. This constitutes an unprecedented case for proteins of medium and small size. 13. An experiment to detect and locate lightning associated with eruptions of Redoubt Volcano USGS Publications Warehouse Hoblitt, R.P. 1994-01-01 A commercially-available lightning-detection system was temporarily deployed near Cook Inlet, Alaska in an attempt to remotely monitor volcanogenic lightning associated with eruptions of Redoubt Volcano. The system became operational on February 14, 1990; lightning was detected in 11 and located in 9 of the 13 subsequent eruptions. The lightning was generated by ash clouds rising from pyroclastic density currents produced by collapse of a lava dome emplaced near Redoubt's summit. Lightning discharge (flash) location was controlled by topography, which channeled the density currents, and by wind direction. In individual eruptions, early flashes tended to have a negative polarity (negative charge is lowered to ground) while late flashes tended to have a positive polarity (positive charge is lowered to ground), perhaps because the charge-separation process caused coarse, rapid-settling particles to be negatively charged and fine, slow-settling particles to be positively charged. Results indicate that lightning detection and location is a useful adjunct to seismic volcano monitoring, particularly when poor weather or darkness prevents visual observation. The simultaneity of seismicity and lightning near a volcano provides the virtual certainty that an ash cloud is present. This information is crucial for aircraft safety and to warn threatened communities of impending tephra falls. The Alaska Volcano Observatory has now deployed a permanent lightning-detection network around Cook Inlet. © 1994. 14. Directed Design of Experiments (DOE) for Determining Probability of Detection (POD) Capability of NDE Systems (DOEPOD) NASA Technical Reports Server (NTRS) Generazio, Edward R. 2007-01-01 This viewgraph presentation reviews some of the problems that designers of Non-Destructive Examination (NDE) systems encounter in determining the probability of detection. According to the author, "[the] NDE community should not blindly accept statistical results due to lack of knowledge." This is an attempt to bridge the gap between people doing NDE and statisticians. 15. Solar flares in soft X-rays detected in the Coronas-F experiment NASA Astrophysics Data System (ADS) Pankov, V. M.; Prokhin, V. L.; Khavenson, N. G.; Gusev, A. A. 2009-12-01 The RPS-1 spectrometer on board the Coronas-F satellite, detecting solar X-rays in the range of 3-31.5 keV using a CdTe detector, is described and some results of the observation of weak solar flares are presented. 16. Is pepsin detected in the saliva of patients who experience pharyngeal reflux? PubMed Central Printza, A; Speletas, M; Triaridis, S; Wilson, J 2007-01-01 Objectives: To investigate if pepsin is detected, with an activity assay, in the saliva of patients with a clinical diagnosis of laryngopharyngeal reflux (LPR) and can therefore be used as a diagnostic marker of laryngopharyngeal reflux. Study design: Pilot, prospective study.
Methods: Adult participants with a clinical diagnosis of LPR collected whole saliva samples at regular intervals for a day, and upon experiencing symptoms attributed to LPR. Patients were selected on the basis of presence of severe symptoms and laryngoscopic findings of laryngopharyngeal reflux and symptoms of gastroesophageal reflux. They reported voice disorders, dysphagia, throat clearing, excessive secretions, breathing difficulties, cough, globus sensation and throat pain. Control participants reported the absence of pharyngeal and laryngeal symptoms and of symptoms of gastroesophageal reflux. Saliva samples were assayed with fibrinogen on an agarose gel plate. The detection of pepsin was based on the presence of peptic activity, which was qualitatively evaluated. Results: The control participants had negative assays. No saliva samples from the LPR patients, collected at regular sampling, tested positive for pepsin. All the samples collected in the presence of symptoms and following regurgitation episodes tested negative for pepsin. Saliva sample pH ranged from 7 to 8. Conclusions: Pepsin was not detected, with an activity assay, in the saliva of patients with a clinical diagnosis of LPR. A concentration method might be more sensitive, although saliva and swallowing physiology renders the detection of pepsin in the saliva difficult. PMID:19582210 17. The Experiences and Involvement of Grandparents in Hearing Detection and Intervention ERIC Educational Resources Information Center McNee, Chelsea M.; Jackson, Carla W. 2012-01-01 The purpose of this study was to examine the involvement of grandparents during hearing detection and intervention. Data were collected and analyzed from survey responses of 50 parents and 35 grandparents of children of varying ages who have hearing loss. Parents described important types of support that grandparents provided including frequent… 18. DETECTING LAND COVER CHANGE AT THE JORNADA EXPERIMENT RANGE, NEW MEXICO, WITH ASTER EMISSIVITIES Technology Transfer Automated Retrieval System (TEKTRAN) Multispectral thermal infrared remote sensing of surface emissivities can detect and monitor long term land cover changes over arid regions. The technique is based on the association between broadband emissivity and density of sparsely covered terrains. The association exists regardless of plant col... 19. EVALUATION OF GEOPHYSICAL METHODS FOR THE DETECTION OF SUBSURFACE TETRACHLOROETHYLENE IN CONTROLLED SPILL EXPERIMENTS EPA Science Inventory The purpose of the work was to determine the capability of various geophysical methods to detect PCE in the subsurface. Measurements were made with ten different geophysical techniques before, during, and after the PCE injection. This approach provided a clear identification of a... 20. Clinical Experiments of Communication by ALS Patient Utilizing Detecting Event-Related Potential NASA Astrophysics Data System (ADS) Kanou, Naoyuki; Sakuma, Kenji; Nakashima, Kenji Amyotrophic Lateral Sclerosis (ALS) patients are unable to successfully communicate their desires, although their mentality is normal, and so the necessity of Communication Aids (CA) for ALS patients is realized. Therefore, the authors are focused on Event-Related Potential (ERP), which is elicited primarily for the target by visual and auditory stimuli. P200, N200 and P300 are components of ERP. These are potentials that are elicited when the subject focuses attention on stimuli that appear infrequently. An ALS patient participated in two experiments.
In the first experiment, a target word out of five words on a computer display was specified. The five words were each linked to an electric appliance, allowing the ALS patient to switch on a target appliance by ERP. In the second experiment, a target word in a 5×5 matrix was specified by measuring ERP. The rows and columns of the matrix were reversed randomly. The word at the crossing point of the row and column containing the target word was specified as the target word. The rates of correct judgment in the first and second experiments were 100% using N200 and 96% using P200. For practical use of this system, it is very important to determine suitable communication algorithms for each patient by performing these experiments and evaluating the results.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6810011267662048, "perplexity": 6135.083440838443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543434.57/warc/CC-MAIN-20161202170903-00121-ip-10-31-129-80.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/70022/sh-sh-map-represents-the-category-of-sheaves-on-a-stack
# (Sh,Sh-map) represents the category of sheaves on a stack. I'm trying to understand the following theorem, but I don't think I'm reading it correctly. Let $(\mathcal{C},J)$ be a site (with a subcanonical topology). Write $\mathcal{C}/X$ for the groupoid of objects over $X\in \mathcal{C}$. Let $\mbox{Sh}:\mathcal{C}^{op} \rightarrow \mbox{Gpds}$ be the functor taking $X$ to the category of sheaves on $\mathcal{C}/X$ and isomorphisms of sheaves, and let $\mbox{Sh-map}:\mathcal{C}^{op} \rightarrow \mbox{Gpds}$ be the functor taking $X$ to the category whose objects are sheaf morphisms $\mathscr{F} \rightarrow \mathscr{G}$ and whose morphisms are commuting squares of sheaves determined by isomorphisms $\mathscr{F}_1 \stackrel{\sim}{\rightarrow} \mathscr{F}_2$ and $\mathscr{G}_1 \stackrel{\sim}{\rightarrow} \mathscr{G}_2$. These are in fact both stacks on $\mathcal{C}$, and moreover they determine a category-object $(\mbox{Sh},\mbox{Sh-map})$ in the category of stacks. Theorem: The category of sheaves on a stack $\mathscr{M}$ is equivalent to the category of morphisms of stacks $\mathscr{M} \rightarrow (\mbox{Sh,Sh-map})$. That is, the objects are the 1-morphisms and the morphisms are the 2-morphisms. I'd like to interpret this to mean that the objects of $Shv(\mathscr{M})$ are associated to 1-morphisms $\mathscr{M} \rightarrow \mbox{Sh}$, and that the morphisms of $Shv(\mathscr{M})$ are associated to 2-morphisms in $Hom_{Stacks}(\mathscr{M},\mbox{Sh})$, which in turn should be the same as 1-morphisms $\mathscr{M} \rightarrow \mbox{Sh-map}$. But there are a number of problems with this. First, given a sheaf $\mathcal{F} \in Shv(\mathscr{M})$, I'm having trouble constructing a natural transformation $\mathscr{M} \rightarrow \mbox{Sh}$. Perhaps I shouldn't, but to check this I'm using a test object $X\in \mathcal{C}$. By Yoneda, an object of $\mathscr{M}(X)$ is the same as a 1-morphism of stacks $f:X\rightarrow \mathscr{M}$, and so I obtain an object of $Sh(X)$ (i.e. a sheaf on $\mathcal{C}/X$) via $(\alpha:Y\rightarrow X) \mapsto \mathcal{F}(f\alpha:Y \rightarrow X \rightarrow \mathscr{M})$. That's natural enough. Again by Yoneda, a morphism in $\mathscr{M}(X)$ is a 2-morphism between maps $f,g:X\rightarrow \mathscr{M}$ of stacks, i.e. a section $s:X\rightarrow X\times_\mathscr{M} X$ of the projection from the 2-category fiber product. Out of this, I'm supposed to construct a natural transformation from the sheaf $(\alpha:Y\rightarrow X) \mapsto \mathcal{F}(f\alpha:Y \rightarrow X \rightarrow \mathscr{M})$ to the sheaf $(\alpha:Y\rightarrow X) \mapsto \mathcal{F}(g\alpha:Y \rightarrow X \rightarrow \mathscr{M})$. But the only structure in place to give me such a thing is a morphism in $Stacks/\mathscr{M}$ between $f\alpha$ and $g\alpha$, and I don't see how to construct this. Second, a 2-morphism between 1-morphisms $f,g\in Hom_{Stacks}(\mathscr{M},\mbox{Sh})$ is a section $s:\mathscr{M} \rightarrow \mathscr{M} \times_{\mbox{Sh}} \mathscr{M}$. Thus for any $(\alpha:X\rightarrow \mathscr{M})\in \mathscr{M}(X)$, we get an object $(\alpha,\beta:X \rightarrow \mathscr{M},\varphi:f\alpha \stackrel{\sim}{\rightarrow} g\alpha)\in (\mathscr{M}\times_{\mbox{Sh}}\mathscr{M})(X)$. On the other hand, a 1-morphism $\mathscr{M} \rightarrow \mbox{Sh-map}$ is for each $\alpha:X \rightarrow \mathscr{M}$ an arbitrary morphism on sheaves on $\mathcal{C}/X$. These can't be the same.
By the way, I've tried to do (what I think is) the right thing and work out the sheaf in $Shv(\mbox{Sh})$ associated to the 1-morphism $\mbox{Id}:\mbox{Sh} \rightarrow \mbox{Sh}$, following Yoneda and all. From the above, it's easy to see what this sheaf should do to morphisms $X\rightarrow \mbox{Sh}$ from a representable stack. But it appears that I need to make choices if I want to say what it does to arbitrary morphisms of stacks $\mathscr{N} \rightarrow \mbox{Sh}$. Perhaps instead I should take a limit or colimit over its application to the full subcategory of representable stacks over $\mathscr{N}$? The notes you are reading seem to disagree with more commonly accepted language (cf. SGA1 Exp 13, Vistoli's notes, or the Stacks project). Some of this seems to be an attempt at expository ease, e.g., the parenthetical remark in example 8.2 ("We will mention the following technical difficulties but will ignore them for now:") where "for now" really means forever. Oddly enough, one of the mentioned technical difficulties is more or less what prevents $\text{Sh}$ and $\text{Sh-map}$ from having natural stack structures in the sense of the notes - pullback is not strictly functorial. This un-naturality is why the common definition of stack is different - the notion of stack in the notes corresponds to the usual notion of stack in groupoids equipped with a splitting (or cleavage). The use of the category object $(\text{Sh}, \text{Sh-map})$ is a kludge to replace the usual stack $Sh/\mathcal{C}$ (in categories rather than groupoids) whose objects are sheaves over comma categories, and whose morphisms over any $f: U \to V$ in $\mathcal{C}$ are $f$-maps of sheaves - see Examples 3.20 and 4.11 in Vistoli. The author of the notes employs $\text{Sh-map}$ in order to add non-invertible sheaf maps, because the 2-morphisms in $Hom_{Stacks}(\mathcal{M}, \text{Sh})$ are all invertible. In other words, you have to throw away the 2-morphisms that are given to you by $\text{Sh}$, and use the larger collection of possibly non-invertible two-morphisms afforded by $\text{Sh-map}$. Once you have done that, I think your main problems are resolved. You've already worked out the object part of getting from a sheaf on $Stacks/\mathcal{M}$ to a natural transformation from $\mathcal{M}$ to $\text{Sh}$. If you have a morphism $\beta: X \to Z$ in $\mathcal{C}$, and $f: Z \to \mathcal{M}$, then $\beta$ induces a morphism of stacks over $\mathcal{M}$. If I'm not mistaken, the sheaf $\mathcal{F}$ takes this to the map in $\text{Sh}$ given by base change: $$\left( (\alpha: Y \to Z) \mapsto \mathcal{F}(f \circ \alpha) \right) \mapsto \left( (\beta^*\alpha: Y \times_Z X \to X) \mapsto \mathcal{F}(f \circ \beta \circ \beta^*\alpha) \right)$$ Similarly, you can get from a sheaf map on $Stacks/\mathcal{M}$ to a natural transformation from $\mathcal{M}$ to $\text{Sh-map}$. There seems to be a lot of additional checking necessary for proving the equivalence, which I don't feel like doing for you (sorry). Let me try to strip off all the stack language, which confuses me, and recast what I think is your question just in terms of category theory. (If I have misinterpreted which part is your question, I apologize. The only question mark in your post is in the very last line, but I don't think that's the main question.) You are, I believe, in the following situation. You have some ambient Cartesian category ($Stacks(M)$).
You have a test object $M$ in your category, which happens to be the terminal object, if I'm not mistaken, but I don't think this matters. You have a category object $C_1 \rightrightarrows C_0$ internal to your category. Then in particular for any test object $M$, there is a category (in SET) whose morphisms are the arrows $M \to C_1$, and whose objects are the arrows $M \to C_0$ --- indeed, being a category object is equivalent to being a representable presheaf valued in categories. Then you also have a theorem recognizing this category as some other more interesting category ($Sheaves(M)$). This almost sums up your paragraph after the theorem. But complicating the matter is that you don't have some ambient Cartesian category. Rather, $Stacks(M)$ is a (Cartesian) 2-category. So now we have two separate notions. Indeed, as a 2-category, it is among other things enriched in categories, so that you have in fact a category of morphisms $M \to C_0$, for example. In the previous paragraph, I was only considering this as a set. So then perhaps your question is the following. You have an ambient (Cartesian) 2-category, and let's assume it to be strict, a test object $M$, and an object $C_0$. Then $\hom(M,C_0)$ is a 1-category. The set of objects of this 1-category is the unenriched hom, which I will denote as $|\hom(M,C_0)|$. You'd like to recognize the morphisms of $\hom(M,C_0)$ as the set $|\hom(M,C_1)|$ for some particular $C_1$. Suppose that you have good Cartesian-closedness conditions, and an "inner hom" $\underline{\hom}$. Then what you'd like is a "walking arrow object" $Arr$ (which you do have with even stronger conditions, with buzzwords like "tensored over CAT"), and then $C_1 = \underline{\hom}(Arr,C_0)$. My guess is that the category of stacks on $M$ does have all of these strong closedness and tensored conditions. Moreover, my guess is that the correctly-implemented inner-hom construction in the previous paragraph, applied when $C_0 = Sh(M)$, yields $C_1 = ShMap(M)$. Maybe these are originally the versions with target CAT, not GPD, but then the trick is to post-compose with the 2-functor CAT$\to$GPD that keeps only the invertible morphisms, and make sure that this doesn't change too much. I hope this recasting helps, and that I didn't utterly misinterpret your question. This is about the limit of my category theory, and well past what I know about stacks properly.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9178980588912964, "perplexity": 202.56363743558362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587593.0/warc/CC-MAIN-20211024173743-20211024203743-00508.warc.gz"}
http://robertmarks.org/REPRINTS/2017%20Radar%20waveform%20optimization%20for%20ambiguity%20function%20properties%20and%20dynamic%20spectral%20mask%20requirements%20based%20on%20communication%20receiver%20locations.htm
# Radar waveform optimization for ambiguity function properties and dynamic spectral mask requirements based on communication receiver locations Abstract: A dynamically determined spectral mask for radar transmission, based on changing wireless communication handset locations and maximum acceptable handset interference power levels, is used to optimize the radar waveform. The alternating projections optimization is based on the spectral mask and a minimization template for the ambiguity function, which provides range/Doppler combinations for which the ambiguity should be minimized. This optimization is continually re-performed as the spectral mask is adjusted based on handset location changes. Several different scenarios are demonstrated in which this optimization is shown to be successful. This algorithm will use information about communication handsets provided through an ad-hoc network to tune the radar waveform. Date of Conference: 15-18 Jan. 2017 Date Added to IEEE Xplore: 27 March 2017 ISBN Information: Electronic ISSN: 2164-2974 Publisher: IEEE SECTION I. ## Introduction Given the increasing use of the wireless radio spectrum, new spectrum sharing techniques for radar and communication are in high demand. Dynamic spectrum sharing between radar and communications has been prominently featured in the 3.55 to 3.65 GHz and 5 GHz radar bands. We recently proposed and described the construction of a dynamic spectral mask for the radar transmission based on the locations and maximum acceptable interference power levels of nearby communication handset receivers [1]. In the present paper, we use the dynamically determined spectral mask with an alternating-projections waveform optimization to optimize the waveform for ambiguity function (AF) and spectral mask compliance [2], [3]. This method will be useful for implementing cognitive radar, defined as a radar that can sense and respond to its environment [4] and is aided by knowledge that it gains from its environment [5]. Haykin discusses adjusting radar transmitter power in order to not exceed the maximum interference temperature of a receiver [6] and altering the radar transmitter waveform based on multiple criteria [7]. Mahmoud describes spectrum shaping for communications transmission based on a flexible spectral mask and licensed users in a band [8]. Srinivasa suggests applying a spectral mask to secondary users based on interference to primary users [9]. SECTION II. ## Ambiguity-Function Waveform Optimization Using a Dynamic Spectral Mask Radar transmission can be dynamically controlled based on the real-time positions of surrounding wireless communication spectrum users and their maximum acceptable interference power levels [1]. The Friis equation can be used to find the maximum power that can be transmitted by the radar at the handset's frequency without interfering, based on acceptable receiver power: $$P_{t} = P_{r}\left(\frac{4\pi R}{\lambda}\right)^{2} \qquad (1)$$ where $P_{t}$ is the power transmitted by the radar at the handset's frequency, $P_{r}$ is the received power from the radar transmission at the wireless handset, $\lambda$ is the wavelength at the handset's frequency, and $R$ is the distance between the radar transmitter and wireless handset [1]. Antenna gains of 1 are assumed, but the method can be easily modified for non-unity gain values. Maximum acceptable interference power levels and positions of the wireless communication handsets must be available to the radar, perhaps via a wireless network.
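To make the mask computation concrete, the following is a minimal sketch, not code from the paper: the Handset record, the numeric values, and the minimum-over-handsets rule at a single frequency are illustrative assumptions. It solves (1) for each handset's transmit-power limit and takes the most restrictive one as the mask value at that frequency.

#include <cmath>
#include <cstdio>
#include <vector>
#include <algorithm>

// Hypothetical handset record: distance from the radar (m), the maximum
// interference power the handset tolerates (W), and its frequency (Hz).
struct Handset {
    double distance_m;
    double max_interference_w;
    double freq_hz;
};

// Friis with unity antenna gains, solved for the largest radar transmit
// power that keeps the handset at or below its limit:
//   P_t = P_r * (4 * pi * R / lambda)^2
double maxTransmitPowerW(const Handset& h) {
    const double kPi = 3.14159265358979323846;
    const double c = 299792458.0;           // speed of light, m/s
    const double lambda = c / h.freq_hz;    // wavelength at handset frequency
    const double ratio = 4.0 * kPi * h.distance_m / lambda;
    return h.max_interference_w * ratio * ratio;
}

int main() {
    std::vector<Handset> handsets = {
        { 500.0, 1e-12, 3.3e9 },            // illustrative values only
        { 120.0, 1e-12, 3.3e9 },
    };
    // The binding constraint at a frequency is the handset that allows the
    // least radar power, so the mask value is the minimum over handsets.
    double mask_w = maxTransmitPowerW(handsets.front());
    for (const Handset& h : handsets)
        mask_w = std::min(mask_w, maxTransmitPowerW(h));
    std::printf("mask limit at 3.3 GHz: %g W\n", mask_w);
    return 0;
}

In the paper's setting this limit would be evaluated per frequency and recomputed as handset positions change, which is what makes the mask dynamic.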
Obtaining this handset information via a wireless network is reasonable given parallel developments in wireless ad-hoc networks for cognitive radio and radar operations. Optimization of the radar waveform to meet ambiguity function requirements and spectral criteria is detailed in a recent conference paper [10] and summarized briefly here. Woodward's ambiguity function is the output of the radar's correlation operation for a waveform $x(t)$ at range displacement $\tau$ and Doppler displacement $u$ from the actual range and Doppler of a desired target [11]: $$\chi_{x}(\tau, u) = \int_{-\infty}^{\infty} x(t)\, x^{*}(t+\tau)\, e^{j 2\pi u t}\, dt \qquad (2)$$ An alternating projections approach is used to project the AF of the initial waveform onto the ambiguity minimization function $M(\tau, u)$ [10], such that $$\left|\chi_{x}(\tau, u)\right| \leq M(\tau, u) \qquad (3)$$ The minimization function is used to drive the AF to be minimized at certain range/Doppler combinations of interest in the ambiguity plane (such as range-Doppler combinations of potentially interfering targets). Projection of the AF $\chi_{x}(\tau, u)$ onto the set $\mathbf{R}$ of two-dimensional functions satisfying (3) is given in [10], where $\mathbf{B}$ is the set of range-Doppler combinations not satisfying (3). The waveform must also be projected onto the set of waveforms with pulse duration $T$ seconds and bandwidth $B$ Hz, then onto the sets of waveforms meeting energy requirements, peak-to-average power ratio (PAPR) requirements, and the spectral mask. The spectral mask projection is equivalent to filtering with the mask. SECTION III. ## Measurement Results At the operating frequency of 3.3 GHz, a vector signal generator was used for waveform generation and an oscilloscope was used to measure the waveform, both controlled by MATLAB, which was also used for scenario simulations. A scenario was constructed consisting of a radar transmitter and numerous wireless handsets communicating with two base stations. The handsets are assigned random positions and velocities within the defined spatial range, and are not allowed to leave the boundaries. The receiver sensitivity of each handset is assumed to increase with increasing distance from its base station, as the handset must discern lower-power signals. As sensitivity increases, it is assumed that the maximum acceptable interference power decreases. Thus, each handset's position with respect to both its base station and the radar transmitter will affect the dynamic spectral mask. Figure 1 shows a position scenario snapshot with several moving communication handsets, corresponding to two different communication base stations, moving near a radar transmitter. Figure 2 shows the spectral mask for the position scenario described by Fig. 1, and Figure 3 shows the AF minimization template, as well as the optimized AF after 25 iterations. No PAPR constraint is imposed in the optimization shown in Figures 2 and 3. Figure 3 shows that the AF is very small in the range/Doppler regions indicated for minimization. When a PAPR requirement is imposed on the Fig. 1 scenario, a wider-bandwidth waveform results (Fig. 4). The dynamic spectral mask notches the transmitted spectrum. Figure 5 shows that the PAPR constraint causes ambiguity to leak into the minimization regions at certain range-Doppler combinations. Figure 6 shows the waveform resulting from the minimization template of Fig. 7, using the scenario of Fig. 1 with no PAPR requirement. This template is designed to maximize range resolution, and produces a wide-bandwidth waveform (Fig. 6) that is very close to spectral mask limitations.
The AF shows good compliance with the minimization template. Fig. 1. Snapshot of position scenario with radar transmitter, communication handsets, and communication base stations after 25 iterations Fig. 2. Dynamic spectral mask (red and blue) and optimized waveform (green) derived from the Fig. 1 position scenario after 25 iterations, with no PAPR constraint. Blue dots represent allowable radar transmit power density values mapped from the handsets' maximum interference power density values. Fig. 3. Ambiguity function minimization template (left) and optimized simulated ambiguity function (right) after 25 iterations SECTION IV. ## Conclusions A dynamic radar waveform optimization has been demonstrated, based on a real-time spectral mask generated from the changing positions of surrounding communication handsets and including ambiguity function minimization objectives, spectral mask constraints, and PAPR requirements. This optimization is expected to be useful in cognitive and adaptive radar optimization to meet spectral requirements dictated by coexistence with wireless communications. Fig. 4. Dynamic spectral mask (red and blue) and optimized waveform (green) derived from the Fig. 1 position scenario after 25 iterations, with maximum PAPR of 2 dB Fig. 5. Ambiguity function minimization template (left) and optimized simulated ambiguity function (right) for the Fig. 1 position scenario and maximum PAPR of 2 dB after 25 iterations Fig. 6. Dynamic spectral mask (red and blue) and optimized waveform (green) derived from the Fig. 1 position scenario after 25 iterations Fig. 7. Ambiguity function minimization template (left) and optimized simulated ambiguity function (right) after 15 iterations of the alternating projections optimization. ### ACKNOWLEDGMENT This work has been funded by the National Science Foundation (Grant No. ECCS-1343316). ## Authors Baylor University, Waco, TX 76798, USA Baylor University, Waco, TX 76798, USA Baylor University, Waco, TX 76798, USA Naval Research Laboratory, Washington, DC 20375, USA Baylor University, Waco, TX 76798, USA
{"extraction_info": {"found_math": true, "script_math_tex": 12, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2408735305070877, "perplexity": 2044.953733212707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805466.25/warc/CC-MAIN-20171119080836-20171119100836-00164.warc.gz"}
https://bibbase.org/network/publication/baqu-acousticcorrelatesofspanishstressinfluentandnonfluentaphasiaapreliminarystudy
In ICPhS 2015. Proceedings of the 18th International Congress of Phonetic Sciences, Glasgow. The University of Glasgow. @inproceedings{baque_acoustic_2015, title = {Acoustic correlates of Spanish stress in fluent and nonfluent aphasia: a preliminary study}, booktitle = {ICPhS 2015. Proceedings of the 18th International Congress of Phonetic Sciences}, address = {Glasgow}, publisher = {The University of Glasgow}, year = {2015}, Bdsk-Url-1 = {https://www.internationalphoneticassociation.org/icphs-proceedings/ICPhS2015/Papers/ICPHS0538.pdf}}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16239731013774872, "perplexity": 6134.289926312051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00405.warc.gz"}
https://www.physicsforums.com/threads/points-of-intersection-of-parametric-lines.687916/
# Points of intersection of Parametric Lines 1. Apr 25, 2013 ### Dramen 1. The problem statement, all variables and given/known data I'm told to find the 2 points the two curves P and Q will intersect on and the parametric equations are: $$P (x=t, y=2t-1)$$ $$Q (x=3t-t^2, y=t+1)$$ 3. The attempt at a solution I know I'm supposed to set x-equations and y-equations equal to each other and solve so that $$t=3t-t^2$$ for x $$2t-1=t+1$$ for y and when I solve them I get $$t=0$$ and $$t=2$$ for x and $$t=2$$ for y. The problem is I can't seem to find another t-value for y. Also I'm not completely sure if I can use t interchangeably between the two equations when solving for them or if I should consider the t to be two separate and unique variables like $$t_1$$ and $$t_2$$ 2. Apr 25, 2013 ### LCKurtz That is the problem. You can't assume the curves cross for the same value of the parameters. So call one parameter $t$ and the other $s$ and try setting the $x$ values and $y$ values equal. 3. Apr 25, 2013 ### Dramen I did just that quite a while ago when my instructor had hinted at that idea and this is what I came up with $$t=3s-s^2$$ $$t=s(3-s)$$ so that for x t=s and t=3-s so that any same two values fit in the first equality(?) and only 1.5 solves the equality in the second for y it is $$2t-1=s+1$$ $$2t=s+2$$ so that for y t=2 and s=2 I'm still not sure how to find a second point of intersection. Since the first point is (1.5,2)? 4. Apr 25, 2013 ### LCKurtz Ignoring the other bad logic, you have two equations in two unknowns. Solve them correctly. 5. Apr 25, 2013 ### Dramen I'm no good when trying to solve for 2 unknowns algebraically like this, because my first thought is to substitute $$t=3s-s^2$$ into $$2t-1=s+1$$ but that won't work. And by graph for the first equation t=s=0 or 2 and the second t=s=2 6. Apr 25, 2013 ### LCKurtz Yes it will. Try it. 7. Apr 25, 2013 ### Dramen OK, I did that so that $$2(3s-s^2)-1=s+1$$ setting it to 0 gives me $$-2s^2+5s-2=0$$ and solving for that I get s=1/2 and s=2 then I plug those answers into $$t=3s-s^2$$ so that t=1.25 and t=2 so then my points of intersection are at P (1.25,1.5) and (2,3) Q (1.25,1.5) and (2,3) Last edited: Apr 25, 2013 8. Apr 25, 2013 ### LCKurtz So check your work. Do your s and t values work in their equations? If so, do the two curves go through those two points for the corresponding values of s and t? 9. Apr 25, 2013 ### Dramen Yep I checked it and the numbers work. Thanks for the nudges in the right direction.
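For completeness, substituting the two solutions back into both curves confirms the intersection points:

$$s=\tfrac{1}{2}:\quad Q=\left(3\cdot\tfrac{1}{2}-\left(\tfrac{1}{2}\right)^2,\ \tfrac{1}{2}+1\right)=(1.25,\ 1.5),\qquad t=1.25:\quad P=(1.25,\ 2(1.25)-1)=(1.25,\ 1.5)$$

$$s=2:\quad Q=\left(3\cdot 2-2^2,\ 2+1\right)=(2,\ 3),\qquad t=2:\quad P=(2,\ 2(2)-1)=(2,\ 3)$$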
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7965632677078247, "perplexity": 899.5112905486804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648003.58/warc/CC-MAIN-20180322205902-20180322225902-00072.warc.gz"}
https://rd.springer.com/chapter/10.1007%2F978-3-319-75771-1_21
# Tuples • John Hunt Chapter ## Abstract In this chapter we introduce Scala tuples, which are a useful (and very simple) construct for grouping instances together.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9179432392120361, "perplexity": 9541.424484799727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317274.5/warc/CC-MAIN-20190822151657-20190822173657-00274.warc.gz"}
https://community.wolfram.com/groups/-/m/t/1886323
# Incorrect reduction Posted 1 year ago Is there an incorrect theorem embedded in Wolfram Alpha, or is this a bug? See image. Any time I enter "(i (2 x^* - 1))/(2 (x^* - 1) x^*)" it reduces to "i / x^*". I need to show an equivalence, but when this is on the right hand side of the equation, it reduces to this "i/x^*" nonsense. Any ideas? I see why it's doing it (order of operations), but it's wrong. Posted 1 year ago Here's an example... Posted 1 year ago Incorrect reduction Posted 1 year ago Common factors are cancelled (standard in rational function fields). Think of it as a removable singularity, one that, well, got removed. Posted 1 year ago (2 (5 - I 7) - 1) is not the same as 2 ((5 - I 7) - 1) Posted 1 year ago Daniel, you might want to look at this again. Look at my numerical example I provided. Then go ahead and enter the same calculation using just x instead of the complex conjugate of x. It won't cancel in either of these, for the same reason it shouldn't for the former. It's inconsistent. Posted 1 year ago Ah, yes. Bad precedence. I filed a bug report for this. Posted 1 year ago Thanks, and here's a visual just in case. Posted 1 year ago And Gustavo is right'o! But not exactly as I entered it. Posted 1 year ago Okay, weird...just narrowed it down. In my first image I didn't include the equation as I entered it. It's a bug in how it's reading my input. Take a look at this image now, as it includes what I actually type. Posted 1 year ago The temporary workaround is to put additional parentheses around the 2 x^* in the numerator. Posted 1 year ago I am confused. What is x^*? Posted 1 year ago X^* means the complex conjugate of X. Posted 1 year ago I understand now. X^* is interpreted as the complex conjugate in natural language, but it is a syntax error in Mathematica. Posted 1 year ago Yep. Seems that way.
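The order-of-operations point can be made concrete: if the numerator is parsed as $2(x^*-1)$ rather than $(2x^*-1)$, the cancellation is legitimate,

$$\frac{i\left(2(x^*-1)\right)}{2(x^*-1)\,x^*}=\frac{i}{x^*},$$

whereas the intended expression

$$\frac{i\,(2x^*-1)}{2(x^*-1)\,x^*}$$

has no common factor to cancel, which is exactly why extra parentheses around the $2x^*$ term fix the result.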
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3962760865688324, "perplexity": 1419.1999170388456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585460.87/warc/CC-MAIN-20211022052742-20211022082742-00346.warc.gz"}
https://math.stackexchange.com/questions/2956529/problem-defining-root-automorphisms-on-galois-group
# Problem defining root automorphisms on Galois Group I'm trying to find the Galois Group of $$f(x)=x^4+5x^2+5$$. I've found the roots: $$\alpha_1=i \sqrt{\frac{5-\sqrt5}{2}}$$; $$\alpha_2=-i \sqrt{\frac{5-\sqrt5}{2}}$$; $$\alpha_3=i \sqrt{\frac{5+\sqrt5}{2}}$$; $$\alpha_4=-i \sqrt{\frac{5+\sqrt5}{2}}$$; And I've found that: $$\alpha_1=i \sqrt{\frac{5-\sqrt5}{2}}$$; $$\alpha_2=- \alpha_1$$; $$\alpha_3=\frac{\sqrt5}{\alpha_1}$$; $$\alpha_4=-\frac{\sqrt5}{\alpha_1}$$; But the problem is defining the automorphisms. If I define the automorphisms like this: $$\sigma_1(\alpha_1)=\alpha_1$$; $$\sigma_2(\alpha_1)=-\alpha_1$$; $$\sigma_3(\alpha_1)=\frac{\sqrt5}{\alpha_1}$$; $$\sigma_4(\alpha_1)=-\frac{\sqrt5}{\alpha_1}$$; The solution of the exercise says that I should get $$\mathbb{Z}_4$$, but none of the automorphisms gives me a generator of the whole group. Am I defining the automorphisms correctly? The automorphisms are well defined, but I ignored that $$\sqrt5 \in \mathbb{Q}(\alpha_1)$$ and $$\sqrt5=2 \alpha_1^2+5$$, so the automorphisms $$\sigma_3, \sigma_4$$ have order $$4$$
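Spelling out that last observation: since $\sqrt5=2\alpha_1^2+5$ and $\alpha_1^2=\frac{\sqrt5-5}{2}$,

$$\sigma_3(\sqrt5)=\sigma_3(2\alpha_1^2+5)=2\left(\frac{\sqrt5}{\alpha_1}\right)^{2}+5=\frac{10}{\alpha_1^2}+5=\frac{20}{\sqrt5-5}+5=-(\sqrt5+5)+5=-\sqrt5,$$

so

$$\sigma_3^2(\alpha_1)=\sigma_3\!\left(\frac{\sqrt5}{\alpha_1}\right)=\frac{\sigma_3(\sqrt5)}{\sigma_3(\alpha_1)}=\frac{-\sqrt5}{\sqrt5/\alpha_1}=-\alpha_1.$$

Hence $\sigma_3^2=\sigma_2$, so $\sigma_3$ has order $4$ and generates the group, giving $\mathbb{Z}_4$.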
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9756403565406799, "perplexity": 1157.067917370327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318375.80/warc/CC-MAIN-20190823104239-20190823130239-00251.warc.gz"}
https://www.gamedev.net/forums/topic/288733-iterating-through-vectors/
# iterating through vectors I do it a lot. Forward and back. Not only do I do it a lot, I examine a variable in each item in the vector for one reason or another. How slow and stupid is this? ex:

std::vector<CClass*> Vector;
std::vector<CClass*>::iterator my_iter;
for(my_iter = Vector.begin(); my_iter != Vector.end(); my_iter++)
{
    if((*my_iter)->someVar == someOtherVar)
    {
        DoSomeWork();
    }
}

What do you think? ##### Share on other sites Might be worth looking at functors, for a number of reasons (Meyers Effective STL has the best section on this that I've read). Then you'd end up with something like this (if I haven't got any code wrong).

// Assuming checking ints, change as required (even template if you want)
class functor
{
public:                                      // members must be public to be callable
    explicit functor(int checkVar) : checkVar_(checkVar) {}
    void operator()(const CClass* in) const  // take the pointer by value
    {
        if(in->someVar == checkVar_)
            DoSomeWork();
    }
private:
    int checkVar_;
};

// Need to include <algorithm>
std::for_each(Vector.begin(), Vector.end(), functor(someOtherVar));

Jim. Edit : Digging out the Meyers book - he cites 3 reasons for using algorithms: Efficiency (for example, in your example you calculate Vector.end() every time you run through the loop, as opposed to once only) Correctness (less likely to get errors with a library function) Maintainability (working from a known vocabulary) ##### Share on other sites Quote: Original post by JimPrice Might be worth looking at functors, for a number of reasons (Meyers Effective STL has the best section on this that I've read). Then you'd end up with something like this (if I haven't got any code wrong). *** Source Snippet Removed *** Jim. Edit : Digging out the Meyers book - he cites 3 reasons for using algorithms: Efficiency (for example, in your example you calculate Vector.end() every time you run through the loop, as opposed to once only) Correctness (less likely to get errors with a library function) Maintainability (working from a known vocabulary) I'll have to look into that. But how bad is iterating through a vector? Really bad? The ones I use aren't that huge (yet). ##### Share on other sites I'll be honest, I can't personally quantify what improvements you might make - and even Meyers, in the same article, pretty much says that the use of algorithms versus hand-crafted loops is situationally dependent - and in your example, it sounds like he'd probably hand-craft (to avoid the creation of a functor that ostensibly does very little). The other efficiency considerations (and these are the ones that are noted as more important than just re-calculating Vector.end() each time) I suspect are more relevant to specific algorithms, as opposed to for_each. The interesting example is where he talks about how the STL algorithms are likely to be optimised for the container you are using - for example, a deque is likely to store its data in one or more fixed size arrays - and therefore the STL can use this knowledge to use pointer-based traversal instead of iterator-based traversal. He quotes some (container-specific) implementations as being 20% faster than "normal" traversal. Finally, I thought I'd add some links to Guru of the Week that deal with this. Firstly, a general examination of creation of temporaries using hand-crafted loops: GotW #3. And finally my favorite: creating mastermind using algorithms: GotW #41. Jim. ##### Share on other sites What about the find() method? Does that work on vectors?
I was under the impression it didn't. I would think that would be a better way to do what I want. Basically, I've got two vectors full of pointers to one of my own classes. I want to search one vector for an object that meets a certain criterion. Once I find that object, I want to add it to the second vector (I've just been using push_back()) and then remove it from the first. I'm using vectors instead of lists because I need the my_list[] syntax. I use an int to hold my current position in the list (for various reasons). ##### Share on other sites Morning, Vector has no find member function, but the algorithm functions work with all iterators. So you'd use something like this (off the top of my head, so no guarantees it will work)

#include <vector>
#include <algorithm>

// You said one of your own classes, so taking it literally
// - although the extension to polymorphism is obvious
class Base
{
public:
    bool YesPlease;   // stuff
};

bool FindCorrectPointer(const Base* in)   // take the pointer by value
{
    if(in->YesPlease)
        return true;
    return false;
}

int main()
{
    typedef std::vector<Base*> MyTypedef;
    typedef MyTypedef::iterator MyIterator;

    MyTypedef myContainer1;
    MyTypedef myContainer2;
    MyIterator myIterator;

    // This line becomes more complicated if you want to pass parameters to the search function
    // Need to look up function adapters - like bind2nd
    // Also note find_if only returns first occurrence
    // - but you can get around this using a loop containing find_if that exits when it hits myContainer1.end()
    myIterator = std::find_if(myContainer1.begin(), myContainer1.end(), FindCorrectPointer);

    // Check to see if this element existed
    if(myIterator != myContainer1.end())
    {
        myContainer2.push_back(*myIterator);
        // erase returns an iterator to the next element in the container
        myIterator = myContainer1.erase(myIterator);
    }
}

Benefits - if later on you decide to change the storage container, you can now do it just by changing the typedef statements at the beginning (no need to worry about [] and storing ints). The one thing you might want to change is the use of find - member functions (when present) are always preferred to algorithms, for the same reasons as mentioned previously (ie they are tailored to the underlying structure of the container). HTH, Jim. Edit : loads of silly typos ##### Share on other sites The main problem with using STL algorithms for what you want to do is that algorithms work over a range of items, completely.
You want to find one value, copy it, remove it, and stop (right?). See, the really nice thing about using STL algos instead of for loops is that they're more predictable... i.e. when you see a for loop, you have to go scan through its contents to see if there are any 'break's or any statements that affect the iterating variable. OTOH, when you need those things you have to write a for loop. Anyway, efficiency is not really your main worry, because erasing from vectors is damn slow... and how does that affect iterators? I know some operations invalidate iterators, but I'm not sure when or how. You should be fine though, since you don't use the iterator after you erase it. If you gave a more specific description of what you're trying to accomplish overall, STL might be able to help you do it differently (i.e. if you maintained sorted vectors, you could use lower_bound(), or maybe [] syntax isn't so important after all and you can use std::set).

##### Share on other sites

Quote: The main problem with using STL algorithms for what you want to do is that algorithms work over a range of items, completely. You want to find one value, copy it, remove it, and stop (right?).

Er - I don't think they do. The find and find_if algorithms stop when they hit the first object found - otherwise they wouldn't be able to return an iterator (I suppose they could return a std::vector of iterators, for example, but I'm guessing they don't do this for exactly the reason you point out). That's why you have to write code like this:

    std::vector<int> Vec;
    std::vector<int>::iterator It = Vec.begin();
    // Possibly populate Vec
    while(It != Vec.end())
    {
        It = std::find(It, Vec.end(), value);
        if(It != Vec.end())
        {
            // ... do something with *It here
            // (or erase, which returns an iterator to the next element of Vec)
            ++It;
        }
    }

Here's a comparison of hand-crafted, algorithm and member function code (pretty much taken from Effective STL). Take this code:

    set<int> s;
    // Populate s with 1,000,000 ints

    // Method 1:
    set<int>::iterator i; // Have to define here so I can use i outside the loop
    for(i = s.begin(); i != s.end(); ++i)
    {
        if(*i == 727)
            break;
    }

    // Method 2:
    set<int>::iterator i = std::find(s.begin(), s.end(), 727);

    // Method 3:
    set<int>::iterator i = s.find(727);

Method 1 : stops when it hits the first 727 - so could loop once or a million times.
Method 2 : stops when it hits the first 727 - so could loop once or a million times.
Method 3 : stops when it hits the first 727 - but set is stored as a red-black tree, so at most it loops about 40 times.

This is why member functions are preferred to algorithms - an algorithm has to be generic so it can work with any iterator-providing container, but a member function can be written to take advantage of the structure of the container. OK - so this doesn't make a big difference with std::vector in these simple examples (unless the underlying implementation uses pointer-based traversal, as vectors are typically stored as a big array), but if there is a change in container type then this becomes relevant.

Quote: OTOH, when you need those things you have to write a for loop.

No you don't - for the reasons mentioned above. But I do agree that the choice of hand-crafted versus algorithm is situationally dependent - but it's not as clear cut as this.

Quote: Anyway, efficiency is not really your main worry, because erasing from vectors is damn slow

This is a good point and is something I'd overlooked. Depending upon how frequently you're doing it, a std::list might be more efficient.

Quote: I know some operations invalidate iterators, but I'm not sure when or how.

Another reason for using algorithms, because they do know what's happening to the iterator.

Jim.

##### Share on other sites

It looks to me like you are using the wrong container for the job. If you are looping through the container looking for a specific value frequently, you should be using either a std::set or a std::map. This is what these containers were designed for! Each std::container has its own place and purpose; don't restrict yourself to only using vectors. I think I use almost all of the containers in my current project (deque, set, map, vector, list...). Your code would turn into (something like) this:

    std::set<int>::iterator it = my_set.find(_checkVar);
    if(it != my_set.end())
    {
        DoSomeWork();
    }

Not only does this make your code cleaner and easier to write, but it also makes it faster. Sorry if this was already suggested; I'm at work and could only skim through the thread.

[Edited by - graveyard filla on December 16, 2004 1:14:15 PM]

##### Share on other sites

Quote: Original post by benutne — I do it a lot. Forward and back.
Not only do I do it a lot, I examine a variable in each item in the vector for one reason or another. How slow and stupid is this? ex: *** Source Snippet Removed *** What do you think?

Prefer ++iter to iter++; iter++ has to create a temporary copy of the iterator's old value, so the pre-increment form is never slower.
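To pull the thread's advice together: below is a minimal modern sketch of the "find a matching element, move it to a second container, erase it from the first" pattern discussed above. This is not code from the thread — CClass and someVar are the thread's stand-in names, the function name is my own — and it assumes a C++11 or later compiler for the lambda, which postdates the original discussion.

```cpp
#include <algorithm>
#include <vector>

struct CClass { int someVar = 0; };   // stand-in for the poster's class

// Move the first element whose someVar equals 'wanted' from 'src' to 'dst'.
// Returns true if a matching element was found and moved.
bool MoveFirstMatch(std::vector<CClass*>& src, std::vector<CClass*>& dst, int wanted)
{
    auto it = std::find_if(src.begin(), src.end(),
                           [wanted](const CClass* p) { return p->someVar == wanted; });
    if (it == src.end())
        return false;               // nothing matched
    dst.push_back(*it);             // copy the pointer into the second vector
    src.erase(it);                  // erase invalidates 'it', so don't reuse it
    return true;
}
```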
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18092601001262665, "perplexity": 1860.441510703391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215487.79/warc/CC-MAIN-20180820003554-20180820023554-00140.warc.gz"}
https://jp.maplesoft.com/support/help/maple/view.aspx?path=StringTools/Sentences&L=J
StringTools - Maple Programming Help

StringTools

Sentences

approximate segmentation of a string of English text into sentences

Calling Sequence

Sentences( s )

Parameters

s - Maple string

Description

• The Sentences(s) command attempts to split a string, presumed to be composed of English language text, into its constituent sentences. It does this by recognizing sentence boundaries. The beginning and the end of the input string are regarded as sentence boundaries in all cases. Internal sentence boundaries are recognized by the presence of a sentence terminator, which is one of the following:

Period .
Exclamation point !
Question mark ?
Colon :

• A small number of built-in patterns are used to recognize some exceptions.

• Note that you can also use the RegSplit command with the fixed string "\n\n" as the splitting pattern to segment English text into paragraphs.

• All of the StringTools package commands treat strings as (null-terminated) sequences of 8-bit (ASCII) characters. Thus, there is no support for multibyte character encodings, such as unicode encodings.

Examples

> with(StringTools):

> Sentences("This is a\nsentence. Can we have another? Yes, here's one more.")

    "This is a sentence.", "Can we have another?", "Yes, here's one more."
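The description above boils down to scanning for terminator characters and cutting at them. As a rough illustration of that idea only — this is not Maple code, and it deliberately ignores the built-in exception patterns (so abbreviations like "Dr." would be split incorrectly) — a naive version in C++ might look like:

```cpp
#include <cctype>
#include <string>
#include <vector>

// Naive sentence splitter in the spirit of the description above:
// cut after '.', '!', '?' or ':' when followed by whitespace or end of input.
std::vector<std::string> NaiveSentences(const std::string& text)
{
    std::vector<std::string> sentences;
    std::string current;
    for (std::size_t i = 0; i < text.size(); ++i) {
        current += text[i];
        const bool terminator = (text[i] == '.' || text[i] == '!' ||
                                 text[i] == '?' || text[i] == ':');
        const bool boundary = (i + 1 == text.size() ||
                               std::isspace(static_cast<unsigned char>(text[i + 1])));
        if (terminator && boundary) {
            sentences.push_back(current);
            current.clear();
            while (i + 1 < text.size() &&
                   std::isspace(static_cast<unsigned char>(text[i + 1])))
                ++i;  // swallow inter-sentence whitespace
        }
    }
    if (!current.empty())
        sentences.push_back(current);  // the end of the string is a boundary too
    return sentences;
}
```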
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8759593963623047, "perplexity": 2282.397958913221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00301.warc.gz"}
https://rdrr.io/cran/ldstatsHD/man/eqCorTestByRows.html
# eqCorTestByRows: Correlation matrices test by rows

### Description

Tests whether the gth row of a correlation matrix is either non-zero or different to the same row of another correlation matrix. Allows for paired data.

### Usage

    eqCorTestByRows(D1, D2 = NULL, testStatistic = c("AS", "max"), nite = 200,
                    paired = FALSE, exact = TRUE, subMatComp = FALSE, iniP = 1,
                    finP = NULL, conf.level = 0.95)

### Arguments

D1 — first population dataset in matrix n1 x p form.

D2 — second population dataset in matrix n2 x p form. If D2 = NULL, a non-zero correlation rows test is performed instead.

testStatistic — test statistic used for the hypothesis testing: a name that uniquely identifies it, "AS" for an average-of-squares based test and "max" for an extreme value test.

nite — number of iterations used to generate the permuted samples.

paired — if TRUE, observations in D1 and D2 are assumed to be matched (n1 must be equal to n2).

exact — permuted samples method: if TRUE it forces the samples to have exactly the same number of observations in the two conditions during the exchanging process. If FALSE, permutations are made by exchanging matched observations from the two datasets randomly with probability 0.5.

subMatComp — used to reduce computational time when using the test on very high dimensional data. If TRUE, correlation sub-matrices are used; if FALSE, whole correlation matrices are computed.

iniP — only for subMatComp = TRUE and D2 != NULL. First row to be tested.

finP — only for subMatComp = TRUE and D2 != NULL. Last row to be tested.

conf.level — confidence level of the interval.

### Details

This test uses a sum-of-squares based test statistic as given by the adjusted squared correlation cor2mean.adj, as well as an extreme value based test statistic as given by max. Null distributions are approximated differently when testing equality of two correlation rows and when testing whether correlation rows are equal to zero. In the first case, permuted samples are used to construct the confidence interval (see details in eqCorrMatTest). In the latter, they are found using Monte Carlo samples. For instance, n iid observations from a normal distribution N(0,1) are generated. Then, the adjusted square (or absolute maximum) correlations between these Monte Carlo samples and the original data D1 are found.

### Value

An object of class eqCorTestByRows containing the following components:

AStest — average of squares test statistics.
pvalAS — average of squares test p-values.
ciAS — average of squares test statistic confidence interval.
Maxtest — extreme value test statistics.
pvalMax — extreme value test p-values.
ciMax — extreme value test statistic confidence interval.

### Author(s)

Caballe, Adria <[email protected]>, Natalia Bochkina and Claus Mayer.

### References

to come.

### See Also

plot.eqCorTestByRows for graphical representation.

eqCorrMatTest for testing equality of two correlation matrices.
### Examples

    #### data
    EX2 <- pcorSimulatorJoint(nobs = 200, nclusters = 3, nnodesxcluster = c(60,40,50),
                              pattern = "pow", diffType = "cluster", dataDepend = "diag",
                              pdiff = 0.5)

    #### eq corr by rows
    ## not run
    #test1 <- eqCorTestByRows(EX2$D1, EX2$D2, testStatistic = c("AS", "max"),
    #                         nite = 200, paired = TRUE, exact = TRUE, subMatComp = FALSE,
    #                         iniP = 1, finP = 40, conf.level = 0.95)
    #print(test1)

    #### zero corr by rows
    #test2 <- eqCorTestByRows(EX2$D1, testStatistic = c("AS", "max"), nite = 1000,
    #                         conf.level = 0.95)
    #print(test2)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27150505781173706, "perplexity": 5014.25915946754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121752.57/warc/CC-MAIN-20170423031201-00308-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/182889/two-problems-with-prime-numbers
# Two problems with prime numbers

Problem 1. Prove that there exists $n\in\mathbb{N}$ such that in the interval $(n^2, \ (n+1)^2)$ there are at least $1000$ prime numbers.

Problem 2. Let $s_n=p_1+p_2+...+p_n$ where $p_i$ is the $i$-th prime number. Prove that for every $n$ there exists $k\in\mathbb{N}$ such that $s_n<k^2<s_{n+1}$.

I found these two a while ago and they interested me, but I don't have any ideas.

-

Problem 2: For any positive real $x$, there is a square between $x$ and $x+2\sqrt{x}+2$. Therefore it will suffice to show that $p_{n+1}\geq 2\sqrt{s_n}+2$. We have $s_{n}\leq np_n$ and $p_{n+1}\geq p_n+2$, so we just need to show $p_n\geq 2\sqrt{np_n}$, i.e., $p_n\geq 4n$. That this holds for all sufficiently large $n$ follows either from a Chebyshev-type estimate $\pi(x)\asymp\frac{x}{\log(x)}$ (we could also use the PNT, but we don't need the full strength of that theorem), or by noting that fewer than $\frac{1}{4}$ of the residue classes mod $210=2\cdot3\cdot5\cdot7$ are coprime to $210$. We can check the statement by hand for small $n$.

There have already been a couple of answers, but here is my take on problem 1: Suppose the statement is false. It follows that $\pi(x)\leq 1000\sqrt{x}$ for all $x$. This contradicts Chebyshev's estimate $\pi(x)\asymp \frac{x}{\log(x)}$.

-

For the first one, you can prove that there is a positive integer $n$ such that $\pi((n+1)^2 - 1) - \pi(n^{2}) \geqslant 1000$, where $\pi$ is the prime counting function (i.e. the number of primes less than or equal to $n$), using the Prime Number Theorem. For the second one, I believe Bertrand's Postulate may be useful.

-

Solution to the first: By inspection of the primes, pick $n = 8715$. Note that $n^2 = 75951225 < 75951233$, and $(n+1)^2 = 75968656 > 75968723$. Now $75951233$ and $75968723$ are primes, with $\ge 1000$ primes between them, so we're done. [1]

[1] The $4446857$th prime is $75951233$ and the $4447859$th prime is $75968723$ (source: http://primes.utm.edu/nthprime/index.php). Further, $4447859 - 4446857 = 1002$.

Exercise: Show that $n = 8715$ is the minimum $n$ satisfying the claim.

-

Here's my attempt at the first part of the question; I'm not sure how valid it is, though, so I'd appreciate feedback/corrections:

It is known that $\pi(n)\sim\frac{n}{\ln{n}}$, where $\pi(n)$ is the prime counting function of $n$ (i.e. the number of primes less than or equal to $n$). Therefore, we are looking to show that there exists some $n$ such that: $$\frac{(n+1)^{2}}{2\ln{(n+1)}}-\frac{n^{2}}{2\ln{n}}\geqslant1000$$ Let us call $\pi_{d}(n)=\frac{(n+1)^{2}}{2\ln{(n+1)}}-\frac{n^{2}}{2\ln{n}}$. Therefore, we have: $$\frac{d\pi_{d}}{dn}=\left(\frac{n}{2\left(\ln{n}\right)^{2}}+\frac{n+1}{\ln{(n+1)}}\right)-\left(\frac{n}{\ln{n}}+\frac{n+1}{2\left(\ln{(n+1)}\right)^{2}}\right)$$ Therefore $\frac{d\pi_{d}}{dn}\gt0$, $\forall n\gt0$. Therefore, as $\pi_{d}(n)$ is monotonically increasing: $\exists n:\pi((n+1)^{2})-\pi(n^{2})\geqslant1000$.

-
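One small step in the first answer is worth spelling out — the claim that any interval $(x, x+2\sqrt{x}+2)$ contains a perfect square. A short verification (my addition, not from the original thread):

$$\text{Let } m = \lfloor\sqrt{x}\rfloor + 1. \text{ Then } x < m^2 \le (\sqrt{x}+1)^2 = x + 2\sqrt{x} + 1 < x + 2\sqrt{x} + 2,$$

so $m^2$ is a square strictly between $x$ and $x + 2\sqrt{x} + 2$.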
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882581830024719, "perplexity": 85.02102140386813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701314683/warc/CC-MAIN-20130516104834-00080-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.physicsoverflow.org/43100/how-find-the-speed-which-he-left-the-slope-how-do-calculate-it
# How to find the speed at which he left the slope? How do I calculate it?

A racer took off from a 30° slope which is 1 m above ground elevation. He remains in mid air for 1.5 s before touching down at point C. Determine

a) the speed at which he left the slope.
b) the horizontal distance S from A to C.
c) the height h, measured from ground elevation.
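No answer was posted on the page, and the figure the problem refers to is missing, so only the kinematic setup can be sketched here (my addition, not from the page). Assuming the take-off velocity of magnitude $v$ makes $30^\circ$ with the horizontal (the figure fixes whether it points above or below the horizontal, and where C sits), with $g = 9.81\ \mathrm{m/s^2}$:

$$x(t) = v\cos 30^\circ \, t, \qquad y(t) = 1 \pm v\sin 30^\circ \, t - \tfrac{1}{2} g t^2,$$

measured from ground elevation. Substituting the flight time $t = 1.5\ \mathrm{s}$ and the landing condition read off the figure gives $v$; then $S = x(1.5)$ and $h = |y(1.5)|$.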
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4539205729961395, "perplexity": 1809.8951506127084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347321.0/warc/CC-MAIN-20210224194337-20210224224337-00558.warc.gz"}
https://www.teachask.com/2020/11/cbse-class-10-maths-exam-2020-chapter.html
# CBSE Class 10 Maths Exam 2020: Chapter-wise Important Formulas, Theorems & Properties for Last Minute Revision

We have collated all important formulas from all chapters of CBSE Class 10 Maths. Important terms and properties which are useful in Class 10 Maths calculations are also provided in this article for quick revision before the exam.

Mathematics is one such subject which often gives nightmares to students. While Maths is a little tricky, it is not difficult. It just requires a thorough understanding of the concepts, regular practice and a good hold on all important formulas to score high in the subject.

To help students get all important formulas, theorems and properties in one place, we have collated the chapter-wise formulas along with the important terms & properties occurring in Class 10 Maths. Students must grasp all the formulas and theorems included in chapters like Triangles, Polynomials, Coordinate Geometry, Trigonometry and Mensuration, as these chapters carry high weightage in the Maths Board Exam 2020.

Check below the important formulas, terms and properties for CBSE Class 10 Maths Exam 2020:

1. Real Numbers:

Euclid's Division Algorithm (lemma): According to Euclid's Division Lemma, if we have two positive integers a and b, then there exist unique integers q and r such that a = bq + r, where 0 ≤ r < b. (Here, a = dividend, b = divisor, q = quotient and r = remainder.)

2. Polynomials:

(i) (a + b)² = a² + 2ab + b²
(ii) (a – b)² = a² – 2ab + b²
(iii) a² – b² = (a + b)(a – b)
(iv) (a + b)³ = a³ + b³ + 3ab(a + b)
(v) (a – b)³ = a³ – b³ – 3ab(a – b)
(vi) a³ + b³ = (a + b)(a² – ab + b²)
(vii) a³ – b³ = (a – b)(a² + ab + b²)
(viii) a⁴ – b⁴ = (a²)² – (b²)² = (a² + b²)(a² – b²) = (a² + b²)(a + b)(a – b)
(ix) (a + b + c)² = a² + b² + c² + 2ab + 2bc + 2ca
(x) (a + b – c)² = a² + b² + c² + 2ab – 2bc – 2ca
(xi) (a – b + c)² = a² + b² + c² – 2ab – 2bc + 2ca
(xii) (a – b – c)² = a² + b² + c² – 2ab + 2bc – 2ca
(xiii) a³ + b³ + c³ – 3abc = (a + b + c)(a² + b² + c² – ab – bc – ca)

3. Linear Equations in Two Variables:

For the pair of linear equations a₁x + b₁y + c₁ = 0 and a₂x + b₂y + c₂ = 0, the nature of the solutions is determined as follows:

(i) If a₁/a₂ ≠ b₁/b₂, then we get a unique solution and the pair of linear equations is consistent. Here, the graph consists of two intersecting lines.

(ii) If a₁/a₂ = b₁/b₂ ≠ c₁/c₂, then there exists no solution and the pair of linear equations is said to be inconsistent. Here, the graph consists of parallel lines.

(iii) If a₁/a₂ = b₁/b₂ = c₁/c₂, then there exist infinitely many solutions and the pair of lines is coincident and therefore dependent and consistent. Here, the graph consists of coincident lines.

4. Quadratic Equations:

For a quadratic equation ax² + bx + c = 0 (a ≠ 0), with Discriminant D = b² – 4ac:

• Sum of roots = –b/a
• Product of roots = c/a
• If the roots of a quadratic equation are given, then the quadratic equation can be represented as: x² – (sum of the roots)x + (product of the roots) = 0
• If D > 0, then the roots of the quadratic equation are real and unequal.
• If D = 0, then the roots of the quadratic equation are real and equal.
• If D < 0, then the roots of the quadratic equation are imaginary (not real).
5. Arithmetic Progression:

• nth Term of an Arithmetic Progression: For a given AP, where a is the first term, d is the common difference and n is the number of terms, its nth term (aₙ) is given as aₙ = a + (n − 1)d

• Sum of the First n Terms of an Arithmetic Progression, Sₙ, is given as: Sₙ = n/2 [2a + (n − 1)d]

6. Similarity of Triangles:

• If two triangles are similar, then the ratios of their corresponding sides are equal.

• Theorem on the area of similar triangles: If two triangles are similar, then the ratio of the areas of the two triangles is equal to the square of the ratio of their corresponding sides.

7. Coordinate Geometry:

• Distance Formula: Consider a line having two points A(x₁, y₁) and B(x₂, y₂); then the distance between these points is given as:
AB = √[(x₂ − x₁)² + (y₂ − y₁)²]

• Section Formula: If a point P divides a line AB with coordinates A(x₁, y₁) and B(x₂, y₂) in the ratio m:n, then the coordinates of the point P are given as:
P = ((mx₂ + nx₁)/(m + n), (my₂ + ny₁)/(m + n))

• Mid-Point Formula: The coordinates of the mid-point of a line AB with coordinates A(x₁, y₁) and B(x₂, y₂) are given as:
M = ((x₁ + x₂)/2, (y₁ + y₂)/2)

• Area of a Triangle: Consider the triangle formed by the points A(x₁, y₁), B(x₂, y₂) and C(x₃, y₃); then the area of the triangle is given as:
Area = ½ |x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)|

8. Trigonometry:

In a right-angled triangle, the Pythagoras theorem states:
(perpendicular)² + (base)² = (hypotenuse)²

Important trigonometric ratios (with P = perpendicular, B = base and H = hypotenuse):

• sin A = P/H
• cos A = B/H
• tan A = P/B
• cot A = B/P
• cosec A = H/P
• sec A = H/B

Trigonometric identities:

• sin²A + cos²A = 1
• tan²A + 1 = sec²A
• cot²A + 1 = cosec²A

Relations between the trigonometric ratios:

• tan A = sin A / cos A
• cot A = 1 / tan A
• cosec A = 1 / sin A
• sec A = 1 / cos A

Values of trigonometric ratios of 0° and 90°:

• sin 0° = 0, cos 0° = 1, tan 0° = 0
• sin 90° = 1, cos 90° = 0, tan 90° = not defined

Trigonometric ratios of complementary angles:

• sin (90° – A) = cos A
• cos (90° – A) = sin A
• tan (90° – A) = cot A
• cot (90° – A) = tan A
• sec (90° – A) = cosec A
• cosec (90° – A) = sec A

9. Circles:

Important properties related to circles:

• Equal chords of a circle are equidistant from the centre.
• The perpendicular drawn from the centre of a circle bisects the chord of the circle.
• The angle subtended at the centre by an arc is double the angle subtended at any point on the circumference of the circle.
• Angles subtended by the same arc in the same segment are equal.
• If a tangent is drawn to a circle and a chord is drawn from the point of contact, then the angle made between the chord and the tangent is equal to the angle made in the alternate segment.
• The sum of opposite angles of a cyclic quadrilateral is always 180°.

Important formulas related to circles:

• Area of a Segment of a Circle: If AB is a chord which divides the circle into two parts, then the bigger part is known as the major segment and the smaller one is called the minor segment. Here,
Area of the segment APB = Area of the sector OAPB – Area of ∆OAB

10. Mensuration:

Important formulas for surface areas and volumes of solids:

• Cuboid (l × b × h): total surface area = 2(lb + bh + hl); volume = lbh
• Cube (edge a): total surface area = 6a²; volume = a³
• Cylinder (radius r, height h): curved surface area = 2πrh; total surface area = 2πr(r + h); volume = πr²h
• Cone (radius r, height h, slant height l = √(r² + h²)): curved surface area = πrl; total surface area = πr(l + r); volume = ⅓πr²h
• Sphere (radius r): surface area = 4πr²; volume = (4/3)πr³
• Hemisphere (radius r): curved surface area = 2πr²; total surface area = 3πr²; volume = (2/3)πr³

11. Statistics:

For Ungrouped Data:

Mean: The mean value of a variable is defined as the sum of all the values of the variable divided by the number of values.

Median: The median of a set of data values is the middle value of the data set when it has been arranged in ascending order.
That is, from the smallest value to the highest value. The median is calculated as:

Median = ((n + 1)/2)th observation, where n is the number of values in the data.

If the number of values in the data set is even, then the median is the average of the two middle values.

Mode: The mode of a statistical data set is the value of the variable which has the maximum frequency.

For Grouped Data:

Mean: If x₁, x₂, x₃, ..., xₙ are observations with respective frequencies f₁, f₂, f₃, ..., fₙ, then the mean is given as:

Mean = Σfᵢxᵢ / Σfᵢ

Median: For the given data, we need the class intervals, the frequency distribution and the cumulative frequency distribution. Then, the median is calculated as:

Median = l + ((n/2 − cf)/f) × h

where l = lower limit of the median class, n = number of observations, cf = cumulative frequency of the class preceding the median class, f = frequency of the median class, h = class size (assuming class sizes to be equal).

Mode: The class interval having the highest frequency is called the modal class, and the mode is obtained using the modal class:

Mode = l + ((f₁ − f₀) / (2f₁ − f₀ − f₂)) × h

where l = lower limit of the modal class, h = size of the class interval (assuming all class sizes to be equal), f₁ = frequency of the modal class, f₀ = frequency of the class preceding the modal class, f₂ = frequency of the class succeeding the modal class.

12. Probability:

P(E) = (number of favourable outcomes) / (total number of outcomes), with 0 ≤ P(E) ≤ 1 and P(E) + P(not E) = 1.

Understanding the basic concepts and learning all the important formulas is sufficient to pass the Maths exam with flying colours. If you know the formulas well, it will not take you much time to solve the questions in the exam paper. So keep practising with the list of important formulas given above in this article.
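As a quick worked illustration of two of the formulas above (my own example, not from the article) — the AP sum and the quadratic root relations:

$$S_{10} \text{ of } 2, 5, 8, \ldots:\quad S_{10} = \tfrac{10}{2}\big[2(2) + (10-1)(3)\big] = 5 \times 31 = 155.$$

$$x^2 - 5x + 6 = 0:\quad \text{sum of roots} = -\tfrac{b}{a} = 5,\quad \text{product} = \tfrac{c}{a} = 6,\quad \text{so the roots are } 2 \text{ and } 3.$$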
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9101865887641907, "perplexity": 1291.0907637346704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00763.warc.gz"}
http://math.stackexchange.com/questions/883890/can-all-integration-be-thought-of-as-projections
# Can all integration be thought of as projections?

For example, the integral of the function f(x) could be thought of as the projection of f on the function g, where g is identically 1. Following this logic, can we think of the multiplication of f and g as giving the area between f and g, no matter how complicated f or g is? But this becomes murkier if we think of the area between f and g as the projection of f on g: how do we explain negative "area", e.g. when integrating sin(x)?

I don't know if I was taught the right calculus, but I've never seen a textbook that introduces the integral of f and g as the projection of f on g. Why is that? And if possible, can someone please point me to good notes on this subject — treating integration from a projection perspective?

-

This is one of the basic ideas behind Riesz representation. –  Adam Hughes Jul 31 '14 at 17:36

I won't do that. Integrals are more basic than projections. I mean, if somebody wakes me up at 3 a.m. and abruptly asks me what an integral is, I would probably blabber something about areas under curves, not about projections. This said, it surely is a nice train of ideas that is interesting to explore once. –  Giuseppe Negro Jul 31 '14 at 18:07

Just to expand a little, the Riesz representation theorem has these ideas embedded into it. Generically, on a Hilbert space of $\Bbb C$-valued functions, you will see $$\int f(x)\overline{g}(x)\,d\mu(x)$$ (if they have values in $\Bbb R$ the complex conjugation over $g$ disappears) as an inner product of two functions, which--you may remember from basic vector calculus--was how you talked about projecting one vector onto another $$\text{proj}_{\mathbf{v}}(\mathbf{u})={\mathbf{u}\cdot\mathbf{v}\over\lVert\mathbf{v}\rVert^2}\mathbf{v}$$ which (if you go back even further) you may remember was motivated by drawing triangles and using the law of cosines. Admittedly, there is no notion of "area" going on with this sort of thing; it measures something more like the "amount of one vector in the direction of another." However, you do have a nice interpretation of negatives, since we have the formula $$\cos(\theta)={\mathbf{u}\cdot\mathbf{v}\over \lVert\mathbf{u}\rVert\lVert\mathbf{v}\rVert}$$ where $\theta$ is the angle between the two vectors, so that negative values just mean the angle lies outside the range $-{\pi\over 2}\le\theta\le {\pi\over 2}$ (the cosine is negative there).

It's quite natural that you would not have found this in a calculus class: it requires a good amount of linear algebra, and proving that the formula above gives an inner product on a space of square-integrable functions is much harder mathematics than just calculus. Any book on Hilbert spaces which mentions a suitably generalized version of the Riesz representation theorem should be sufficiently satisfying for someone seeking to pursue this line of thought, and any good mathematics library should have $n+1$ books on the subject if you just look in the catalog for "Hilbert spaces." A quick googling of "Hilbert space" yields many results, such as these notes. A proof that $L^2$ of a measure space is a Hilbert space is a classical result in any functional analysis textbook, and follows from the Minkowski inequality.

-
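To make the "negative area" point concrete (my own example, not from the thread): projecting $f(x)=\sin x$ onto the constant function $g(x)=1$ in $L^2[0,2\pi]$ gives

$$\langle \sin, 1\rangle = \int_0^{2\pi} \sin x \, dx = 0,$$

so $\sin$ is orthogonal to the constants there — the negative "area" on $[\pi,2\pi]$ exactly cancels the positive area on $[0,\pi]$. On $[0,\pi]$ instead, $\langle \sin, 1\rangle = \int_0^{\pi}\sin x\,dx = 2 > 0$, a genuinely positive component along $g$.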
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9489361643791199, "perplexity": 290.68926003081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043062723.96/warc/CC-MAIN-20150728002422-00225-ip-10-236-191-2.ec2.internal.warc.gz"}
https://tug.org/pipermail/xetex/2009-July/013636.html
# [XeTeX] Bullets, itemize environment not working in xelatex

Apostolos Syropoulos asyropoulos at yahoo.com
Mon Jul 13 13:30:09 CEST 2009

It works just fine here. BTW, I have noticed that you are using an almost ancient version of XeTeX, which might be the root of your problem.

A.S.

----------------------
Apostolos Syropoulos
Xanthi, Greece

> From: Hajder <hajderr at gmail.com>
> To: xetex at tug.org
> Sent: Monday, July 13, 2009 2:16:40 PM
> Subject: [XeTeX] Bullets, itemize environment not working in xelatex
>
> Hi all
>
> The following code does not display bullets in the .pdf
>
> \documentclass[a4paper,12pt]{article}
> \usepackage{fontspec}
> \begin{document}
>   \begin{itemize}
>     \item Item1
>     \item Item2
>     \item Bullet not working Xelatex
>   \end{itemize}
> \end{document}
>
> The problem is when loading the fontspec package. If I remove that line
> or replace it with another package, like amsmath, the bullets are displayed.
>
> Regards,
> H
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9854718446731567, "perplexity": 24062.22367627252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509973.34/warc/CC-MAIN-20210117051021-20210117081021-00117.warc.gz"}
https://community.jmp.com/t5/Discussions/Augmenting-a-screening-DoE/td-p/11984
## Augmenting a screening DoE

Community Trekker — Joined: Apr 18, 2015

I'd like to explore beyond the lower limit of a previous DoE – is it acceptable to augment that old DoE with a new one that has all the original factors, but one or more of the factors has a new limit?

Accepted solution:

Yes, in JMP, select DOE > Augment Design. Identify the responses and factors from the initial experiment that you want to carry forward into the next design. Click Augment (last of five buttons at the bottom of the dialog). At this point, you can change the factor range, among other things.

Learn it once, use it forever!

Community Trekker — Joined: Apr 18, 2015

Thanks, MarkBailey!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008604288101196, "perplexity": 2661.949249707336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690035.53/warc/CC-MAIN-20170924152911-20170924172911-00047.warc.gz"}
https://cerncourier.com/a/neutrino-physics-shines-bright-in-heidelberg/
# Neutrino physics shines bright in Heidelberg

9 July 2018

The 28th International Conference on Neutrino Physics and Astrophysics took place in Heidelberg, Germany, on 4–9 June. It was organised by the Max Planck Institute for Nuclear Physics and the Karlsruhe Institute of Technology. With 814 registrations, 400 posters and the presence of Nobel laureates Art McDonald and Takaaki Kajita, it was the most attended of the series to date – showcasing many new results.

Several experiments presented their results for the first time at Neutrino 2018. T2K in Japan and NOvA in the US updated their results, strengthening their indication of leptonic CP violation and normal neutrino-mass ordering, and improving their precision in measuring the atmospheric oscillation parameters. Taken together with the Super-Kamiokande results on atmospheric neutrino oscillations, these experiments provide a 2σ indication of leptonic CP violation and a 3σ indication of normal mass ordering. In particular, NOvA presented the first 4σ evidence of ν̅μ → ν̅e transitions compatible with three-neutrino oscillations. The next-generation long-baseline experiments DUNE and Hyper-Kamiokande in the US and Japan, respectively, were discussed in depth. These experiments have the capability to measure CP violation and mass ordering in the neutrino sector with a sensitivity of more than 5σ, with great potential in other searches like proton decay, supernovae, solar and atmospheric neutrinos, and indirect dark-matter searches.

All the reactor experiments – Daya Bay, Double Chooz and RENO – have improved their results, providing precision measurements of the oscillation parameter θ13 and of the reactor antineutrino spectrum. The Daya Bay experiment, integrating 1958 days of data taking, with more than four million antineutrino events on tape, is capable of measuring the reactor mixing angle and the effective mass splitting with a precision of 3.4% and 2.8%, respectively. The next-generation reactor experiment JUNO, aiming to take data in 2021, was also presented.

The third day of the conference focused on neutrinoless double-beta decay (NDBD) experiments and neutrino telescopes. EXO, KamLAND-Zen, GERDA, Majorana Demonstrator, CUORE and SNO+ presented their latest NDBD search results, which probe whether neutrinos are Majorana particles, and their plans for the short-term future. The new GERDA results pushed their NDBD lifetime limit based on germanium detectors to 0.9 × 10^26 years (90% CL), which represents the best measurement yet on the way towards a zero-background next-generation NDBD experiment. CUORE also updated its limit based on tellurium to 0.15 × 10^26 years.

Neutrino telescopes are of great interest for multi-messenger studies of astrophysical objects at high energies. Both IceCube in Antarctica and ANTARES in the Mediterranean were discussed, together with their follow-up IceCube Gen2 and KM3NeT facilities. IceCube has already collected 7.5 years of data, selecting 103 events (60 of which have an energy of more than 60 TeV) with a best-fit power law of E^(–2.87). IceCube does not provide any evidence for neutrino point sources, and the measured νe:νμ:ντ neutrino-flavour composition is 0.35:0.45:0.2.

A recent development in neutrino physics has been the first observation of coherent elastic neutrino–nucleus scattering, as discussed by the COHERENT experiment (CERN Courier October 2017 p8), which opens the possibility of searches for new physics.
A very welcome development at Neutrino 2018 was the presentation of preliminary results from the KATRIN collaboration on the tritium beta-decay end-point spectrum measurement, which allows a direct measurement of neutrino masses. The experiment has just been inaugurated at KIT in Germany and aims to start data taking in early 2019, with a sensitivity of about 0.24 eV after five years. The strategic importance of a laboratory measurement of neutrino masses cannot be overestimated.

A particularly lively session at this year's event was the one devoted to sterile-neutrino searches. Five short-baseline nuclear reactor experiments (DANSS, NEOS, STEREO, PROSPECT and SoLid) presented their latest results and plans regarding the so-called reactor antineutrino anomaly. These are experiments aimed at detecting the oscillation effects of sterile neutrinos at reactors, free from any assumption about antineutrino fluxes. There was no reported evidence for sterile oscillations, with the exception of the DANSS experiment reporting a 2.8σ effect, which is not in good agreement with previous measurements of this anomaly. These experiments are only at the beginning of data taking and more refined results are expected in the near future, even though it is unlikely that any of them will be able to provide a final sterile-neutrino measurement with a sensitivity much greater than 3σ.

Further discussion was raised by the results reported by MiniBooNE at Fermilab, which reports a 4.8σ excess of electron-like events by combining their neutrino and antineutrino runs. The result is compatible with the 3.8σ excess reported by the LSND experiment about 20 years ago, in an experiment taking data in a neutrino beam created by pion decays at rest at Los Alamos. Concerns are raised by the fact that even sterile-neutrino oscillations do not fit the data very well, while backgrounds potentially do (and the MicroBooNE experiment is taking data at Fermilab with the specific purpose of precisely measuring the MiniBooNE backgrounds). Furthermore, as discussed by Michele Maltoni in his talk about the global picture of sterile neutrinos, no sterile-neutrino model can, at the same time, accommodate the presumed evidence of νμ → νe oscillations by MiniBooNE and the null results reported by several different experiments (among which is MiniBooNE itself) regarding νμ disappearance at the same Δm².

The lively sessions at Neutrino 2018, summarised in the final two beautiful talks by Francesco Vissani (theory) and Takaaki Kajita (experiment), reinforce the vitality of this field at this time (see A golden age for neutrinos).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484721779823303, "perplexity": 2846.437055993255}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373241.51/warc/CC-MAIN-20210305183324-20210305213324-00504.warc.gz"}
http://www.opengl.org/discussion_boards/archive/index.php/t-142282.html
Setup

MikeH 07-09-2001, 04:50 PM
New to OpenGL, I'm getting errors on the Borland 5.5 command-line compiler -- 'Multiple declaration of 'WINGDIAPI'' and similar messages. Has anyone heard of this? I did as follows:
glut.h, glu.h, gl.h --> \include\gl
opengl32.lib, glut32.lib, glu32.lib --> \lib\
and opengl32.dll, glut32.dll and glu32.dll --> Win\system\
Isn't that correct? I didn't compile the glut. Do I have to?
Any help would be greatly appreciated.
Tia
MikeH

JLawson 07-09-2001, 05:22 PM
AAH! I've got those same damn problems too! I keep getting "multiple declaration" errors, and I'm new to OpenGL also. So anything you find out, please tell me!

07-10-2001, 04:31 AM
Did you try to convert the dll's to Borland format?

RedZen 07-10-2001, 04:54 AM
Hehehe, you use the makelib.exe file that comes with Borland cpp. Then you have to create libs and headers for glut. If you use the standard ones it gives errors. The standard OpenGL libraries however work fine. Read up on the makelib.exe to see what parameters it takes etc. I can't remember offhand. I know it takes the dll as an argument though.

Deiussum 07-10-2001, 05:15 AM
Where did you get the glut.h you are using? With the version I have, you technically don't need to include glu.h or gl.h because the glut.h takes care of that for you. It also takes care of some other things for you. Typically, you need to include windows.h BEFORE you include gl.h because windows.h defines WINGDIAPI and APIENTRY, which are then used in gl.h. The version of glut.h that I got takes care of this for you with the following code...

    /* GLUT 3.7 now tries to avoid including <windows.h>
       to avoid name space pollution, but Win32's <GL/gl.h>
       needs APIENTRY and WINGDIAPI defined properly. */
    # if 0
    #   define WIN32_LEAN_AND_MEAN
    #   include <windows.h>
    # else
        /* XXX This is from Win32's <windef.h> */
    #   ifndef APIENTRY
    #     define GLUT_APIENTRY_DEFINED
    #     if (_MSC_VER >= 800) || defined(_STDCALL_SUPPORTED)
    #       define APIENTRY __stdcall
    #     else
    #       define APIENTRY
    #     endif
    #   endif
        /* XXX This is from Win32's <winnt.h> */
    #   ifndef CALLBACK
    #     if (defined(_M_MRX000) || defined(_M_IX86) || defined(_M_ALPHA) || defined(_M_PPC)) && !defined(MIDL_PASS)
    #       define CALLBACK __stdcall
    #     else
    #       define CALLBACK
    #     endif
    #   endif
        /* XXX This is from Win32's <wingdi.h> and <winnt.h> */
    #   ifndef WINGDIAPI
    #     define GLUT_WINGDIAPI_DEFINED
    #     define WINGDIAPI __declspec(dllimport)
    #   endif
        /* XXX This is from Win32's <ctype.h> */
    #   ifndef _WCHAR_T_DEFINED
          typedef unsigned short wchar_t;
    #     define _WCHAR_T_DEFINED
    #   endif
    # endif

MikeH 07-10-2001, 05:42 AM
Thanks Deiussum, RedZen, Stone, JLawson -- I think I got it, but I'm still not sure if it's right. I downloaded some glut version -- supposedly Borland-specific -- from a site pointed to in an early post on this board, renamed the .h's, dlls and libs, and copied over them. Deiussum: I copied/pasted this thread and saved it, for the makefile info, in case I do have to convert old files -- (tho I didn't really want to unless I *had* to, but still might). A friend sent me an OpenGL image but I don't think it is shading correctly. Can anyone tell me where I can get the 'Redbook' examples, to test it more thoroughly? Help greatly appreciated.. thanks again..
MikeH

Deiussum 07-10-2001, 06:00 AM
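The include-order point above is the usual fix for the "multiple declaration of 'WINGDIAPI'" class of errors. A minimal sketch of the safe ordering (my illustration, not code from the thread):

```cpp
// Include windows.h (or a glut.h that handles this itself) BEFORE the GL
// headers, so that APIENTRY and WINGDIAPI are defined exactly once before
// <GL/gl.h> uses them.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>   // defines WINGDIAPI / APIENTRY
#include <GL/gl.h>     // relies on those macros
#include <GL/glu.h>
```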
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099483251571655, "perplexity": 15644.872024691062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383077/warc/CC-MAIN-20130516092623-00098-ip-10-60-113-184.ec2.internal.warc.gz"}
http://tug.org/pipermail/texhax/2009-February/011815.html
# [texhax] Meaning of Code frag Philip TAYLOR (Ret'd) P.Taylor at Rhul.Ac.Uk Sun Feb 22 16:59:44 CET 2009 As a non-mathematician, it looks more to me as if it denotes the absolute value (which cannot be negative). Philip TAYLOR -------- P. R. Stanley wrote: > Hi > [start code] > f(x)=O(g(x))\mbox{ as }x\to a > if and only if there exist positive numbers d and M such that > |f(x)| \le \; M |g(x)|\mbox{ for }|x - a| < \delta. > If g(x) is non-zero for values of x > [end code] > > Is the | enclosing the functions used to denote order of growth? > > Cheers > Paul > > _______________________________________________ > TeX FAQ: http://www.tex.ac.uk/faq > Mailing list archives: http://tug.org/pipermail/texhax/
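For reference, the quoted fragment typesets to the standard big-O definition (the "d" in the prose is evidently the δ of the formula); as the reply says, the vertical bars are absolute values, not an order-of-growth notation:

$$f(x) = O(g(x)) \text{ as } x \to a \iff \exists\, \delta, M > 0 \text{ such that } |f(x)| \le M\,|g(x)| \text{ whenever } |x - a| < \delta.$$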
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9982225894927979, "perplexity": 18489.98842746974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863834.46/warc/CC-MAIN-20180620182802-20180620202802-00391.warc.gz"}
https://www.jiskha.com/questions/1821849/a-radio-tower-is-located-450-feet-from-a-building-from-a-window-in-the-building-a-person
# precalculus

A radio tower is located 450 feet from a building. From a window in the building, a person determines that the angle of elevation to the top of the tower is 24° and that the angle of depression to the bottom of the tower is 21°. How tall is the tower?

1. 450 tan(24º) + 450 tan(21º)

2. Tan21 = 450/x
   X = 1172 Ft. = hor. distance between bldg. and tower.
   Tan24 = h1/1172
   h1 = 1172*Tan24 = 522 Ft.
   ht. = 522 + 450 = 972 Ft.
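The two answers read the problem differently. Taking the statement at face value — 450 ft is the horizontal distance to the tower, as in the first answer — a quick evaluation (my own check, not part of the original page):

$$\text{height} = 450\tan 24^\circ + 450\tan 21^\circ \approx 450(0.4452) + 450(0.3839) \approx 200.4 + 172.8 \approx 373 \text{ ft.}$$

The second answer instead treats the 450 ft as a vertical distance, which is why it lands on a different number.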
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426117300987244, "perplexity": 568.3149035248537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520817.27/warc/CC-MAIN-20220517194243-20220517224243-00082.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=915726
MathSciNet bibliographic data

MR915726 32A10 (30E99 43A85)

Grinberg, Eric L. A boundary analogue of Morera's theorem in the unit ball of $\mathbf{C}^n$. Proc. Amer. Math. Soc. 102 (1988), no. 1, 114–116.
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9960771203041077, "perplexity": 4745.34658107488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701174607.44/warc/CC-MAIN-20160205193934-00160-ip-10-236-182-209.ec2.internal.warc.gz"}
https://stacks.math.columbia.edu/tag/0DW5
Lemma 47.20.3. Let $A$ be a Noetherian ring. If there exists a finite $A$-module $\omega _ A$ such that $\omega _ A[0]$ is a dualizing complex, then $A$ is Cohen-Macaulay.

Proof. We may replace $A$ by the localization at a prime (Lemma 47.15.6 and Algebra, Definition 10.104.6). In this case the result follows immediately from Lemma 47.20.2. $\square$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9936076402664185, "perplexity": 265.95507859564515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00361.warc.gz"}
https://fabricebaudoin.wordpress.com/category/uncategorized/
# Category Archives: Uncategorized ## Lecture 6. Rough paths Fall 2017 In the previous lecture we defined the Young’s integral when and with . The integral path has then a bounded -variation. Now, if is a Lipschitz map, then the integral, is only defined when , that is for . With … Continue reading ## MA5311. Take home exam Exercise 1. Solve Exercise 44 in Chapter 1 of the book. Exercise 2.  Solve Exercise 3 in Chapter 1 of the book. Exercise 3.  Solve Exercise 39 in Chapter 1 of the book. Exercise 4. The heat kernel on is given by . By … Continue reading ## MA5161. Take home exam Exercise 1. The Hermite polynomial of order is defined as Compute . Show that if is a Brownian motion, then the process is a martingale. Show that   Exercise 2. (Probabilistic proof of Liouville theorem) By using martingale methods, prove that if … Continue reading ## MA5311. Take home exam due 03/20 Solve Problems 1,2,8,9,10,11 in Milnor’s book. (Extra credit for problem 6) ## MA5161. Take home exam. Due 03/20 Exercise 1. Let . Let be a continuous Gaussian process such that for , Show that for every , there is a positive random variable such that , for every and such that for every , \textbf{Hint:} You may use … Continue reading
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994641900062561, "perplexity": 1108.5546518384474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686169.5/warc/CC-MAIN-20170920033426-20170920053426-00611.warc.gz"}
https://codedump.io/share/bZ3lTYbsbdkT/1/php-pregmatch-exact-match-and-get-whats-inside-brackets
user7133318 - 8 months ago PHP Question

# PHP preg_match exact match and get what's inside brackets

Let's say I have the following string:

``````"**link(http://google.com)*{Google}**"
``````

And I want to use preg_match to find the EXACT text `**link(http://google.com)`, but the text inside the brackets changes all the time. I used to use:

``````preg_match('#\((.*?)\)#', $text3, $match2);
``````

which would get what is inside the brackets, which is good, but if I had `*hwh(http://google.com)**` it would get what's inside of that too. So how can I get what's inside the brackets only if the text in front of the brackets is `**link`?

`~(?:\*\*link\(([^\)]+)\))~` will match contents in the brackets for all inputs that look like `**link(URL)` but do not contain extra `)` inside URLs. See the example on Regexr: http://regexr.com/3en33 . The whole example:

``````$text = '"**link(http://google.com)*{Google}**"';
if (preg_match('~(?:\*\*link\(([^\)]+)\))~', $text, $match)) {
    echo $match[1]; // http://google.com
}
``````
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8677642345428467, "perplexity": 1979.339833165394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102819.58/warc/CC-MAIN-20170817013033-20170817033033-00667.warc.gz"}
http://www.obs-hp.fr/www/preprints/pp133/PP133.HTML
# THE RICH SPECTROSCOPY OF REFLECTION NEBULAE

## C. Moutou 1, L. Verstraete 2, K. Sellgren 3, A. Léger 2

1 Observatoire de Haute Provence, St Michel, France
2 Institut d'Astrophysique Spatiale, Orsay, France
3 Ohio State University, Columbus, USA

### Abstract:

The ISO-SWS spectra of two bright reflection nebulae, NGC 7023 and NGC 2023, are presented. We discuss the emission of molecular hydrogen from these photodissociated interfaces. Details of the aromatic infrared band profiles as well as the continuum emission are also analysed. The ISO-LWS spectrum of NGC 7023 is also presented, at two positions in the nebula. The dust temperature at the brightest far-infrared position of NGC 7023 is estimated to be 45 K.

Key words: ISO; reflection nebula; dust emission bands; molecular hydrogen emission.

# INTRODUCTION

ISO is providing us with high-resolution spectra of the Aromatic Infrared Bands between 3 and 13 µm (hereafter AIBs) in a wide variety of galactic environments. At the spectral resolution of ISOCAM-CVF (λ/Δλ = 40), the AIBs have been shown to be similar in interstellar regions with effective radiation fields ranging from 1 to 10^4 times the interstellar radiation field (Boulanger et al. 1998 and references therein). Using SWS data at higher spectral resolution, we show here that there are differences between the AIB spectra of two reflection nebulae, NGC 2023 and NGC 7023. The physical conditions of the gas, associated with the AIBs, are discussed through the analysis of the molecular hydrogen emission.

# COMPARING THE SWS SPECTRA

## Observations

The SWS spectrum of NGC 7023 was obtained with the AOT01 observing mode in 1996 and is fully described in Moutou et al. (1998) and Sellgren et al. (1999, in preparation). Its mean spectral resolution is λ/Δλ = 1000 (speed 4). A restricted part of this spectrum has been observed at higher spectral resolution (Moutou et al. 1999). The SWS spectrum of NGC 2023 was observed at a spectral resolution of λ/Δλ = 500 (AOT01, speed 3) in 1998. Data reduction has been done within the SWS-Interactive Analysis environment. Our observations have higher spectral resolution than previously published mid-IR spectra of NGC 7023 and NGC 2023 from the Kuiper Airborne Observatory (Sellgren et al. 1985) and of NGC 7023 with ISOPHOT-S and ISOCAM (Laureijs et al. 1996; Cesarsky et al. 1996).

NGC 7023 and NGC 2023 are two bright reflection nebulae irradiated by hot stars. The SWS aperture for NGC 7023 was centered 27'' W 34'' N (position 1) of HD 200775 (Teff = 17,000 K). For NGC 2023, the SWS aperture was 60'' S of HD 37903 (Teff = 22,000 K). Figure 1 shows the mid-IR spectra of the two nebulae, with an offset added to NGC 7023 for clarity (see caption of Fig.1). The relative contribution of continuous emission and AIBs to the total emitted energy is comparable in both objects. More than 60% of the 3 - 20µm energy is emitted in the AIBs. Reflection nebulae have a high feature-to-continuum ratio: this makes the study of faint details in the AIB profiles much easier than in more strongly irradiated sources where the AIBs are drowned by a strong mid-IR continuum (e.g., planetary nebulae, H II region interfaces).

## Molecular hydrogen emission

We detect many pure H2 rotational lines in both nebulae as well as some ro-vibrational (or fluorescent) lines in NGC 7023. This emission comes from photodissociated gas lumped into filaments (Lemaire et al. 1996, Field et al. 1998). We derive excitation diagrams (Fig.2) from the line fluxes.
In the case of NGC 7023, our upper level column densities (from the S(0) line at 28.22µm through to the S(4) line at 8.02µm) are well-fitted by a single rotational temperature of 411 K if one adopts an ortho-to-para ratio (hereafter Rop) of 1. These results, although derived from observations with a much larger beamsize, are consistent with the findings of Lemaire et al. (1996). Indeed, the warm gas emission we detect is probably dominated by the filaments shown in the Lemaire et al. map. For NGC 2023, using the S(1) at 17.03µm through to the S(3) at 9.66µm lines, we find a rotational temperature of 333 K also with Rop = 1. While determining the rotational temperature, we have left aside higher levels (Tu > 5000 K) because their populations are strongly affected by the UV pumping (and subsequent radiative decay) of the H2 molecule. Densities of about 10^5 cm^-3 have been inferred for both nebulae (Lemaire et al. 1996; Field et al. 1998; Martini et al. 1997). At these high densities, the low rotational transitions (from upper levels with Tu < 5000 K) we are discussing here are collisionally thermalized and the rotational temperature is equal to the gas temperature, Tgas.

Our low ortho-to-para ratio confirms earlier work (Chrysostomou et al. 1993 and references therein) on photodissociated interfaces. The H2 molecule forms on the surface of dust grains and is expected to be rejected in the gas with a high vibration-rotation energy content and a Rop value close to 3. After formation, Rop can be changed by gas-phase spin exchange reactions of H2 with atomic hydrogen and protons for Tgas > 300 K or by H2-grain collisions for Tgas < 300 K. A value of Rop = 1 corresponds to an equilibrium temperature Teq ≈ 80 K (Burton et al. 1992). The fact that the dust temperature (≈ 40 K, see §3) and the gas temperature (300 - 400 K) are both very different from Teq points at the importance of out-of-equilibrium effects; this point is also highlighted by the high Rop values (between 2 and 3) predicted by recent stationary models (Draine & Bertoldi 1996). We note that H2 rotational populations in equilibrium at 40 K and 300 - 400 K correspond to Rop = 0.15 and 3 respectively.

Two scenarios have been proposed by Chrysostomou et al. to explain the observed low Rop values. First, if the newly formed molecular hydrogen resides long enough on the surface of the dust grain, Rop will be between 3 and the equilibrium value of 0.15 set by the grain temperature Tdust. To get Rop = 1, the residence time of H2 on the dust grain should be approximately half the time required for the H2 rotational populations to reach equilibrium at Tdust after formation (see Fig.9 in Chrysostomou et al.). Alternatively, in cold gas, Rop is fixed at a low value by gas-grain interactions (0.15 for Tdust = 40 K). As the photodissociation front propagates into the molecular cloud at velocities of the order of 1 km/s, cold gas is advected through the hot (Tgas of a few 100 K) interface. In the hot gas, spin exchange reactions can drive the low Rop to the observed value. The residence time of H2 on the grain goes as exp(450 K/Tdust) (Tielens & Allamandola 1987) while the rate for spin exchange reactions goes as exp(−3200 K/Tgas). From photodissociation region models, variations of Tgas are expected to be much larger than those of Tdust: Rop should thus vary much more rapidly in the second scenario (cold gas advection) than in the former (modified H2 formation).
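To illustrate how such excitation diagrams yield a rotational temperature, here is a small Python sketch of the standard Boltzmann fit, ln(Nu/gu) = const − Eu/(k Trot). The level energies are the usual H2 rotational values, but the column densities below are invented for illustration and are not the NGC 7023/NGC 2023 measurements:

```python
import numpy as np

# Upper-level energies E_u/k in K for H2 J = 2..6 (S(0)..S(4) upper levels)
E_u = np.array([510.0, 1015.0, 1682.0, 2504.0, 3474.0])
# Invented N_u/g_u values (arbitrary units), roughly thermal at ~410 K
N_over_g = np.array([1.0, 0.29, 0.057, 0.0078, 0.00073])

# Straight-line fit of ln(N_u/g_u) vs E_u: the slope is -1/T_rot
slope, intercept = np.polyfit(E_u, np.log(N_over_g), 1)
print(f"T_rot = {-1.0/slope:.0f} K")   # close to 410 K for these numbers
```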
High spatial resolution observations yielding Rop profiles across photodissociated interfaces should help discriminate the two scenarios.

## Band Profiles

We compare here qualitatively the AIB profiles of both nebulae. More quantitative results will be presented in a forthcoming paper (Sellgren et al. 1999). We remark that the AIB spectrum of NGC 2023 multiplied by 2.2 superimposes nicely to that of NGC 7023. As the AIB flux is proportional to the radiation field intensity (Sellgren et al. 1985), this suggests that the radiation field in NGC 7023 is twice as strong as that of NGC 2023 (at the positions given in §1). The two AIB spectra look very similar (width, position of the AIBs) as expected in view of the similar physical conditions (density, radiation field) derived for NGC 7023 (Lemaire et al. 1996) and NGC 2023 (Steinman-Cameron et al. 1997; Field et al. 1998). There are, however, significant spectral differences in the AIB profiles which we detail below. These differences reflect changes in the physico-chemical state of the AIB carriers.

Figure 3a shows that the 3.3 and the 3.4µm AIBs have identical profiles in both nebulae. The 3.4/3.3µm band ratio remains thus the same while the radiation field intensity is multiplied by a factor 2.

Figure 3b shows that the 6.2µm AIB is asymmetrical and varies slightly between the two nebulae. Both profiles show a pronounced wing towards long wavelengths, which is interpreted as the consequence of anharmonic couplings during the cooling of the molecule (Barker et al. 1987; Joblin et al. 1995). In NGC 2023 the local continuum is more important, with respect to the 6.2µm band, and has a steeper rise than in NGC 7023. This suggests that the underlying continuum has a different origin from the AIBs.

Figure 3c demonstrates that the 7 - 9µm range is the most complex. The main AIBs are at 7.6, 7.8, and 8.6µm. The intensity of the 7.6µm component is stronger in NGC 7023 than in NGC 2023. Furthermore, the blue shoulder at 7.45µm observed in NGC 7023 (see also Moutou et al. 1998) is not seen in NGC 2023. Roelfsema et al. (1996) and Verstraete et al. (1996) have shown that the profile of the 7.7µm AIB and the distribution of energy between the 7.6, 7.8 and 8.6µm AIBs varies with the radiation field. In the PAH model, the 7.7µm AIB falls in the spectral range where the effects of ionisation are the most dramatic (Pauzat et al. 1997; Langhoff 1996). Such a prominent 7.6µm feature as well as the 7.45µm blue wing are unusual and have only been seen towards compact H II regions (Roelfsema et al. 1996) and towards the post-AGB star HR 4049 (Molster et al. 1996). These latter authors attributed the 7.45 and 7.6µm new features to small ionized PAHs. Ionized PAHs have a strong 7.7/11.3µm band ratio (Langhoff 1996). Since the AIB spectrum in NGC 2023 is scalable to that of NGC 7023 (see Fig.3), the 7.7/11.3µm band ratio is the same for both nebulae, implying that the fraction of ionized PAHs is the same in both cases. From the absence of the 7.45µm band and the weaker 7.6µm feature in NGC 2023, we conclude that the carriers of these bands are produced by other processes than photoionization (photochemical evolution, fragmentation...).

Figure 3d shows that the 11.3µm AIB has a profile similar to the 6.2µm band. A red asymmetry is observed, which again suggests strong anharmonic effects. We note that the red wing of the 11.3µm band is more pronounced in NGC 2023. The 11.3µm band also shows some weak sub-features within its profile.
In particular, the 11.0µm band (previously detected by Witteborn et al. 1989 and Roche et al. 1991) is clearly seen in both nebulae. The 11.0µm band also appears in many objects with a similar ratio to the 11.3µm band (Molster et al. 1996; Roelfsema et al. 1996; Verstraete et al. 1996) while the 7.7/11.3µm band ratio presents large variations (compare for instance the AIB spectrum of the M17-SW interface and that of NGC 7023). This again rules out photoionization as the cause of the 11.0µm band in moderately excited sources.
47 Moutou C., Sellgren K., Verstraete L., Léger A., 1999, submitted to A&A Pauzat F., Talbi, D. & Ellinger, Y., 1997, A&A 319, 318 Roche P.F., Aitken D.K. & Smith C.H., 1991, MNRAS 252, 282 Roefselma et al., 1996, A&A 315, L289 Sellgren K., Allamandola, L.J., Bregman, J.D., Werner, M.W. & Wooden, D.H., 1985, ApJ 299, 416 Steinman-Cameron,T.Y., Haas,M.R., Tielens,A.G.G.M., 1997: ApJ 478, 261 Tielens, AGGM, & Allamandola, LJ 1987, in Interstellar Processes, ed. D. J. Hollenbach & H. A. Thronson (Dordrecht: Reidel), p. 397 Verstraete L., Puget J. L., Falgarone E., et al., 1996, A&A 315, L337 Whitcomb S.E., Gatley I., Hildebrand R.H., Keene J., Sellgren K. & Werner M.W., 1981, ApJ 246, 416 Witteborn F.C., Sandford S.A., Bregman J.D., Allamandola L.J., Cohen M., Wooden D.H. & Graps A.L., 1989, ApJ 341, 270 THE RICH SPECTROSCOPY OF REFLECTION NEBULAE This document was generated using the LaTeX2HTML translator Version 97.1 (release) (July 13th, 1997) Copyright © 1993, 1994, 1995, 1996, 1997, Nikos Drakos, Computer Based Learning Unit, University of Leeds. The command line arguments were: latex2html -split +0 -local_icons pp133.tex. The translation was initiated by on 1/15/1999 #### Footnotes ...NEBULAE ISO is an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA. ...well-fitted We did not include the S(5) line at 6.91µm in the fit because its profile is much broader than the other pure H2 rotational lines suggesting contamination by another line. ...fragmentation...) We also note that the 7.7/11.3 µm band ratio in HR 4049 is 4 times that of NGC 7023: such a variation points at very different fractions of ionized PAHs. The fact that the 7.45 and 7.6 µm features are seen in both spectra reinforces our conclusion that ionization cannot be the major way of producing these bands. ...band This is not true for strongly irradiated regions where the AIB spectrum is deeply modified (strong 8.6 and 11.0µm-bands). 1/15/1999
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132683277130127, "perplexity": 4738.238153107616}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527566.44/warc/CC-MAIN-20190419101239-20190419123239-00086.warc.gz"}
http://mymathforum.com/applied-math/35643-polynomial-interpolation.html
My Math Forum Polynomial interpolation Applied Math Applied Math Forum April 29th, 2013, 02:42 PM #1 Newbie Joined: Feb 2013 Posts: 26 Thanks: 0

Polynomial interpolation

How can I find a formula for this polynomial interpolation question? Let $x_i = x_0 + ih, \ i = 0,1,2,3; \ h>0$. Find a polynomial $p(x)$ of degree $\le 5$ for which $p(x_i)=f(x_i), \ i=0,1,2,3;$ $p'(x_0)=f'(x_0), \ p'(x_2)=f'(x_2),$ where $f(x)$ has a continuous derivative of any order in $\mathbb{R}$. Also, derive an error formula for $f(x)-p(x).$ What is the order of approximation for $x\in [x_0,x_3]?$ Any help will be greatly appreciated.
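Not from the thread, but a sketch of one standard route: the conditions describe Hermite-type interpolation with the nodes $x_0$ and $x_2$ doubled (six conditions, degree $\le 5$), so the generalized interpolation remainder applies:

```latex
% Six conditions (values at x_0..x_3, derivatives at x_0 and x_2) determine
% a unique p of degree <= 5; the Hermite-type remainder with doubled nodes is
f(x) - p(x) = \frac{f^{(6)}(\xi)}{6!}\,
              (x-x_0)^2\,(x-x_1)\,(x-x_2)^2\,(x-x_3),
\qquad \xi = \xi(x) \in (x_0, x_3).
% On [x_0, x_3] each factor is O(h), so f - p = O(h^6).
```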
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 9, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6480295658111572, "perplexity": 1510.4454270548974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795403.76/warc/CC-MAIN-20191022004128-20191022031628-00157.warc.gz"}
http://tex.stackexchange.com/questions/45989/wrapping-and-aligning-pspicture-in-the-top-left-corner-with-first-line-text?answertab=votes
# Wrapping and aligning pspicture in the top left corner with first line text

I want to align the pspicture into the top-left corner of the page. Currently it is slightly below the header text, while I want those two to be aligned together. The table for the Email/Website information is slightly indented, so I want to remove that indentation as well. I also want the "Lorem Ipsum Dolor" section to have no indent, but I know how to fix that. Right, so here's what my .tex file looks like:

% LaTeX CV
\documentclass[final,a4paper,notitlepage,10pt]{report}

%%% Packages %%%
\usepackage[utf8]{inputenc} % inputenc for encoding to utf8
\usepackage{auto-pst-pdf} % auto-pst-pdf converts pst to pdf
\usepackage{pst-barcode} % pst-barcode try implement QR code
\usepackage{multicol} % multicol used for multiple columns
\usepackage[paper=a4paper,left=0.7cm,right=0.7cm,top=0.7cm,bottom=0.7cm,noheadfoot]{geometry} % geometry for margins
\usepackage{wrapfig} % wrapfig to wrap text around figures
\usepackage{mdwlist} % mdwlist for compact enumeration/list items
\usepackage[compact]{titlesec} % titlesec for title section layout
\usepackage{mdwlist} % mdwlist for compact enumeration/list items

% Rule settings
\hyphenpenalty=5000
\tolerance=1000

% for mailto command
\newcommand{\mailto}[1]{\href{mailto:#1}{#1}}

% Name and contact information
\newcommand{\name}{FirstName LastName}
\newcommand{\addr}{123 Fake Street, London, XX00 0XX, England}
\newcommand{\phone}{+44 123 456 6789}
\newcommand{\email}{\mailto{[email protected]}}

% change itemize to be compact
\makecompactlist{compactitemize}{itemize}

% tweak title spacing
\titlespacing{\section}{0pt}{*2}{*0}
\titlespacing{\subsection}{0pt}{*2}{*0}
\titlespacing{\subsubsection}{0pt}{*2}{*0}

% disables chapter, section and subsection numbering
\setcounter{secnumdepth}{-1}

% set paragraph indent to 0
\setlength\parindent{0pt}

% begin actual CV
\begin{document}

\begin{wrapfigure}[5]{l}{1.2in}
\begin{pspicture}(1in,1in)
% The MECARD format is used to exchange contact information.
\end{pspicture}
\end{wrapfigure}

\noindent \textbf{\LARGE \name }\\
\textit{
\begin{tabular}{l l}\\
Web: \website\ & Mobile: \phone \\
\end{tabular}
}\\
Objective: Lorem ipsum dolor sit amet, consectetur adipisicing elit.\\

\section{Lorem Ipsum Dolor}
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.

\end{document}

How can I align the top of the pspicture with the text? Should I use a minipage environment? Should I use a different tabular package for the table?

The generated PDF document looks similar to:

-

First a small comment on your example. It is too long for a MWE and did not compile as it had a few issues, such as a closing square bracket when you were loading the geometry package. I produced a smaller file to illustrate the technique for aligning the two blocks. You can use minipages to enclose the blocks and to gain more control. We also enclose them with an fbox to be able to visualize everything. There are many different ways to align the minipages, but the easiest in this case is to insert a small rule and push the text up. (The orange rule at the left of the textblock). Once everything is aligned, you can zero the rule width and set \fboxrule0pt.
The final finished header should look like:

And here is the MWE:

\documentclass[final,a4paper,notitlepage,10pt]{report}
\usepackage[utf8]{inputenc} % inputenc for encoding to utf8
\usepackage{auto-pst-pdf} % auto-pst-pdf converts pst to pdf
\usepackage{pst-barcode} % pst-barcode try implement QR code
\usepackage[paper=a4paper,left=0.7cm,right=0.7cm,top=0.7cm,bottom=0.7cm,noheadfoot]{geometry} % geometry for margins

% for mailto command
\newcommand{\mailto}[1]{\href{mailto:#1}{#1}}

% Name and contact information
\newcommand{\name}{FirstName LastName}
\newcommand{\addr}{123 Fake Street, London, XX00 0XX, England}
\newcommand{\phone}{ +44 123 456 6789 }
\newcommand{\email}{\mailto{[email protected]}}

\setlength\parindent{0pt}
\fboxsep0pt
\fboxrule0pt
\usepackage{hyperref}

\begin{document}

\fbox{\begin{minipage}[t]{\textwidth}
\fbox{\begin{minipage}{1.378in}
\begin{pspicture}(1.378in, 1.5in)
\psbarcode[]{2}{height=1.378 width=1.378}{qrcode}
{\color{white}2}
\end{pspicture}
\end{minipage}}\hspace{0.3cm}% adjust for horizontal spacing
\fbox{\begin{minipage}{0.7\textwidth}
\textbf{\LARGE \name }\\
\textit{%
\begin{tabular}{l l}\\
% minimal completion of the truncated snippet (assumed): \website was not
% defined above, so \email is used for the first column
Web: \email\ & Mobile: \phone \\
\end{tabular}}
\end{minipage}}
\end{minipage}}

\end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317020773887634, "perplexity": 6089.345671781636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830323.35/warc/CC-MAIN-20140820021350-00192-ip-10-180-136-8.ec2.internal.warc.gz"}
https://eric-fritz.com/articles/deposition/
### Impetus

In Autumn 2017, we had an incident involving multiple services crashing and failing to re-initialize. We quickly correlated the time of the crashes with the time our Elastic Stack became unavailable. We then determined that our logging library did not put a maximum capacity on the number of log messages in the publish queue (see gomol issue #20), causing the memory usage of the application to rise when logs could not be pushed out. Eventually, this caused process memory to increase to the point where the OOM Killer kicked in and forcefully stopped the applications. To make matters infinitely worse, our JSON log adapter library required a successful connection to be made on startup before the app could re-initialize (see gomol-json issue #3). Once an application crashed, it could not come back up without someone listening for our precious, precious logs. Ouch.

We patched these issues in gomol pull request #21 and gomol-json pull request #3 and updated dependencies in the affected applications. But not all of them. We happened to find out exactly which services were still using the old version of the library because, as luck would have it, our elastic stack once again became unavailable and the exact same behavior recurred. At least the second time was a quick deploy (as the changes were already made to the required libraries), but customers were … not ecstatic, to say the least.

### Overview

This inspired us to write Deposition. Deposition is a self-hosted API which allows services to self-report their dependencies. On each successful automated build of the master branch, we extract the dependencies from the resulting application (generally produced as a docker container) and push them to the API along with the product name, product version, and a unique build token (generally the bamboo build number).

For each type of project (and each type of project package management), we automatically extract the installed dependencies. For Golang projects, we use Glide to lock dependencies to a VCS revision. This allows us to use the glide.lock file checked into the repository. For Python and JavaScript-based projects, we can use pip freeze and npm ls --json, respectively, to read installed versions directly from the package manager. It's important to note that we do this inside of the container produced as an artifact so we get the true dependencies present in a production environment (and not dependencies installed as part of the build process or dependencies which just happened to be on a developer machine or build agent).

We do the same thing for operating system-level packages. For containers which use Debian as a base, we do the following.

HOSTNAME=$(cat /etc/hostname)
DEPFILE="debian.${HOSTNAME}.depfile"

cat /etc/debian_version > $DEPFILE
apt list --installed | cut -d ' ' -f 1-2 | sed "s/\/[^ ]\+ /==/" | sort >> $DEPFILE

This creates a file that begins with the OS distribution version, followed by the names of installed packages and their versions. For containers which use Alpine as a base, we do the following.

HOSTNAME=$(cat /etc/hostname)
DEPFILE="apk.${HOSTNAME}.depfile"

cat /etc/alpine-release > $DEPFILE
(apk info && apk info -v) | grep -v "WARNING" | sort \
    | xargs -n2 | while read x y; do echo $y | sed "s/$x-/$x==/"; done \
    >> $DEPFILE

This is a bit more involved, as apk info gives only installed package names but not versions, and apk info -v gives us a combination of the two (but not in a way in which the package name and version are easily separated).
The bash pipeline correlates these two sources together, extracts the versions, then writes a file which is more easily (and unambiguously) parsable by the API. The following screenshot shows the products of which Deposition is aware. This list only contains services which are currently self-reporting – we are still in the process of moving older services to use our new build conventions.

On the other side of the build pipeline, on each successful deploy, we inform the API that a particular build has made it into a production environment with some unique build token (these could be a marathon path, a server name and binary path, or a Fargate task identifier, or a simple string which is meaningful to your development process). The following screenshot shows all active deployments of a service whose name contains the word api in our Chicago data center. At a glance, we can see exactly which version and which bamboo build created the production artifact (without scraping several APIs and correlating the docker tag of the Marathon task with Bamboo build logs). This allows us to retrieve an always-current list of deployed builds and the active (either deployed or most recent) builds of a product. This also allows us to query, in both directions, the relationship between builds and dependencies. In one direction, we can see all the dependencies of a build (or all aggregate dependencies of a product). In the other direction, we can see all the builds which depend on a particular dependency version (or all aggregate versions of a dependency).

### Instant Deprecation

How would this have changed our double-outage example above? Once we determine that there exists a bug, vulnerability, or other reason to deprecate, we can simply flag a particular dependency version with the reason. A dependency flag created by a team does not necessarily apply for another team – however, there are certain roles (security teams and automated CVE list scanners) which can apply a dependency flag for all teams. Also, as Deposition understands how to compare two versions for particular sources, flagging a dependency at a particular version will also apply that flag to all lower versions of that dependency, as shown below. At present, Deposition understands the semantics of Alpine package versions, Debian package versions, and semantic versions used by pip and npm.

When a dependency version is flagged, all builds using that version are flagged, any deployment using a flagged build is flagged, and any product with an active flagged build is flagged. Depending on the team configuration, a JIRA ticket or a GitHub issue is created for all deployments and active builds which were newly flagged by this action. This gives teams something automated to act on instead of having to search for what to replace via the Deposition UI. This also changes the process for future actions: any new build which uses a flagged version is rejected by the Deposition API and the automated build fails, and any new deployment which refers to a flagged build is similarly rejected. This makes it impossible to accidentally build and deploy a service with vulnerable dependencies (although Deposition does provide an escape hatch to allow such builds through when absolutely necessary).
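To make the depfile handling concrete, here is a hedged Python sketch. The file layout (OS version on the first line, then name==version entries) follows the description above, but the function names are mine, not Deposition's, and `packaging.version` only covers the pip/npm-style case — Debian and Alpine version strings need their own comparison rules:

```python
from packaging.version import Version  # pip-style semantic version comparison

def parse_depfile(path):
    """First line: OS distribution version; remaining lines: name==version."""
    with open(path) as f:
        os_version = f.readline().strip()
        deps = dict(line.strip().split("==", 1) for line in f if "==" in line)
    return os_version, deps

def version_is_flagged(installed, flagged):
    """A flag on a version also applies to all lower versions of it."""
    return Version(installed) <= Version(flagged)

# Example: flagging 2.19.0 also flags any build pinned to 2.18.4
print(version_is_flagged("2.18.4", "2.19.0"))  # True
```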
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16898763179779053, "perplexity": 3416.711900752096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743960.44/warc/CC-MAIN-20181118011216-20181118032459-00050.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-3x-2-14x-8-2x-2-7x-4-2x-2-9x-5-3x-2-16x-5
Algebra Topics

# How do you simplify (3x^2+14x+8)/ (2x^2+7x-4) * (2x^2+9x-5)/(3x^2+16x+5)?

Jul 16, 2015

$\frac{3 x + 2}{3 x + 1}$

#### Explanation:

Factor the terms:

$\frac{3 {x}^{2} + 14 x + 8}{2 {x}^{2} + 7 x - 4} \cdot \frac{2 {x}^{2} + 9 x - 5}{3 {x}^{2} + 16 x + 5} =$

$\frac{\left(3 x + 2\right) \left(x + 4\right)}{\left(2 x - 1\right) \left(x + 4\right)} \cdot \frac{\left(2 x - 1\right) \left(x + 5\right)}{\left(3 x + 1\right) \left(x + 5\right)}$

Cancel the identical terms found in the factorisation:

$\frac{\left(3 x + 2\right) \left(x + 4\right)}{\left(2 x - 1\right) \left(x + 4\right)} \cdot \frac{\left(2 x - 1\right) \left(x + 5\right)}{\left(3 x + 1\right) \left(x + 5\right)} =$

$\frac{3 x + 2}{3 x + 1}$
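A quick symbolic check of the cancellation (a SymPy sketch, not part of the original answer):

```python
from sympy import symbols, cancel

x = symbols('x')
expr = (3*x**2 + 14*x + 8)/(2*x**2 + 7*x - 4) \
     * (2*x**2 + 9*x - 5)/(3*x**2 + 16*x + 5)
print(cancel(expr))  # (3*x + 2)/(3*x + 1)
```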
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4698598384857178, "perplexity": 22452.40404163804}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00459.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-ex/0606034/
# Compact storage ring to search for the muon electric dipole moment

A. Adelmann, K. Kirch, G.J.G. Onderwater, T. Schietinger
Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
Kernfysisch Versneller Instituut and University of Groningen, NL-9747AA Groningen, The Netherlands

###### Abstract

We present the concept of a compact storage ring of less than 0.5 m orbit radius to search for the electric dipole moment of the muon (d_μ) by adapting the "frozen spin" method. At existing muon facilities a statistics limited sensitivity of 5×10^-23 e cm can be achieved within one year of data taking. Reaching this precision would demonstrate the viability of this novel technique to directly search for charged particle EDMs and already test a number of Standard Model extensions. At a future, high-power muon facility a statistical reach of 10^-24 e cm or better seems realistic with this setup.

###### keywords: Electric and magnetic moments, Muons, Storage rings

###### Pacs: 13.40.Em, 14.60.Ef, 29.20.Dh

## 1 Motivation

The observed matter-antimatter asymmetry of the Universe is not accounted for by the known extent of CP violation present in the Standard Model (SM) of particle physics. The search for permanent electric dipole moments (EDM) of fundamental particles is regarded as one of the most promising avenues for finding manifestations of additional CP violation, see, e.g., [1]. As these EDMs violate time reversal invariance (T) and parity (P), they also violate CP, if CPT invariance is assumed. Various systems have been under investigation for a long time with limits becoming more and more restrictive, often challenging models beyond the SM. Of particular interest are the searches for the EDM of the neutron [2], the electron [3] and the Hg atom [4].

The muon is the only elementary particle for which the EDM (d_μ) has been measured directly. Existing limits have been obtained parasitically at storage rings designed to measure the muon's anomalous magnetic dipole moment. Currently the best limit is d_μ < 1.8×10^-19 e cm at 95% confidence level [5]. This leaves the muon EDM as one of the least tested observables in the realm of the SM, which predicts a negligibly small value [6]. For the muon there is no ongoing competitive dedicated search for its EDM. Lepton-universality, together with the best current limit on the electron EDM, |d_e| < 1.6×10^-27 e cm [3], suggests a stringent limit on the muon EDM, d_μ ≲ 3×10^-25 e cm. There are, however, a number of models in which flavor-violating effects lead to a significant modification of this naive mass scaling (see, e.g., [7, 8, 9, 10, 11, 12, 13]). While the new limits on the τ → µγ branching fraction set by the BABAR and Belle collaborations [14] call for a reappraisal of the predictions of these models, a muon EDM of order 10^-22 e cm still seems possible.

Additional strong motivation to search for a muon EDM in the range 10^-24 to 10^-22 e cm arises from the result of the Brookhaven muon (g−2) experiment, which challenges the SM prediction with a deviation of about three standard deviations [15, 16]. First of all, it is well known (see, e.g. [17]) and has been re-emphasized [9, 18] that the muon (g−2) experiment by itself cannot exclude a contribution of d_μ to the observed precession frequency. Amazingly, if the (g−2) experiment observes a beyond-SM shift of the precession frequency, it could be entirely due to a muon EDM of order 10^-19 e cm rather than to new physics in a_μ.
Although a muon EDM as large as 10^-19 e cm seems very unlikely, there is no solid theoretical argument against it either, and only an improvement of the experimental limit can settle the issue. Secondly, it has been pointed out [9] that if indeed a_μ(NP) ≠ 0, as suggested by the Brookhaven measurement, one should generically also expect a non-zero d_μ, which for CP-violating phases of order unity would result in d_μ of order 10^-22 e cm, assuming the present values for a_μ(exp) − a_μ(SM). This situation calls for a dedicated experimental search for the muon EDM with a sensitivity of 10^-22 e cm or better.

In this Letter we introduce an almost table top setup to perform such a search. It is based on a storage ring with an orbit of less than 0.5 m radius, and employs the so-called "frozen spin" technique introduced by Farley, et al. [19]. We show that this experiment would have an intrinsic sensitivity comparable to that of a 7 m radius ring proposed in the past [20, 21]. We address injection into such a small ring and evaluate those systematic errors which depend on the muon momentum. We find that at existing muon beam facilities a sensitivity of 5×10^-23 e cm can be reached in a year with "one-muon-at-a-time". This experiment would be statistics limited and could therefore be further improved by one or more orders of magnitude once new strong pulsed muon sources become available.

## 2 Method and sensitivity

The basic idea of the "frozen spin" method [19] is to cancel the regular (g−2) spin precession in a magnetic storage ring by the addition of a radial electric field. In the presence of a non-zero EDM, d_μ = η eℏ/(4mc), the spin will precess around the direction of the (motional) electric field,

ω_e = (η/2)(e/m)(β×B + E).   (1)

In the absence of longitudinal magnetic fields, the precession due to the anomalous magnetic moment is given by

ω_a = (e/m)[aB − (a − 1/(γ²−1)) β×E]   (2)

with a = (g−2)/2. One can reduce ω_a to zero by choosing

E = aBβ/(1 − (1+a)β²) ≃ aBβγ²,   (3)

for aβ²γ² ≪ 1. Then, the only precession is the one of Eq. (1) and the spin is "frozen" in case η = 0. For η ≠ 0, the muon spin, initially parallel to the muon momentum, is moving steadily out of the plane of the orbit. The observable in the experiment is the up–down counting asymmetry due to the muon decay asymmetry. In the following discussion "positron" will refer to both electrons and positrons originating from muon decays. With polarization P, lifetime τ and the number of detected decay positrons N, the uncertainty in η is to good approximation given by

σ_η = √2 / [γτ (e/m) βB A P √N],   (4)

which suggests that to obtain the best accuracy it is desirable to use a high magnetic field and high energy muons. But according to Eq. (3) this would require impractically large electric fields. Expressing Eq. (4) in terms of E from Eq. (3),

σ_η = √2 a c γ / [τ (e/m) E A P √N],   (5)

we see that the boundary condition of a practically limited electric field strength actually favors low values of γ. Consequently, we consider the use of a low-momentum muon beam and, as a concrete example, the PSI µE1 beamline [22]. Depending on the mode of operation, one obtains up to a few 10^8 µ/s with p = 125 MeV/c (β = 0.76, γ = 1.55) from backward decaying pions with p ≈ 230 MeV/c. The muons arrive in bunches every 19.75 ns (corresponding to the accelerator frequency) with a burst width slightly below 4 ns [23]. The muon polarization is P ≈ 0.9; for the decay asymmetry we use A = 0.3. We consider a scenario with magnetic and electric fields of B = 1 T and E = 0.64 MV/m, respectively, corresponding to a ring radius of r = 0.42 m. Choosing a moderate value for the B-field allows the use of a normal conducting magnet to switch the field polarity reasonably fast.
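As a rough cross-check of the quoted beam and ring parameters, here is a short Python sketch (the frozen-spin relation E ≃ aBβγ² and p = 125 MeV/c are taken from the text; the rest is standard kinematics, so treat the printed numbers as a consistency check rather than a design calculation):

```python
import math

m_mu = 105.658   # muon mass (MeV/c^2)
a    = 1.166e-3  # muon anomalous moment (g-2)/2
p    = 125.0     # beam momentum (MeV/c)
B    = 1.0       # ring field (T)
c    = 2.998e8   # speed of light (m/s)

E_tot = math.hypot(m_mu, p)              # total energy (MeV)
gamma, beta = E_tot / m_mu, p / E_tot
r = p / (299.79 * B)                     # bending radius (m), p in MeV/c
E_radial = a * B * beta * gamma**2 * c   # frozen-spin radial E field (V/m)
T_rev = 2 * math.pi * r / (beta * c)     # revolution time (s)

print(f"beta={beta:.2f} gamma={gamma:.2f} r={r:.2f} m "
      f"E={E_radial/1e6:.2f} MV/m T_rev={T_rev*1e9:.1f} ns")
# -> beta=0.76 gamma=1.55 r=0.42 m E=0.64 MV/m T_rev=11.4 ns
```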
The muon momentum and the strength of the magnetic field fix the electric field strength at a value that can be readily achieved. Our choice of parameters results in an intrinsic sensitivity of

σ_dμ ≃ 1.1×10^-16 e cm/√N.   (6)

This is comparable to that presented in Ref. [19], based on p = 0.5 GeV/c, E = 2 MV/m, B = 0.25 T, A = 0.3, r = 7 m and P = 0.5. The idea for the operation of the experiment at the µE1 beam line is to use one muon at a time in the storage ring and observe its decay before the next muon is injected. This way, the high beam intensity is traded off for beam quality and muons suitable for the injection can be selected. Assuming an injection latency of 1 µs and an average observation time of 3.4 µs (one dilated muon lifetime) results in more than 10^5 muon decays per second and allows for about 5×10^12 detected events per year, thus

σ_dμ ≃ 5×10^-23 e cm.   (7)

## 3 Storage ring injection

Injection into a compact storage ring is a significant challenge. In our application, the velocity of the 125 MeV/c muons is about 23 cm/ns, corresponding to a revolution time of about 11 ns. The use of a conventional kicker device faster than the revolution time may not be feasible. Existing devices are at least an order of magnitude slower, although there are promising developments for the International Linear Collider (ILC) [24]. An alternative and viable scheme for particle injection would be through beam resonance. Injection of electrons into small storage rings using 1/2 (and also 2/3) integer resonances has been demonstrated [25, 26] and can be adapted for muons. This injection method is a time reversal of half integer resonance extraction [27]. In the radial phase space the separatrix of the half integer resonance together with a stable region around the central orbit is created by a so-called perturbator. The perturbator creates odd multipole fields with field strengths depending on the radial betatron frequency. The muons are injected near the separatrix in the unstable region through an inflector. By ramping down the perturbator field, these muons are captured by expanding the stable region of the phase space.

The µE1 beam line at PSI delivers muons in a large transverse phase space when operating within 1% of momentum acceptance (FWHM). The phase space can be reduced by a suitable collimation system to fit the acceptance of the resonance injection scheme. We consider a weak focusing storage ring with a radial tune close to 1/2. In Fig. 1 we show the results of a simulation demonstrating twenty-turn injection (red) out of a narrow, i.e., collimated phase space (blue). The stable part of the phase space, i.e., the observation phase of the muons, is shown in black. The synchronization of the field ramping and the muon injection can be achieved via triggering on an upstream muon entrance telescope, in combination with the accelerator radio-frequency and the detection of the previous decay positron (or a suitable time-out). The loss in statistics due to the time needed for ramping up the perturbator is of order 10%. The decrease of the actual observation time window due to the time needed for reaching the stable orbit is of the same order. Thus a total loss of statistics of about 15% is expected. The necessity to synchronize the perturbator ramping with a "good" muon comes from the low intensity at the existing muon beam (here the chance to have an acceptable muon in a 20 ns time window is only about 2%). Assuming a pulsed muon source of much larger intensity, the ramping of the perturbator can be synchronized to the machine frequency, which simplifies the injection.
Many (e.g., 10^3) muons within one bunch would then be captured into the stable orbit.

## 4 Polarimetry

The muon spin orientation is reconstructed from the distribution of the decay positrons. Due to the magnetic field of the storage ring, the positrons will be bent towards the inner side of the ring. Both the efficiency for detecting a decay positron and the analyzing power, i.e., sensitivity to muon polarization, must be optimized. The simplest and most straightforward detection system only distinguishes upward and downward going positrons. The number of upward versus downward going positrons is independent of the muon energy. In this case, the analyzing power for a vertical muon spin component is typically A ≈ 0.3. Efficiencies can be several tens of percent. It was shown in [5] that detecting the vertical positron angle is less prone to systematic errors. The additional information on the positron also improves the statistical sensitivity of the experiment. Contrary to the up–down counting asymmetry, the vertical angle does depend on the muon energy and is inversely proportional to γ. Because the width of the vertical angle distribution has the same dependence, the relative precision does not depend on the muon momentum. As a guard against systematic errors, however, a larger signal and thus a lower muon momentum is preferred.

## 5 Systematic effects and countermeasures

Two categories of systematic errors can be distinguished: (1) those that lead to an actual growth of the polarization into the vertical plane; and (2) those that lead to an apparent vertical polarization component. In [19], the six dominant effects and their counter measures are discussed. The setup described in this Letter does not introduce additional systematic error sources, so that these counter measures are applicable to our setup as well. We briefly discuss those systematic errors which are affected by the lower muon momentum.

An important source of systematic error is the existence of an average electric field component E_v along the magnetic field. The resulting false EDM is

η_false ≃ 2a²γ²β (E_v/E_r).   (8)

At our momentum, the increased sensitivity due to a lower radial field E_r is more than compensated by the low γ: η_false/(E_v/E_r) ≈ 5×10^-6, as compared to the experiment suggested in, e.g., [21], for which it is ≈ 6×10^-5. A systematic error at the level of the statistical reach (5×10^-23 e cm) leads to the (modest) requirement ⟨E_v⟩/E_r < 2×10^-4. Furthermore, when switching from clockwise to counter-clockwise injection, the false EDM remains the same, whereas the true EDM signal changes sign [19].

A net longitudinal magnetic field B_L, combined with an initial transverse polarization P_T, leads to a false EDM of order

η_false ≃ (2/(γβ)) (B_L/B_T)(P_T/P_L).   (9)

Assuming an initial transverse-to-longitudinal polarization of 10% leads to the requirement B_L/B_T < 6×10^-9 to match the statistical uncertainty. The latter corresponds to a current of 13 mA flowing through the orbit of the stored muons, or an electric field change of 2.5 GV/m/s perpendicular to the orbit and synchronized to the measuring cycle. Variation of the initial transverse polarization, or allowing a slow residual precession, will expose this component. Since the experimental setup is quite small, shimming undesired field components to an acceptable level will be considerably easier than in a larger setup. Moreover, the relatively modest statistical reach leads to rather relaxed requirements on these field perturbations.

Systematic errors of the second category include shifts and rotations of the detectors. The optimal measuring cycle is about two lifetimes and thus scales with γ.
For d_μ at the 5×10^-23 e cm level, the growth of the up–down counting asymmetry is only of order 10^-6 over a measuring cycle. Static detector and beam displacements therefore cannot lead to a false EDM signal. The effect of random motion, i.e., not correlated with the measurement cycle, is reduced by six orders of magnitude (the square-root of the number of measuring cycles), and therefore also not of any concern. Only if the motion is synchronous with the measuring cycle, false EDM signals may appear. The detector position relative to the average muon decay vertex determines the systematic error, so both detector and beam motion must be considered. Especially the latter is of concern, because it is intrinsically synchronized with the measurement. For a displacement δl along the vertical direction, the resulting false EDM when using the up–down counting ratio is

η_false ≃ 10³ (γτ/l) d(δl)/dt,   (10)

with l the typical scale of the experimental setup. This places the rather stringent limit d(δl)/dt ≲ 0.1 µm/s. The positron momentum dependences of the true and this false EDM signal are significantly different so that they can be disentangled.

## 6 Conclusion

We have described a compact muon storage ring based on a novel resonant injection scheme as a viable setup to measure the EDM of the muon using the frozen spin technique. Such a measurement would demonstrate the feasibility of a still unexplored technique for the direct search for EDMs of charged particles and would serve as a stepping stone for future applications of this promising method. At existing muon facilities (PSI µE1 beamline [22]) a sensitivity of 5×10^-23 e cm seems reachable in one year of data taking, an improvement of the existing limit by more than three orders of magnitude. Already at this level of precision, several interesting physics tests are possible. First, it could unambiguously exclude the EDM to be the explanation of the difference between the measured anomalous magnetic moment and its SM prediction. It would furthermore test various SM extensions, in particular those that do not respect lepton universality. In view of the possible advent of new, more powerful pulsed muon sources, the same experimental scheme can be realized but with considerably more muons per bunch being injected into the ring. It appears realistic to expect accelerators with on the order of 100 kHz repetition rates and more than 10^3 muons stored per bunch. The statistical sensitivity of the described approach would then reach down to 10^-24 e cm or better. Although systematic issues at this level of precision have been discussed in some detail in [19], more detailed studies would be needed.

## Acknowledgements

We are grateful to M. Böge, W. Fetscher, K. Jungmann, S. Ritt, and A. Streun for fruitful discussions. Furthermore, we acknowledge that J.P. Miller independently suggested the use of low-momentum muons to exploit the "frozen-spin" method. The work by C.J.G.O. is funded through an Innovational Research Grant of the Netherlands Organization for Scientific Research (NWO).

## References

• [1] M. Pospelov, A. Ritz, Ann. Phys. 318 (2005) 119.
• [2] C.A. Baker, et al., Phys. Rev. Lett. 97 (2006) 131801.
• [3] B.C. Regan, et al., Phys. Rev. Lett. 88 (2002) 071805.
• [4] M.V. Romalis, W.C. Griffith, J.P. Jacobs, E.N. Fortson, Phys. Rev. Lett. 86 (2001) 2505.
• [5] Muon (g−2) Collaboration, G.W. Bennett, et al., submitted to Phys. Rev. D, arXiv:0811.1207.
• [6] M.E. Pospelov, I.B. Khriplovich, Sov. J. Nucl. Phys. 53 (1991) 638.
• [7] K.S. Babu, S.M. Barr, I. Dorsner, Phys. Rev. D 64 (2001) 053009.
• [8] K.S. Babu, B. Dutta, R.N. Mohapatra, Phys. Rev. Lett. 85 (2000) 5064.
• [9] J.L. Feng, K.T. Matchev, Y. Shadmi, Nucl. Phys. B 613 (2001) 366.
• [10] A. Romanino, A. Strumia, Nucl. Phys. B 622 (2002) 73.
• [11] A. Pilaftsis, Nucl. Phys. B 644 (2002) 263.
• [12] K.S. Babu, J.C. Pati, Phys. Rev. D 68 (2003) 035004.
• [13] A. Bartl, W. Majerotto, W. Porod, D. Wyler, Phys. Rev. D 68 (2003) 053005.
• [14] BABAR Collaboration, B. Aubert, et al., Phys. Rev. Lett. 95 (2005) 041802; Belle Collaboration, K. Hayasaka, et al., Phys. Lett. B 666 (2008) 16.
• [15] Muon (g−2) Collaboration, G.W. Bennett, et al., Phys. Rev. D 73 (2006) 072003.
• [16] B.L. Roberts, Nucl. Phys. B Proc. Suppl. 155 (2006) 372.
• [17] J. Bailey, et al., J. Phys. G: Nucl. Phys. 4 (1978) 345.
• [18] J.L. Feng, K.T. Matchev, Y. Shadmi, Phys. Lett. B 555 (2003) 89.
• [19] F.J.M. Farley, et al., Phys. Rev. Lett. 93 (2004) 052001.
• [20] Y.K. Semertzidis, et al., arXiv:hep-ph/0012087 (2000).
• [21] M. Aoki, et al., J-PARC Letter of Intent: Search for a Permanent Muon Electric Dipole Moment at the 10^-24 e cm Level (2003).
• [22] The PSI µE1 beamline, see http://aea.web.psi.ch/beam2lines/beam_mue1.html.
• [23] I.C. Barnett, et al., Nucl. Instrum. Methods Phys. Res., Sect. A 455 (2000) 329.
• [24] T. Naito, H. Hayano, M. Kuriki, N. Terunuma, J. Urakawa, Nucl. Instrum. Methods Phys. Res., Sect. A 571 (2007) 599.
• [25] H. Yamada, Nucl. Instrum. Methods Phys. Res., Sect. B 199 (2003) 509.
• [26] D. Hasegawa, et al., Proceedings of the 14th Symposium on Accelerator Science and Technology, Tsukuba, Japan, 2003, p. 111.
• [27] T. Takayama, Nucl. Instrum. Methods Phys. Res., Sect. B 24/25 (1987) 420.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9344269037246704, "perplexity": 1264.2827159394826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735881.90/warc/CC-MAIN-20200804161521-20200804191521-00341.warc.gz"}
http://www.vistaheads.com/forums/microsoft-public-windows-vista-file-management/943-offline-files.html
# Offline Files

## microsoft.public.windows.vista.file management

02-28-2007, Jeffrey S. Sparks:

I keep getting an error when trying to sync offline files: "The process cannot access the file because it is being used by another process." I have never been able to get offline files to work in Vista (Business edition), but it worked fine in XP Pro. These are just documents from a variety of different sources. I can see the files and folders listed offline but can't actually open them offline. They are stored on a Windows 2003 SBS server.

Any help would be greatly appreciated.

Jeff

02-28-2007, Jill Zoeller [MSFT]:

I'm checking on this for you with the Offline Files team.

02-28-2007, Jill Zoeller [MSFT]:

The error you're seeing is self-explanatory--something is holding the file open. What file is it?

When you say that you can't actually open files offline, do you get an error? What happens exactly?

02-28-2007, Jill Zoeller [MSFT]:

Does the server happen to be a NAS device?

02-28-2007, Jeffrey S. Sparks:

No, it's a server (Asus TUSL motherboard, 1 GB RAM, RAID array, etc.) running Windows Small Business Server 2003.
02-28-2007, Jeffrey S. Sparks:

The network share is on a server called alpha, for which I have full control of this folder:

\\alpha\home_folders$\jssparks

I mapped the share to my laptop as drive z:.

I right-clicked on Documents, selected Location, chose z:, and elected to move all of the files.

I then turned on offline files for that folder, after which it says "preparing files so they are always available offline".

I get a TON of errors (like 543 errors) saying the same thing for every single file: "The process cannot access the file because it is being used by another process."

When I looked at the Offline Files control panel applet (Control Panel > Networking > Offline Files), it shows the correct amount of space being used. When I'm offline I can see all the folders and files in Explorer, but when I try to open one it says the file is currently unavailable for use on this computer. I get the same error no matter which file I try to open while offline. They are a variety of types of documents (Word 2003, Excel, PDF, TIF, GIF files, etc.).

Jeff

02-28-2007, Jill Zoeller [MSFT]:

I'm checking with the Offline Files team. I've seen reports of this error for a NAS server but not SBS.
02-28-2007, Jill Zoeller [MSFT]:

Here's what we think is happening. When you redirect your My Documents folder, those files are automatically cached (made available offline). Because you then try to mark them available as offline, we think these two sync operations are fighting each other for access to the files. You shouldn't need to mark them available as offline--just let folder redirection complete. Normally the "always available offline" option in the UI is disabled when the folder is pinned by Folder Redirection. If you were able to select it, we suspect that folder redirection hadn't yet pinned it (race condition).

Let me know if this works for you.
02-28-2007, Jill Zoeller [MSFT]:

How long are you waiting following the redirection? It takes time for those files to be added to the cache. The progress of that redirection sync is not reported through Sync Center.

"Jeffrey S. Sparks" wrote in message:
> Ok, I started out with the documents in my Documents folder on my laptop.
> I deleted the mapped drive to the server folder and then told it to move
> the documents folder to \\alpha\home_folders$\jssparks.
>
> It moved them, synced and there were no errors. However, the files are
> still not available offline. If I select "work offline" or just unplug
> the network cable, all of the icons for the files grey out and won't open.
>
> Jeff
02-28-2007, Jeffrey S. Sparks:

It's been a couple of hours now...

Jeff
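The thread ends without a resolution, but the cache state behind the race condition Jill describes can at least be inspected from the client side. The sketch below is an editorial illustration rather than advice from the thread: it assumes the Win32_OfflineFilesCache WMI class that the Offline Files feature exposes on Vista and later, queried here through the stock wmic tool (any WMI-capable tool would work; the same query can be run directly from a command prompt).

```python
# Sketch: check whether the Offline Files cache is enabled and active
# on the client before troubleshooting pinning/sync conflicts.
import subprocess

result = subprocess.run(
    ["wmic", "path", "Win32_OfflineFilesCache", "get", "Active,Enabled,Location"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # Active and Enabled should both read TRUE
```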
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5358464121818542, "perplexity": 7500.949773709796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118973352.69/warc/CC-MAIN-20150124170253-00199-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.freelancer.co.ke/projects/Report-Writing/need-writer-with-experience-LaTeX/
# We need a writer with experience in LaTeX Software

Write 6 pieces of literature review (2200 words each; 13,200 words TOTAL) and use LaTeX software.

Skills: LaTeX, Report Writing

About the Employer: (2 reviews) Mont Kiara, Malaysia

## 13 freelancers are bidding on average $498 for this job

- coolwriter30 ($250 USD for 10 days; 59 reviews; 5.8): "I am a competent academic writer, with high lead times and turnaround time, proposal writer, masters level dissertation writer, PHD level dissertation and thesis writer, PHD defense PowerPoint presentation writer, argu" [More]
- VolKa ($250 USD for 10 days; 35 reviews; 5.3)
- eamora2014 ($250 USD for 10 days; 34 reviews; 4.8): "Hello, I hope this message finds you well. I'm Mathematician, LaTeX developer and technical writer. I specialize in typesetting science, technology, engineering, and mathematics documents. You can see some sample" [More]
- ($555 USD for 10 days; 6 reviews; 4.1)
- anacris1 ($555 USD for 10 days; 19 reviews; 4.6): "Report writing is my area of expertise; I am confident about. I guaranty professional quality work with surety of flawlessness on pertinent technical information. I mobilize all my resources to bring about a perfect te" [More]
- hrdezigns ($250 USD for 7 days; 6 reviews; 2.8): "Hi, i can help you with your literature review. Can provide samples. I dont work in a team like others and take one project at a time. Most importantly, i will work within your budget range and will submit within the g" [More]
- ($255 USD for 5 days; 2 reviews; 2.3)
- daisyondwari ($333 USD for 5 days; 3 reviews; 2.2): "Hello, I am Daisy, a zealous, customer-focused and creative research and academic writing expert. I deliver excellent and professional work for my clients in various academic and research disciplines. My areas of profi" [More]
- majitbosrihimel ($250 USD for 5 days; 1 review; 1.2): "I am a professional freelancer with years of experience about writing and latex software. You will be 100% satisfied with my service. My work will speak for me. I am waiting for your response and hope that you are goi" [More]
- hemantmayatra ($555 USD for 15 days; 1 review; 1.3): "Completed two Math Books (Discrete Math, Partial Differential) with more than 400 pages and completed mathematical equation with more than 50 mathematical figures. Published 2 paper on IEEE Conferences, Well aware" [More]
- nileshpujara90 ($250 USD for 10 days; 0 reviews; 0.0): "Hello I hope you are enjoying the beautiful weather Well, as far as content writing is concerned it should be thought-provoking as well as informative at the same time. Content writing is all about using words" [More]
- McCaslin ($750 USD for 6 days; 0 reviews; 0.0): "With excellent research and creative writing skills I am the ideal candidate for almost any article that you need written. Articles require facts and relevant information to back up those facts, as well as the importan" [More]
- ($1666 USD for 10 days; 0 reviews; 1.8)
- ($555 USD for 10 days; 0 reviews; 0.0)
- ConciseContent ($555 USD for 10 days; 0 reviews; 0.0): "Dear Sir or Madam, I am native English speaker from the United Kingdom with a bachelor's degree in writing and master's degree in historical research and writing. My work experience includes more than a decade as a" [More]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18734894692897797, "perplexity": 19790.114132919567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587577.92/warc/CC-MAIN-20171216104016-20171216130016-00426.warc.gz"}
https://chemistry.stackexchange.com/questions/48916/hplc-peak-area-vs-concentration
# HPLC: peak area vs concentration

Many textbooks say that there is a proportionality between the peak area of the chromatogram and the concentration. However, in my opinion, there should be a proportionality between the concentration and the peak intensity, not the peak area. The peak intensity increases in proportion to the absorbance of the substance. Hence, by using the Beer–Lambert law, we can deduce that there is a proportionality between peak intensity and concentration. Is there any mathematical relation between the peak area and the concentration? Any help will be appreciated.

• Welcome to chemistry.SE! If you have any questions about the policies of our community, please visit the help center. Apr 3, 2016 at 18:43
• You asked the wrong question. The question should be "Why isn't peak intensity as good as peak area to determine concentration using HPLC?" – MaxW Apr 3, 2016 at 18:47
• @MaxW That can be an answer to my question, but I wonder if we can deduce the relation between them mathematically, by using the fact that the area is the integral of the peak intensity with respect to time. Apr 3, 2016 at 18:51
• – MaxW Apr 3, 2016 at 23:00
• Present a correlation for concentration and area used to carry out the validations? Jun 17, 2021 at 4:05

> However, in my opinion, there should be a proportionality between the concentration and the peak intensity, not the peak area.

There is a proportionality between both peak area vs. concentration and peak height vs. concentration.

1. Peak height is proportional to the instantaneous amount of analyte that is transiting the detector.
2. Peak area is proportional to the sum of all of the analyte molecules that have transited the detector.
3. From 1 and 2 you might be able to infer the relationship between peak height $h$ and area $A$: $$A = \int{h(t)\;dt}$$
4. People are usually interested in the total amount of substance injected into the column. (If they know the injection volume, they can calculate the concentration from this value.) The total amount of substance would be calculated from the peak area.
5. The maximal peak height is proportional to the peak area, but only if the peak "shape" is constant.

Here are some scenarios where peak shape will not be constant:

1. You want to compare injection #2 you made on your column two years ago to injection #2000 that you made recently. Due to column degradation, the recent injections have much more noticeable peak tailing. Because the longer tail leads to wider, asymmetric peaks, there will be less total height at the peak maximum relative to "better", symmetric peaks.
2. You change the flow rate of the HPLC. All peaks are narrower, and thus, higher.
3. The scan rate of your detector changes, and the detector reports "counts" or "intensity" rather than counts per time. This is actually very common for mass spectrometers, but less common for absorbance detectors. If you scan twice as fast in MS, your peak heights go down by approximately twofold, but you have data points twice as often, so the peak area is relatively unchanged.
4. Your detector undersamples the peak. Say your chromatography is nice and the "real" peak shape is nicely Gaussian, but is ~ten seconds wide and your detector only gives you a data point every two seconds. Say that due to very small random drifts in retention time, on some injections one of the data points coincides exactly with the maximum of the "real" peak, but in others, it is a little bit off.
The peak heights in this scenario will vary considerably more than the peak areas. (If the recorded maximum is off-center from the true maximum, there will be two data points that are higher, i.e. "closer" to the maximum than if the data maximum coincides with the true maximum, so integration will partially correct this error.) Essentially, the peak maximum is a single-point sample from the gaussian distribution of the peak, while the area is a several-point sample from the gaussian distribution, so it has better sampling properties.

> Is there any mathematical relation between the peak area and the concentration?

Yes, there is, but it depends on the peak shape. For perfectly gaussian peaks, $$h_{max} = \frac{A}{\sigma \sqrt{\tau}}$$ where $\tau$ is $2 \pi$ and $\sigma$ is the width of the peak, which is related to the full-width at half-maximum of the peak by $\mathrm{FWHM} = 2 \sigma \sqrt{2 \ln 2}$. However, for non-gaussian peaks, this relationship does not hold.

• Does this mean that I can't use absorbance intensity (maximum peak height) to make my concentration calibration curve if I have a peak that's not bb (like vb, bv, or vv)? Oct 25, 2016 at 23:31
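To make the height/area relationship above concrete, here is a small numerical check (a sketch; all numbers are arbitrary). It builds Gaussian peaks of fixed area but different widths, then confirms that the integrated area stays constant while the maximum height scales as $A/(\sigma\sqrt{2\pi})$:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 10001)   # retention time axis, arbitrary units
A = 5.0                             # fixed peak area ("amount injected")
t0 = 5.0                            # retention time of the peak apex

for sigma in (0.1, 0.2, 0.4):       # widening peaks, e.g. a degrading column
    h = A / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-(t - t0) ** 2 / (2.0 * sigma ** 2))
    area = np.trapz(h, t)           # numerical version of A = integral of h(t) dt
    print(f"sigma = {sigma:.1f}  h_max = {h.max():6.2f}  area = {area:.3f}")
```

The printed areas stay at 5.000 while the maximum heights differ by a factor of four, which is why calibrating on area is robust against peak-shape drift while calibrating on height is not.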
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348922729492188, "perplexity": 918.7024253300108}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00731.warc.gz"}